Dataset schema (column name: type, observed range):
id: string, length 9–10
submitter: string, length 1–64
authors: string, length 4–20.7k
title: string, length 4–246
comments: string, length 1–523
journal-ref: string, length 4–404
doi: string, length 11–153
report-no: string, length 2–254
categories: string, length 5–98
license: string, 9 distinct values
orig_abstract: string, length 14–3.35k
versions: list, length 1–60
update_date: string, length 10–10
authors_parsed: list, length 1–1.35k
abstract: string, length 11–3.34k
1702.02471
Adrien Bizeray
Adrien M. Bizeray, Jin-Ho Kim, Stephen R. Duncan and David A. Howey
Identifiability and parameter estimation of the single particle lithium-ion battery model
16 pages, 9 figures, pre-print submitted to the IEEE Transactions on Control Systems Technology
null
10.1109/TCST.2018.2838097
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the identifiability and estimation of the parameters of the single particle model (SPM) for lithium-ion battery simulation. Identifiability is addressed both in principle and in practice. The approach begins by grouping parameters and partially non-dimensionalising the SPM to determine the maximum expected degrees of freedom in the problem. We discover that, excluding open circuit voltage, there are only six independent parameters. We then examine the structural identifiability by considering whether the transfer function of the linearised SPM is unique. It is found that the model is unique provided that the electrode open circuit voltage functions have a known non-zero gradient, the parameters are ordered, and the electrode kinetics are lumped into a single charge transfer resistance parameter. We then demonstrate the practical estimation of model parameters from measured frequency-domain experimental electrochemical impedance spectroscopy (EIS) data, and show additionally that the parametrised model provides good predictive capabilities in the time domain, exhibiting a maximum voltage error of 20 mV between model and experiment over a 10 minute dynamic discharge.
[ { "created": "Wed, 8 Feb 2017 15:28:15 GMT", "version": "v1" }, { "created": "Wed, 24 Jan 2018 17:33:05 GMT", "version": "v2" } ]
2018-10-03
[ [ "Bizeray", "Adrien M.", "" ], [ "Kim", "Jin-Ho", "" ], [ "Duncan", "Stephen R.", "" ], [ "Howey", "David A.", "" ] ]
This paper investigates the identifiability and estimation of the parameters of the single particle model (SPM) for lithium-ion battery simulation. Identifiability is addressed both in principle and in practice. The approach begins by grouping parameters and partially non-dimensionalising the SPM to determine the maximum expected degrees of freedom in the problem. We discover that, excluding open circuit voltage, there are only six independent parameters. We then examine the structural identifiability by considering whether the transfer function of the linearised SPM is unique. It is found that the model is unique provided that the electrode open circuit voltage functions have a known non-zero gradient, the parameters are ordered, and the electrode kinetics are lumped into a single charge transfer resistance parameter. We then demonstrate the practical estimation of model parameters from measured frequency-domain experimental electrochemical impedance spectroscopy (EIS) data, and show additionally that the parametrised model provides good predictive capabilities in the time domain, exhibiting a maximum voltage error of 20 mV between model and experiment over a 10 minute dynamic discharge.
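The frequency-domain estimation idea above can be illustrated on a much simpler equivalent-circuit model than the single particle model: for a single Randles-type element, the parameters can be read directly off the impedance spectrum. This is a generic sketch under my own assumptions, not the paper's method; the model Z(ω) = R0 + Rct/(1 + jωRctCdl) and the helper names are illustrative.

```python
import numpy as np

def randles_impedance(omega, R0, Rct, Cdl):
    """Impedance of a series resistance R0 plus a parallel Rct-Cdl element.

    Hypothetical toy model, far simpler than the SPM in the paper.
    """
    return R0 + Rct / (1 + 1j * omega * Rct * Cdl)

def estimate_from_spectrum(omega, Z):
    """Read the three parameters off a measured spectrum.

    - the high-frequency limit of Re(Z) tends to R0
    - the low-frequency limit of Re(Z) tends to R0 + Rct
    - -Im(Z) peaks at the frequency where omega * Rct * Cdl = 1
    """
    R0 = Z.real[np.argmax(omega)]              # high-frequency asymptote
    Rct = Z.real[np.argmin(omega)] - R0        # low-frequency asymptote minus R0
    w_peak = omega[np.argmax(-Z.imag)]         # frequency of the -Im(Z) peak
    Cdl = 1.0 / (w_peak * Rct)
    return R0, Rct, Cdl
```

On synthetic EIS data generated from known parameters, the asymptote-and-peak readout recovers them closely, which is the same principle (fit the model to frequency-domain data) that the paper applies to the far richer SPM transfer function.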
2403.12976
Vincenzo Barbuto
Vincenzo Barbuto, Claudio Savaglio, Roberto Minerva, Noel Crespi, Giancarlo Fortino
Towards an Edge Intelligence-Based Traffic Monitoring System
null
null
10.1109/SMC53992.2023.10393907
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Cities have undergone significant changes due to the rapid increase in urban population, heightened demand for resources, and growing concerns over climate change. To address these challenges, digital transformation has become a necessity. Recent advancements in Artificial Intelligence (AI) and sensing techniques, such as synthetic sensing, can elevate Digital Twins (DTs) from digital copies of physical objects to effective and efficient platforms for data collection and in-situ processing. In this context, this paper presents a comprehensive approach for developing a Traffic Monitoring System (TMS) based on Edge Intelligence (EI), specifically designed for smart cities. Our approach prioritizes the placement of intelligence as close as possible to data sources, and leverages an "opportunistic" interpretation of DT (ODT), resulting in a novel and interdisciplinary strategy for re-engineering large-scale distributed smart systems. Preliminary results of the proposed system show that moving computation to the edge of the network provides several benefits, including (i) enhanced inference performance, (ii) reduced bandwidth and power consumption, and (iii) decreased latencies with respect to the classic cloud-centric approach.
[ { "created": "Mon, 5 Feb 2024 17:08:18 GMT", "version": "v1" } ]
2024-03-21
[ [ "Barbuto", "Vincenzo", "" ], [ "Savaglio", "Claudio", "" ], [ "Minerva", "Roberto", "" ], [ "Crespi", "Noel", "" ], [ "Fortino", "Giancarlo", "" ] ]
Cities have undergone significant changes due to the rapid increase in urban population, heightened demand for resources, and growing concerns over climate change. To address these challenges, digital transformation has become a necessity. Recent advancements in Artificial Intelligence (AI) and sensing techniques, such as synthetic sensing, can elevate Digital Twins (DTs) from digital copies of physical objects to effective and efficient platforms for data collection and in-situ processing. In this context, this paper presents a comprehensive approach for developing a Traffic Monitoring System (TMS) based on Edge Intelligence (EI), specifically designed for smart cities. Our approach prioritizes the placement of intelligence as close as possible to data sources, and leverages an "opportunistic" interpretation of DT (ODT), resulting in a novel and interdisciplinary strategy for re-engineering large-scale distributed smart systems. Preliminary results of the proposed system show that moving computation to the edge of the network provides several benefits, including (i) enhanced inference performance, (ii) reduced bandwidth and power consumption, and (iii) decreased latencies with respect to the classic cloud-centric approach.
2404.14248
Zongwei Wu
Xiaoning Liu, Zongwei Wu, Ao Li, Florin-Alexandru Vasluianu, Yulun Zhang, Shuhang Gu, Le Zhang, Ce Zhu, Radu Timofte, Zhi Jin, Hongjun Wu, Chenxi Wang, Haitao Ling, Yuanhao Cai, Hao Bian, Yuxin Zheng, Jing Lin, Alan Yuille, Ben Shao, Jin Guo, Tianli Liu, Mohao Wu, Yixu Feng, Shuo Hou, Haotian Lin, Yu Zhu, Peng Wu, Wei Dong, Jinqiu Sun, Yanning Zhang, Qingsen Yan, Wenbin Zou, Weipeng Yang, Yunxiang Li, Qiaomu Wei, Tian Ye, Sixiang Chen, Zhao Zhang, Suiyi Zhao, Bo Wang, Yan Luo, Zhichao Zuo, Mingshen Wang, Junhu Wang, Yanyan Wei, Xiaopeng Sun, Yu Gao, Jiancheng Huang, Hongming Chen, Xiang Chen, Hui Tang, Yuanbin Chen, Yuanbo Zhou, Xinwei Dai, Xintao Qiu, Wei Deng, Qinquan Gao, Tong Tong, Mingjia Li, Jin Hu, Xinyu He, Xiaojie Guo, Sabarinathan, K Uma, A Sasithradevi, B Sathya Bama, S. Mohamed Mansoor Roomi, V.Srivatsav, Jinjuan Wang, Long Sun, Qiuying Chen, Jiahong Shao, Yizhi Zhang, Marcos V. Conde, Daniel Feijoo, Juan C. Benito, Alvaro Garc\'ia, Jaeho Lee, Seongwan Kim, Sharif S M A, Nodirkhuja Khujaev, Roman Tsoy, Ali Murtaza, Uswah Khairuddin, Ahmad 'Athif Mohd Faudzi, Sampada Malagi, Amogh Joshi, Nikhil Akalwadi, Chaitra Desai, Ramesh Ashok Tabib, Uma Mudenagudi, Wenyi Lian, Wenjing Lian, Jagadeesh Kalyanshetti, Vijayalaxmi Ashok Aralikatti, Palani Yashaswini, Nitish Upasi, Dikshit Hegde, Ujwala Patil, Sujata C, Xingzhuo Yan, Wei Hao, Minghan Fu, Pooja choksy, Anjali Sarvaiya, Kishor Upla, Kiran Raja, Hailong Yan, Yunkai Zhang, Baiang Li, Jingyi Zhang, Huan Zheng
NTIRE 2024 Challenge on Low Light Image Enhancement: Methods and Results
NTIRE 2024 Challenge Report
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper reviews the NTIRE 2024 low light image enhancement challenge, highlighting the proposed solutions and results. The aim of this challenge is to discover an effective network design or solution capable of generating brighter, clearer, and visually appealing results when dealing with a variety of conditions, including ultra-high resolution (4K and beyond), non-uniform illumination, backlighting, extreme darkness, and night scenes. A notable total of 428 participants registered for the challenge, with 22 teams ultimately making valid submissions. This paper meticulously evaluates the state-of-the-art advancements in enhancing low-light images, reflecting the significant progress and creativity in this field.
[ { "created": "Mon, 22 Apr 2024 15:01:12 GMT", "version": "v1" } ]
2024-04-23
[ [ "Liu", "Xiaoning", "" ], [ "Wu", "Zongwei", "" ], [ "Li", "Ao", "" ], [ "Vasluianu", "Florin-Alexandru", "" ], [ "Zhang", "Yulun", "" ], [ "Gu", "Shuhang", "" ], [ "Zhang", "Le", "" ], [ "Zhu", "Ce", "" ], [ "Timofte", "Radu", "" ], [ "Jin", "Zhi", "" ], [ "Wu", "Hongjun", "" ], [ "Wang", "Chenxi", "" ], [ "Ling", "Haitao", "" ], [ "Cai", "Yuanhao", "" ], [ "Bian", "Hao", "" ], [ "Zheng", "Yuxin", "" ], [ "Lin", "Jing", "" ], [ "Yuille", "Alan", "" ], [ "Shao", "Ben", "" ], [ "Guo", "Jin", "" ], [ "Liu", "Tianli", "" ], [ "Wu", "Mohao", "" ], [ "Feng", "Yixu", "" ], [ "Hou", "Shuo", "" ], [ "Lin", "Haotian", "" ], [ "Zhu", "Yu", "" ], [ "Wu", "Peng", "" ], [ "Dong", "Wei", "" ], [ "Sun", "Jinqiu", "" ], [ "Zhang", "Yanning", "" ], [ "Yan", "Qingsen", "" ], [ "Zou", "Wenbin", "" ], [ "Yang", "Weipeng", "" ], [ "Li", "Yunxiang", "" ], [ "Wei", "Qiaomu", "" ], [ "Ye", "Tian", "" ], [ "Chen", "Sixiang", "" ], [ "Zhang", "Zhao", "" ], [ "Zhao", "Suiyi", "" ], [ "Wang", "Bo", "" ], [ "Luo", "Yan", "" ], [ "Zuo", "Zhichao", "" ], [ "Wang", "Mingshen", "" ], [ "Wang", "Junhu", "" ], [ "Wei", "Yanyan", "" ], [ "Sun", "Xiaopeng", "" ], [ "Gao", "Yu", "" ], [ "Huang", "Jiancheng", "" ], [ "Chen", "Hongming", "" ], [ "Chen", "Xiang", "" ], [ "Tang", "Hui", "" ], [ "Chen", "Yuanbin", "" ], [ "Zhou", "Yuanbo", "" ], [ "Dai", "Xinwei", "" ], [ "Qiu", "Xintao", "" ], [ "Deng", "Wei", "" ], [ "Gao", "Qinquan", "" ], [ "Tong", "Tong", "" ], [ "Li", "Mingjia", "" ], [ "Hu", "Jin", "" ], [ "He", "Xinyu", "" ], [ "Guo", "Xiaojie", "" ], [ "Sabarinathan", "", "" ], [ "Uma", "K", "" ], [ "Sasithradevi", "A", "" ], [ "Bama", "B Sathya", "" ], [ "Roomi", "S. Mohamed Mansoor", "" ], [ "Srivatsav", "V.", "" ], [ "Wang", "Jinjuan", "" ], [ "Sun", "Long", "" ], [ "Chen", "Qiuying", "" ], [ "Shao", "Jiahong", "" ], [ "Zhang", "Yizhi", "" ], [ "Conde", "Marcos V.", "" ], [ "Feijoo", "Daniel", "" ], [ "Benito", "Juan C.", "" ], [ "García", "Alvaro", "" ], [ "Lee", "Jaeho", "" ], [ "Kim", "Seongwan", "" ], [ "A", "Sharif S M", "" ], [ "Khujaev", "Nodirkhuja", "" ], [ "Tsoy", "Roman", "" ], [ "Murtaza", "Ali", "" ], [ "Khairuddin", "Uswah", "" ], [ "Faudzi", "Ahmad 'Athif Mohd", "" ], [ "Malagi", "Sampada", "" ], [ "Joshi", "Amogh", "" ], [ "Akalwadi", "Nikhil", "" ], [ "Desai", "Chaitra", "" ], [ "Tabib", "Ramesh Ashok", "" ], [ "Mudenagudi", "Uma", "" ], [ "Lian", "Wenyi", "" ], [ "Lian", "Wenjing", "" ], [ "Kalyanshetti", "Jagadeesh", "" ], [ "Aralikatti", "Vijayalaxmi Ashok", "" ], [ "Yashaswini", "Palani", "" ], [ "Upasi", "Nitish", "" ], [ "Hegde", "Dikshit", "" ], [ "Patil", "Ujwala", "" ], [ "C", "Sujata", "" ], [ "Yan", "Xingzhuo", "" ], [ "Hao", "Wei", "" ], [ "Fu", "Minghan", "" ], [ "choksy", "Pooja", "" ], [ "Sarvaiya", "Anjali", "" ], [ "Upla", "Kishor", "" ], [ "Raja", "Kiran", "" ], [ "Yan", "Hailong", "" ], [ "Zhang", "Yunkai", "" ], [ "Li", "Baiang", "" ], [ "Zhang", "Jingyi", "" ], [ "Zheng", "Huan", "" ] ]
This paper reviews the NTIRE 2024 low light image enhancement challenge, highlighting the proposed solutions and results. The aim of this challenge is to discover an effective network design or solution capable of generating brighter, clearer, and visually appealing results when dealing with a variety of conditions, including ultra-high resolution (4K and beyond), non-uniform illumination, backlighting, extreme darkness, and night scenes. A notable total of 428 participants registered for the challenge, with 22 teams ultimately making valid submissions. This paper meticulously evaluates the state-of-the-art advancements in enhancing low-light images, reflecting the significant progress and creativity in this field.
1702.01394
Jeffrey Shallit
J\"org Endrullis, Jeffrey Shallit, Tim Smith
Undecidability and Finite Automata
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using a novel rewriting problem, we show that several natural decision problems about finite automata are undecidable (i.e., recursively unsolvable). In contrast, we also prove three related problems are decidable. We apply one result to prove the undecidability of a related problem about k-automatic sets of rational numbers.
[ { "created": "Sun, 5 Feb 2017 12:37:36 GMT", "version": "v1" }, { "created": "Mon, 27 Feb 2017 20:47:33 GMT", "version": "v2" } ]
2017-03-01
[ [ "Endrullis", "Jörg", "" ], [ "Shallit", "Jeffrey", "" ], [ "Smith", "Tim", "" ] ]
Using a novel rewriting problem, we show that several natural decision problems about finite automata are undecidable (i.e., recursively unsolvable). In contrast, we also prove three related problems are decidable. We apply one result to prove the undecidability of a related problem about k-automatic sets of rational numbers.
1009.0072
Wei Yang
Wei Yang, Lihua Li, Gang Wu, and Haifeng Wang
Joint Relay Selection and Link Adaptation for Distributed Beamforming in Regenerative Cooperative Networks
Accepted by 2010 International Symposium on Information Theory and its Applications (ISITA)
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relay selection enhances the performance of cooperative networks by selecting the links with higher capacity. Meanwhile, link adaptation improves the spectral efficiency of wireless data-centric networks by adapting the modulation and coding schemes (MCS) to the current link condition. In this paper, relay selection is combined with link adaptation for distributed beamforming in a two-hop regenerative cooperative system. A novel signaling mechanism and related optimal algorithms are proposed for joint relay selection and link adaptation. In the proposed scheme, there is no need to feed back the relay selection results to each relay. Instead, by broadcasting the link adaptation results from the destination, each relay automatically learns whether or not it has been selected. The lower and upper bounds of the throughput of the proposed scheme are derived. The analysis and simulation results indicate that the proposed scheme provides synergistic gains compared to the pure relay selection and link adaptation schemes.
[ { "created": "Wed, 1 Sep 2010 02:02:32 GMT", "version": "v1" } ]
2010-09-02
[ [ "Yang", "Wei", "" ], [ "Li", "Lihua", "" ], [ "Wu", "Gang", "" ], [ "Wang", "Haifeng", "" ] ]
Relay selection enhances the performance of cooperative networks by selecting the links with higher capacity. Meanwhile, link adaptation improves the spectral efficiency of wireless data-centric networks by adapting the modulation and coding schemes (MCS) to the current link condition. In this paper, relay selection is combined with link adaptation for distributed beamforming in a two-hop regenerative cooperative system. A novel signaling mechanism and related optimal algorithms are proposed for joint relay selection and link adaptation. In the proposed scheme, there is no need to feed back the relay selection results to each relay. Instead, by broadcasting the link adaptation results from the destination, each relay automatically learns whether or not it has been selected. The lower and upper bounds of the throughput of the proposed scheme are derived. The analysis and simulation results indicate that the proposed scheme provides synergistic gains compared to the pure relay selection and link adaptation schemes.
2310.07393
Matteo El Hariry
Matteo El-Hariry, Antoine Richard, Miguel Olivares-Mendez
RANS: Highly-Parallelised Simulator for Reinforcement Learning based Autonomous Navigating Spacecrafts
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, realistic simulation environments are essential to validate and build reliable robotic solutions. This is particularly true when using Reinforcement Learning (RL) based control policies. To this end, both robotics and RL developers need tools and workflows to create physically accurate simulations and synthetic datasets. Gazebo, MuJoCo, Webots, PyBullet, and Isaac Sim are some of the many tools available to simulate robotic systems. Developing learning-based methods for space navigation is, due to the highly complex nature of the problem, an intensive data-driven process that requires highly parallelized simulations. When it comes to the control of spacecraft, there is no easy-to-use simulation library designed for RL. We address this gap by harnessing the capabilities of NVIDIA Isaac Gym, where both the physics simulation and the policy training reside on the GPU. Building on this tool, we provide an open-source library enabling users to simulate thousands of parallel spacecraft that learn a set of maneuvering tasks, such as position, attitude, and velocity control. These tasks enable the validation of complex space scenarios, such as trajectory optimization for landing, docking, rendezvous, and more.
[ { "created": "Wed, 11 Oct 2023 11:21:45 GMT", "version": "v1" } ]
2023-10-12
[ [ "El-Hariry", "Matteo", "" ], [ "Richard", "Antoine", "" ], [ "Olivares-Mendez", "Miguel", "" ] ]
Nowadays, realistic simulation environments are essential to validate and build reliable robotic solutions. This is particularly true when using Reinforcement Learning (RL) based control policies. To this end, both robotics and RL developers need tools and workflows to create physically accurate simulations and synthetic datasets. Gazebo, MuJoCo, Webots, PyBullet, and Isaac Sim are some of the many tools available to simulate robotic systems. Developing learning-based methods for space navigation is, due to the highly complex nature of the problem, an intensive data-driven process that requires highly parallelized simulations. When it comes to the control of spacecraft, there is no easy-to-use simulation library designed for RL. We address this gap by harnessing the capabilities of NVIDIA Isaac Gym, where both the physics simulation and the policy training reside on the GPU. Building on this tool, we provide an open-source library enabling users to simulate thousands of parallel spacecraft that learn a set of maneuvering tasks, such as position, attitude, and velocity control. These tasks enable the validation of complex space scenarios, such as trajectory optimization for landing, docking, rendezvous, and more.
2303.15012
Senmao Li
Senmao Li, Joost van de Weijer, Yaxing Wang, Fahad Shahbaz Khan, Meiqin Liu, Jian Yang
3D-Aware Multi-Class Image-to-Image Translation with NeRFs
Accepted by CVPR2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in 3D-aware generative models (3D-aware GANs) combined with Neural Radiance Fields (NeRF) have achieved impressive results. However, no prior work investigates 3D-aware GANs for 3D-consistent multi-class image-to-image (3D-aware I2I) translation. Naively using 2D I2I translation methods suffers from unrealistic shape/identity changes. To perform 3D-aware multi-class I2I translation, we decouple this learning process into a multi-class 3D-aware GAN step and a 3D-aware I2I translation step. In the first step, we propose two novel techniques: a new conditional architecture and an effective training strategy. In the second step, based on the well-trained multi-class 3D-aware GAN architecture, which preserves view consistency, we construct a 3D-aware I2I translation system. To further reduce view-consistency problems, we propose several new techniques, including a U-net-like adaptor network design, a hierarchical representation constraint, and a relative regularization loss. In extensive experiments on two datasets, quantitative and qualitative results demonstrate that we successfully perform 3D-aware I2I translation with multi-view consistency.
[ { "created": "Mon, 27 Mar 2023 08:54:51 GMT", "version": "v1" } ]
2023-03-28
[ [ "Li", "Senmao", "" ], [ "van de Weijer", "Joost", "" ], [ "Wang", "Yaxing", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Liu", "Meiqin", "" ], [ "Yang", "Jian", "" ] ]
Recent advances in 3D-aware generative models (3D-aware GANs) combined with Neural Radiance Fields (NeRF) have achieved impressive results. However, no prior work investigates 3D-aware GANs for 3D-consistent multi-class image-to-image (3D-aware I2I) translation. Naively using 2D I2I translation methods suffers from unrealistic shape/identity changes. To perform 3D-aware multi-class I2I translation, we decouple this learning process into a multi-class 3D-aware GAN step and a 3D-aware I2I translation step. In the first step, we propose two novel techniques: a new conditional architecture and an effective training strategy. In the second step, based on the well-trained multi-class 3D-aware GAN architecture, which preserves view consistency, we construct a 3D-aware I2I translation system. To further reduce view-consistency problems, we propose several new techniques, including a U-net-like adaptor network design, a hierarchical representation constraint, and a relative regularization loss. In extensive experiments on two datasets, quantitative and qualitative results demonstrate that we successfully perform 3D-aware I2I translation with multi-view consistency.
1102.1265
Joakim Jalden
Joakim Jalden and Petros Elia
Sphere decoding complexity exponent for decoding full rate codes over the quasi-static MIMO channel
19 Pages, 4 figures. Submitted to the IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the setting of quasi-static multiple-input multiple-output (MIMO) channels, we consider the high signal-to-noise ratio (SNR) asymptotic complexity required by the sphere decoding (SD) algorithm for decoding a large class of full rate linear space-time codes. With SD complexity having random fluctuations induced by the random channel, noise and codeword realizations, the introduced SD complexity exponent manages to concisely describe the computational reserves required by the SD algorithm to achieve arbitrarily close to optimal decoding performance. Bounds and exact expressions for the SD complexity exponent are obtained for the decoding of large families of codes with arbitrary performance characteristics. For the particular example of decoding the recently introduced threaded cyclic division algebra (CDA) based codes -- the only currently known explicit designs that are uniformly optimal with respect to the diversity multiplexing tradeoff (DMT) -- the SD complexity exponent is shown to take a particularly concise form as a non-monotonic function of the multiplexing gain. To date, the SD complexity exponent also describes the minimum known complexity of any decoder that can provably achieve a gap to maximum likelihood (ML) performance which vanishes in the high SNR limit.
[ { "created": "Mon, 7 Feb 2011 10:24:09 GMT", "version": "v1" } ]
2011-02-08
[ [ "Jalden", "Joakim", "" ], [ "Elia", "Petros", "" ] ]
In the setting of quasi-static multiple-input multiple-output (MIMO) channels, we consider the high signal-to-noise ratio (SNR) asymptotic complexity required by the sphere decoding (SD) algorithm for decoding a large class of full rate linear space-time codes. With SD complexity having random fluctuations induced by the random channel, noise and codeword realizations, the introduced SD complexity exponent manages to concisely describe the computational reserves required by the SD algorithm to achieve arbitrarily close to optimal decoding performance. Bounds and exact expressions for the SD complexity exponent are obtained for the decoding of large families of codes with arbitrary performance characteristics. For the particular example of decoding the recently introduced threaded cyclic division algebra (CDA) based codes -- the only currently known explicit designs that are uniformly optimal with respect to the diversity multiplexing tradeoff (DMT) -- the SD complexity exponent is shown to take a particularly concise form as a non-monotonic function of the multiplexing gain. To date, the SD complexity exponent also describes the minimum known complexity of any decoder that can provably achieve a gap to maximum likelihood (ML) performance which vanishes in the high SNR limit.
1806.04346
Ruidan He
Ruidan He and Wee Sun Lee and Hwee Tou Ng and Daniel Dahlmeier
Exploiting Document Knowledge for Aspect-level Sentiment Classification
Accepted to ACL 2018 (short paper)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Attention-based long short-term memory (LSTM) networks have proven to be useful in aspect-level sentiment classification. However, due to the difficulties in annotating aspect-level data, existing public datasets for this task are all relatively small, which largely limits the effectiveness of those neural models. In this paper, we explore two approaches that transfer knowledge from document-level data, which is much less expensive to obtain, to improve the performance of aspect-level sentiment classification. We demonstrate the effectiveness of our approaches on 4 public datasets from SemEval 2014, 2015, and 2016, and we show that attention-based LSTM benefits from document-level knowledge in multiple ways.
[ { "created": "Tue, 12 Jun 2018 06:04:11 GMT", "version": "v1" } ]
2018-06-13
[ [ "He", "Ruidan", "" ], [ "Lee", "Wee Sun", "" ], [ "Ng", "Hwee Tou", "" ], [ "Dahlmeier", "Daniel", "" ] ]
Attention-based long short-term memory (LSTM) networks have proven to be useful in aspect-level sentiment classification. However, due to the difficulties in annotating aspect-level data, existing public datasets for this task are all relatively small, which largely limits the effectiveness of those neural models. In this paper, we explore two approaches that transfer knowledge from document-level data, which is much less expensive to obtain, to improve the performance of aspect-level sentiment classification. We demonstrate the effectiveness of our approaches on 4 public datasets from SemEval 2014, 2015, and 2016, and we show that attention-based LSTM benefits from document-level knowledge in multiple ways.
2006.02163
Xuan Phi Nguyen
Xuan-Phi Nguyen, Shafiq Joty, Thanh-Tung Nguyen, Wu Kui, Ai Ti Aw
Cross-model Back-translated Distillation for Unsupervised Machine Translation
Accepted to a conference paper at ICML 2021
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling, and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes have seemed to plateau. We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), which aims to induce another level of data diversification that existing principles lack. CBD is applicable to all previous UMT approaches. In our experiments, CBD achieves the state of the art in the WMT'14 English-French, WMT'16 English-German and English-Romanian bilingual unsupervised translation tasks, with 38.2, 30.1, and 36.3 BLEU respectively. It also yields 1.5-3.3 BLEU improvements in IWSLT English-French and English-German tasks. Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.
[ { "created": "Wed, 3 Jun 2020 10:57:21 GMT", "version": "v1" }, { "created": "Mon, 5 Oct 2020 14:07:14 GMT", "version": "v2" }, { "created": "Sat, 6 Feb 2021 18:02:41 GMT", "version": "v3" }, { "created": "Mon, 24 May 2021 16:07:26 GMT", "version": "v4" } ]
2021-05-25
[ [ "Nguyen", "Xuan-Phi", "" ], [ "Joty", "Shafiq", "" ], [ "Nguyen", "Thanh-Tung", "" ], [ "Kui", "Wu", "" ], [ "Aw", "Ai Ti", "" ] ]
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling, and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes have seemed to plateau. We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), which aims to induce another level of data diversification that existing principles lack. CBD is applicable to all previous UMT approaches. In our experiments, CBD achieves the state of the art in the WMT'14 English-French, WMT'16 English-German and English-Romanian bilingual unsupervised translation tasks, with 38.2, 30.1, and 36.3 BLEU respectively. It also yields 1.5-3.3 BLEU improvements in IWSLT English-French and English-German tasks. Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.
1912.11194
Qi Qi
Qi Qi, Yan Yan, Xiaoyu Wang, Tianbao Yang
A Simple and Effective Framework for Pairwise Deep Metric Learning
16 pages, 5 figures
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep metric learning (DML) has received much attention in deep learning due to its wide applications in computer vision. Previous studies have focused on designing complicated losses and hard example mining methods, which are mostly heuristic and lack theoretical understanding. In this paper, we cast DML as a simple pairwise binary classification problem that classifies a pair of examples as similar or dissimilar. This casting identifies the most critical issue in this problem: imbalanced data pairs. To tackle this issue, we propose a simple and effective framework to sample pairs in a batch of data for updating the model. The key to this framework is to define a robust loss for all pairs over a mini-batch of data, which is formulated by distributionally robust optimization. The flexibility in constructing the uncertainty decision set of the dual variable allows us to recover state-of-the-art complicated losses and also to induce novel variants. Empirical studies on several benchmark data sets demonstrate that our simple and effective method outperforms the state-of-the-art results. Codes are available at: https://github.com/qiqi-helloworld/A-Simple-and-Effective-Framework-for-Pairewise-Distance-Metric-Learning
[ { "created": "Tue, 24 Dec 2019 03:47:25 GMT", "version": "v1" }, { "created": "Sat, 11 Jan 2020 15:58:11 GMT", "version": "v2" }, { "created": "Thu, 18 Jun 2020 15:44:21 GMT", "version": "v3" } ]
2020-06-19
[ [ "Qi", "Qi", "" ], [ "Yan", "Yan", "" ], [ "Wang", "Xiaoyu", "" ], [ "Yang", "Tianbao", "" ] ]
Deep metric learning (DML) has received much attention in deep learning due to its wide applications in computer vision. Previous studies have focused on designing complicated losses and hard example mining methods, which are mostly heuristic and lack theoretical understanding. In this paper, we cast DML as a simple pairwise binary classification problem that classifies a pair of examples as similar or dissimilar. This casting identifies the most critical issue in this problem: imbalanced data pairs. To tackle this issue, we propose a simple and effective framework to sample pairs in a batch of data for updating the model. The key to this framework is to define a robust loss for all pairs over a mini-batch of data, which is formulated by distributionally robust optimization. The flexibility in constructing the uncertainty decision set of the dual variable allows us to recover state-of-the-art complicated losses and also to induce novel variants. Empirical studies on several benchmark data sets demonstrate that our simple and effective method outperforms the state-of-the-art results. Codes are available at: https://github.com/qiqi-helloworld/A-Simple-and-Effective-Framework-for-Pairewise-Distance-Metric-Learning
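The pairwise-classification view with a distributionally robust loss over a mini-batch can be sketched with one standard construction: the dual of a KL-ball DRO problem, which turns the average pair loss into a log-sum-exp that up-weights hard pairs. This is my own minimal sketch of the general idea, not the authors' code; the hinge losses, the KL uncertainty set, and all names here are illustrative assumptions.

```python
import numpy as np

def pairwise_dro_loss(embeddings, labels, margin=0.5, lam=1.0):
    """Sketch of a distributionally robust pairwise loss (KL-ball dual form).

    Each unordered pair (i, j) is treated as a binary classification
    example: similar if the labels match, dissimilar otherwise. Instead
    of averaging the per-pair hinge losses, the KL-constrained DRO dual
    replaces the mean with lam * log E[exp(loss / lam)], which re-weights
    the batch toward hard (high-loss) pairs. Assumed form, not the paper's.
    """
    n = len(labels)
    # squared Euclidean distances between all embeddings
    d2 = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(n, k=1)                      # each pair once
    same = (labels[:, None] == labels[None, :])[iu]
    dist = np.sqrt(np.maximum(d2[iu], 1e-12))
    # hinge losses: pull similar pairs together, push dissimilar apart
    losses = np.where(same,
                      np.maximum(dist - margin, 0.0),
                      np.maximum(margin - dist, 0.0))
    # KL-DRO dual: worst-case reweighting of the per-pair losses
    return lam * np.log(np.mean(np.exp(losses / lam)))
```

As lam grows the expression approaches the plain average loss, and as lam shrinks it approaches the maximum pair loss, which is one way to interpolate between uniform pair weighting and hardest-pair mining.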
2003.08951
Yuya Obinata
Yuya Obinata and Takuma Yamamoto
Temporal Extension Module for Skeleton-Based Action Recognition
Accepted on ICPR2020, 7 pages, 4 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a module that extends the temporal graph of a graph convolutional network (GCN) for action recognition with a sequence of skeletons. Existing methods attempt to represent a more appropriate spatial graph on the intra-frame, but disregard optimization of the temporal graph on the inter-frame. Concretely, these methods connect vertices corresponding only to the same joint on the inter-frame. In this work, we focus on adding connections to multiple neighboring vertices on the inter-frame and extracting additional features based on the extended temporal graph. Our module is a simple yet effective method to extract correlated features of multiple joints in human movement. Moreover, our module aids in further performance improvements, along with other GCN methods that optimize only the spatial graph. We conduct extensive experiments on two large datasets, NTU RGB+D and Kinetics-Skeleton, and demonstrate that our module is effective for several existing models and our final model achieves state-of-the-art performance.
[ { "created": "Thu, 19 Mar 2020 18:00:04 GMT", "version": "v1" }, { "created": "Mon, 19 Oct 2020 02:39:04 GMT", "version": "v2" } ]
2020-10-20
[ [ "Obinata", "Yuya", "" ], [ "Yamamoto", "Takuma", "" ] ]
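A minimal sketch of the extended temporal graph described above, assuming the common block-adjacency formulation of ST-GCN (the function name and layout are hypothetical, not the authors' implementation): inter-frame edges reach not only the same joint in the next frame but also its spatial neighbours.

```python
import numpy as np

def temporal_extended_adjacency(spatial_adj, num_frames):
    """Build a (T*V, T*V) adjacency over a skeleton sequence.

    Standard ST-GCN links joint v in frame t only to the same joint v
    in frames t±1; here inter-frame edges also reach the spatial
    neighbours of v, in the spirit of the module described above.
    `spatial_adj` is a (V, V) 0/1 intra-frame adjacency matrix.
    """
    V = spatial_adj.shape[0]
    T = num_frames
    A = np.zeros((T * V, T * V), dtype=int)
    # Inter-frame pattern: self-loop across time plus spatial neighbours.
    inter = spatial_adj | np.eye(V, dtype=int)
    for t in range(T):
        A[t*V:(t+1)*V, t*V:(t+1)*V] = spatial_adj   # intra-frame edges
        if t + 1 < T:
            A[t*V:(t+1)*V, (t+1)*V:(t+2)*V] = inter
            A[(t+1)*V:(t+2)*V, t*V:(t+1)*V] = inter.T
    return A
```

The resulting matrix stays symmetric, so it can drop into any GCN layer that expects an undirected graph.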
2402.18419
Shubham Vatsal
Shubham Vatsal, Ayush Singh and Shabnam Tafreshi
Can GPT Improve the State of Prior Authorization via Guideline Based Automated Question Answering?
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Health insurance companies have a defined process called prior authorization (PA), a health plan cost-control process that requires doctors and other healthcare professionals to obtain clearance in advance from a health plan before performing a particular procedure on a patient, in order to be eligible for payment coverage. For health insurance companies, approving PA requests for patients in the medical domain is a time-consuming and challenging task. One of the key challenges is validating whether a request matches certain criteria such as age, gender, etc. In this work, we evaluate whether GPT can validate numerous key factors, in turn helping health plans reach a decision drastically faster. We frame it as a question answering task, prompting GPT to answer a question from a patient's electronic health record. We experiment with different conventional prompting techniques and also introduce our own novel prompting technique. Moreover, we report qualitative assessment by humans on the natural language generation outputs from our approach. Results show that our method achieves superior performance with a mean weighted F1 score of 0.61 as compared to its standard counterparts.
[ { "created": "Wed, 28 Feb 2024 15:39:53 GMT", "version": "v1" } ]
2024-02-29
[ [ "Vatsal", "Shubham", "" ], [ "Singh", "Ayush", "" ], [ "Tafreshi", "Shabnam", "" ] ]
1803.02101
Frank Meyer
Wissam Siblini and Frank Meyer and Pascale Kuntz
VIPE: A new interactive classification framework for large sets of short texts - application to opinion mining
8 pages
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new interactive opinion mining tool that helps users classify large sets of short texts originating from Web opinion polls, technical forums or Twitter. From a manual multi-label pre-classification of a very limited text subset, a learning algorithm predicts the labels of the remaining texts of the corpus and the texts most likely associated with a selected label. Using a fast matrix factorization, the algorithm is able to handle large corpora and is well-adapted to interactivity by integrating the corrections proposed by users on the fly. Experimental results on classical datasets of various sizes, together with feedback from users in marketing services of the telecommunication company Orange, confirm the quality of the obtained results.
[ { "created": "Tue, 6 Mar 2018 10:45:27 GMT", "version": "v1" } ]
2018-03-07
[ [ "Siblini", "Wissam", "" ], [ "Meyer", "Frank", "" ], [ "Kuntz", "Pascale", "" ] ]
2108.08987
Cl\'ement Canonne
Cl\'ement L. Canonne and Hongyi Lyu
Uniformity Testing in the Shuffle Model: Simpler, Better, Faster
Accepted to the SIAM Symposium on Simplicity in Algorithms (SOSA 2022). Added some details and discussions
null
null
null
cs.DS cs.CR cs.DM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Uniformity testing, or testing whether independent observations are uniformly distributed, is the prototypical question in distribution testing. Over the past years, a line of work has focused on uniformity testing under privacy constraints on the data, and obtained private and data-efficient algorithms under various privacy models such as central differential privacy (DP), local privacy (LDP), pan-privacy, and, very recently, the shuffle model of differential privacy. In this work, we considerably simplify the analysis of the known uniformity testing algorithm in the shuffle model, and, using a recent result on "privacy amplification via shuffling," provide an alternative algorithm attaining the same guarantees with an elementary and streamlined argument.
[ { "created": "Fri, 20 Aug 2021 03:43:12 GMT", "version": "v1" }, { "created": "Mon, 18 Oct 2021 08:20:42 GMT", "version": "v2" } ]
2021-10-19
[ [ "Canonne", "Clément L.", "" ], [ "Lyu", "Hongyi", "" ] ]
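For intuition on the non-private primitive underlying such testers (this is a toy collision-based sketch, not the shuffle-model algorithm; the threshold `slack` is an illustrative constant, not the paper's): uniform samples minimise the pairwise collision rate at roughly 1/k for support size k, so a large collision rate flags non-uniformity.

```python
import random
from collections import Counter

def collision_statistic(samples):
    """Fraction of colliding pairs among all pairs of samples."""
    n = len(samples)
    pairs = n * (n - 1) / 2
    coll = sum(c * (c - 1) / 2 for c in Counter(samples).values())
    return coll / pairs

def is_uniform(samples, k, slack=0.5):
    """Toy tester (no privacy): accept if the collision rate is close
    to the uniform baseline 1/k."""
    return collision_statistic(samples) <= (1 + slack) / k

random.seed(0)
uniform = [random.randrange(100) for _ in range(5000)]  # uniform on [100]
skewed = [random.randrange(10) for _ in range(5000)]    # support only 10
```

Privacy-preserving variants replace the raw samples with randomized or shuffled messages; the paper's contribution is a much simpler analysis of that step.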
2406.05631
Sana Ayromlou
Sana Ayromlou, Teresa Tsang, Purang Abolmaesumi, Xiaoxiao Li
CCSI: Continual Class-Specific Impression for Data-free Class Incremental Learning
null
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
In real-world clinical settings, traditional deep learning-based classification methods struggle with diagnosing newly introduced disease types because they require samples from all disease classes for offline training. Class incremental learning offers a promising solution by adapting a deep network trained on specific disease classes to handle new diseases. However, catastrophic forgetting occurs, decreasing the performance of earlier classes when adapting the model to new data. Previously proposed methodologies to overcome this require perpetual storage of previous samples, posing potential practical concerns regarding privacy and storage regulations in healthcare. To this end, we propose a novel data-free class incremental learning framework that utilizes data synthesis on learned classes instead of data storage from previous classes. Our key contributions include acquiring synthetic data, known as Continual Class-Specific Impression (CCSI), for previously inaccessible trained classes and presenting a methodology to effectively utilize this data for updating networks when introducing new classes. We obtain CCSI by employing data inversion over gradients of the trained classification model on previous classes, starting from the mean image of each class, inspired by common landmarks shared among medical images, and utilizing continual normalization layer statistics as a regularizer in this pixel-wise optimization process. Subsequently, we update the network by combining the synthesized data with new class data and incorporate several losses, including an intra-domain contrastive loss to generalize the deep network trained on the synthesized data to real data, a margin loss to increase separation among previous classes and new ones, and a cosine-normalized cross-entropy loss to alleviate the adverse effects of imbalanced distributions in training data.
[ { "created": "Sun, 9 Jun 2024 03:52:21 GMT", "version": "v1" } ]
2024-06-11
[ [ "Ayromlou", "Sana", "" ], [ "Tsang", "Teresa", "" ], [ "Abolmaesumi", "Purang", "" ], [ "Li", "Xiaoxiao", "" ] ]
2012.00267
Hongyang Du
Hongyang Du, Jiayi Zhang, Ke Guan, Dusit Niyato, Huiying Jiao, Zhiqin Wang, and Thomas K\"urner
Performance and Optimization of Reconfigurable Intelligent Surface Aided THz Communications
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
TeraHertz (THz) communications can satisfy the high data rate demand with massive bandwidth. However, severe path attenuation and hardware imperfections greatly degrade its performance. Therefore, we utilize the reconfigurable intelligent surface (RIS) technology and investigate RIS-aided THz communications. We first prove that the small-scale amplitude fading of THz signals can be accurately modeled by the fluctuating two-ray distribution, based on two THz signal measurement experiments conducted in a variety of different scenarios. To optimize the phase-shifts at the RIS elements, we propose a novel swarm intelligence-based method that does not require full channel estimation. We then derive exact statistical characterizations of the end-to-end signal-to-noise plus distortion ratio (SNDR) and signal-to-noise ratio (SNR). Moreover, we present asymptotic analysis to obtain more insights when the SNDR or the number of RIS elements is high. Finally, we derive analytical expressions for the outage probability and ergodic capacity. Tight upper bounds of ergodic capacity for both ideal and nonideal radio frequency chains are obtained. It is interesting to find that increasing the number of RIS elements can significantly improve the THz communications system performance. For example, the ergodic capacity can increase up to 25% when the number of elements increases from 40 to 80, which incurs only insignificant costs to the system.
[ { "created": "Tue, 1 Dec 2020 05:00:38 GMT", "version": "v1" }, { "created": "Sun, 20 Mar 2022 08:43:14 GMT", "version": "v2" } ]
2022-03-22
[ [ "Du", "Hongyang", "" ], [ "Zhang", "Jiayi", "" ], [ "Guan", "Ke", "" ], [ "Niyato", "Dusit", "" ], [ "Jiao", "Huiying", "" ], [ "Wang", "Zhiqin", "" ], [ "Kürner", "Thomas", "" ] ]
2405.08200
Rolando Garcia
Rolando Garcia
Interactive Lab Notebooks for Robotics Researchers
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactive notebooks, such as Jupyter, have revolutionized the field of data science by providing an integrated environment for data, code, and documentation. However, their adoption by robotics researchers and model developers has been limited. This study investigates the logging and record-keeping practices of robotics researchers, drawing parallels to the pre-interactive notebook era of data science. Through interviews with robotics researchers, we identified the reliance on diverse and often incompatible tools for managing experimental data, leading to challenges in reproducibility and data traceability. Our findings reveal that robotics researchers can benefit from a specialized version of interactive notebooks that supports comprehensive data entry, continuous context capture, and agile data staging. We propose extending interactive notebooks to better serve the needs of robotics researchers by integrating features akin to traditional lab notebooks. This adaptation aims to enhance the organization, analysis, and reproducibility of experimental data in robotics, fostering a more streamlined and efficient research workflow.
[ { "created": "Mon, 13 May 2024 21:33:58 GMT", "version": "v1" } ]
2024-05-15
[ [ "Garcia", "Rolando", "" ] ]
2310.18615
Xiangchen Song
Xiangchen Song, Weiran Yao, Yewen Fan, Xinshuai Dong, Guangyi Chen, Juan Carlos Niebles, Eric Xing, Kun Zhang
Temporally Disentangled Representation Learning under Unknown Nonstationarity
NeurIPS 2023. arXiv admin note: text overlap with arXiv:2210.13647
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
In unsupervised causal representation learning for sequential data with time-delayed latent causal influences, strong identifiability results for the disentanglement of causally-related latent variables have been established in stationary settings by leveraging temporal structure. However, in the nonstationary setting, existing work has only partially addressed the problem, by either utilizing observed auxiliary variables (e.g., class labels and/or domain indexes) as side information or assuming simplified latent causal dynamics. Both constrain the method to a limited range of scenarios. In this study, we further explore the Markov assumption under a time-delayed causally related process in the nonstationary setting and show that under mild conditions, the independent latent components can be recovered from their nonlinear mixture up to a permutation and a component-wise transformation, without the observation of auxiliary variables. We then introduce NCTRL, a principled estimation framework, to reconstruct time-delayed latent causal variables and identify their relations from measured sequential data only. Empirical evaluations demonstrated the reliable identification of time-delayed latent causal influences, with our methodology substantially outperforming existing baselines that fail to exploit the nonstationarity adequately and, consequently, cannot distinguish distribution shifts.
[ { "created": "Sat, 28 Oct 2023 06:46:03 GMT", "version": "v1" }, { "created": "Thu, 1 Aug 2024 09:43:57 GMT", "version": "v2" } ]
2024-08-02
[ [ "Song", "Xiangchen", "" ], [ "Yao", "Weiran", "" ], [ "Fan", "Yewen", "" ], [ "Dong", "Xinshuai", "" ], [ "Chen", "Guangyi", "" ], [ "Niebles", "Juan Carlos", "" ], [ "Xing", "Eric", "" ], [ "Zhang", "Kun", "" ] ]
1312.7793
Zhao Tan
Zhao Tan, Yonina C. Eldar, Arye Nehorai
Direction of Arrival Estimation Using Co-prime Arrays: A Super Resolution Viewpoint
Submitted on December 17th, 2013
null
10.1109/TSP.2014.2354316
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the problem of direction of arrival (DOA) estimation using a newly proposed structure of non-uniform linear arrays, referred to as co-prime arrays. By exploiting the second order statistical information of the received signals, co-prime arrays exhibit O(MN) degrees of freedom with only M + N sensors. A sparsity based recovery method is proposed to fully utilize these degrees of freedom. Unlike traditional sparse recovery methods, the proposed method is based on the developing theory of super resolution, which considers a continuous range of possible sources instead of discretizing this range into a discrete grid. With this approach, off-grid effects inherent in traditional sparse recovery can be neglected, thus improving the accuracy of DOA estimation. In this paper we show that in the noiseless case one can theoretically detect up to MN/2 sources with only 2M + N sensors. The noise statistics of co-prime arrays are also analyzed to demonstrate the robustness of the proposed optimization scheme. A source number detection method is presented based on the spectrum reconstructed from the sparse method. By extensive numerical examples, we show the superiority of the proposed method in terms of DOA estimation accuracy, degrees of freedom, and resolution ability compared with previous methods, such as MUSIC with spatial smoothing and the discrete sparse recovery method.
[ { "created": "Mon, 30 Dec 2013 17:42:58 GMT", "version": "v1" } ]
2015-06-18
[ [ "Tan", "Zhao", "" ], [ "Eldar", "Yonina C.", "" ], [ "Nehorai", "Arye", "" ] ]
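The degrees-of-freedom claim above comes from the difference coarray of the physical sensor positions. A small sketch, assuming one common co-prime geometry (N sensors at spacing M plus 2M sensors at spacing N; this may not be the exact variant used in the paper):

```python
def coprime_positions(M, N):
    """Sensor positions (in half-wavelength units) of a co-prime pair
    of uniform subarrays: N sensors at spacing M and 2M sensors at
    spacing N (an illustrative geometry, not necessarily the paper's).
    """
    sub1 = {M * n for n in range(N)}
    sub2 = {N * m for m in range(2 * M)}
    return sorted(sub1 | sub2)

def difference_coarray(positions):
    """All pairwise lags p - q; the coarray's size reflects the O(MN)
    virtual degrees of freedom from only ~2M + N physical sensors."""
    return sorted({p - q for p in positions for q in positions})
```

Second-order statistics of the received signals effectively observe the array at these virtual lags, which is why far more sources than sensors can be resolved.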
2305.06732
Eleonore Bach
Eleonore Bach, Friedrich Eisenbrand, Rom Pinchasi
Integer points in the degree-sequence polytope
14 pages
null
null
null
cs.DM
http://creativecommons.org/licenses/by-sa/4.0/
An integer vector $b \in \mathbb{Z}^d$ is a degree sequence if there exists a hypergraph with vertices $\{1,\dots,d\}$ such that each $b_i$ is the number of hyperedges containing $i$. The degree-sequence polytope $\mathscr{Z}^d$ is the convex hull of all degree sequences. We show that all but a $2^{-\Omega(d)}$ fraction of integer vectors in the degree sequence polytope are degree sequences. Furthermore, the corresponding hypergraph of these points can be computed in time $2^{O(d)}$ via linear programming techniques. This is substantially faster than the $2^{O(d^2)}$ running time of the current-best algorithm for the degree-sequence problem. We also show that for $d\geq 98$, the degree-sequence polytope $\mathscr{Z}^d$ contains integer points that are not degree sequences. Furthermore, we prove that the linear optimization problem over $\mathscr{Z}^d$ is $\mathrm{NP}$-hard. This complements a recent result of Deza et al. (2018) who provide an algorithm that is polynomial in $d$ and the number of hyperedges.
[ { "created": "Thu, 11 May 2023 11:20:40 GMT", "version": "v1" } ]
2023-05-12
[ [ "Bach", "Eleonore", "" ], [ "Eisenbrand", "Friedrich", "" ], [ "Pinchasi", "Rom", "" ] ]
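To make the definitions concrete (a brute-force sketch only; it is exponential in the number of hyperedges, whereas the paper's LP-based algorithm runs in 2^{O(d)} time):

```python
from itertools import combinations

def degree_sequence(d, hyperedges):
    """Degree sequence of a hypergraph on vertices {1, ..., d}:
    entry i counts the hyperedges containing vertex i+1."""
    return [sum(1 for e in hyperedges if v in e) for v in range(1, d + 1)]

def is_degree_sequence(b):
    """Brute-force check: does some set of distinct non-empty
    hyperedges on {1, ..., len(b)} realise the vector b?"""
    d = len(b)
    all_edges = [frozenset(e)
                 for r in range(1, d + 1)
                 for e in combinations(range(1, d + 1), r)]
    for k in range(len(all_edges) + 1):
        for H in combinations(all_edges, k):
            if degree_sequence(d, H) == list(b):
                return True
    return False
```

For example, (2, 0, 0) is not a degree sequence for d = 3: only the single hyperedge {1} contains vertex 1 without touching vertices 2 and 3, and hyperedges must be distinct.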
1602.07720
Renato Paes Leme
Renato Paes Leme, Martin Pal, Sergei Vassilvitskii
A Field Guide to Personalized Reserve Prices
Accepted to WWW'16
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the standard second price auction: in the lazy version we first determine the winner, and then apply reserve prices; in the eager version we first discard the bidders not meeting their reserves, and then determine the winner among the rest. We show that the two versions have dramatically different properties: lazy reserves are easy to optimize and to A/B test in production, whereas eager reserves always lead to higher welfare, but their optimization is NP-complete, and naive A/B testing will lead to incorrect conclusions. Despite their different characteristics, we show that the overall revenue for the two scenarios is always within a factor of 2 of each other, even in the presence of correlated bids. Moreover, we prove that the eager auction dominates the lazy auction on revenue whenever the bidders are independent or symmetric. We complement our theoretical results with simulations on real world data that show that even suboptimally set eager reserve prices are preferred from a revenue standpoint.
[ { "created": "Wed, 24 Feb 2016 21:39:17 GMT", "version": "v1" } ]
2016-02-26
[ [ "Leme", "Renato Paes", "" ], [ "Pal", "Martin", "" ], [ "Vassilvitskii", "Sergei", "" ] ]
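The lazy/eager distinction above is easy to see in code. A minimal sketch of the two mechanisms (function names and the payment convention max(own reserve, second-highest surviving bid) are illustrative assumptions):

```python
def lazy_revenue(bids, reserves):
    """Lazy second-price auction: pick the highest bidder first, then
    apply only that bidder's personalized reserve."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    w = order[0]
    if bids[w] < reserves[w]:
        return 0.0                        # winner fails own reserve
    second = bids[order[1]] if len(bids) > 1 else 0.0
    return max(reserves[w], second)       # pays max(reserve, 2nd bid)

def eager_revenue(bids, reserves):
    """Eager second-price auction: discard bidders below their
    reserves, then run second-price among the survivors."""
    alive = [i for i in range(len(bids)) if bids[i] >= reserves[i]]
    if not alive:
        return 0.0
    alive.sort(key=lambda i: -bids[i])
    w = alive[0]
    second = bids[alive[1]] if len(alive) > 1 else 0.0
    return max(reserves[w], second)
```

With bids (10, 8) and reserves (0, 9), lazy keeps the losing bid as a price support and collects 8, while eager discards bidder 2 and collects 0; with reserves (12, 5) the ranking flips, which is why neither version dominates pointwise even though their revenues stay within a factor of 2.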
1906.09029
Vincenzo Matta
Augusto Santos, Vincenzo Matta, and Ali H. Sayed
Topology Inference over Networks with Nonlinear Coupling
Submitted for publication
null
null
null
cs.MA cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work examines the problem of topology inference over discrete-time nonlinear stochastic networked dynamical systems. The goal is to recover the underlying digraph linking the network agents, from observations of their state-evolution. The dynamical law governing the state-evolution of the interacting agents might be nonlinear, i.e., the next state of an agent can depend nonlinearly on its current state and on the states of its immediate neighbors. We establish sufficient conditions that allow consistent graph learning over a special class of networked systems, namely, logistic-type dynamical systems.
[ { "created": "Fri, 21 Jun 2019 09:44:29 GMT", "version": "v1" } ]
2019-06-24
[ [ "Santos", "Augusto", "" ], [ "Matta", "Vincenzo", "" ], [ "Sayed", "Ali H.", "" ] ]
This work examines the problem of topology inference over discrete-time nonlinear stochastic networked dynamical systems. The goal is to recover the underlying digraph linking the network agents, from observations of their state-evolution. The dynamical law governing the state-evolution of the interacting agents might be nonlinear, i.e., the next state of an agent can depend nonlinearly on its current state and on the states of its immediate neighbors. We establish sufficient conditions that allow consistent graph learning over a special class of networked systems, namely, logistic-type dynamical systems.
2201.06539
Keuntaek Lee
Keuntaek Lee, David Isele, Evangelos A. Theodorou, Sangjae Bae
Spatiotemporal Costmap Inference for MPC via Deep Inverse Reinforcement Learning
IEEE Robotics and Automation Letters (RA-L)
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
It can be difficult to autonomously produce driver behavior so that it appears natural to other traffic participants. Through Inverse Reinforcement Learning (IRL), we can automate this process by learning the underlying reward function from human demonstrations. We propose a new IRL algorithm that learns a goal-conditioned spatiotemporal reward function. The resulting costmap is used by Model Predictive Controllers (MPCs) to perform a task without any hand-designing or hand-tuning of the cost function. We evaluate our proposed Goal-conditioned SpatioTemporal Zeroing Maximum Entropy Deep IRL (GSTZ)-MEDIRL framework together with MPC in the CARLA simulator for autonomous driving, lane keeping, and lane changing tasks in a challenging dense traffic highway scenario. Our proposed methods show higher success rates compared to other baseline methods including behavior cloning, state-of-the-art RL policies, and MPC with a learning-based behavior prediction model.
[ { "created": "Mon, 17 Jan 2022 17:36:29 GMT", "version": "v1" } ]
2022-01-19
[ [ "Lee", "Keuntaek", "" ], [ "Isele", "David", "" ], [ "Theodorou", "Evangelos A.", "" ], [ "Bae", "Sangjae", "" ] ]
It can be difficult to autonomously produce driver behavior so that it appears natural to other traffic participants. Through Inverse Reinforcement Learning (IRL), we can automate this process by learning the underlying reward function from human demonstrations. We propose a new IRL algorithm that learns a goal-conditioned spatiotemporal reward function. The resulting costmap is used by Model Predictive Controllers (MPCs) to perform a task without any hand-designing or hand-tuning of the cost function. We evaluate our proposed Goal-conditioned SpatioTemporal Zeroing Maximum Entropy Deep IRL (GSTZ)-MEDIRL framework together with MPC in the CARLA simulator for autonomous driving, lane keeping, and lane changing tasks in a challenging dense traffic highway scenario. Our proposed methods show higher success rates compared to other baseline methods including behavior cloning, state-of-the-art RL policies, and MPC with a learning-based behavior prediction model.
1903.06473
Zerong Zheng
Zerong Zheng, Tao Yu, Yixuan Wei, Qionghai Dai, Yebin Liu
DeepHuman: 3D Human Reconstruction from a Single Image
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image. To reduce the ambiguities associated with the surface geometry reconstruction, even for the reconstruction of invisible areas, we propose and leverage a dense semantic representation generated from the SMPL model as an additional input. One key feature of our network is that it fuses different scales of image features into the 3D space through volumetric feature transformation, which helps to recover accurate surface geometry. The visible surface details are further refined through a normal refinement network, which can be concatenated with the volume generation network using our proposed volumetric normal projection layer. We also contribute THuman, a 3D real-world human model dataset containing about 7000 models. The network is trained using training data generated from the dataset. Overall, due to the specific design of our network and the diversity in our dataset, our method enables 3D human model estimation given only a single image and outperforms state-of-the-art approaches.
[ { "created": "Fri, 15 Mar 2019 11:38:15 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2019 05:44:21 GMT", "version": "v2" } ]
2019-03-29
[ [ "Zheng", "Zerong", "" ], [ "Yu", "Tao", "" ], [ "Wei", "Yixuan", "" ], [ "Dai", "Qionghai", "" ], [ "Liu", "Yebin", "" ] ]
We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image. To reduce the ambiguities associated with the surface geometry reconstruction, even for the reconstruction of invisible areas, we propose and leverage a dense semantic representation generated from the SMPL model as an additional input. One key feature of our network is that it fuses different scales of image features into the 3D space through volumetric feature transformation, which helps to recover accurate surface geometry. The visible surface details are further refined through a normal refinement network, which can be concatenated with the volume generation network using our proposed volumetric normal projection layer. We also contribute THuman, a 3D real-world human model dataset containing about 7000 models. The network is trained using training data generated from the dataset. Overall, due to the specific design of our network and the diversity in our dataset, our method enables 3D human model estimation given only a single image and outperforms state-of-the-art approaches.
2404.18721
Shreya Santra
Shreya Santra, Kentaro Uno, Gen Kudo, and Kazuya Yoshida
Risk-Aware Coverage Path Planning for Lunar Micro-Rovers Leveraging Global and Local Environmental Data
6 pages, 11 figures. Manuscript accepted at the IEEE International Conference on Space Robotics 2024
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents a novel 3D myopic coverage path planning algorithm for lunar micro-rovers that can explore unknown environments with limited sensing and computational capabilities. The algorithm expands upon traditional non-graph path planning methods to accommodate the complexities of lunar terrain, incorporating global data and local topographic features into motion cost calculations. The algorithm also integrates localization and mapping to update the rover's pose and map the environment. The resulting environment map's accuracy is evaluated and tested in a 3D simulator. Outdoor field tests were conducted to validate the algorithm's efficacy in sim-to-real scenarios. The results showed that the algorithm could achieve high coverage with low energy consumption and computational cost, while incrementally exploring the terrain and avoiding obstacles. This study contributes to the advancement of path planning methodologies for space exploration, paving the way for efficient, scalable and autonomous exploration of lunar environments by small rovers.
[ { "created": "Mon, 29 Apr 2024 14:10:13 GMT", "version": "v1" } ]
2024-04-30
[ [ "Santra", "Shreya", "" ], [ "Uno", "Kentaro", "" ], [ "Kudo", "Gen", "" ], [ "Yoshida", "Kazuya", "" ] ]
This paper presents a novel 3D myopic coverage path planning algorithm for lunar micro-rovers that can explore unknown environments with limited sensing and computational capabilities. The algorithm expands upon traditional non-graph path planning methods to accommodate the complexities of lunar terrain, incorporating global data and local topographic features into motion cost calculations. The algorithm also integrates localization and mapping to update the rover's pose and map the environment. The resulting environment map's accuracy is evaluated and tested in a 3D simulator. Outdoor field tests were conducted to validate the algorithm's efficacy in sim-to-real scenarios. The results showed that the algorithm could achieve high coverage with low energy consumption and computational cost, while incrementally exploring the terrain and avoiding obstacles. This study contributes to the advancement of path planning methodologies for space exploration, paving the way for efficient, scalable and autonomous exploration of lunar environments by small rovers.
1610.03738
Daniel Wesierski
Daniel Wesierski
Exploring the Entire Regularization Path for the Asymmetric Cost Linear Support Vector Machine
8 pages, 2 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an algorithm for exploring the entire regularization path of asymmetric-cost linear support vector machines. Empirical evidence suggests that the predictive power of support vector machines depends on the regularization parameters of the training algorithms. Algorithms exploring the entire regularization path have been proposed for single-cost support vector machines, thereby providing complete knowledge of the behavior of the trained model over the hyperparameter space. Considering the problem in a two-dimensional hyperparameter space, though, enables our algorithm to maintain greater flexibility in dealing with special cases and sheds light on problems encountered by algorithms building the paths in one-dimensional spaces. We demonstrate two-dimensional regularization paths for linear support vector machines that we train on synthetic and real data.
[ { "created": "Wed, 12 Oct 2016 14:57:10 GMT", "version": "v1" } ]
2016-10-13
[ [ "Wesierski", "Daniel", "" ] ]
We propose an algorithm for exploring the entire regularization path of asymmetric-cost linear support vector machines. Empirical evidence suggests that the predictive power of support vector machines depends on the regularization parameters of the training algorithms. Algorithms exploring the entire regularization path have been proposed for single-cost support vector machines, thereby providing complete knowledge of the behavior of the trained model over the hyperparameter space. Considering the problem in a two-dimensional hyperparameter space, though, enables our algorithm to maintain greater flexibility in dealing with special cases and sheds light on problems encountered by algorithms building the paths in one-dimensional spaces. We demonstrate two-dimensional regularization paths for linear support vector machines that we train on synthetic and real data.
1504.05451
Jing Yang
Qingshan Liu, Jing Yang, Kaihua Zhang, Yi Wu
Adaptive Compressive Tracking via Online Vector Boosting Feature Selection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, the compressive tracking (CT) method has attracted much attention due to its high efficiency, but it cannot deal well with large-scale target appearance variations due to its data-independent random projection matrix, which results in less discriminative features. To address this issue, in this paper we propose an adaptive CT approach, which selects the most discriminative features to design an effective appearance model. Our method significantly improves CT in three aspects: Firstly, the most discriminative features are selected via an online vector boosting method. Secondly, the object representation is updated in an effective online manner, which preserves the stable features while filtering out the noisy ones. Finally, a simple and effective trajectory rectification approach is adopted that can make the estimated location more accurate. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the superior performance of our algorithm compared with state-of-the-art tracking algorithms.
[ { "created": "Tue, 21 Apr 2015 14:55:07 GMT", "version": "v1" }, { "created": "Wed, 22 Apr 2015 01:27:08 GMT", "version": "v2" } ]
2015-04-23
[ [ "Liu", "Qingshan", "" ], [ "Yang", "Jing", "" ], [ "Zhang", "Kaihua", "" ], [ "Wu", "Yi", "" ] ]
Recently, the compressive tracking (CT) method has attracted much attention due to its high efficiency, but it cannot deal well with large-scale target appearance variations due to its data-independent random projection matrix, which results in less discriminative features. To address this issue, in this paper we propose an adaptive CT approach, which selects the most discriminative features to design an effective appearance model. Our method significantly improves CT in three aspects: Firstly, the most discriminative features are selected via an online vector boosting method. Secondly, the object representation is updated in an effective online manner, which preserves the stable features while filtering out the noisy ones. Finally, a simple and effective trajectory rectification approach is adopted that can make the estimated location more accurate. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the superior performance of our algorithm compared with state-of-the-art tracking algorithms.
2111.04310
Byeongjun Park
Byeongjun Park, Taekyung Kim, Hyojun Go, Changick Kim
Residual-Guided Learning Representation for Self-Supervised Monocular Depth Estimation
5 pages, 2 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Photometric consistency loss is one of the representative objective functions commonly used for self-supervised monocular depth estimation. However, this loss often causes unstable depth predictions in textureless or occluded regions due to incorrect guidance. Recent self-supervised learning approaches tackle this issue by utilizing feature representations explicitly learned from auto-encoders, expecting better discriminability than the input image. Despite the use of auto-encoded features, we observe that the depth estimation network does not embed features as discriminative as the auto-encoded features. In this paper, we propose a residual guidance loss that enables the depth estimation network to embed discriminative features by transferring the discriminability of auto-encoded features. We conducted experiments on the KITTI benchmark and verified our method's superiority over, and orthogonality to, other state-of-the-art methods.
[ { "created": "Mon, 8 Nov 2021 07:44:31 GMT", "version": "v1" } ]
2021-11-09
[ [ "Park", "Byeongjun", "" ], [ "Kim", "Taekyung", "" ], [ "Go", "Hyojun", "" ], [ "Kim", "Changick", "" ] ]
Photometric consistency loss is one of the representative objective functions commonly used for self-supervised monocular depth estimation. However, this loss often causes unstable depth predictions in textureless or occluded regions due to incorrect guidance. Recent self-supervised learning approaches tackle this issue by utilizing feature representations explicitly learned from auto-encoders, expecting better discriminability than the input image. Despite the use of auto-encoded features, we observe that the depth estimation network does not embed features as discriminative as the auto-encoded features. In this paper, we propose a residual guidance loss that enables the depth estimation network to embed discriminative features by transferring the discriminability of auto-encoded features. We conducted experiments on the KITTI benchmark and verified our method's superiority over, and orthogonality to, other state-of-the-art methods.
1912.12467
Alcardo Alex Barakabitze
Alcardo Alex Barakabitze, Nabajeet Barman, Arslan Ahmad, Saman Zadtootaghaj, Lingfen Sun, Maria G. Martini, Luigi Atzori
QoE Management of Multimedia Streaming Services in Future Networks: A Tutorial and Survey
42 pages, 21 figures, 10 tables
null
10.1109/COMST.2019.2958784
null
cs.NI cs.MM eess.SP
http://creativecommons.org/licenses/by/4.0/
We provide in this paper a tutorial and a comprehensive survey of QoE management solutions in current and future networks. We start with a high-level description of QoE management for multimedia services, which integrates QoE modelling, monitoring, and optimization. This is followed by a discussion of HTTP Adaptive Streaming (HAS) solutions as the dominant technique for streaming videos over the best-effort Internet. We then summarize the key elements in SDN/NFV along with an overview of ongoing research projects, standardization activities and use cases related to SDN, NFV, and other emerging applications. We provide a survey of the state-of-the-art of QoE management techniques categorized into three different groups: a) QoE-aware/driven strategies using SDN and/or NFV; b) QoE-aware/driven approaches for adaptive streaming over emerging architectures such as multi-access edge computing, cloud/fog computing, and information-centric networking; and c) extended QoE management approaches in new domains such as immersive augmented and virtual reality, mulsemedia and video gaming applications. Based on the review, we present a list of identified future QoE management challenges regarding emerging multimedia applications, network management and orchestration, network slicing and collaborative service management in softwarized networks. Finally, we provide a discussion on future research directions with a focus on emerging research areas in QoE management, such as QoE-oriented business models, QoE-based big data strategies, and scalability issues in QoE optimization.
[ { "created": "Sat, 28 Dec 2019 14:50:48 GMT", "version": "v1" } ]
2020-01-01
[ [ "Barakabitze", "Alcardo Alex", "" ], [ "Barman", "Nabajeet", "" ], [ "Ahmad", "Arslan", "" ], [ "Zadtootaghaj", "Saman", "" ], [ "Sun", "Lingfen", "" ], [ "Martini", "Maria G.", "" ], [ "Atzori", "Luigi", "" ] ]
We provide in this paper a tutorial and a comprehensive survey of QoE management solutions in current and future networks. We start with a high-level description of QoE management for multimedia services, which integrates QoE modelling, monitoring, and optimization. This is followed by a discussion of HTTP Adaptive Streaming (HAS) solutions as the dominant technique for streaming videos over the best-effort Internet. We then summarize the key elements in SDN/NFV along with an overview of ongoing research projects, standardization activities and use cases related to SDN, NFV, and other emerging applications. We provide a survey of the state-of-the-art of QoE management techniques categorized into three different groups: a) QoE-aware/driven strategies using SDN and/or NFV; b) QoE-aware/driven approaches for adaptive streaming over emerging architectures such as multi-access edge computing, cloud/fog computing, and information-centric networking; and c) extended QoE management approaches in new domains such as immersive augmented and virtual reality, mulsemedia and video gaming applications. Based on the review, we present a list of identified future QoE management challenges regarding emerging multimedia applications, network management and orchestration, network slicing and collaborative service management in softwarized networks. Finally, we provide a discussion on future research directions with a focus on emerging research areas in QoE management, such as QoE-oriented business models, QoE-based big data strategies, and scalability issues in QoE optimization.
cs/0702085
Vincenzo Nicosia
V. Carchiolo, M. Malgeri, G. Mangioni and V. Nicosia
Social Behaviours Applied to P2P Systems: An efficient Algorithm for Resource Organisation
5 Pages; 8 Figures; Presented at COPS 2006 -- WETICE -- Manchester (UK)
15th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, 2006. WETICE '06. June 2006 Page(s):65 - 72
null
null
cs.DC cs.IR
null
P2P systems are a great solution to the problem of distributing resources. The main issue of P2P networks is that searching and retrieving resources shared by peers is usually expensive and does not take into account similarities among peers. In this paper we present preliminary simulations of PROSA, a novel algorithm for P2P network structuring, inspired by social behaviours. Peers in PROSA self-organise into social groups of similar peers, called "semantic groups", depending on the resources they are sharing. Such a network smoothly evolves into a small-world graph, where queries for resources are efficiently and effectively routed.
[ { "created": "Wed, 14 Feb 2007 11:53:14 GMT", "version": "v1" } ]
2007-05-23
[ [ "Carchiolo", "V.", "" ], [ "Malgeri", "M.", "" ], [ "Mangioni", "G.", "" ], [ "Nicosia", "V.", "" ] ]
P2P systems are a great solution to the problem of distributing resources. The main issue of P2P networks is that searching and retrieving resources shared by peers is usually expensive and does not take into account similarities among peers. In this paper we present preliminary simulations of PROSA, a novel algorithm for P2P network structuring, inspired by social behaviours. Peers in PROSA self-organise into social groups of similar peers, called "semantic groups", depending on the resources they are sharing. Such a network smoothly evolves into a small-world graph, where queries for resources are efficiently and effectively routed.
0801.0677
Jad Saklawi
Paul C. Attie
Finite-state concurrent programs can be expressed pairwise
14 pages
null
null
null
cs.LO
null
We present a \emph{pairwise normal form} for finite-state shared memory concurrent programs: all variables are shared between exactly two processes, and the guards on transitions are conjunctions of conditions over this pairwise shared state. This representation has been used to efficiently (in polynomial time) synthesize and model-check correctness properties of concurrent programs. Our main result is that any finite state concurrent program can be transformed into pairwise normal form. Specifically, if $Q$ is an arbitrary finite-state shared memory concurrent program, then there exists a finite-state shared memory concurrent program $P$ expressed in pairwise normal form such that $P$ is strongly bisimilar to $Q$. Our result is constructive: we give an algorithm for producing $P$, given $Q$.
[ { "created": "Fri, 4 Jan 2008 13:14:31 GMT", "version": "v1" } ]
2008-01-07
[ [ "Attie", "Paul C.", "" ] ]
We present a \emph{pairwise normal form} for finite-state shared memory concurrent programs: all variables are shared between exactly two processes, and the guards on transitions are conjunctions of conditions over this pairwise shared state. This representation has been used to efficiently (in polynomial time) synthesize and model-check correctness properties of concurrent programs. Our main result is that any finite state concurrent program can be transformed into pairwise normal form. Specifically, if $Q$ is an arbitrary finite-state shared memory concurrent program, then there exists a finite-state shared memory concurrent program $P$ expressed in pairwise normal form such that $P$ is strongly bisimilar to $Q$. Our result is constructive: we give an algorithm for producing $P$, given $Q$.
2212.04692
Toshihiro Ota
Toshihiro Ota, Ryo Karakida
Attention in a family of Boltzmann machines emerging from modern Hopfield networks
15 pages, 3 figures. v2: added figures and various corrections/improvements especially in Introduction and Section 3. Published version
null
null
RIKEN-iTHEMS-Report-22
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hopfield networks and Boltzmann machines (BMs) are fundamental energy-based neural network models. Recent studies on modern Hopfield networks have broadened the class of energy functions and led to a unified perspective on general Hopfield networks including an attention module. In this letter, we consider the BM counterparts of modern Hopfield networks using the associated energy functions, and study their salient properties from a trainability perspective. In particular, the energy function corresponding to the attention module naturally introduces a novel BM, which we refer to as the attentional BM (AttnBM). We verify that AttnBM has a tractable likelihood function and gradient for certain special cases and is easy to train. Moreover, we reveal the hidden connections between AttnBM and some single-layer models, namely the Gaussian--Bernoulli restricted BM and the denoising autoencoder with softmax units coming from denoising score matching. We also investigate BMs introduced by other energy functions and show that the energy function of dense associative memory models gives BMs belonging to Exponential Family Harmoniums.
[ { "created": "Fri, 9 Dec 2022 06:52:36 GMT", "version": "v1" }, { "created": "Wed, 29 Mar 2023 02:36:58 GMT", "version": "v2" } ]
2023-03-30
[ [ "Ota", "Toshihiro", "" ], [ "Karakida", "Ryo", "" ] ]
Hopfield networks and Boltzmann machines (BMs) are fundamental energy-based neural network models. Recent studies on modern Hopfield networks have broadened the class of energy functions and led to a unified perspective on general Hopfield networks including an attention module. In this letter, we consider the BM counterparts of modern Hopfield networks using the associated energy functions, and study their salient properties from a trainability perspective. In particular, the energy function corresponding to the attention module naturally introduces a novel BM, which we refer to as the attentional BM (AttnBM). We verify that AttnBM has a tractable likelihood function and gradient for certain special cases and is easy to train. Moreover, we reveal the hidden connections between AttnBM and some single-layer models, namely the Gaussian--Bernoulli restricted BM and the denoising autoencoder with softmax units coming from denoising score matching. We also investigate BMs introduced by other energy functions and show that the energy function of dense associative memory models gives BMs belonging to Exponential Family Harmoniums.
2404.15886
Eman Alqahtani
Eman Alqahtani and Mustafa A. Mustafa
Privacy-Preserving Billing for Local Energy Markets (Long Version)
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a privacy-preserving billing protocol for local energy markets (PBP-LEMs) that takes into account market participants' energy volume deviations from their bids. PBP-LEMs enables a group of market entities to jointly compute participants' bills in a decentralized and privacy-preserving manner without sacrificing correctness. It also mitigates risks to individuals' privacy arising from any potential internal collusion. We first propose a novel, efficient, and privacy-preserving individual billing scheme, achieving information-theoretic security, which serves as a building block. PBP-LEMs utilizes this scheme, along with other techniques such as multiparty computation, Pedersen commitments and inner product functional encryption, to ensure data confidentiality and accuracy. Additionally, we present three approaches, resulting in different levels of privacy and performance. We prove that the protocol meets its security and privacy requirements and is feasible for deployment in real LEMs. Our analysis also shows variations in overall performance and identifies areas where overhead is concentrated based on the applied approach.
[ { "created": "Wed, 24 Apr 2024 14:12:56 GMT", "version": "v1" } ]
2024-04-25
[ [ "Alqahtani", "Eman", "" ], [ "Mustafa", "Mustafa A.", "" ] ]
We propose a privacy-preserving billing protocol for local energy markets (PBP-LEMs) that takes into account market participants' energy volume deviations from their bids. PBP-LEMs enables a group of market entities to jointly compute participants' bills in a decentralized and privacy-preserving manner without sacrificing correctness. It also mitigates risks to individuals' privacy arising from any potential internal collusion. We first propose a novel, efficient, and privacy-preserving individual billing scheme, achieving information-theoretic security, which serves as a building block. PBP-LEMs utilizes this scheme, along with other techniques such as multiparty computation, Pedersen commitments and inner product functional encryption, to ensure data confidentiality and accuracy. Additionally, we present three approaches, resulting in different levels of privacy and performance. We prove that the protocol meets its security and privacy requirements and is feasible for deployment in real LEMs. Our analysis also shows variations in overall performance and identifies areas where overhead is concentrated based on the applied approach.
1302.6677
Ashish Sabharwal
Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, Bart Selman
Taming the Curse of Dimensionality: Discrete Integration by Hashing and Optimization
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integration is affected by the curse of dimensionality and quickly becomes intractable as the dimensionality of the problem grows. We propose a randomized algorithm that, with high probability, gives a constant-factor approximation of a general discrete integral defined over an exponentially large set. This algorithm relies on solving only a small number of instances of a discrete combinatorial optimization problem subject to randomly generated parity constraints used as a hash function. As an application, we demonstrate that with a small number of MAP queries we can efficiently approximate the partition function of discrete graphical models, which can in turn be used, for instance, for marginal computation or model selection.
[ { "created": "Wed, 27 Feb 2013 06:45:28 GMT", "version": "v1" } ]
2013-02-28
[ [ "Ermon", "Stefano", "" ], [ "Gomes", "Carla P.", "" ], [ "Sabharwal", "Ashish", "" ], [ "Selman", "Bart", "" ] ]
Integration is affected by the curse of dimensionality and quickly becomes intractable as the dimensionality of the problem grows. We propose a randomized algorithm that, with high probability, gives a constant-factor approximation of a general discrete integral defined over an exponentially large set. This algorithm relies on solving only a small number of instances of a discrete combinatorial optimization problem subject to randomly generated parity constraints used as a hash function. As an application, we demonstrate that with a small number of MAP queries we can efficiently approximate the partition function of discrete graphical models, which can in turn be used, for instance, for marginal computation or model selection.
2408.06707
JunYong Choi
JunYong Choi, SeokYeong Lee, Haesol Park, Seung-Won Jung, Ig-Jae Kim, Junghyun Cho
MAIR++: Improving Multi-view Attention Inverse Rendering with Implicit Lighting Representation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a scene-level inverse rendering framework that uses multi-view images to decompose the scene into geometry, SVBRDF, and 3D spatially-varying lighting. While multi-view images have been widely used for object-level inverse rendering, scene-level inverse rendering has primarily been studied using single-view images due to the lack of a dataset containing high dynamic range multi-view images with ground-truth geometry, material, and spatially-varying lighting. To improve the quality of scene-level inverse rendering, a novel framework called Multi-view Attention Inverse Rendering (MAIR) was recently introduced. MAIR performs scene-level multi-view inverse rendering by expanding the OpenRooms dataset, designing efficient pipelines to handle multi-view images, and splitting spatially-varying lighting. Although MAIR showed impressive results, its lighting representation is fixed to spherical Gaussians, which limits its ability to render images realistically. Consequently, MAIR cannot be directly used in applications such as material editing. Moreover, its multi-view aggregation networks have difficulties extracting rich features because they only focus on the mean and variance between multi-view features. In this paper, we propose its extended version, called MAIR++. MAIR++ addresses the aforementioned limitations by introducing an implicit lighting representation that accurately captures the lighting conditions of an image while facilitating realistic rendering. Furthermore, we design a directional attention-based multi-view aggregation network to infer more intricate relationships between views. Experimental results show that MAIR++ not only achieves better performance than MAIR and single-view-based methods, but also displays robust performance on unseen real-world scenes.
[ { "created": "Tue, 13 Aug 2024 08:04:23 GMT", "version": "v1" } ]
2024-08-14
[ [ "Choi", "JunYong", "" ], [ "Lee", "SeokYeong", "" ], [ "Park", "Haesol", "" ], [ "Jung", "Seung-Won", "" ], [ "Kim", "Ig-Jae", "" ], [ "Cho", "Junghyun", "" ] ]
In this paper, we propose a scene-level inverse rendering framework that uses multi-view images to decompose the scene into geometry, SVBRDF, and 3D spatially-varying lighting. While multi-view images have been widely used for object-level inverse rendering, scene-level inverse rendering has primarily been studied using single-view images due to the lack of a dataset containing high dynamic range multi-view images with ground-truth geometry, material, and spatially-varying lighting. To improve the quality of scene-level inverse rendering, a novel framework called Multi-view Attention Inverse Rendering (MAIR) was recently introduced. MAIR performs scene-level multi-view inverse rendering by expanding the OpenRooms dataset, designing efficient pipelines to handle multi-view images, and splitting spatially-varying lighting. Although MAIR showed impressive results, its lighting representation is fixed to spherical Gaussians, which limits its ability to render images realistically. Consequently, MAIR cannot be directly used in applications such as material editing. Moreover, its multi-view aggregation networks have difficulties extracting rich features because they only focus on the mean and variance between multi-view features. In this paper, we propose its extended version, called MAIR++. MAIR++ addresses the aforementioned limitations by introducing an implicit lighting representation that accurately captures the lighting conditions of an image while facilitating realistic rendering. Furthermore, we design a directional attention-based multi-view aggregation network to infer more intricate relationships between views. Experimental results show that MAIR++ not only achieves better performance than MAIR and single-view-based methods, but also displays robust performance on unseen real-world scenes.
1802.01920
Vincent Neiger
Pascal Giorgi and Vincent Neiger
Certification of minimal approximant bases
ISSAC 2018. 8 pages, 3 algorithms, acmart sigconf style
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a given computational problem, a certificate is a piece of data that one (the prover) attaches to the output with the aim of allowing efficient verification (by the verifier) that this output is correct. Here, we consider the minimal approximant basis problem, for which the fastest known algorithms output a polynomial matrix of dimensions $m \times m$ and average degree $D/m$ using $\tilde{O}(m^\omega \frac{D}{m})$ field operations. We propose a certificate which, for typical instances of the problem, is computed by the prover using $O(m^\omega \frac{D}{m})$ additional field operations and allows verification of the approximant basis by a Monte Carlo algorithm with cost bound $O(m^\omega + m D)$. Besides theoretical interest, our motivation also comes from the fact that approximant bases arise in most of the fastest known algorithms for linear algebra over the univariate polynomials; thus, this work may help in designing certificates for other polynomial matrix computations. Furthermore, cryptographic challenges such as breaking records for discrete logarithm computations or for integer factorization rely in particular on computing minimal approximant bases for large instances: certificates can then be used to provide reliable computation on outsourced and error-prone clusters.
[ { "created": "Tue, 6 Feb 2018 12:57:47 GMT", "version": "v1" }, { "created": "Thu, 17 May 2018 20:35:04 GMT", "version": "v2" } ]
2018-05-21
[ [ "Giorgi", "Pascal", "" ], [ "Neiger", "Vincent", "" ] ]
For a given computational problem, a certificate is a piece of data that one (the prover) attaches to the output with the aim of allowing efficient verification (by the verifier) that this output is correct. Here, we consider the minimal approximant basis problem, for which the fastest known algorithms output a polynomial matrix of dimensions $m \times m$ and average degree $D/m$ using $\tilde{O}(m^\omega \frac{D}{m})$ field operations. We propose a certificate which, for typical instances of the problem, is computed by the prover using $O(m^\omega \frac{D}{m})$ additional field operations and allows verification of the approximant basis by a Monte Carlo algorithm with cost bound $O(m^\omega + m D)$. Besides theoretical interest, our motivation also comes from the fact that approximant bases arise in most of the fastest known algorithms for linear algebra over the univariate polynomials; thus, this work may help in designing certificates for other polynomial matrix computations. Furthermore, cryptographic challenges such as breaking records for discrete logarithm computations or for integer factorization rely in particular on computing minimal approximant bases for large instances: certificates can then be used to provide reliable computation on outsourced and error-prone clusters.
1904.07154
Jaehun Kim
Jaehun Kim, Juli\'an Urbano, Cynthia C. S. Liem, Alan Hanjalic
Are Nearby Neighbors Relatives?: Testing Deep Music Embeddings
this work was accepted for publication in the "Frontiers in Applied Mathematics and Statistics (Deep Learning: Status, Applications and Algorithms)"
null
null
null
cs.LG cs.SD eess.AS stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep neural networks have frequently been used to directly learn representations useful for a given task from raw input data. In terms of overall performance metrics, machine learning solutions employing deep representations frequently have been reported to greatly outperform those using hand-crafted feature representations. At the same time, they may pick up on aspects that are predominant in the data, yet not actually meaningful or interpretable. In this paper, we therefore propose a systematic way to test the trustworthiness of deep music representations, considering musical semantics. The underlying assumption is that in case a deep representation is to be trusted, distance consistency between known related points should be maintained both in the input audio space and corresponding latent deep space. We generate known related points through semantically meaningful transformations, both considering imperceptible and graver transformations. Then, we examine within- and between-space distance consistencies, both considering audio space and latent embedded space, the latter either being a result of a conventional feature extractor or a deep encoder. We illustrate how our method, as a complement to task-specific performance, provides interpretable insight into what a network may have captured from training data signals.
[ { "created": "Mon, 15 Apr 2019 16:08:41 GMT", "version": "v1" }, { "created": "Tue, 15 Oct 2019 21:42:36 GMT", "version": "v2" }, { "created": "Thu, 17 Oct 2019 23:34:04 GMT", "version": "v3" } ]
2019-10-21
[ [ "Kim", "Jaehun", "" ], [ "Urbano", "Julián", "" ], [ "Liem", "Cynthia C. S.", "" ], [ "Hanjalic", "Alan", "" ] ]
Deep neural networks have frequently been used to directly learn representations useful for a given task from raw input data. In terms of overall performance metrics, machine learning solutions employing deep representations frequently have been reported to greatly outperform those using hand-crafted feature representations. At the same time, they may pick up on aspects that are predominant in the data, yet not actually meaningful or interpretable. In this paper, we therefore propose a systematic way to test the trustworthiness of deep music representations, considering musical semantics. The underlying assumption is that in case a deep representation is to be trusted, distance consistency between known related points should be maintained both in the input audio space and corresponding latent deep space. We generate known related points through semantically meaningful transformations, both considering imperceptible and graver transformations. Then, we examine within- and between-space distance consistencies, both considering audio space and latent embedded space, the latter either being a result of a conventional feature extractor or a deep encoder. We illustrate how our method, as a complement to task-specific performance, provides interpretable insight into what a network may have captured from training data signals.
2106.02809
Kaihao Zhang
Lirong Zheng, Yanshan Li, Kaihao Zhang, Wenhan Luo
T-Net: Deep Stacked Scale-Iteration Network for Image Dehazing
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hazy images reduce the visibility of the image content, and haze can cause failures in subsequent computer vision tasks. In this paper, we address the problem of image dehazing by proposing a dehazing network named T-Net, which consists of a backbone network based on the U-Net architecture and a dual attention module, and which achieves multi-scale feature fusion by using skip connections with a new fusion strategy. Furthermore, by repeatedly unfolding the plain T-Net, Stack T-Net is proposed to take advantage of the dependence of deep features across stages via a recursive strategy. To reduce network parameters, the intra-stage recursive computation of ResNet is adopted in our Stack T-Net. Each T-Net takes both the stage-wise result and the original hazy image as input and finally outputs the prediction of the clean image. Experimental results on both synthetic and real-world images demonstrate that our plain T-Net and the advanced Stack T-Net perform favorably against state-of-the-art dehazing algorithms, and show that our Stack T-Net further improves the dehazing effect, demonstrating the effectiveness of the recursive strategy.
[ { "created": "Sat, 5 Jun 2021 06:01:05 GMT", "version": "v1" } ]
2021-06-08
[ [ "Zheng", "Lirong", "" ], [ "Li", "Yanshan", "" ], [ "Zhang", "Kaihao", "" ], [ "Luo", "Wenhan", "" ] ]
Hazy images reduce the visibility of the image content, and haze can cause failures in subsequent computer vision tasks. In this paper, we address the problem of image dehazing by proposing a dehazing network named T-Net, which consists of a backbone network based on the U-Net architecture and a dual attention module, and which achieves multi-scale feature fusion by using skip connections with a new fusion strategy. Furthermore, by repeatedly unfolding the plain T-Net, Stack T-Net is proposed to take advantage of the dependence of deep features across stages via a recursive strategy. To reduce network parameters, the intra-stage recursive computation of ResNet is adopted in our Stack T-Net. Each T-Net takes both the stage-wise result and the original hazy image as input and finally outputs the prediction of the clean image. Experimental results on both synthetic and real-world images demonstrate that our plain T-Net and the advanced Stack T-Net perform favorably against state-of-the-art dehazing algorithms, and show that our Stack T-Net further improves the dehazing effect, demonstrating the effectiveness of the recursive strategy.
cs/0603067
Priya Sivakumar
Priya Sivakumar
Implementing the Three-Stage Quantum Cryptography Protocol
4 pages, 1 figure
null
null
null
cs.CR
null
We present simple implementations of Kak's three-stage quantum cryptography protocol. The case where the transformation is applied to more than one qubit at the same time is also considered.
[ { "created": "Thu, 16 Mar 2006 22:20:44 GMT", "version": "v1" } ]
2007-05-23
[ [ "Sivakumar", "Priya", "" ] ]
We present simple implementations of Kak's three-stage quantum cryptography protocol. The case where the transformation is applied to more than one qubit at the same time is also considered.
2209.06452
Luigy Alex Machaca Arcana
Luigy Machaca, F. Oliver Sumari H, Jose Huaman, Esteban Clua, Joris Guerin
TrADe Re-ID -- Live Person Re-Identification using Tracking and Anomaly Detection
6 pages, 4 figures, Accepted on ICMLA 2022
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Person Re-Identification (Re-ID) aims to search for a person of interest (query) in a network of cameras. In the classic Re-ID setting the query is sought in a gallery containing properly cropped images of entire bodies. Recently, the live Re-ID setting was introduced to better represent the practical application context of Re-ID. It consists in searching for the query in short videos containing whole scene frames. The initial live Re-ID baseline used a pedestrian detector to build a large search gallery and a classic Re-ID model to find the query in the gallery. However, the galleries generated were too large and contained low-quality images, which decreased the live Re-ID performance. Here, we present a new live Re-ID approach called TrADe, to generate smaller, high-quality galleries. TrADe first uses a Tracking algorithm to identify sequences of images of the same individual in the gallery. Then, an Anomaly Detection model is used to select a single good representative of each tracklet. TrADe is validated on the live Re-ID version of the PRID-2011 dataset and shows significant improvements over the baseline.
[ { "created": "Wed, 14 Sep 2022 07:00:35 GMT", "version": "v1" } ]
2022-09-15
[ [ "Machaca", "Luigy", "" ], [ "H", "F. Oliver Sumari", "" ], [ "Huaman", "Jose", "" ], [ "Clua", "Esteban", "" ], [ "Guerin", "Joris", "" ] ]
Person Re-Identification (Re-ID) aims to search for a person of interest (query) in a network of cameras. In the classic Re-ID setting the query is sought in a gallery containing properly cropped images of entire bodies. Recently, the live Re-ID setting was introduced to better represent the practical application context of Re-ID. It consists in searching for the query in short videos containing whole scene frames. The initial live Re-ID baseline used a pedestrian detector to build a large search gallery and a classic Re-ID model to find the query in the gallery. However, the galleries generated were too large and contained low-quality images, which decreased the live Re-ID performance. Here, we present a new live Re-ID approach called TrADe, to generate smaller, high-quality galleries. TrADe first uses a Tracking algorithm to identify sequences of images of the same individual in the gallery. Then, an Anomaly Detection model is used to select a single good representative of each tracklet. TrADe is validated on the live Re-ID version of the PRID-2011 dataset and shows significant improvements over the baseline.
2105.00328
Marshall Ho
Marshall Ho, Zhipeng Zhou, Judith He
When to Fold'em: How to answer Unanswerable questions
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present 3 different question-answering models trained on the SQuAD2.0 dataset -- BIDAF, DocumentQA and ALBERT Retro-Reader -- demonstrating the improvement of language models in the past three years. Through our research in fine-tuning pre-trained models for question-answering, we developed a novel approach capable of achieving a 2-percentage-point improvement in SQuAD2.0 F1 in reduced training time. Our method of re-initializing select layers of a parameter-shared language model is simple yet empirically powerful.
[ { "created": "Sat, 1 May 2021 19:08:40 GMT", "version": "v1" } ]
2021-05-04
[ [ "Ho", "Marshall", "" ], [ "Zhou", "Zhipeng", "" ], [ "He", "Judith", "" ] ]
We present 3 different question-answering models trained on the SQuAD2.0 dataset -- BIDAF, DocumentQA and ALBERT Retro-Reader -- demonstrating the improvement of language models in the past three years. Through our research in fine-tuning pre-trained models for question-answering, we developed a novel approach capable of achieving a 2-percentage-point improvement in SQuAD2.0 F1 in reduced training time. Our method of re-initializing select layers of a parameter-shared language model is simple yet empirically powerful.
2302.07396
Marcelo Magnasco
Marcelo O. Magnasco
Convolutional unitary or orthogonal recurrent neural networks
null
null
null
null
cs.LG cond-mat.stat-mech cs.AI q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Recurrent neural networks are extremely powerful yet hard to train. One of their issues is the vanishing gradient problem, whereby propagation of training signals may be exponentially attenuated, freezing training. Use of orthogonal or unitary matrices, whose powers neither explode nor decay, has been proposed to mitigate this issue, but their computational expense has hindered their use. Here we show that in the specific case of convolutional RNNs, we can define a convolutional exponential and that this operation transforms antisymmetric or anti-Hermitian convolution kernels into orthogonal or unitary convolution kernels. We explicitly derive FFT-based algorithms to compute the kernels and their derivatives. The computational complexity of parametrizing this subspace of orthogonal transformations is thus the same as the networks' iteration.
[ { "created": "Tue, 14 Feb 2023 23:36:21 GMT", "version": "v1" } ]
2023-02-16
[ [ "Magnasco", "Marcelo O.", "" ] ]
Recurrent neural networks are extremely powerful yet hard to train. One of their issues is the vanishing gradient problem, whereby propagation of training signals may be exponentially attenuated, freezing training. Use of orthogonal or unitary matrices, whose powers neither explode nor decay, has been proposed to mitigate this issue, but their computational expense has hindered their use. Here we show that in the specific case of convolutional RNNs, we can define a convolutional exponential and that this operation transforms antisymmetric or anti-Hermitian convolution kernels into orthogonal or unitary convolution kernels. We explicitly derive FFT-based algorithms to compute the kernels and their derivatives. The computational complexity of parametrizing this subspace of orthogonal transformations is thus the same as the networks' iteration.
2006.05203
Travis LaCroix
Travis LaCroix and Aydin Mohseni
The Tragedy of the AI Commons
40 Pages, 5 Figures
null
null
null
cs.CY cs.AI cs.GT cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Policy and guideline proposals for ethical artificial-intelligence research have proliferated in recent years. These are supposed to guide the socially-responsible development of AI for the common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines); and, these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma; namely, a situation where no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost for cooperation is low, and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.
[ { "created": "Tue, 9 Jun 2020 12:01:01 GMT", "version": "v1" }, { "created": "Mon, 18 Jan 2021 19:07:13 GMT", "version": "v2" } ]
2021-01-20
[ [ "LaCroix", "Travis", "" ], [ "Mohseni", "Aydin", "" ] ]
Policy and guideline proposals for ethical artificial-intelligence research have proliferated in recent years. These are supposed to guide the socially-responsible development of AI for the common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines); and, these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma; namely, a situation where no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost for cooperation is low, and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.
2204.02181
Hyeonbin Hwang
Hyeonbin Hwang, Soyeon Kim, Wei-Jin Park, Jiho Seo, Kyungtae Ko, Hyeon Yeo
Vision Transformer Equipped with Neural Resizer on Facial Expression Recognition Task
Accepted to IEEE ICASSP 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Under wild conditions, Facial Expression Recognition is often challenged by low-quality data and imbalanced, ambiguous labels. The field has benefited greatly from CNN-based approaches; however, CNN models are structurally limited in attending to distant facial regions. As a remedy, Transformers with a global receptive field have been introduced to vision tasks, but they require adjusting the input spatial size to that of the pretrained models to enjoy their strong inductive bias. We herein raise the question of whether a deterministic interpolation method is sufficient to feed low-resolution data to a Transformer. In this work, we propose a novel training framework, Neural Resizer, to support the Transformer by compensating information and downscaling in a data-driven manner, trained with a loss function balancing noisiness and imbalance. Experiments show that our Neural Resizer with the F-PDLS loss function improves performance with Transformer variants in general and nearly achieves state-of-the-art performance.
[ { "created": "Tue, 5 Apr 2022 13:04:04 GMT", "version": "v1" } ]
2022-04-06
[ [ "Hwang", "Hyeonbin", "" ], [ "Kim", "Soyeon", "" ], [ "Park", "Wei-Jin", "" ], [ "Seo", "Jiho", "" ], [ "Ko", "Kyungtae", "" ], [ "Yeo", "Hyeon", "" ] ]
Under wild conditions, Facial Expression Recognition is often challenged by low-quality data and imbalanced, ambiguous labels. The field has benefited greatly from CNN-based approaches; however, CNN models are structurally limited in attending to distant facial regions. As a remedy, Transformers with a global receptive field have been introduced to vision tasks, but they require adjusting the input spatial size to that of the pretrained models to enjoy their strong inductive bias. We herein raise the question of whether a deterministic interpolation method is sufficient to feed low-resolution data to a Transformer. In this work, we propose a novel training framework, Neural Resizer, to support the Transformer by compensating information and downscaling in a data-driven manner, trained with a loss function balancing noisiness and imbalance. Experiments show that our Neural Resizer with the F-PDLS loss function improves performance with Transformer variants in general and nearly achieves state-of-the-art performance.
1712.01337
Hsiao-Yu Tung
Hsiao-Yu Fish Tung, Hsiao-Wei Tung, Ersin Yumer, Katerina Fragkiadaki
Self-supervised Learning of Motion Capture
Neural Information Processing Systems (NIPS) 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current state-of-the-art solutions for motion capture from a single camera are optimization driven: they optimize the parameters of a 3D human model so that its re-projection matches measurements in the video (e.g. person segmentation, optical flow, keypoint detections etc.). Optimization models are susceptible to local minima. This has been the bottleneck that forced using clean green-screen like backgrounds at capture time, manual initialization, or switching to multiple cameras as input resource. In this work, we propose a learning based motion capture model for single camera input. Instead of optimizing mesh and skeleton parameters directly, our model optimizes neural network weights that predict 3D shape and skeleton configurations given a monocular RGB video. Our model is trained using a combination of strong supervision from synthetic data, and self-supervision from differentiable rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c) human-background segmentation, in an end-to-end framework. Empirically we show our model combines the best of both worlds of supervised learning and test-time optimization: supervised learning initializes the model parameters in the right regime, ensuring good pose and surface initialization at test time, without manual effort. Self-supervision by back-propagating through differentiable rendering allows (unsupervised) adaptation of the model to the test data, and offers much tighter fit than a pretrained fixed model. We show that the proposed model improves with experience and converges to low-error solutions where previous optimization methods fail.
[ { "created": "Mon, 4 Dec 2017 20:25:47 GMT", "version": "v1" } ]
2017-12-06
[ [ "Tung", "Hsiao-Yu Fish", "" ], [ "Tung", "Hsiao-Wei", "" ], [ "Yumer", "Ersin", "" ], [ "Fragkiadaki", "Katerina", "" ] ]
Current state-of-the-art solutions for motion capture from a single camera are optimization driven: they optimize the parameters of a 3D human model so that its re-projection matches measurements in the video (e.g. person segmentation, optical flow, keypoint detections etc.). Optimization models are susceptible to local minima. This has been the bottleneck that forced using clean green-screen like backgrounds at capture time, manual initialization, or switching to multiple cameras as input resource. In this work, we propose a learning based motion capture model for single camera input. Instead of optimizing mesh and skeleton parameters directly, our model optimizes neural network weights that predict 3D shape and skeleton configurations given a monocular RGB video. Our model is trained using a combination of strong supervision from synthetic data, and self-supervision from differentiable rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c) human-background segmentation, in an end-to-end framework. Empirically we show our model combines the best of both worlds of supervised learning and test-time optimization: supervised learning initializes the model parameters in the right regime, ensuring good pose and surface initialization at test time, without manual effort. Self-supervision by back-propagating through differentiable rendering allows (unsupervised) adaptation of the model to the test data, and offers much tighter fit than a pretrained fixed model. We show that the proposed model improves with experience and converges to low-error solutions where previous optimization methods fail.
2308.09284
Shaleen Deep
Paraschos Koutris, Shaleen Deep
The Fine-Grained Complexity of CFL Reachability
Appeared in POPL 2023. Please note the erratum on the first page
null
null
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
Many problems in static program analysis can be modeled as the context-free language (CFL) reachability problem on directed labeled graphs. The CFL reachability problem can be generally solved in time $O(n^3)$, where $n$ is the number of vertices in the graph, with some specific cases that can be solved faster. In this work, we ask the following question: given a specific CFL, what is the exact exponent in the monomial of the running time? In other words, for which cases do we have linear, quadratic or cubic algorithms, and are there problems with intermediate runtimes? This question is inspired by recent efforts to classify classic problems in terms of their exact polynomial complexity, known as {\em fine-grained complexity}. Although recent efforts have shown some conditional lower bounds (mostly for the class of combinatorial algorithms), a general picture of the fine-grained complexity landscape for CFL reachability is missing. Our main contribution is lower bound results that pinpoint the exact running time of several classes of CFLs or specific CFLs under widely believed lower bound conjectures (Boolean Matrix Multiplication and $k$-Clique). We particularly focus on the family of Dyck-$k$ languages (which are strings with well-matched parentheses), a fundamental class of CFL reachability problems. We present new lower bounds for the case of sparse input graphs where the number of edges $m$ is the input parameter, a common setting in the database literature. For this setting, we show a cubic lower bound for Andersen's Pointer Analysis which significantly strengthens prior known results.
[ { "created": "Fri, 18 Aug 2023 03:52:27 GMT", "version": "v1" } ]
2023-08-21
[ [ "Koutris", "Paraschos", "" ], [ "Deep", "Shaleen", "" ] ]
Many problems in static program analysis can be modeled as the context-free language (CFL) reachability problem on directed labeled graphs. The CFL reachability problem can be generally solved in time $O(n^3)$, where $n$ is the number of vertices in the graph, with some specific cases that can be solved faster. In this work, we ask the following question: given a specific CFL, what is the exact exponent in the monomial of the running time? In other words, for which cases do we have linear, quadratic or cubic algorithms, and are there problems with intermediate runtimes? This question is inspired by recent efforts to classify classic problems in terms of their exact polynomial complexity, known as {\em fine-grained complexity}. Although recent efforts have shown some conditional lower bounds (mostly for the class of combinatorial algorithms), a general picture of the fine-grained complexity landscape for CFL reachability is missing. Our main contribution is lower bound results that pinpoint the exact running time of several classes of CFLs or specific CFLs under widely believed lower bound conjectures (Boolean Matrix Multiplication and $k$-Clique). We particularly focus on the family of Dyck-$k$ languages (which are strings with well-matched parentheses), a fundamental class of CFL reachability problems. We present new lower bounds for the case of sparse input graphs where the number of edges $m$ is the input parameter, a common setting in the database literature. For this setting, we show a cubic lower bound for Andersen's Pointer Analysis which significantly strengthens prior known results.
2312.14280
Sepideh Koohfar
Sepideh Koohfar and Laura Dietz
Fine-grained Forecasting Models Via Gaussian Process Blurring Effect
10 pages
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time series forecasting is a challenging task due to the existence of complex and dynamic temporal dependencies. This can lead to incorrect predictions by even the best forecasting models. Using more training data is one way to improve the accuracy, but this source is often limited. In contrast, we are building on successful denoising approaches for image generation by advocating for an end-to-end forecasting and denoising paradigm. We propose an end-to-end forecast-blur-denoise forecasting framework by encouraging a division of labors between the forecasting and the denoising models. The initial forecasting model is directed to focus on accurately predicting the coarse-grained behavior, while the denoiser model focuses on capturing the fine-grained behavior that is locally blurred by integrating a Gaussian Process model. All three parts are interacting for the best end-to-end performance. Our extensive experiments demonstrate that our proposed approach is able to improve the forecasting accuracy of several state-of-the-art forecasting models as well as several other denoising approaches.
[ { "created": "Thu, 21 Dec 2023 20:25:16 GMT", "version": "v1" } ]
2023-12-25
[ [ "Koohfar", "Sepideh", "" ], [ "Dietz", "Laura", "" ] ]
Time series forecasting is a challenging task due to the existence of complex and dynamic temporal dependencies. This can lead to incorrect predictions by even the best forecasting models. Using more training data is one way to improve accuracy, but this source is often limited. In contrast, we build on successful denoising approaches for image generation by advocating an end-to-end forecasting and denoising paradigm. We propose an end-to-end forecast-blur-denoise forecasting framework that encourages a division of labor between the forecasting and denoising models. The initial forecasting model is directed to focus on accurately predicting the coarse-grained behavior, while the denoiser model focuses on capturing the fine-grained behavior that is locally blurred by integrating a Gaussian Process model. All three parts interact for the best end-to-end performance. Our extensive experiments demonstrate that our proposed approach improves the forecasting accuracy of several state-of-the-art forecasting models as well as several other denoising approaches.
1812.07072
Nathana\"el Fijalkow
Nathana\"el Fijalkow and Pawe{\l} Gawrychowski and Pierre Ohlmann
The complexity of mean payoff games using universal graphs
null
null
null
null
cs.GT cs.FL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the computational complexity of solving mean payoff games. This class of games can be seen as an extension of parity games, and they have similar complexity status: in both cases solving them is in $\textbf{NP} \cap \textbf{coNP}$ and not known to be in $\textbf{P}$. In a breakthrough result Calude, Jain, Khoussainov, Li, and Stephan constructed in 2017 a quasipolynomial time algorithm for solving parity games, which was quickly followed by two other algorithms with the same complexity. Our objective is to investigate how these techniques can be extended to the study of mean payoff games. The starting point is the notion of separating automata, which has been used to present all three quasipolynomial time algorithms for parity games and gives the best complexity to date. The notion naturally extends to mean payoff games and yields a class of algorithms for solving mean payoff games. The contribution of this paper is to prove tight bounds on the complexity of algorithms in this class. We construct two new algorithms for solving mean payoff games. Our first algorithm depends on the largest weight $N$ (in absolute value) appearing in the graph and runs in sublinear time in $N$, improving over the previously known linear dependence in $N$. Our second algorithm runs in polynomial time for a fixed number $k$ of weights. We complement our upper bounds by providing in both cases almost matching lower bounds, showing the limitations of the separating automata approach. We show that we cannot hope to improve on the dependence in $N$ nor break the linear dependence in the exponent in the number $k$ of weights. In particular, this shows that separating automata do not yield a quasipolynomial algorithm for solving mean payoff games.
[ { "created": "Mon, 17 Dec 2018 22:13:33 GMT", "version": "v1" }, { "created": "Mon, 4 Feb 2019 19:04:26 GMT", "version": "v2" } ]
2019-02-06
[ [ "Fijalkow", "Nathanaël", "" ], [ "Gawrychowski", "Paweł", "" ], [ "Ohlmann", "Pierre", "" ] ]
We study the computational complexity of solving mean payoff games. This class of games can be seen as an extension of parity games, and they have similar complexity status: in both cases solving them is in $\textbf{NP} \cap \textbf{coNP}$ and not known to be in $\textbf{P}$. In a breakthrough result Calude, Jain, Khoussainov, Li, and Stephan constructed in 2017 a quasipolynomial time algorithm for solving parity games, which was quickly followed by two other algorithms with the same complexity. Our objective is to investigate how these techniques can be extended to the study of mean payoff games. The starting point is the notion of separating automata, which has been used to present all three quasipolynomial time algorithms for parity games and gives the best complexity to date. The notion naturally extends to mean payoff games and yields a class of algorithms for solving mean payoff games. The contribution of this paper is to prove tight bounds on the complexity of algorithms in this class. We construct two new algorithms for solving mean payoff games. Our first algorithm depends on the largest weight $N$ (in absolute value) appearing in the graph and runs in sublinear time in $N$, improving over the previously known linear dependence in $N$. Our second algorithm runs in polynomial time for a fixed number $k$ of weights. We complement our upper bounds by providing in both cases almost matching lower bounds, showing the limitations of the separating automata approach. We show that we cannot hope to improve on the dependence in $N$ nor break the linear dependence in the exponent in the number $k$ of weights. In particular, this shows that separating automata do not yield a quasipolynomial algorithm for solving mean payoff games.
1908.00381
Anton Vladzymyrskyy
S.P. Morozov, A.V. Vladzymyrskyy, V.G. Klyashtornyy, A.E. Andreychenko, N.S. Kulberg, V.A. Gombolevsky, K.A. Sergunova
Clinical acceptance of software based on artificial intelligence technologies (radiology)
For correspondence: info@npcmr.ru, npcmr@zdrav.mos.ru. 28/1, Srednyaya Kalitnikovskaya st., Moscow, 109029, Russia +7 (495) 276-04-36
null
null
Preprint No. CDT-2019-1
cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aim: provide a methodological framework for the process of clinical tests, clinical acceptance, and scientific assessment of algorithms and software based on the artificial intelligence (AI) technologies. Clinical tests are considered as a preparation stage for the software registration as a medical product. The authors propose approaches to evaluate accuracy and efficiency of the AI algorithms for radiology.
[ { "created": "Thu, 1 Aug 2019 13:24:26 GMT", "version": "v1" }, { "created": "Thu, 27 Feb 2020 14:16:03 GMT", "version": "v2" } ]
2020-02-28
[ [ "Morozov", "S. P.", "" ], [ "Vladzymyrskyy", "A. V.", "" ], [ "Klyashtornyy", "V. G.", "" ], [ "Andreychenko", "A. E.", "" ], [ "Kulberg", "N. S.", "" ], [ "Gombolevsky", "V. A.", "" ], [ "Sergunova", "K. A.", "" ] ]
Aim: to provide a methodological framework for the process of clinical tests, clinical acceptance, and scientific assessment of algorithms and software based on artificial intelligence (AI) technologies. Clinical tests are considered a preparation stage for registering the software as a medical product. The authors propose approaches to evaluating the accuracy and efficiency of AI algorithms for radiology.
2005.03853
Rishi Sonthalia
Rishi Sonthalia, Anna C. Gilbert
Project and Forget: Solving Large-Scale Metric Constrained Problems
null
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a set of dissimilarity measurements amongst data points, determining what metric representation is most "consistent" with the input measurements or the metric that best captures the relevant geometric features of the data is a key step in many machine learning algorithms. Existing methods are restricted to specific kinds of metrics or small problem sizes because of the large number of metric constraints in such problems. In this paper, we provide an active set algorithm, Project and Forget, that uses Bregman projections, to solve metric constrained problems with many (possibly exponentially) inequality constraints. We provide a theoretical analysis of \textsc{Project and Forget} and prove that our algorithm converges to the global optimal solution and that the $L_2$ distance of the current iterate to the optimal solution decays asymptotically at an exponential rate. We demonstrate that using our method we can solve large problem instances of three types of metric constrained problems: general weight correlation clustering, metric nearness, and metric learning; in each case, out-performing the state of the art methods with respect to CPU times and problem sizes.
[ { "created": "Fri, 8 May 2020 04:50:54 GMT", "version": "v1" }, { "created": "Mon, 26 Sep 2022 21:27:20 GMT", "version": "v2" } ]
2022-09-28
[ [ "Sonthalia", "Rishi", "" ], [ "Gilbert", "Anna C.", "" ] ]
Given a set of dissimilarity measurements amongst data points, determining what metric representation is most "consistent" with the input measurements or the metric that best captures the relevant geometric features of the data is a key step in many machine learning algorithms. Existing methods are restricted to specific kinds of metrics or small problem sizes because of the large number of metric constraints in such problems. In this paper, we provide an active set algorithm, Project and Forget, that uses Bregman projections to solve metric constrained problems with many (possibly exponentially many) inequality constraints. We provide a theoretical analysis of \textsc{Project and Forget} and prove that our algorithm converges to the global optimal solution and that the $L_2$ distance of the current iterate to the optimal solution decays asymptotically at an exponential rate. We demonstrate that using our method we can solve large problem instances of three types of metric constrained problems: general weight correlation clustering, metric nearness, and metric learning; in each case outperforming the state-of-the-art methods with respect to CPU time and problem size.
2206.10607
Jeewon Jeon
Jeewon Jeon, Woojun Kim, Whiyoung Jung, Youngchul Sung
MASER: Multi-Agent Reinforcement Learning with Subgoals Generated from Experience Replay Buffer
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider cooperative multi-agent reinforcement learning (MARL) with sparse reward. To tackle this problem, we propose a novel method named MASER: MARL with subgoals generated from experience replay buffer. Under the widely-used assumption of centralized training with decentralized execution and consistent Q-value decomposition for MARL, MASER automatically generates proper subgoals for multiple agents from the experience replay buffer by considering both individual Q-value and total Q-value. Then, MASER designs individual intrinsic reward for each agent based on actionable representation relevant to Q-learning so that the agents reach their subgoals while maximizing the joint action value. Numerical results show that MASER significantly outperforms StarCraft II micromanagement benchmark compared to other state-of-the-art MARL algorithms.
[ { "created": "Mon, 20 Jun 2022 08:12:26 GMT", "version": "v1" } ]
2022-06-23
[ [ "Jeon", "Jeewon", "" ], [ "Kim", "Woojun", "" ], [ "Jung", "Whiyoung", "" ], [ "Sung", "Youngchul", "" ] ]
In this paper, we consider cooperative multi-agent reinforcement learning (MARL) with sparse reward. To tackle this problem, we propose a novel method named MASER: MARL with subgoals generated from the experience replay buffer. Under the widely-used assumption of centralized training with decentralized execution and consistent Q-value decomposition for MARL, MASER automatically generates proper subgoals for multiple agents from the experience replay buffer by considering both individual Q-values and the total Q-value. Then, MASER designs an individual intrinsic reward for each agent based on actionable representation relevant to Q-learning so that the agents reach their subgoals while maximizing the joint action value. Numerical results show that MASER significantly outperforms other state-of-the-art MARL algorithms on the StarCraft II micromanagement benchmark.
1802.05998
Tomas Teijeiro
Tom\'as Teijeiro, Constantino A. Garc\'ia, Daniel Castro and Paulo F\'elix
Abductive reasoning as the basis to reproduce expert criteria in ECG Atrial Fibrillation identification
15 pages, 6 figures, 6 tables
Physiological Measurement. 2018 Aug 31;39(8):084006
10.1088/1361-6579/aad7e4
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: This work aims at providing a new method for the automatic detection of atrial fibrillation, other arrhythmia and noise on short single lead ECG signals, emphasizing the importance of the interpretability of the classification results. Approach: A morphological and rhythm description of the cardiac behavior is obtained by a knowledge-based interpretation of the signal using the \textit{Construe} abductive framework. Then, a set of meaningful features are extracted for each individual heartbeat and as a summary of the full record. The feature distributions were used to elucidate the expert criteria underlying the labeling of the 2017 Physionet/CinC Challenge dataset, enabling a manual partial relabeling to improve the consistency of the classification rules. Finally, state-of-the-art machine learning methods are combined to provide an answer on the basis of the feature values. Main results: The proposal tied for the first place in the official stage of the Challenge, with a combined $F_1$ score of 0.83, and was even improved in the follow-up stage to 0.85 with a significant simplification of the model. Significance: This approach demonstrates the potential of \textit{Construe} to provide robust and valuable descriptions of temporal data even with significant amounts of noise and artifacts. Also, we discuss the importance of a consistent classification criteria in manually labeled training datasets, and the fundamental advantages of knowledge-based approaches to formalize and validate that criteria.
[ { "created": "Fri, 16 Feb 2018 16:06:42 GMT", "version": "v1" } ]
2021-12-09
[ [ "Teijeiro", "Tomás", "" ], [ "García", "Constantino A.", "" ], [ "Castro", "Daniel", "" ], [ "Félix", "Paulo", "" ] ]
Objective: This work aims at providing a new method for the automatic detection of atrial fibrillation, other arrhythmias and noise on short single-lead ECG signals, emphasizing the importance of the interpretability of the classification results. Approach: A morphological and rhythm description of the cardiac behavior is obtained by a knowledge-based interpretation of the signal using the \textit{Construe} abductive framework. Then, a set of meaningful features is extracted for each individual heartbeat and as a summary of the full record. The feature distributions were used to elucidate the expert criteria underlying the labeling of the 2017 Physionet/CinC Challenge dataset, enabling a manual partial relabeling to improve the consistency of the classification rules. Finally, state-of-the-art machine learning methods are combined to provide an answer on the basis of the feature values. Main results: The proposal tied for first place in the official stage of the Challenge, with a combined $F_1$ score of 0.83, and was further improved in the follow-up stage to 0.85 with a significant simplification of the model. Significance: This approach demonstrates the potential of \textit{Construe} to provide robust and valuable descriptions of temporal data even with significant amounts of noise and artifacts. Also, we discuss the importance of consistent classification criteria in manually labeled training datasets, and the fundamental advantages of knowledge-based approaches to formalize and validate those criteria.
2012.02024
Michael Heffels
Michael R. Heffels and Joaquin Vanschoren
Aerial Imagery Pixel-level Segmentation
30 pages, 15 figures, 4 tables. Code available through GitHub repo at https://github.com/mrheffels/aerial-imagery-segmentation
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Aerial imagery can be used for important work on a global scale. Nevertheless, the analysis of this data using neural network architectures lags behind the current state-of-the-art on popular datasets such as PASCAL VOC, CityScapes and Camvid. In this paper we bridge the performance-gap between these popular datasets and aerial imagery data. Little work is done on aerial imagery with state-of-the-art neural network architectures in a multi-class setting. Our experiments concerning data augmentation, normalisation, image size and loss functions give insight into a high performance setup for aerial imagery segmentation datasets. Our work, using the state-of-the-art DeepLabv3+ Xception65 architecture, achieves a mean IOU of 70% on the DroneDeploy validation set. With this result, we clearly outperform the current publicly available state-of-the-art validation set mIOU (65%) performance with 5%. Furthermore, to our knowledge, there is no mIOU benchmark for the test set. Hence, we also propose a new benchmark on the DroneDeploy test set using the best performing DeepLabv3+ Xception65 architecture, with a mIOU score of 52.5%.
[ { "created": "Thu, 3 Dec 2020 16:09:09 GMT", "version": "v1" } ]
2020-12-04
[ [ "Heffels", "Michael R.", "" ], [ "Vanschoren", "Joaquin", "" ] ]
Aerial imagery can be used for important work on a global scale. Nevertheless, the analysis of this data using neural network architectures lags behind the current state-of-the-art on popular datasets such as PASCAL VOC, CityScapes and CamVid. In this paper we bridge the performance gap between these popular datasets and aerial imagery data. Little work has been done on aerial imagery with state-of-the-art neural network architectures in a multi-class setting. Our experiments concerning data augmentation, normalisation, image size and loss functions give insight into a high-performance setup for aerial imagery segmentation datasets. Our work, using the state-of-the-art DeepLabv3+ Xception65 architecture, achieves a mean IOU of 70% on the DroneDeploy validation set. With this result, we clearly outperform the current publicly available state-of-the-art validation set mIOU (65%) by 5%. Furthermore, to our knowledge, there is no mIOU benchmark for the test set. Hence, we also propose a new benchmark on the DroneDeploy test set using the best performing DeepLabv3+ Xception65 architecture, with an mIOU score of 52.5%.
2104.08404
Junghoon Kim
Junghoon Kim and Bruno Clerckx
Wireless Information and Power Transfer for IoT: Pulse Position Modulation, Integrated Receiver, and Experimental Validation
submitted for publication
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simultaneous wireless information and power transfer (SWIPT) has emerged as a viable technique to energize and connect low-power autonomous devices and enable future Internet of Things (IoT). A major challenge of SWIPT is the energy consumption of the receiver of such low-power devices. An attractive low-power solution consists in an integrated information decoder (ID) and energy harvester (EH) architecture for SWIPT receiver (IntRx) where the received RF signal is first rectified before being used for information decoding. Such architecture eliminates the need for energy-consuming RF components such as local oscillators and mixers. This paper proposes a novel modulation and demodulation method for the IntRx SWIPT architecture based on pulse position modulation (PPM) where information is encoded in the position of the pulse. The new method transmits high amplitude pulses to increase the Peak-to-Average Power Ratio (PAPR) of the transmit signal and exploits the EH's nonlinearity so as to boost the harvested DC power. Simultaneously, the information can be decoded from the rectifier signal by simply finding the position of the pulse in a certain symbol duration. We have analyzed both the information and the power transfer performance of the newly proposed PPM for IntRx SWIPT theoretically, numerically, and experimentally. To that end, we have established a SWIPT system testbed in an indoor environment by prototyping a base station to transfer information-power signal and the IntRx SWIPT receiver including ID and EH blocks. The performance evaluation of the PPM was carried out in various conditions, and the results have been compared and contrasted to conventional signals. Theoretical, numerical, and experimental results highlight the significant benefits of the proposed PPM scheme to enhance the power transfer performance and operate information decoding with low-power consumption.
[ { "created": "Fri, 16 Apr 2021 23:25:51 GMT", "version": "v1" } ]
2021-04-20
[ [ "Kim", "Junghoon", "" ], [ "Clerckx", "Bruno", "" ] ]
Simultaneous wireless information and power transfer (SWIPT) has emerged as a viable technique to energize and connect low-power autonomous devices and enable the future Internet of Things (IoT). A major challenge of SWIPT is the energy consumption of the receiver of such low-power devices. An attractive low-power solution consists of an integrated information decoder (ID) and energy harvester (EH) architecture for the SWIPT receiver (IntRx) where the received RF signal is first rectified before being used for information decoding. Such an architecture eliminates the need for energy-consuming RF components such as local oscillators and mixers. This paper proposes a novel modulation and demodulation method for the IntRx SWIPT architecture based on pulse position modulation (PPM) where information is encoded in the position of the pulse. The new method transmits high-amplitude pulses to increase the Peak-to-Average Power Ratio (PAPR) of the transmit signal and exploits the EH's nonlinearity so as to boost the harvested DC power. Simultaneously, the information can be decoded from the rectified signal by simply finding the position of the pulse in a certain symbol duration. We have analyzed both the information and the power transfer performance of the newly proposed PPM for IntRx SWIPT theoretically, numerically, and experimentally. To that end, we have established a SWIPT system testbed in an indoor environment by prototyping a base station to transfer the information-power signal and the IntRx SWIPT receiver including ID and EH blocks. The performance evaluation of the PPM was carried out in various conditions, and the results have been compared and contrasted to conventional signals. Theoretical, numerical, and experimental results highlight the significant benefits of the proposed PPM scheme to enhance the power transfer performance and operate information decoding with low power consumption.
2204.03341
Tung Kieu
Tung Kieu, Bin Yang, Chenjuan Guo, Christian S. Jensen, Yan Zhao, Feiteng Huang, Kai Zheng
Robust and Explainable Autoencoders for Unsupervised Time Series Outlier Detection---Extended Version
This paper has been accepted by IEEE ICDE 2022
null
null
null
cs.LG cs.DB
http://creativecommons.org/licenses/by-nc-sa/4.0/
Time series data occurs widely, and outlier detection is a fundamental problem in data mining, which has numerous applications. Existing autoencoder-based approaches deliver state-of-the-art performance on challenging real-world data but are vulnerable to outliers and exhibit low explainability. To address these two limitations, we propose robust and explainable unsupervised autoencoder frameworks that decompose an input time series into a clean time series and an outlier time series using autoencoders. Improved explainability is achieved because clean time series are better explained with easy-to-understand patterns such as trends and periodicities. We provide insight into this by means of a post-hoc explainability analysis and empirical studies. In addition, since outliers are separated from clean time series iteratively, our approach offers improved robustness to outliers, which in turn improves accuracy. We evaluate our approach on five real-world datasets and report improvements over the state-of-the-art approaches in terms of robustness and explainability. This is an extended version of "Robust and Explainable Autoencoders for Unsupervised Time Series Outlier Detection", to appear in IEEE ICDE 2022.
[ { "created": "Thu, 7 Apr 2022 10:24:12 GMT", "version": "v1" } ]
2022-04-08
[ [ "Kieu", "Tung", "" ], [ "Yang", "Bin", "" ], [ "Guo", "Chenjuan", "" ], [ "Jensen", "Christian S.", "" ], [ "Zhao", "Yan", "" ], [ "Huang", "Feiteng", "" ], [ "Zheng", "Kai", "" ] ]
Time series data occurs widely, and outlier detection is a fundamental problem in data mining, which has numerous applications. Existing autoencoder-based approaches deliver state-of-the-art performance on challenging real-world data but are vulnerable to outliers and exhibit low explainability. To address these two limitations, we propose robust and explainable unsupervised autoencoder frameworks that decompose an input time series into a clean time series and an outlier time series using autoencoders. Improved explainability is achieved because clean time series are better explained with easy-to-understand patterns such as trends and periodicities. We provide insight into this by means of a post-hoc explainability analysis and empirical studies. In addition, since outliers are separated from clean time series iteratively, our approach offers improved robustness to outliers, which in turn improves accuracy. We evaluate our approach on five real-world datasets and report improvements over the state-of-the-art approaches in terms of robustness and explainability. This is an extended version of "Robust and Explainable Autoencoders for Unsupervised Time Series Outlier Detection", to appear in IEEE ICDE 2022.
1701.04626
Simone Bova
Simone Bova and Stefan Szeider
Circuit Treewidth, Sentential Decision, and Query Compilation
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evaluation of a query over a probabilistic database boils down to computing the probability of a suitable Boolean function, the lineage of the query over the database. The method of query compilation approaches the task in two stages: first, the query lineage is implemented (compiled) in a circuit form where probability computation is tractable; and second, the desired probability is computed over the compiled circuit. A basic theoretical quest in query compilation is that of identifying pertinent classes of queries whose lineages admit compact representations over increasingly succinct, tractable circuit classes. Fostering previous work by Jha and Suciu (2012) and Petke and Razgon (2013), we focus on queries whose lineages admit circuit implementations with small treewidth, and investigate their compilability within tame classes of decision diagrams. In perfect analogy with the characterization of bounded circuit pathwidth by bounded OBDD width, we show that a class of Boolean functions has bounded circuit treewidth if and only if it has bounded SDD width. Sentential decision diagrams (SDDs) are central in knowledge compilation, being essentially as tractable as OBDDs but exponentially more succinct. By incorporating constant width SDDs and polynomial size SDDs, we refine the panorama of query compilation for unions of conjunctive queries with and without inequalities.
[ { "created": "Tue, 17 Jan 2017 11:34:07 GMT", "version": "v1" } ]
2017-01-18
[ [ "Bova", "Simone", "" ], [ "Szeider", "Stefan", "" ] ]
The evaluation of a query over a probabilistic database boils down to computing the probability of a suitable Boolean function, the lineage of the query over the database. The method of query compilation approaches the task in two stages: first, the query lineage is implemented (compiled) in a circuit form where probability computation is tractable; and second, the desired probability is computed over the compiled circuit. A basic theoretical quest in query compilation is that of identifying pertinent classes of queries whose lineages admit compact representations over increasingly succinct, tractable circuit classes. Fostering previous work by Jha and Suciu (2012) and Petke and Razgon (2013), we focus on queries whose lineages admit circuit implementations with small treewidth, and investigate their compilability within tame classes of decision diagrams. In perfect analogy with the characterization of bounded circuit pathwidth by bounded OBDD width, we show that a class of Boolean functions has bounded circuit treewidth if and only if it has bounded SDD width. Sentential decision diagrams (SDDs) are central in knowledge compilation, being essentially as tractable as OBDDs but exponentially more succinct. By incorporating constant width SDDs and polynomial size SDDs, we refine the panorama of query compilation for unions of conjunctive queries with and without inequalities.
1611.04144
Xuanpeng Li
Xuanpeng Li and Rachid Belaroussi
Semi-Dense 3D Semantic Mapping from Monocular SLAM
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The bundle of geometry and appearance in computer vision has proven to be a promising solution for robots across a wide variety of applications. Stereo cameras and RGB-D sensors are widely used to realise fast 3D reconstruction and trajectory tracking in a dense way. However, they lack flexibility of seamless switch between different scaled environments, i.e., indoor and outdoor scenes. In addition, semantic information are still hard to acquire in a 3D mapping. We address this challenge by combining the state-of-art deep learning method and semi-dense Simultaneous Localisation and Mapping (SLAM) based on video stream from a monocular camera. In our approach, 2D semantic information are transferred to 3D mapping via correspondence between connective Keyframes with spatial consistency. There is no need to obtain a semantic segmentation for each frame in a sequence, so that it could achieve a reasonable computation time. We evaluate our method on indoor/outdoor datasets and lead to an improvement in the 2D semantic labelling over baseline single frame predictions.
[ { "created": "Sun, 13 Nov 2016 15:31:31 GMT", "version": "v1" } ]
2016-11-15
[ [ "Li", "Xuanpeng", "" ], [ "Belaroussi", "Rachid", "" ] ]
The bundle of geometry and appearance in computer vision has proven to be a promising solution for robots across a wide variety of applications. Stereo cameras and RGB-D sensors are widely used to realise fast 3D reconstruction and trajectory tracking in a dense way. However, they lack the flexibility of seamless switching between different scaled environments, i.e., indoor and outdoor scenes. In addition, semantic information is still hard to acquire in 3D mapping. We address this challenge by combining a state-of-the-art deep learning method and semi-dense Simultaneous Localisation and Mapping (SLAM) based on the video stream from a monocular camera. In our approach, 2D semantic information is transferred to the 3D mapping via correspondence between connected keyframes with spatial consistency. There is no need to obtain a semantic segmentation for each frame in a sequence, so that a reasonable computation time can be achieved. We evaluate our method on indoor/outdoor datasets and achieve an improvement in the 2D semantic labelling over baseline single-frame predictions.
2211.17107
Ruturaj Ghatage
Sharvi Endait, Ruturaj Ghatage, Prof. DD Kadam
Handling and extracting key entities from customer conversations using Speech recognition and Named Entity recognition
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this modern era of technology with e-commerce developing at a rapid pace, it is very important to understand customer requirements and details from a business conversation. It is very crucial for customer retention and satisfaction. Extracting key insights from these conversations is very important when it comes to developing their product or solving their issue. Understanding customer feedback, responses, and important details of the product are essential and it would be done using Named entity recognition (NER). For extracting the entities we would be converting the conversations to text using the optimal speech-to-text model. The model would be a two-stage network in which the conversation is converted to text. Then, suitable entities are extracted using robust techniques using a NER BERT transformer model. This will aid in the enrichment of customer experience when there is an issue which is faced by them. If a customer faces a problem he will call and register his complaint. The model will then extract the key features from this conversation which will be necessary to look into the problem. These features would include details like the order number, and the exact problem. All these would be extracted directly from the conversation and this would reduce the effort of going through the conversation again.
[ { "created": "Mon, 28 Nov 2022 06:41:29 GMT", "version": "v1" } ]
2022-12-01
[ [ "Endait", "Sharvi", "" ], [ "Ghatage", "Ruturaj", "" ], [ "Kadam", "Prof. DD", "" ] ]
In this modern era of rapidly developing e-commerce, understanding customer requirements and details from a business conversation is very important, and crucial for customer retention and satisfaction. Extracting key insights from these conversations is essential for developing a product or resolving customer issues. Understanding customer feedback, responses, and important product details is achieved using Named Entity Recognition (NER). To extract the entities, the conversations are first converted to text using an optimal speech-to-text model. The model is a two-stage network: the conversation is converted to text, and suitable entities are then extracted using a robust NER BERT transformer model. This aids in enriching the customer experience when customers face an issue. If a customer faces a problem, they call and register a complaint; the model then extracts from this conversation the key features necessary to look into the problem, such as the order number and the exact issue. All of these are extracted directly from the conversation, reducing the effort of going through the conversation again.
2405.02583
Xiangqi Kong
Xiangqi Kong, Yang Xing, Antonios Tsourdos, Ziyue Wang, Weisi Guo, Adolfo Perrusquia, Andreas Wikander
Explainable Interface for Human-Autonomy Teaming: A Survey
45 pages, 9 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Nowadays, large-scale foundation models are being increasingly integrated into numerous safety-critical applications, including human-autonomy teaming (HAT) within transportation, medical, and defence domains. Consequently, the inherent 'black-box' nature of these sophisticated deep neural networks heightens the significance of fostering mutual understanding and trust between humans and autonomous systems. To tackle the transparency challenges in HAT, this paper conducts a thoughtful study on the underexplored domain of Explainable Interface (EI) in HAT systems from a human-centric perspective, thereby enriching the existing body of research in Explainable Artificial Intelligence (XAI). We explore the design, development, and evaluation of EI within XAI-enhanced HAT systems. To do so, we first clarify the distinctions between these concepts: EI, explanations and model explainability, aiming to provide researchers and practitioners with a structured understanding. Second, we contribute to a novel framework for EI, addressing the unique challenges in HAT. Last, our summarized evaluation framework for ongoing EI offers a holistic perspective, encompassing model performance, human-centered factors, and group task objectives. Based on extensive surveys across XAI, HAT, psychology, and Human-Computer Interaction (HCI), this review offers multiple novel insights into incorporating XAI into HAT systems and outlines future directions.
[ { "created": "Sat, 4 May 2024 06:35:38 GMT", "version": "v1" } ]
2024-05-07
[ [ "Kong", "Xiangqi", "" ], [ "Xing", "Yang", "" ], [ "Tsourdos", "Antonios", "" ], [ "Wang", "Ziyue", "" ], [ "Guo", "Weisi", "" ], [ "Perrusquia", "Adolfo", "" ], [ "Wikander", "Andreas", "" ] ]
Nowadays, large-scale foundation models are being increasingly integrated into numerous safety-critical applications, including human-autonomy teaming (HAT) within transportation, medical, and defence domains. Consequently, the inherent 'black-box' nature of these sophisticated deep neural networks heightens the significance of fostering mutual understanding and trust between humans and autonomous systems. To tackle the transparency challenges in HAT, this paper conducts a thoughtful study on the underexplored domain of Explainable Interface (EI) in HAT systems from a human-centric perspective, thereby enriching the existing body of research in Explainable Artificial Intelligence (XAI). We explore the design, development, and evaluation of EI within XAI-enhanced HAT systems. To do so, we first clarify the distinctions between these concepts: EI, explanations and model explainability, aiming to provide researchers and practitioners with a structured understanding. Second, we contribute to a novel framework for EI, addressing the unique challenges in HAT. Last, our summarized evaluation framework for ongoing EI offers a holistic perspective, encompassing model performance, human-centered factors, and group task objectives. Based on extensive surveys across XAI, HAT, psychology, and Human-Computer Interaction (HCI), this review offers multiple novel insights into incorporating XAI into HAT systems and outlines future directions.
1007.0571
Vikram Krishnamurthy
Vikram Krishnamurthy
Quickest Detection with Social Learning: Interaction of local and global decision makers
null
null
null
null
cs.GT cs.IT math.IT stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider how local and global decision policies interact in stopping time problems such as quickest time change detection. Individual agents make myopic local decisions via social learning, that is, each agent records a private observation of a noisy underlying state process, selfishly optimizes its local utility and then broadcasts its local decision. Given these local decisions, how can a global decision maker achieve quickest time change detection when the underlying state changes according to a phase-type distribution? The paper presents four results. First, using Blackwell dominance of measures, it is shown that the optimal cost incurred in social learning based quickest detection is always larger than that of classical quickest detection. Second, it is shown that in general the optimal decision policy for social learning based quickest detection is characterized by multiple thresholds within the space of Bayesian distributions. Third, using lattice programming and stochastic dominance, sufficient conditions are given for the optimal decision policy to consist of a single linear hyperplane, or, more generally, a threshold curve. Estimation of the optimal linear approximation to this threshold curve is formulated as a simulation-based stochastic optimization problem. Finally, the paper shows that in multi-agent sensor management with quickest detection, where each agent views the world according to its prior, the optimal policy has a similar structure to social learning.
[ { "created": "Sun, 4 Jul 2010 17:06:38 GMT", "version": "v1" }, { "created": "Tue, 30 Aug 2011 09:01:38 GMT", "version": "v2" }, { "created": "Fri, 2 Mar 2012 18:55:34 GMT", "version": "v3" } ]
2012-03-05
[ [ "Krishnamurthy", "Vikram", "" ] ]
We consider how local and global decision policies interact in stopping time problems such as quickest time change detection. Individual agents make myopic local decisions via social learning, that is, each agent records a private observation of a noisy underlying state process, selfishly optimizes its local utility and then broadcasts its local decision. Given these local decisions, how can a global decision maker achieve quickest time change detection when the underlying state changes according to a phase-type distribution? The paper presents four results. First, using Blackwell dominance of measures, it is shown that the optimal cost incurred in social learning based quickest detection is always larger than that of classical quickest detection. Second, it is shown that in general the optimal decision policy for social learning based quickest detection is characterized by multiple thresholds within the space of Bayesian distributions. Third, using lattice programming and stochastic dominance, sufficient conditions are given for the optimal decision policy to consist of a single linear hyperplane, or, more generally, a threshold curve. Estimation of the optimal linear approximation to this threshold curve is formulated as a simulation-based stochastic optimization problem. Finally, the paper shows that in multi-agent sensor management with quickest detection, where each agent views the world according to its prior, the optimal policy has a similar structure to social learning.
2408.07272
Junxuan Li
Junxuan Li, Ryan Wickman, Sahil Bhatnagar, Raj Kumar Maity, Arko Mukherjee
NL2OR: Solve Complex Operations Research Problems Using Natural Language Inputs
null
null
null
null
cs.AI cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Operations research (OR) uses mathematical models to enhance decision-making, but developing these models requires expert knowledge and can be time-consuming. Automated mathematical programming (AMP) has emerged to simplify this process, but existing systems have limitations. This paper introduces a novel methodology that uses recent advances in Large Language Models (LLMs) to create and edit OR solutions from non-expert user queries expressed in natural language. This reduces the need for domain expertise and the time to formulate a problem. The paper presents an end-to-end pipeline, named NL2OR, that generates solutions to OR problems from natural language input, and shares experimental results on several important OR problems.
[ { "created": "Wed, 14 Aug 2024 03:42:53 GMT", "version": "v1" } ]
2024-08-15
[ [ "Li", "Junxuan", "" ], [ "Wickman", "Ryan", "" ], [ "Bhatnagar", "Sahil", "" ], [ "Maity", "Raj Kumar", "" ], [ "Mukherjee", "Arko", "" ] ]
Operations research (OR) uses mathematical models to enhance decision-making, but developing these models requires expert knowledge and can be time-consuming. Automated mathematical programming (AMP) has emerged to simplify this process, but existing systems have limitations. This paper introduces a novel methodology that uses recent advances in Large Language Models (LLMs) to create and edit OR solutions from non-expert user queries expressed in natural language. This reduces the need for domain expertise and the time to formulate a problem. The paper presents an end-to-end pipeline, named NL2OR, that generates solutions to OR problems from natural language input, and shares experimental results on several important OR problems.
2112.01098
Surabhi Gupta
Surabhi Gupta, Ashwath Shetty, Avinash Sharma
Attention based Occlusion Removal for Hybrid Telepresence Systems
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Traditionally, video conferencing is a widely adopted solution for telecommunication, but a lack of immersiveness is inherent due to the 2D nature of facial representation. Integrating Virtual Reality (VR) into a communication/telepresence system through Head Mounted Displays (HMDs) promises users a much more immersive experience. However, HMDs hinder communication by occluding the facial appearance and expressions of the user. To overcome these issues, we propose a novel attention-enabled encoder-decoder architecture for HMD de-occlusion. We also propose to train our person-specific model using short videos (1-2 minutes) of the user, captured in varying appearances, and demonstrate generalization to unseen poses and appearances of the user. We report superior qualitative and quantitative results over state-of-the-art methods. We also present applications of this approach to hybrid video teleconferencing using existing animation and 3D face reconstruction pipelines.
[ { "created": "Thu, 2 Dec 2021 10:18:22 GMT", "version": "v1" } ]
2021-12-03
[ [ "Gupta", "Surabhi", "" ], [ "Shetty", "Ashwath", "" ], [ "Sharma", "Avinash", "" ] ]
Traditionally, video conferencing is a widely adopted solution for telecommunication, but a lack of immersiveness is inherent due to the 2D nature of facial representation. Integrating Virtual Reality (VR) into a communication/telepresence system through Head Mounted Displays (HMDs) promises users a much more immersive experience. However, HMDs hinder communication by occluding the facial appearance and expressions of the user. To overcome these issues, we propose a novel attention-enabled encoder-decoder architecture for HMD de-occlusion. We also propose to train our person-specific model using short videos (1-2 minutes) of the user, captured in varying appearances, and demonstrate generalization to unseen poses and appearances of the user. We report superior qualitative and quantitative results over state-of-the-art methods. We also present applications of this approach to hybrid video teleconferencing using existing animation and 3D face reconstruction pipelines.
2109.00301
Pedro Henrique Martins
Pedro Henrique Martins and Zita Marinho and Andr\'e F. T. Martins
$\infty$-former: Infinite Memory Transformer
ACL 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformers are unable to model long-term memories effectively, since the amount of computation they need to perform grows with the context length. While variations of efficient transformers have been proposed, they all have a finite memory capacity and are forced to drop old information. In this paper, we propose the $\infty$-former, which extends the vanilla transformer with an unbounded long-term memory. By making use of a continuous-space attention mechanism to attend over the long-term memory, the $\infty$-former's attention complexity becomes independent of the context length, trading off memory length with precision. In order to control where precision is more important, $\infty$-former maintains "sticky memories" being able to model arbitrarily long contexts while keeping the computation budget fixed. Experiments on a synthetic sorting task, language modeling, and document grounded dialogue generation demonstrate the $\infty$-former's ability to retain information from long sequences.
[ { "created": "Wed, 1 Sep 2021 10:51:58 GMT", "version": "v1" }, { "created": "Wed, 15 Sep 2021 10:12:03 GMT", "version": "v2" }, { "created": "Fri, 25 Mar 2022 10:37:54 GMT", "version": "v3" } ]
2022-03-28
[ [ "Martins", "Pedro Henrique", "" ], [ "Marinho", "Zita", "" ], [ "Martins", "André F. T.", "" ] ]
Transformers are unable to model long-term memories effectively, since the amount of computation they need to perform grows with the context length. While variations of efficient transformers have been proposed, they all have a finite memory capacity and are forced to drop old information. In this paper, we propose the $\infty$-former, which extends the vanilla transformer with an unbounded long-term memory. By making use of a continuous-space attention mechanism to attend over the long-term memory, the $\infty$-former's attention complexity becomes independent of the context length, trading off memory length with precision. In order to control where precision is more important, $\infty$-former maintains "sticky memories" being able to model arbitrarily long contexts while keeping the computation budget fixed. Experiments on a synthetic sorting task, language modeling, and document grounded dialogue generation demonstrate the $\infty$-former's ability to retain information from long sequences.
2402.01079
David OBrien
David OBrien, Robert Dyer, Tien N. Nguyen, Hridesh Rajan
Data-Driven Evidence-Based Syntactic Sugar Design
12 pages, 12 figures, to be published in ICSE'24
null
10.1145/3597503.3639580
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Programming languages are essential tools for developers, and their evolution plays a crucial role in supporting the activities of developers. One instance of programming language evolution is the introduction of syntactic sugars, which are additional syntax elements that provide alternative, more readable code constructs. However, the process of designing and evolving a programming language has traditionally been guided by anecdotal experiences and intuition. Recent advances in tools and methodologies for mining open-source repositories have enabled developers to make data-driven software engineering decisions. In light of this, this paper proposes an approach for motivating data-driven programming evolution by applying frequent subgraph mining techniques to a large dataset of 166,827,154 open-source Java methods. The dataset is mined by generalizing Java control-flow graphs to capture broad programming language usages and instances of duplication. Frequent subgraphs are then extracted to identify potentially impactful opportunities for new syntactic sugars. Our diverse results demonstrate the benefits of the proposed technique by identifying new syntactic sugars involving a variety of programming constructs that could be implemented in Java, thus simplifying frequent code idioms. This approach can potentially provide valuable insights for Java language designers, and serve as a proof-of-concept for data-driven programming language design and evolution.
[ { "created": "Fri, 2 Feb 2024 00:35:14 GMT", "version": "v1" } ]
2024-02-05
[ [ "OBrien", "David", "" ], [ "Dyer", "Robert", "" ], [ "Nguyen", "Tien N.", "" ], [ "Rajan", "Hridesh", "" ] ]
Programming languages are essential tools for developers, and their evolution plays a crucial role in supporting the activities of developers. One instance of programming language evolution is the introduction of syntactic sugars, which are additional syntax elements that provide alternative, more readable code constructs. However, the process of designing and evolving a programming language has traditionally been guided by anecdotal experiences and intuition. Recent advances in tools and methodologies for mining open-source repositories have enabled developers to make data-driven software engineering decisions. In light of this, this paper proposes an approach for motivating data-driven programming evolution by applying frequent subgraph mining techniques to a large dataset of 166,827,154 open-source Java methods. The dataset is mined by generalizing Java control-flow graphs to capture broad programming language usages and instances of duplication. Frequent subgraphs are then extracted to identify potentially impactful opportunities for new syntactic sugars. Our diverse results demonstrate the benefits of the proposed technique by identifying new syntactic sugars involving a variety of programming constructs that could be implemented in Java, thus simplifying frequent code idioms. This approach can potentially provide valuable insights for Java language designers, and serve as a proof-of-concept for data-driven programming language design and evolution.
2106.08389
Anne Collin
Anne Collin, Amitai Y. Bin-Nun, Radboud Duintjer Tebbens
Plane and Sample: Maximizing Information about Autonomous Vehicle Performance using Submodular Optimization
8 pages, 8 figures. Accepted for publication at the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC)
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
As autonomous vehicles (AVs) take on growing Operational Design Domains (ODDs), they need to go through a systematic, transparent, and scalable evaluation process to demonstrate their benefits to society. Current scenario sampling techniques for AV performance evaluation usually focus on a specific functionality, such as lane changing, and do not accommodate a transfer of information about an AV system from one ODD to the next. In this paper, we reformulate the scenario sampling problem across ODDs and functionalities as a submodular optimization problem. To do so, we abstract AV performance as a Bayesian Hierarchical Model, which we use to infer information gained by revealing performance in new scenarios. We propose the information gain as a measure of scenario relevance and evaluation progress. Furthermore, we leverage the submodularity, or diminishing returns, property of the information gain not only to find a near-optimal scenario set, but also to propose a stopping criterion for an AV performance evaluation campaign. We find that we only need to explore about 7.5% of the scenario space to meet this criterion, a 23% improvement over Latin Hypercube Sampling.
[ { "created": "Tue, 15 Jun 2021 19:35:30 GMT", "version": "v1" } ]
2021-06-17
[ [ "Collin", "Anne", "" ], [ "Bin-Nun", "Amitai Y.", "" ], [ "Tebbens", "Radboud Duintjer", "" ] ]
As autonomous vehicles (AVs) take on growing Operational Design Domains (ODDs), they need to go through a systematic, transparent, and scalable evaluation process to demonstrate their benefits to society. Current scenario sampling techniques for AV performance evaluation usually focus on a specific functionality, such as lane changing, and do not accommodate a transfer of information about an AV system from one ODD to the next. In this paper, we reformulate the scenario sampling problem across ODDs and functionalities as a submodular optimization problem. To do so, we abstract AV performance as a Bayesian Hierarchical Model, which we use to infer information gained by revealing performance in new scenarios. We propose the information gain as a measure of scenario relevance and evaluation progress. Furthermore, we leverage the submodularity, or diminishing returns, property of the information gain not only to find a near-optimal scenario set, but also to propose a stopping criterion for an AV performance evaluation campaign. We find that we only need to explore about 7.5% of the scenario space to meet this criterion, a 23% improvement over Latin Hypercube Sampling.
1812.03556
Kamran Keykhosravi
Kamran Keykhosravi, Marco Secondini, Giuseppe Durisi and Erik Agrell
How to Increase the Achievable Information Rate by Per-Channel Dispersion Compensation
null
IEEE/OSA Journal of Lightwave Technology, vol. 37, no. 10, pp. 2443-2451, May 2019
10.1109/JLT.2019.2907311
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deploying periodic inline chromatic dispersion compensation enables reducing the complexity of the digital back propagation (DBP) algorithm. However, compared with nondispersion-managed (NDM) links, dispersion-managed (DM) ones suffer a stronger cross-phase modulation (XPM). Utilizing per-channel dispersion-managed (CDM) links (e.g., using fiber Bragg grating) allows for a complexity reduction of DBP, while abating XPM compared to DM links. In this paper, we show for the first time that CDM links also enable a more effective XPM compensation compared to NDM ones, allowing a higher achievable information rate (AIR). This is explained by resorting to the frequency-resolved logarithmic perturbation model and showing that per-channel dispersion compensation increases the frequency correlation of the distortions induced by XPM over the channel bandwidth, making them more similar to a conventional phase noise. We compare the performance (in terms of the AIR) of a DM, an NDM, and a CDM link, considering two types of mismatched receivers: one neglects the XPM phase distortion and the other compensates for it. With the former, the CDM link is inferior to the NDM one due to an increased in-band signal--noise interaction. However, with the latter, a higher AIR is obtained with the CDM link than with the NDM one owing to a higher XPM frequency correlation. The DM link has the lowest AIR for both receivers because of a stronger XPM.
[ { "created": "Sun, 9 Dec 2018 20:50:52 GMT", "version": "v1" } ]
2024-01-25
[ [ "Keykhosravi", "Kamran", "" ], [ "Secondini", "Marco", "" ], [ "Durisi", "Giuseppe", "" ], [ "Agrell", "Erik", "" ] ]
Deploying periodic inline chromatic dispersion compensation enables reducing the complexity of the digital back propagation (DBP) algorithm. However, compared with nondispersion-managed (NDM) links, dispersion-managed (DM) ones suffer a stronger cross-phase modulation (XPM). Utilizing per-channel dispersion-managed (CDM) links (e.g., using fiber Bragg grating) allows for a complexity reduction of DBP, while abating XPM compared to DM links. In this paper, we show for the first time that CDM links also enable a more effective XPM compensation compared to NDM ones, allowing a higher achievable information rate (AIR). This is explained by resorting to the frequency-resolved logarithmic perturbation model and showing that per-channel dispersion compensation increases the frequency correlation of the distortions induced by XPM over the channel bandwidth, making them more similar to a conventional phase noise. We compare the performance (in terms of the AIR) of a DM, an NDM, and a CDM link, considering two types of mismatched receivers: one neglects the XPM phase distortion and the other compensates for it. With the former, the CDM link is inferior to the NDM one due to an increased in-band signal--noise interaction. However, with the latter, a higher AIR is obtained with the CDM link than with the NDM one owing to a higher XPM frequency correlation. The DM link has the lowest AIR for both receivers because of a stronger XPM.
1210.6241
Mael Le Treust
Ma\"el Le Treust, Samson Lasaulce
Transforming Monitoring Structures with Resilient Encoders. Application to Repeated Games
Springer, Dynamic Games and Applications, 2012
null
10.1007/s13235-012-0058-3
null
cs.IT cs.GT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An important feature of a dynamic game is its monitoring structure, namely what the players effectively see of the played actions. We consider games with arbitrary monitoring structures. One purpose of this paper is to determine to what extent an encoder, who perfectly observes the played actions and sends a complementary public signal to the players, can establish perfect monitoring for all the players. To reach this goal, the main technical problem to be solved at the encoder is to design a source encoder which compresses the action profile in the most concise manner possible. A special feature of this encoder is that the multi-dimensional signal (namely, the action profiles) to be encoded is assumed to comprise a component whose probability distribution is not known to the encoder, and the decoder has side information (the private signals received by the players when the encoder is off). This new framework appears to be of both game-theoretical and information-theoretical interest. In particular, it is useful for designing certain types of encoders that are resilient to single deviations and provide an equilibrium utility region in the proposed setting; it provides a new type of constraint for compressing an information source (i.e., a random variable). Regarding the first aspect, we apply the derived result to the repeated prisoner's dilemma.
[ { "created": "Tue, 23 Oct 2012 14:14:17 GMT", "version": "v1" } ]
2012-10-24
[ [ "Treust", "Maël Le", "" ], [ "Lasaulce", "Samson", "" ] ]
An important feature of a dynamic game is its monitoring structure, namely what the players effectively see of the played actions. We consider games with arbitrary monitoring structures. One purpose of this paper is to determine to what extent an encoder, who perfectly observes the played actions and sends a complementary public signal to the players, can establish perfect monitoring for all the players. To reach this goal, the main technical problem to be solved at the encoder is to design a source encoder which compresses the action profile in the most concise manner possible. A special feature of this encoder is that the multi-dimensional signal (namely, the action profiles) to be encoded is assumed to comprise a component whose probability distribution is not known to the encoder, and the decoder has side information (the private signals received by the players when the encoder is off). This new framework appears to be of both game-theoretical and information-theoretical interest. In particular, it is useful for designing certain types of encoders that are resilient to single deviations and provide an equilibrium utility region in the proposed setting; it provides a new type of constraint for compressing an information source (i.e., a random variable). Regarding the first aspect, we apply the derived result to the repeated prisoner's dilemma.
1104.3497
Shih-Chun Lin
Pin-Hsun Lin, Shih-Chun Lin, Hsuan-Jung Su, and Y.-W. Peter Hong
Clean relaying aided cognitive radio under the coexistence constraint
30 pages
null
10.1109/TWC.2012.092712.120005
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the interference-mitigation based cognitive radio where the primary and secondary users can coexist at the same time and frequency bands, under the constraint that the rate of the primary user (PU) must remain the same with a single-user decoder. To meet such a coexistence constraint, relaying from the secondary user (SU) can help the PU's transmission under the interference from the SU. However, the relayed signal in the known dirty paper coding (DPC) based scheme is interfered by the SU's signal, and is not "clean". In this paper, under the half-duplex constraints, we propose two new transmission schemes aided by clean relaying from the SU's transmitter and receiver without interference from the SU. We name them the clean transmitter relaying (CT) and clean transmitter-receiver relaying (CTR) aided cognitive radio, respectively. The rate and multiplexing gain performances of CT and CTR in fading channels with various availabilities of the channel state information at the transmitters (CSIT) are studied. Our CT generalizes the celebrated DPC based scheme proposed previously. With full CSIT, the multiplexing gain of the CTR is proved to be better than (or no less than) that of the previous DPC based schemes. This is because the silent period for decoding the PU's messages for the DPC may not be necessary in the CTR. With only the statistics of the CSIT, we further prove that the CTR outperforms the rate performance of the previous scheme in fast Rayleigh fading channels. The numerical examples also show that in a large class of channels, the proposed CT and CTR provide significant rate gains over the previous scheme with small complexity penalties.
[ { "created": "Mon, 18 Apr 2011 14:33:58 GMT", "version": "v1" } ]
2012-12-24
[ [ "Lin", "Pin-Hsun", "" ], [ "Lin", "Shih-Chun", "" ], [ "Su", "Hsuan-Jung", "" ], [ "Hong", "Y. -W. Peter", "" ] ]
We consider the interference-mitigation based cognitive radio where the primary and secondary users can coexist at the same time and frequency bands, under the constraint that the rate of the primary user (PU) must remain the same with a single-user decoder. To meet such a coexistence constraint, relaying from the secondary user (SU) can help the PU's transmission under the interference from the SU. However, the relayed signal in the known dirty paper coding (DPC) based scheme is interfered by the SU's signal, and is not "clean". In this paper, under the half-duplex constraints, we propose two new transmission schemes aided by clean relaying from the SU's transmitter and receiver without interference from the SU. We name them the clean transmitter relaying (CT) and clean transmitter-receiver relaying (CTR) aided cognitive radio, respectively. The rate and multiplexing gain performances of CT and CTR in fading channels with various availabilities of the channel state information at the transmitters (CSIT) are studied. Our CT generalizes the celebrated DPC based scheme proposed previously. With full CSIT, the multiplexing gain of the CTR is proved to be better than (or no less than) that of the previous DPC based schemes. This is because the silent period for decoding the PU's messages for the DPC may not be necessary in the CTR. With only the statistics of the CSIT, we further prove that the CTR outperforms the rate performance of the previous scheme in fast Rayleigh fading channels. The numerical examples also show that in a large class of channels, the proposed CT and CTR provide significant rate gains over the previous scheme with small complexity penalties.
2104.11693
Yiming Zhao
Yiming Zhao, Xinming Huang and Ziming Zhang
Deep Lucas-Kanade Homography for Multimodal Image Alignment
Accepted by CVPR2021, codelink: https://github.com/placeforyiming/CVPR21-Deep-Lucas-Kanade-Homography
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating a homography to align image pairs captured by different sensors, or image pairs with large appearance changes, is an important and general challenge for many computer vision applications. In contrast to other approaches, we propose a generic solution for pixel-wise alignment of multimodal image pairs by extending the traditional Lucas-Kanade algorithm with networks. The key contribution of our method is how we construct the feature maps, named deep Lucas-Kanade feature maps (DLKFM). The learned DLKFM can spontaneously recognize invariant features under various appearance-changing conditions. It also has two nice properties for the Lucas-Kanade algorithm: (1) the template feature map keeps brightness consistency with the input feature map, so the color difference is very small when they are well aligned; (2) the Lucas-Kanade objective function built on the DLKFM has a smooth landscape around the ground-truth homography parameters, so the iterative Lucas-Kanade solution easily converges to the ground truth. With these properties, directly applying the Lucas-Kanade updates on our feature maps will precisely align image pairs with large appearance changes. We share the datasets, code, and demo video online.
[ { "created": "Thu, 22 Apr 2021 04:11:29 GMT", "version": "v1" } ]
2021-04-26
[ [ "Zhao", "Yiming", "" ], [ "Huang", "Xinming", "" ], [ "Zhang", "Ziming", "" ] ]
Estimating a homography to align image pairs captured by different sensors, or image pairs with large appearance changes, is an important and general challenge for many computer vision applications. In contrast to other approaches, we propose a generic solution for pixel-wise alignment of multimodal image pairs by extending the traditional Lucas-Kanade algorithm with networks. The key contribution of our method is how we construct the feature maps, named deep Lucas-Kanade feature maps (DLKFM). The learned DLKFM can spontaneously recognize invariant features under various appearance-changing conditions. It also has two nice properties for the Lucas-Kanade algorithm: (1) the template feature map keeps brightness consistency with the input feature map, so the color difference is very small when they are well aligned; (2) the Lucas-Kanade objective function built on the DLKFM has a smooth landscape around the ground-truth homography parameters, so the iterative Lucas-Kanade solution easily converges to the ground truth. With these properties, directly applying the Lucas-Kanade updates on our feature maps will precisely align image pairs with large appearance changes. We share the datasets, code, and demo video online.
1907.10709
Giulio Siracusano Dr.
Giulio Siracusano, Francesca Garesc\`i, Giovanni Finocchio, Riccardo Tomasello, Francesco Lamonaca, Carmelo Scuro, Mario Carpentieri, Massimo Chiappini and Aurelio La Corte
Automatic crack classification by exploiting statistical event descriptors for Deep Learning
19 pages, 2 tables, 9 figures
null
null
null
cs.LG eess.SP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In modern building infrastructures, the opportunity to devise adaptive and unsupervised data-driven health monitoring systems is gaining popularity, thanks to the wide availability of big data from low-cost sensors with communication capabilities and of advanced modeling tools such as Deep Learning. The main purpose of this paper is to combine deep neural networks with Bidirectional Long Short Term Memory and advanced statistical analysis involving Instantaneous Frequency and Spectral Kurtosis to develop an accurate classification tool for the tensile, shear, and mixed modes originating from acoustic emission events (cracks). We investigated effective event descriptors that capture the unique characteristics of the different types of modes. Tests on experimental results confirm that this method achieves promising classification among the different crack events and can impact the design of future structural health monitoring (SHM) technologies. The approach classifies incipient damage with 92% accuracy, which is advantageous for maintenance planning.
[ { "created": "Wed, 24 Jul 2019 20:39:49 GMT", "version": "v1" }, { "created": "Fri, 26 Nov 2021 17:01:13 GMT", "version": "v2" } ]
2021-11-29
[ [ "Siracusano", "Giulio", "" ], [ "Garescì", "Francesca", "" ], [ "Finocchio", "Giovanni", "" ], [ "Tomasello", "Riccardo", "" ], [ "Lamonaca", "Francesco", "" ], [ "Scuro", "Carmelo", "" ], [ "Carpentieri", "Mario", "" ], [ "Chiappini", "Massimo", "" ], [ "La Corte", "Aurelio", "" ] ]
In modern building infrastructures, the opportunity to devise adaptive and unsupervised data-driven health monitoring systems is gaining popularity, thanks to the wide availability of big data from low-cost sensors with communication capabilities and of advanced modeling tools such as Deep Learning. The main purpose of this paper is to combine deep neural networks with Bidirectional Long Short Term Memory and advanced statistical analysis involving Instantaneous Frequency and Spectral Kurtosis to develop an accurate classification tool for the tensile, shear, and mixed modes originating from acoustic emission events (cracks). We investigated effective event descriptors that capture the unique characteristics of the different types of modes. Tests on experimental results confirm that this method achieves promising classification among the different crack events and can impact the design of future structural health monitoring (SHM) technologies. The approach classifies incipient damage with 92% accuracy, which is advantageous for maintenance planning.
2112.14192
Xudong Li
Xudong Li, Ye Fan, Rugui Yao, Peng Wang, Nan Qi, Xiaoya Zuo
Robust Security Analysis Based on Random Geometry Theory for Satellite-Terrestrial-Vehicle Network
The theoretical analysis in the original manuscript is insufficient, and the system model is not convincing. With the consideration of these flaws, we decide to withdraw our work for further improvement
null
null
null
cs.IT cs.CL math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Driven by B5G and 6G technologies, multi-network fusion is an indispensable trend for future communications. In this paper, we focus on and analyze the \emph{security performance} (SP) of the \emph{satellite-terrestrial downlink transmission} (STDT). Here, the STDT is composed of a satellite network and a vehicular network with a legitimate mobile receiver and a mobile eavesdropper. To better analyze the SP of this system theoretically from the perspective of the mobile terminals, random geometry theory is adopted, which assumes that both terrestrial vehicles are distributed stochastically within one beam of the satellite. Based on this theory, closed-form analytical expressions are derived for two crucial indicators of the STDT, namely the secrecy outage probability and the ergodic secrecy capacity. Additionally, several variables restricting the SP of the STDT are discussed, and specific schemes are presented to enhance the SP. The asymptotic property is then investigated in the high signal-to-noise ratio scenario, and both accurate and asymptotic closed-form expressions are given. Finally, simulation results show that, under the precondition of guaranteeing the reliability of the STDT, the asymptotic solutions significantly outperform the corresponding accurate results in effectiveness.
[ { "created": "Tue, 28 Dec 2021 15:46:28 GMT", "version": "v1" }, { "created": "Thu, 14 Jul 2022 09:44:03 GMT", "version": "v2" } ]
2022-07-15
[ [ "Li", "Xudong", "" ], [ "Fan", "Ye", "" ], [ "Yao", "Rugui", "" ], [ "Wang", "Peng", "" ], [ "Qi", "Nan", "" ], [ "Zuo", "Xiaoya", "" ] ]
Driven by B5G and 6G technologies, multi-network fusion is an indispensable trend for future communications. In this paper, we focus on and analyze the \emph{security performance} (SP) of the \emph{satellite-terrestrial downlink transmission} (STDT). Here, the STDT is composed of a satellite network and a vehicular network with a legitimate mobile receiver and a mobile eavesdropper. To better analyze the SP of this system theoretically from the perspective of the mobile terminals, random geometry theory is adopted, which assumes that both terrestrial vehicles are distributed stochastically within one beam of the satellite. Based on this theory, closed-form analytical expressions are derived for two crucial indicators of the STDT, namely the secrecy outage probability and the ergodic secrecy capacity. Additionally, several variables restricting the SP of the STDT are discussed, and specific schemes are presented to enhance the SP. The asymptotic property is then investigated in the high signal-to-noise ratio scenario, and both accurate and asymptotic closed-form expressions are given. Finally, simulation results show that, under the precondition of guaranteeing the reliability of the STDT, the asymptotic solutions significantly outperform the corresponding accurate results in effectiveness.
1810.12272
Dimitrios Diochnos
Dimitrios I. Diochnos, Saeed Mahloujifar, Mohammad Mahmoody
Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution
Full version of a work with the same title that will appear in NIPS 2018, 31 pages containing 5 figures, 1 table, 2 algorithms
null
null
null
cs.LG cs.CC cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study adversarial perturbations when the instances are uniformly distributed over $\{0,1\}^n$. We study both "inherent" bounds that apply to any problem and any classifier for such a problem, as well as bounds that apply to specific problems and specific hypothesis classes. As the current literature contains multiple definitions of adversarial risk and robustness, we start by giving a taxonomy for these definitions based on their goals, and identify one of them as the definition guaranteeing misclassification by pushing the instances to the error region. We then study some classic algorithms for learning monotone conjunctions and compare their adversarial risk and robustness under different definitions by attacking the hypotheses using instances drawn from the uniform distribution. We observe that sometimes these definitions lead to significantly different bounds. Thus, this study advocates the use of the error-region definition, even though other definitions, in other contexts, may coincide with it. Using the error-region definition of adversarial perturbations, we then study inherent bounds on the risk and robustness of any classifier for any classification problem whose instances are uniformly distributed over $\{0,1\}^n$. Using the isoperimetric inequality for the Boolean hypercube, we show that for initial error $0.01$, there always exists an adversarial perturbation that changes $O(\sqrt{n})$ bits of the instances to increase the risk to $0.5$, making the classifier's decisions meaningless. Furthermore, by also using the central limit theorem, we show that when $n\to \infty$, at most $c \cdot \sqrt{n}$ bits of perturbation, for a universal constant $c< 1.17$, suffice to increase the risk to $0.5$, and the same $c \cdot \sqrt{n}$ bits of perturbation on average suffice to increase the risk to $1$, hence bounding the robustness by $c \cdot \sqrt{n}$.
[ { "created": "Mon, 29 Oct 2018 17:41:29 GMT", "version": "v1" } ]
2018-10-30
[ [ "Diochnos", "Dimitrios I.", "" ], [ "Mahloujifar", "Saeed", "" ], [ "Mahmoody", "Mohammad", "" ] ]
We study adversarial perturbations when the instances are uniformly distributed over $\{0,1\}^n$. We study both "inherent" bounds that apply to any problem and any classifier for such a problem, as well as bounds that apply to specific problems and specific hypothesis classes. As the current literature contains multiple definitions of adversarial risk and robustness, we start by giving a taxonomy for these definitions based on their goals, and identify one of them as the definition guaranteeing misclassification by pushing the instances to the error region. We then study some classic algorithms for learning monotone conjunctions and compare their adversarial risk and robustness under different definitions by attacking the hypotheses using instances drawn from the uniform distribution. We observe that sometimes these definitions lead to significantly different bounds. Thus, this study advocates the use of the error-region definition, even though other definitions, in other contexts, may coincide with it. Using the error-region definition of adversarial perturbations, we then study inherent bounds on the risk and robustness of any classifier for any classification problem whose instances are uniformly distributed over $\{0,1\}^n$. Using the isoperimetric inequality for the Boolean hypercube, we show that for initial error $0.01$, there always exists an adversarial perturbation that changes $O(\sqrt{n})$ bits of the instances to increase the risk to $0.5$, making the classifier's decisions meaningless. Furthermore, by also using the central limit theorem, we show that when $n\to \infty$, at most $c \cdot \sqrt{n}$ bits of perturbation, for a universal constant $c< 1.17$, suffice to increase the risk to $0.5$, and the same $c \cdot \sqrt{n}$ bits of perturbation on average suffice to increase the risk to $1$, hence bounding the robustness by $c \cdot \sqrt{n}$.
1906.09934
Ioannis Chatzigiannakis
Chrysanthi Tziortzioti, Irene Mavrommati, Georgios Mylonas, Andrea Vitaletti, Ioannis Chatzigiannakis
Scenarios for Educational and Game Activities using Internet of Things Data
14 pages, 5 figures. arXiv admin note: text overlap with arXiv:1805.09561
null
null
null
cs.HC cs.CY
http://creativecommons.org/licenses/by/4.0/
Raising awareness among young people and changing their behavior and habits concerning energy usage and the environment is key to achieving a sustainable planet. Addressing the global climate problem requires informing the population of their roles in mitigation actions and the adoption of sustainable behaviors; meeting ambitious energy and climate targets likewise requires a change in citizen behavior and consumption practices. This paper examines IoT sensing and related scenarios and practices that address school children via discovery, gamification, and educational activities. The use of seawater sensors in STEM education, which has not previously been addressed, is included in these educational scenarios.
[ { "created": "Thu, 20 Jun 2019 18:26:12 GMT", "version": "v1" } ]
2019-06-25
[ [ "Tziortzioti", "Chrysanthi", "" ], [ "Mavrommati", "Irene", "" ], [ "Mylonas", "Georgios", "" ], [ "Vitaletti", "Andrea", "" ], [ "Chatzigiannakis", "Ioannis", "" ] ]
Raising awareness among young people and changing their behavior and habits concerning energy usage and the environment is key to achieving a sustainable planet. Addressing the global climate problem requires informing the population of their roles in mitigation actions and the adoption of sustainable behaviors; meeting ambitious energy and climate targets likewise requires a change in citizen behavior and consumption practices. This paper examines IoT sensing and related scenarios and practices that address school children via discovery, gamification, and educational activities. The use of seawater sensors in STEM education, which has not previously been addressed, is included in these educational scenarios.
0804.2155
Joseph Y. Halpern
Joseph Y. Halpern
From Qualitative to Quantitative Proofs of Security Properties Using First-Order Conditional Logic
null
null
null
null
cs.CR cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A first-order conditional logic is considered, with semantics given by a variant of epsilon-semantics, where p -> q means that Pr(q | p) approaches 1 super-polynomially -- faster than any inverse polynomial. This type of convergence is needed for reasoning about security protocols. A complete axiomatization is provided for this semantics, and it is shown how a qualitative proof of the correctness of a security protocol can be automatically converted to a quantitative proof appropriate for reasoning about concrete security.
[ { "created": "Mon, 14 Apr 2008 12:06:04 GMT", "version": "v1" } ]
2008-12-18
[ [ "Halpern", "Joseph Y.", "" ] ]
A first-order conditional logic is considered, with semantics given by a variant of epsilon-semantics, where p -> q means that Pr(q | p) approaches 1 super-polynomially -- faster than any inverse polynomial. This type of convergence is needed for reasoning about security protocols. A complete axiomatization is provided for this semantics, and it is shown how a qualitative proof of the correctness of a security protocol can be automatically converted to a quantitative proof appropriate for reasoning about concrete security.
1911.11534
Siyan Dong
Siyan Dong, Songyin Wu, Yixin Zhuang, Kai Xu, Shanghang Zhang, Baoquan Chen
Decoupling Features and Coordinates for Few-shot RGB Relocalization
This is a very early initialization of a research project and contains some out-of-date results and errors. A later version with significant improvements has been published as a new paper. See arXiv:2208.06933
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-scene model adaptation is crucial for camera relocalization in real scenarios. It is often preferable that a pre-learned model can be quickly adapted to a novel scene with as few training samples as possible. The existing state-of-the-art approaches, however, can hardly support such few-shot scene adaptation due to the entangling of image feature extraction and scene coordinate regression. To address this issue, we approach camera relocalization with a decoupled solution in which feature extraction, coordinate regression, and pose estimation are performed separately. Our key insight is that the feature encoder used for coordinate regression should be learned with the distracting factor of coordinate systems removed, so that the encoder is learned from multiple scenes for general feature representation and, more importantly, view-insensitive capability. With this feature prior, combined with a coordinate regressor, few-shot observations in a new scene are much easier to connect with the 3D world than with the existing integrated solutions. Experiments have shown the superiority of our approach compared to state-of-the-art methods, producing higher accuracy on several scenes with diverse visual appearance and viewpoint distribution.
[ { "created": "Tue, 26 Nov 2019 13:57:39 GMT", "version": "v1" }, { "created": "Sat, 1 Aug 2020 17:49:36 GMT", "version": "v2" }, { "created": "Tue, 4 Aug 2020 10:29:36 GMT", "version": "v3" }, { "created": "Tue, 16 Aug 2022 15:40:00 GMT", "version": "v4" } ]
2022-08-17
[ [ "Dong", "Siyan", "" ], [ "Wu", "Songyin", "" ], [ "Zhuang", "Yixin", "" ], [ "Xu", "Kai", "" ], [ "Zhang", "Shanghang", "" ], [ "Chen", "Baoquan", "" ] ]
Cross-scene model adaptation is crucial for camera relocalization in real scenarios. It is often preferable that a pre-learned model can be quickly adapted to a novel scene with as few training samples as possible. The existing state-of-the-art approaches, however, can hardly support such few-shot scene adaptation due to the entangling of image feature extraction and scene coordinate regression. To address this issue, we approach camera relocalization with a decoupled solution in which feature extraction, coordinate regression, and pose estimation are performed separately. Our key insight is that the feature encoder used for coordinate regression should be learned with the distracting factor of coordinate systems removed, so that the encoder is learned from multiple scenes for general feature representation and, more importantly, view-insensitive capability. With this feature prior, combined with a coordinate regressor, few-shot observations in a new scene are much easier to connect with the 3D world than with the existing integrated solutions. Experiments have shown the superiority of our approach compared to state-of-the-art methods, producing higher accuracy on several scenes with diverse visual appearance and viewpoint distribution.
2010.13986
Zhuqi Li
Zhuqi Li, Can Wu, Sigurd Wagner, James C. Sturm, Naveen Verma, Kyle Jamieson
REITS: Reflective Surface for Intelligent Transportation Systems
null
null
10.1145/3446382.3448650
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous vehicles are predicted to dominate the transportation industry in the foreseeable future. Safety is one of the major challenges to the early deployment of self-driving systems. To ensure safety, self-driving vehicles must sense and detect humans, other vehicles, and road infrastructure accurately, robustly, and in a timely manner. However, existing sensing techniques used by self-driving vehicles may not be absolutely reliable. In this paper, we design REITS, a system to improve the reliability of RF-based sensing modules for autonomous vehicles. We conduct a theoretical analysis of possible failures of existing RF-based sensing systems. Based on this analysis, REITS adopts a multi-antenna design, which enables constructive blind beamforming to return an enhanced radar signal in the incident direction. REITS can also let an existing radar system sense identification information by switching between a constructive beamforming state and a destructive beamforming state. Preliminary results show that REITS improves the detection distance of a self-driving car radar by a factor of 3.63.
[ { "created": "Tue, 27 Oct 2020 01:45:09 GMT", "version": "v1" }, { "created": "Mon, 16 Nov 2020 15:57:35 GMT", "version": "v2" }, { "created": "Tue, 2 Feb 2021 16:25:12 GMT", "version": "v3" } ]
2021-02-03
[ [ "Li", "Zhuqi", "" ], [ "Wu", "Can", "" ], [ "Wagner", "Sigurd", "" ], [ "Sturm", "James C.", "" ], [ "Verma", "Naveen", "" ], [ "Jamieson", "Kyle", "" ] ]
Autonomous vehicles are predicted to dominate the transportation industry in the foreseeable future. Safety is one of the major challenges to the early deployment of self-driving systems. To ensure safety, self-driving vehicles must sense and detect humans, other vehicles, and road infrastructure accurately, robustly, and in a timely manner. However, existing sensing techniques used by self-driving vehicles may not be absolutely reliable. In this paper, we design REITS, a system to improve the reliability of RF-based sensing modules for autonomous vehicles. We conduct a theoretical analysis of possible failures of existing RF-based sensing systems. Based on this analysis, REITS adopts a multi-antenna design, which enables constructive blind beamforming to return an enhanced radar signal in the incident direction. REITS can also let an existing radar system sense identification information by switching between a constructive beamforming state and a destructive beamforming state. Preliminary results show that REITS improves the detection distance of a self-driving car radar by a factor of 3.63.
1812.02428
Sai Sri Sathya
Sai Sri Sathya, Praneeth Vepakomma, Ramesh Raskar, Ranjan Ramachandra and Santanu Bhattacharya
A Review of Homomorphic Encryption Libraries for Secure Computation
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we provide a survey of various libraries for homomorphic encryption. We describe the key features and trade-offs that should be considered while choosing the right approach for secure computation. We then present a comparison of six commonly available homomorphic encryption libraries - SEAL, HElib, TFHE, Paillier, ElGamal, and RSA - across these identified features. Support for different languages and real-life applications is also elucidated.
[ { "created": "Thu, 6 Dec 2018 09:55:24 GMT", "version": "v1" }, { "created": "Fri, 7 Dec 2018 06:06:35 GMT", "version": "v2" } ]
2018-12-10
[ [ "Sathya", "Sai Sri", "" ], [ "Vepakomma", "Praneeth", "" ], [ "Raskar", "Ramesh", "" ], [ "Ramachandra", "Ranjan", "" ], [ "Bhattacharya", "Santanu", "" ] ]
In this paper we provide a survey of various libraries for homomorphic encryption. We describe the key features and trade-offs that should be considered while choosing the right approach for secure computation. We then present a comparison of six commonly available homomorphic encryption libraries - SEAL, HElib, TFHE, Paillier, ElGamal, and RSA - across these identified features. Support for different languages and real-life applications is also elucidated.
2406.17636
Alexander Gambashidze
Alexander Gambashidze, Anton Kulikov, Yuriy Sosnin, Ilya Makarov
Aligning Diffusion Models with Noise-Conditioned Perception
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent advancements in human preference optimization, initially developed for Language Models (LMs), have shown promise for text-to-image Diffusion Models, enhancing prompt alignment, visual appeal, and user preference. Unlike LMs, Diffusion Models typically optimize in pixel or VAE space, which does not align well with human perception, leading to slower and less efficient training during the preference alignment stage. We propose using a perceptual objective in the U-Net embedding space of the diffusion model to address these issues. Our approach involves fine-tuning Stable Diffusion 1.5 and XL using Direct Preference Optimization (DPO), Contrastive Preference Optimization (CPO), and supervised fine-tuning (SFT) within this embedding space. This method significantly outperforms standard latent-space implementations across various metrics, including quality and computational cost. For SDXL, our approach provides 60.8\% general preference, 62.2\% visual appeal, and 52.1\% prompt following against original open-sourced SDXL-DPO on the PartiPrompts dataset, while significantly reducing compute. Our approach not only improves the efficiency and quality of human preference alignment for diffusion models but is also easily integrable with other optimization techniques. The training code and LoRA weights will be available here: https://huggingface.co/alexgambashidze/SDXL\_NCP-DPO\_v0.1
[ { "created": "Tue, 25 Jun 2024 15:21:50 GMT", "version": "v1" } ]
2024-06-26
[ [ "Gambashidze", "Alexander", "" ], [ "Kulikov", "Anton", "" ], [ "Sosnin", "Yuriy", "" ], [ "Makarov", "Ilya", "" ] ]
Recent advancements in human preference optimization, initially developed for Language Models (LMs), have shown promise for text-to-image Diffusion Models, enhancing prompt alignment, visual appeal, and user preference. Unlike LMs, Diffusion Models typically optimize in pixel or VAE space, which does not align well with human perception, leading to slower and less efficient training during the preference alignment stage. We propose using a perceptual objective in the U-Net embedding space of the diffusion model to address these issues. Our approach involves fine-tuning Stable Diffusion 1.5 and XL using Direct Preference Optimization (DPO), Contrastive Preference Optimization (CPO), and supervised fine-tuning (SFT) within this embedding space. This method significantly outperforms standard latent-space implementations across various metrics, including quality and computational cost. For SDXL, our approach provides 60.8\% general preference, 62.2\% visual appeal, and 52.1\% prompt following against original open-sourced SDXL-DPO on the PartiPrompts dataset, while significantly reducing compute. Our approach not only improves the efficiency and quality of human preference alignment for diffusion models but is also easily integrable with other optimization techniques. The training code and LoRA weights will be available here: https://huggingface.co/alexgambashidze/SDXL\_NCP-DPO\_v0.1
2103.11874
Zemin Sun
Zemin Sun, Yanheng Liu, Jian Wang, Guofa Li, Carie Anil, Keqiang Li, Xinyu Guo, Geng Sun, Daxin Tian, Dongpu Cao
Applications of Game Theory in Vehicular Networks: A Survey
It has been published on "IEEE communications surveys and tutorials" (https://ieeexplore.ieee.org/document/9524815)
IEEE Communications Surveys & Tutorials, vol. 23, no. 4, pp. 2660-2710, Fourthquarter 2021
10.1109/COMST.2021.3108466
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the Internet of Things (IoT) era, vehicles and other intelligent components of an intelligent transportation system (ITS) are connected, forming Vehicular Networks (VNs) that support efficient and secure traffic and provide ubiquitous access to various applications. However, as the number of nodes in an ITS increases, it is challenging to satisfy a varied and large number of service requests with different Quality of Service and security requirements in highly dynamic VNs. Intelligent nodes in VNs can compete or cooperate for limited network resources to achieve either individual or group objectives. Game Theory (GT), a theoretical framework designed for strategic interactions among rational decision-makers sharing scarce resources, can be used to model and analyze the individual or group behaviors of communicating entities in VNs. This paper primarily surveys recent developments of GT in solving various challenges of VNs. The survey starts with an introduction to the background of VNs. A review of the GT models studied in VNs is then provided, covering their basic concepts, classifications, and applicable vehicular issues. After discussing the requirements of VNs and the motivation for using GT, a comprehensive literature review of GT applications in dealing with the challenges of current VNs is provided. Furthermore, recent contributions of GT to VNs integrated with diverse emerging 5G technologies are surveyed. Finally, lessons learned are given, and several key research challenges and possible solutions for applying GT in VNs are outlined.
[ { "created": "Mon, 22 Mar 2021 14:09:33 GMT", "version": "v1" }, { "created": "Wed, 5 Jan 2022 13:22:20 GMT", "version": "v2" } ]
2022-01-06
[ [ "Sun", "Zemin", "" ], [ "Liu", "Yanheng", "" ], [ "Wang", "Jian", "" ], [ "Li", "Guofa", "" ], [ "Anil", "Carie", "" ], [ "Li", "Keqiang", "" ], [ "Guo", "Xinyu", "" ], [ "Sun", "Geng", "" ], [ "Tian", "Daxin", "" ], [ "Cao", "Dongpu", "" ] ]
In the Internet of Things (IoT) era, vehicles and other intelligent components of an intelligent transportation system (ITS) are connected, forming Vehicular Networks (VNs) that support efficient and secure traffic and provide ubiquitous access to various applications. However, as the number of nodes in an ITS increases, it is challenging to satisfy a varied and large number of service requests with different Quality of Service and security requirements in highly dynamic VNs. Intelligent nodes in VNs can compete or cooperate for limited network resources to achieve either individual or group objectives. Game Theory (GT), a theoretical framework designed for strategic interactions among rational decision-makers sharing scarce resources, can be used to model and analyze the individual or group behaviors of communicating entities in VNs. This paper primarily surveys recent developments of GT in solving various challenges of VNs. The survey starts with an introduction to the background of VNs. A review of the GT models studied in VNs is then provided, covering their basic concepts, classifications, and applicable vehicular issues. After discussing the requirements of VNs and the motivation for using GT, a comprehensive literature review of GT applications in dealing with the challenges of current VNs is provided. Furthermore, recent contributions of GT to VNs integrated with diverse emerging 5G technologies are surveyed. Finally, lessons learned are given, and several key research challenges and possible solutions for applying GT in VNs are outlined.
1701.08547
Robert Lim
Robert V. Lim, Boyana Norris, Allen D. Malony
Autotuning GPU Kernels via Static and Predictive Analysis
null
null
null
null
cs.DC cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimizing the performance of GPU kernels is challenging for both human programmers and code generators. For example, CUDA programmers must set thread and block parameters for a kernel, but might not have the intuition to make a good choice. Similarly, compilers can generate working code, but may miss tuning opportunities by not targeting GPU models or performing code transformations. Although empirical autotuning addresses some of these challenges, it requires extensive experimentation and search for optimal code variants. This research presents an approach for tuning CUDA kernels based on static analysis that considers fine-grained code structure and the specific GPU architecture features. Notably, our approach does not require any program runs in order to discover near-optimal parameter settings. We demonstrate the applicability of our approach in enabling code autotuners such as Orio to produce competitive code variants comparable with empirical-based methods, without the high cost of experiments.
[ { "created": "Mon, 30 Jan 2017 11:23:42 GMT", "version": "v1" }, { "created": "Thu, 11 May 2017 22:27:32 GMT", "version": "v2" }, { "created": "Thu, 29 Jun 2017 11:25:02 GMT", "version": "v3" } ]
2017-06-30
[ [ "Lim", "Robert V.", "" ], [ "Norris", "Boyana", "" ], [ "Malony", "Allen D.", "" ] ]
2010.12176
Yuxi Li
Yuxi Li, Ning Xu, Jinlong Peng, John See, Weiyao Lin
Delving into the Cyclic Mechanism in Semi-supervised Video Object Segmentation
13 pages, 10 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address several inadequacies of current video object segmentation pipelines. First, a cyclic mechanism is incorporated into the standard semi-supervised process to produce more robust representations. By relying on the accurate reference mask in the starting frame, we show that the error propagation problem can be mitigated. Next, we introduce a simple gradient correction module, which extends the offline pipeline to an online method while maintaining the efficiency of the former. Finally, we develop the cycle effective receptive field (cycle-ERF) based on gradient correction to provide a new perspective for analyzing object-specific regions of interest. We conduct comprehensive experiments on the challenging DAVIS17 and YouTube-VOS benchmarks, demonstrating that the cyclic mechanism is beneficial to segmentation quality.
[ { "created": "Fri, 23 Oct 2020 05:40:53 GMT", "version": "v1" } ]
2020-10-26
[ [ "Li", "Yuxi", "" ], [ "Xu", "Ning", "" ], [ "Peng", "Jinlong", "" ], [ "See", "John", "" ], [ "Lin", "Weiyao", "" ] ]
1903.07840
John Skinner
John Skinner, David Hall, Haoyang Zhang, Feras Dayoub, Niko S\"underhauf
The Probabilistic Object Detection Challenge
4 pages, workshop paper
null
null
null
cs.RO cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new challenge for computer and robotic vision, the first ACRV Robotic Vision Challenge, Probabilistic Object Detection. Probabilistic object detection is a new variation on traditional object detection tasks, requiring estimates of spatial and semantic uncertainty. We extend the traditional bounding box format of object detection to express spatial uncertainty using Gaussian distributions for the box corners. The challenge introduces a new test dataset of video sequences, which are designed to more closely resemble the kind of data available to a robotic system. We evaluate probabilistic detections using a new probability-based detection quality (PDQ) measure. The goal in creating this challenge is to draw the computer and robotic vision communities together, toward applying object detection solutions to practical robotics applications.
[ { "created": "Tue, 19 Mar 2019 05:18:52 GMT", "version": "v1" }, { "created": "Mon, 8 Apr 2019 00:58:24 GMT", "version": "v2" } ]
2019-04-09
[ [ "Skinner", "John", "" ], [ "Hall", "David", "" ], [ "Zhang", "Haoyang", "" ], [ "Dayoub", "Feras", "" ], [ "Sünderhauf", "Niko", "" ] ]
1808.04510
Kanstantsin Pashkovich
Robert Chiang and Kanstantsin Pashkovich
On the approximability of the stable matching problem with ties of size two
Added a detailed comparison with the approaches from the papers "On the approximability of the stable marriage problem with one-sided ties." by Bauckholt, Pashkovich, and Sanita and "Improved approximation algorithms for two variants of the stable marriage problem with ties." by Huang and Kavitha. (See Appendix)
null
null
null
cs.GT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The stable matching problem is one of the central problems of algorithmic game theory. If participants are allowed to have ties, the problem of finding a stable matching of maximum cardinality is an NP-hard problem, even when the ties are of size two. Moreover, in this setting it is UGC-hard to provide an approximation for the maximum cardinality stable matching problem with a constant factor smaller than 4/3. In this paper, we give a tight analysis of an approximation algorithm given by Huang and Kavitha for the maximum cardinality stable matching problem with ties of size two, demonstrating an improved 4/3-approximation factor.
[ { "created": "Tue, 14 Aug 2018 02:47:05 GMT", "version": "v1" }, { "created": "Fri, 15 Feb 2019 21:59:31 GMT", "version": "v2" } ]
2019-02-19
[ [ "Chiang", "Robert", "" ], [ "Pashkovich", "Kanstantsin", "" ] ]
1603.00977
EPTCS
Mahdi Amani (Universit\`a di Pisa, Pisa, Italy), Abbas Nowzari-Dalini (UT, Tehran, Iran)
Generation, Ranking and Unranking of Ordered Trees with Degree Bounds
In Proceedings DCM 2015, arXiv:1603.00536
EPTCS 204, 2016, pp. 31-45
10.4204/EPTCS.204.4
null
cs.CC cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of generating, ranking, and unranking unlabeled ordered trees whose nodes have a maximum degree of $\Delta$. This class of trees represents a generalization of chemical trees. A chemical tree is an unlabeled tree in which no node has degree greater than 4. By allowing up to $\Delta$ children for each node of a chemical tree instead of 4, we obtain a generalization of chemical trees. Here, we introduce a new encoding over an alphabet of size 4 for representing unlabeled ordered trees with maximum degree $\Delta$. We use this encoding to generate these trees in A-order with constant average time and $O(n)$ worst-case time. Due to the given encoding, with a precomputation of size and time $O(n^2)$ (assuming $\Delta$ is constant), ranking and unranking algorithms are also designed, taking $O(n)$ and $O(n \log n)$ time, respectively.
[ { "created": "Thu, 3 Mar 2016 05:33:50 GMT", "version": "v1" } ]
2016-08-22
[ [ "Amani", "Mahdi", "", "Università di Pisa, Pisa, Italy" ], [ "Nowzari-Dalini", "Abbas", "", "UT, Tehran, Iran" ] ]
2005.14253
Nicholas FitzGerald
Thibault F\'evry, Nicholas FitzGerald, Livio Baldini Soares, Tom Kwiatkowski
Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking
11 pages, 8 figures, appearing at AKBC 2020
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present an entity linking model which combines a Transformer architecture with large scale pretraining from Wikipedia links. Our model achieves the state-of-the-art on two commonly used entity linking datasets: 96.7% on CoNLL and 94.9% on TAC-KBP. We present detailed analyses to understand what design choices are important for entity linking, including choices of negative entity candidates, Transformer architecture, and input perturbations. Lastly, we present promising results on more challenging settings such as end-to-end entity linking and entity linking without in-domain training data.
[ { "created": "Thu, 28 May 2020 19:32:52 GMT", "version": "v1" } ]
2020-06-01
[ [ "Févry", "Thibault", "" ], [ "FitzGerald", "Nicholas", "" ], [ "Soares", "Livio Baldini", "" ], [ "Kwiatkowski", "Tom", "" ] ]
2406.12746
Miaoyu Li
Miaoyu Li, Haoxin Li, Zilin Du, and Boyang Li
Rationale-based Ensemble of Multiple QA Strategies for Zero-shot Knowledge-based VQA
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge-based Visual Question-answering (K-VQA) necessitates the use of background knowledge beyond what is depicted in the image. Current zero-shot K-VQA methods usually translate an image to a single type of textual decision context and use a text-based model to answer the question based on it, which conflicts with the fact that K-VQA questions often require the combination of multiple question-answering strategies. In light of this, we propose Rationale-based Ensemble of Answer Context Tactics (REACT) to achieve a dynamic ensemble of multiple question-answering tactics, comprising Answer Candidate Generation (ACG) and Rationale-based Strategy Fusion (RSF). In ACG, we generate three distinctive decision contexts to provide different strategies for each question, resulting in the generation of three answer candidates. RSF generates automatic and mechanistic rationales from decision contexts for each candidate, allowing the model to select the correct answer from all candidates. We conduct comprehensive experiments on the OK-VQA and A-OKVQA datasets, and our method significantly outperforms state-of-the-art LLM-based baselines on all datasets.
[ { "created": "Tue, 18 Jun 2024 16:06:38 GMT", "version": "v1" }, { "created": "Wed, 19 Jun 2024 02:02:13 GMT", "version": "v2" }, { "created": "Sun, 23 Jun 2024 03:06:42 GMT", "version": "v3" } ]
2024-06-25
[ [ "Li", "Miaoyu", "" ], [ "Li", "Haoxin", "" ], [ "Du", "Zilin", "" ], [ "Li", "Boyang", "" ] ]
1412.8501
Eli A. Meirom
Eli A. Meirom, Shie Mannor, Ariel Orda
Formation Games of Reliable Networks
null
null
null
null
cs.GT cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We establish a network formation game for the Internet's Autonomous System (AS) interconnection topology. The game includes different types of players, accounting for the heterogeneity of ASs in the Internet. We incorporate reliability considerations in the player's utility function, and analyze static properties of the game as well as its dynamic evolution. We provide dynamic analysis of its topological quantities, and explain the prevalence of some "network motifs" in the Internet graph. We assess our predictions with real-world data.
[ { "created": "Mon, 29 Dec 2014 22:38:53 GMT", "version": "v1" } ]
2014-12-31
[ [ "Meirom", "Eli A.", "" ], [ "Mannor", "Shie", "" ], [ "Orda", "Ariel", "" ] ]
1410.4207
Claudio Criscione
Enrico Bazzoli, Claudio Criscione, Federico Maggi, Stefano Zanero
XSS Peeker: A Systematic Analysis of Cross-site Scripting Vulnerability Scanners
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the first publication of the "OWASP Top 10" (2004), cross-site scripting (XSS) vulnerabilities have always been among the top 5 web application security bugs. Black-box vulnerability scanners are widely used in the industry to reproduce XSS attacks automatically. In spite of their technical sophistication and advancement, previous work showed that black-box scanners miss a non-negligible portion of vulnerabilities, and report non-existing, non-exploitable, or uninteresting vulnerabilities. Unfortunately, these results hold true even for XSS vulnerabilities, which are relatively simple to trigger compared, for instance, to logic flaws. Black-box scanners have not been studied in depth on this vertical: knowing precisely how scanners try to detect XSS can provide useful insights to understand their limitations and to design better detection methods. In this paper, we present and discuss the results of a detailed and systematic study on 6 black-box web scanners (both proprietary and open source) that we conducted in coordination with the respective vendors. To this end, we developed an automated tool to (1) extract the payloads used by each scanner, (2) distill the "templates" that have originated each payload, (3) evaluate them according to quality indicators, and (4) perform a cross-scanner analysis. Unlike previous work, our testbed application, which contains a large set of XSS vulnerabilities, including DOM XSS, was gradually retrofitted to accommodate the payloads that triggered no vulnerabilities. Our analysis reveals a highly fragmented scenario. Scanners exhibit a wide variety of distinct payloads, a non-uniform approach to fuzzing and mutating the payloads, and a very diverse detection effectiveness.
[ { "created": "Wed, 15 Oct 2014 20:03:07 GMT", "version": "v1" } ]
2014-10-17
[ [ "Bazzoli", "Enrico", "" ], [ "Criscione", "Claudio", "" ], [ "Maggi", "Federico", "" ], [ "Zanero", "Stefano", "" ] ]
2001.04484
Luca Papariello
Luca Papariello, Alexandros Bampoulidis, Mihai Lupu
On the Replicability of Combining Word Embeddings and Retrieval Models
null
null
null
null
cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We replicate recent experiments attempting to demonstrate an attractive hypothesis about the use of the Fisher kernel framework and mixture models for aggregating word embeddings towards document representations and the use of these representations in document classification, clustering, and retrieval. Specifically, the hypothesis was that the use of a mixture model of von Mises-Fisher (VMF) distributions instead of Gaussian distributions would be beneficial because of the focus on cosine distances of both VMF and the vector space model traditionally used in information retrieval. Previous experiments had validated this hypothesis. Our replication was not able to validate it, despite a large parameter scan space.
[ { "created": "Mon, 13 Jan 2020 19:01:07 GMT", "version": "v1" } ]
2020-01-15
[ [ "Papariello", "Luca", "" ], [ "Bampoulidis", "Alexandros", "" ], [ "Lupu", "Mihai", "" ] ]
2403.09547
Jo\~ao Helis Bernardo
Jo\~ao Helis Bernardo, Daniel Alencar da Costa, S\'ergio Queiroz de Medeiros, Uir\'a Kulesza
How do Machine Learning Projects use Continuous Integration Practices? An Empirical Study on GitHub Actions
10 pages, Mining Software Repositories, MSR 2024
null
null
null
cs.SE cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Continuous Integration (CI) is a well-established practice in traditional software development, but its nuances in the domain of Machine Learning (ML) projects remain relatively unexplored. Given the distinctive nature of ML development, understanding how CI practices are adopted in this context is crucial for tailoring effective approaches. In this study, we conduct a comprehensive analysis of 185 open-source projects on GitHub (93 ML and 92 non-ML projects). Our investigation comprises both quantitative and qualitative dimensions, aiming to uncover differences in CI adoption between ML and non-ML projects. Our findings indicate that ML projects often require longer build durations, and medium-sized ML projects exhibit lower test coverage compared to non-ML projects. Moreover, small and medium-sized ML projects show a higher prevalence of increasing build duration trends compared to their non-ML counterparts. Additionally, our qualitative analysis illuminates the discussions around CI in both ML and non-ML projects, encompassing themes like CI Build Execution and Status, CI Testing, and CI Infrastructure. These insights shed light on the unique challenges faced by ML projects in adopting CI practices effectively.
[ { "created": "Thu, 14 Mar 2024 16:35:39 GMT", "version": "v1" } ]
2024-03-15
[ [ "Bernardo", "João Helis", "" ], [ "da Costa", "Daniel Alencar", "" ], [ "de Medeiros", "Sérgio Queiroz", "" ], [ "Kulesza", "Uirá", "" ] ]
1902.09103
Tianwei Shen
Tianwei Shen, Zixin Luo, Lei Zhou, Hanyu Deng, Runze Zhang, Tian Fang, Long Quan
Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation
Accepted by ICRA 2019
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, the self-supervised learning framework that jointly optimizes the relative pose and target image depth has attracted the attention of the community. Previous works rely on the photometric error generated from depths and poses between adjacent frames, which contains large systematic error under realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing the matching loss constrained by epipolar geometry in a self-supervised framework. Evaluated on the KITTI dataset, our method outperforms the state-of-the-art unsupervised ego-motion estimation methods by a large margin. The code and data are available at https://github.com/hlzz/DeepMatchVO.
[ { "created": "Mon, 25 Feb 2019 06:22:52 GMT", "version": "v1" } ]
2019-02-26
[ [ "Shen", "Tianwei", "" ], [ "Luo", "Zixin", "" ], [ "Zhou", "Lei", "" ], [ "Deng", "Hanyu", "" ], [ "Zhang", "Runze", "" ], [ "Fang", "Tian", "" ], [ "Quan", "Long", "" ] ]
2305.19157
Reza Faieghi
S. Mohammadreza Ebrahimi, Farid Norouzi, Hossein Dastres, Reza Faieghi, Mehdi Naderi, Milad Malekzadeh
Sensor Fault Detection and Compensation with Performance Prescription for Robotic Manipulators
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on sensor fault detection and compensation for robotic manipulators. The proposed method features a new adaptive observer and a new terminal sliding mode control law established on a second-order integral sliding surface. The method enables sensor fault detection without the need to know the bounds on fault value and/or its derivative. It also enables fast and fixed-time fault-tolerant control whose performance can be prescribed beforehand by defining funnel bounds on the tracking error. The ultimate boundedness of the estimation errors for the proposed observer and the fixed-time stability of the control system are shown using Lyapunov stability analysis. The effectiveness of the proposed method is verified using numerical simulations on two different robotic manipulators, and the results are compared with existing methods. Our results demonstrate performance gains obtained by the proposed method compared to the existing results.
[ { "created": "Tue, 30 May 2023 15:58:56 GMT", "version": "v1" }, { "created": "Tue, 19 Mar 2024 01:09:34 GMT", "version": "v2" } ]
2024-03-20
[ [ "Ebrahimi", "S. Mohammadreza", "" ], [ "Norouzi", "Farid", "" ], [ "Dastres", "Hossein", "" ], [ "Faieghi", "Reza", "" ], [ "Naderi", "Mehdi", "" ], [ "Malekzadeh", "Milad", "" ] ]
2102.11649
Rafa\"el Bocquet
Rafa\"el Bocquet and Ambrus Kaposi and Christian Sattler
Relative induction principles for type theories
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present new induction principles for the syntax of dependent type theories, which we call relative induction principles. The result of the induction principle relative to a functor F into the syntax is stable over the codomain of F. We rely on the internal language of presheaf categories. In order to combine the internal languages of multiple presheaf categories, we use Dependent Right Adjoints and Multimodal Type Theory. Categorical gluing is used to prove these induction principles, but it is not visible in their statements, which involve a notion of model without context extensions. As example applications of these induction principles, we give short and boilerplate-free proofs of canonicity and normalization for some small type theories, and sketch proofs of other metatheoretic results.
[ { "created": "Tue, 23 Feb 2021 12:08:25 GMT", "version": "v1" }, { "created": "Mon, 19 Jul 2021 12:42:20 GMT", "version": "v2" } ]
2021-07-20
[ [ "Bocquet", "Rafaël", "" ], [ "Kaposi", "Ambrus", "" ], [ "Sattler", "Christian", "" ] ]
We present new induction principles for the syntax of dependent type theories, which we call relative induction principles. The result of the induction principle relative to a functor F into the syntax is stable over the codomain of F. We rely on the internal language of presheaf categories. In order to combine the internal languages of multiple presheaf categories, we use Dependent Right Adjoints and Multimodal Type Theory. Categorical gluing is used to prove these induction principles, but it is not visible in their statements, which involve a notion of model without context extensions. As example applications of these induction principles, we give short and boilerplate-free proofs of canonicity and normalization for some small type theories, and sketch proofs of other metatheoretic results.
1811.05233
Yuichi Kageyama
Hiroaki Mikami, Hisahiro Suganuma, Pongsakorn U-chupala, Yoshiki Tanaka and Yuichi Kageyama
Massively Distributed SGD: ImageNet/ResNet-50 Training in a Flash
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scaling distributed deep learning to a massive GPU cluster is challenging due to the instability of large mini-batch training and the overhead of gradient synchronization. We address the instability of large mini-batch training with batch-size control and label smoothing. We address the overhead of gradient synchronization with 2D-Torus all-reduce. Specifically, 2D-Torus all-reduce arranges GPUs in a logical 2D grid and performs a series of collective operations in different orientations. These two techniques are implemented with Neural Network Libraries (NNL). We have successfully trained ImageNet/ResNet-50 in 122 seconds without significant accuracy loss on the ABCI cluster.
[ { "created": "Tue, 13 Nov 2018 11:52:04 GMT", "version": "v1" }, { "created": "Tue, 5 Mar 2019 09:18:09 GMT", "version": "v2" } ]
2019-03-06
[ [ "Mikami", "Hiroaki", "" ], [ "Suganuma", "Hisahiro", "" ], [ "U-chupala", "Pongsakorn", "" ], [ "Tanaka", "Yoshiki", "" ], [ "Kageyama", "Yuichi", "" ] ]
Scaling distributed deep learning to a massive GPU cluster is challenging due to the instability of large mini-batch training and the overhead of gradient synchronization. We address the instability of large mini-batch training with batch-size control and label smoothing. We address the overhead of gradient synchronization with 2D-Torus all-reduce. Specifically, 2D-Torus all-reduce arranges GPUs in a logical 2D grid and performs a series of collective operations in different orientations. These two techniques are implemented with Neural Network Libraries (NNL). We have successfully trained ImageNet/ResNet-50 in 122 seconds without significant accuracy loss on the ABCI cluster.
2401.02636
Haitao Wang
Haitao Wang
Algorithms for Computing Closest Points for Segments
Accepted to STACS 2024
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a set $P$ of $n$ points and a set $S$ of $n$ segments in the plane, we consider the problem of computing for each segment of $S$ its closest point in $P$. The previously best algorithm solves the problem in $n^{4/3}2^{O(\log^*n)}$ time [Bespamyatnikh, 2003], and a lower bound of $\Omega(n^{4/3})$ (under a somewhat restricted model) has also been proved. In this paper, we present an $O(n^{4/3})$ time algorithm and thus solve the problem optimally (under the restricted model). In addition, we also present data structures for solving the online version of the problem, i.e., given a query segment (or a line as a special case), find its closest point in $P$. Our new results improve upon the previous work.
[ { "created": "Fri, 5 Jan 2024 04:56:22 GMT", "version": "v1" } ]
2024-01-08
[ [ "Wang", "Haitao", "" ] ]
Given a set $P$ of $n$ points and a set $S$ of $n$ segments in the plane, we consider the problem of computing for each segment of $S$ its closest point in $P$. The previously best algorithm solves the problem in $n^{4/3}2^{O(\log^*n)}$ time [Bespamyatnikh, 2003], and a lower bound of $\Omega(n^{4/3})$ (under a somewhat restricted model) has also been proved. In this paper, we present an $O(n^{4/3})$ time algorithm and thus solve the problem optimally (under the restricted model). In addition, we also present data structures for solving the online version of the problem, i.e., given a query segment (or a line as a special case), find its closest point in $P$. Our new results improve upon the previous work.
2303.12594
Jie Luo
Jie Luo, Carlo Longhi and Agoston E. Eiben
A Comparative Study of Brain Reproduction Methods for Morphologically Evolving Robots
8 pages, ALife
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In the most extensive robot evolution systems, both the bodies and the brains of the robots undergo evolution and the brains of 'infant' robots are also optimized by a learning process immediately after 'birth'. This paper is concerned with the brain evolution mechanism in such a system. In particular, we compare four options obtained by combining asexual or sexual brain reproduction with Darwinian or Lamarckian evolution mechanisms. We conduct experiments in simulation with a system of evolvable modular robots on two different tasks. The results show that sexual reproduction of the robots' brains is preferable in the Darwinian framework, but the effect is the opposite in the Lamarckian system (both using the same infant learning method). Our experiments suggest that the overall best option is asexual reproduction combined with the Lamarckian framework, as it obtains better robots in terms of fitness than the other three. Considering the evolved morphologies, the different brain reproduction methods do not lead to differences. This result indicates that the morphology of the robot is mainly determined by the task and the environment, not by the brain reproduction methods.
[ { "created": "Wed, 22 Mar 2023 14:31:52 GMT", "version": "v1" }, { "created": "Mon, 29 May 2023 07:23:32 GMT", "version": "v2" }, { "created": "Tue, 30 May 2023 12:01:04 GMT", "version": "v3" } ]
2023-05-31
[ [ "Luo", "Jie", "" ], [ "Longhi", "Carlo", "" ], [ "Eiben", "Agoston E.", "" ] ]
In the most extensive robot evolution systems, both the bodies and the brains of the robots undergo evolution and the brains of 'infant' robots are also optimized by a learning process immediately after 'birth'. This paper is concerned with the brain evolution mechanism in such a system. In particular, we compare four options obtained by combining asexual or sexual brain reproduction with Darwinian or Lamarckian evolution mechanisms. We conduct experiments in simulation with a system of evolvable modular robots on two different tasks. The results show that sexual reproduction of the robots' brains is preferable in the Darwinian framework, but the effect is the opposite in the Lamarckian system (both using the same infant learning method). Our experiments suggest that the overall best option is asexual reproduction combined with the Lamarckian framework, as it obtains better robots in terms of fitness than the other three. Considering the evolved morphologies, the different brain reproduction methods do not lead to differences. This result indicates that the morphology of the robot is mainly determined by the task and the environment, not by the brain reproduction methods.
2401.13656
Federico Cinus
Ernesto Colacrai, Federico Cinus, Gianmarco De Francisci Morales, Michele Starnini
Navigating Multidimensional Ideologies with Reddit's Political Compass: Economic Conflict and Social Affinity
null
null
null
null
cs.SI cs.CY physics.soc-ph stat.AP
http://creativecommons.org/licenses/by/4.0/
The prevalent perspective in quantitative research on opinion dynamics flattens the landscape of the online political discourse into a traditional left--right dichotomy. While this approach helps simplify the analysis and modeling effort, it also neglects the intrinsic multidimensional richness of ideologies. In this study, we analyze social interactions on Reddit through the lens of a multi-dimensional ideological framework: the political compass. We examine over 8 million comments posted on the subreddits /r/PoliticalCompass and /r/PoliticalCompassMemes during 2020--2022. By leveraging their self-declarations, we disentangle the ideological dimensions of users into economic (left--right) and social (libertarian--authoritarian) axes. In addition, we characterize users by their demographic attributes (age, gender, and affluence). We find significant homophily for interactions along the social axis of the political compass and demographic attributes. Compared to a null model, interactions among individuals of similar ideology surpass expectations by 6%. In contrast, we uncover a significant heterophily along the economic axis: left/right interactions exceed expectations by 10%. Furthermore, heterophilic interactions are characterized by a higher language toxicity than homophilic interactions, which hints at a conflictual discourse between opposing ideologies. Our results help reconcile apparent contradictions in recent literature, which found a superposition of homophilic and heterophilic interactions in online political discussions. By disentangling such interactions into the economic and social axes we pave the way for a deeper understanding of opinion dynamics on social media.
[ { "created": "Wed, 24 Jan 2024 18:49:19 GMT", "version": "v1" } ]
2024-01-25
[ [ "Colacrai", "Ernesto", "" ], [ "Cinus", "Federico", "" ], [ "Morales", "Gianmarco De Francisci", "" ], [ "Starnini", "Michele", "" ] ]
The prevalent perspective in quantitative research on opinion dynamics flattens the landscape of the online political discourse into a traditional left--right dichotomy. While this approach helps simplify the analysis and modeling effort, it also neglects the intrinsic multidimensional richness of ideologies. In this study, we analyze social interactions on Reddit through the lens of a multi-dimensional ideological framework: the political compass. We examine over 8 million comments posted on the subreddits /r/PoliticalCompass and /r/PoliticalCompassMemes during 2020--2022. By leveraging their self-declarations, we disentangle the ideological dimensions of users into economic (left--right) and social (libertarian--authoritarian) axes. In addition, we characterize users by their demographic attributes (age, gender, and affluence). We find significant homophily for interactions along the social axis of the political compass and demographic attributes. Compared to a null model, interactions among individuals of similar ideology surpass expectations by 6%. In contrast, we uncover a significant heterophily along the economic axis: left/right interactions exceed expectations by 10%. Furthermore, heterophilic interactions are characterized by a higher language toxicity than homophilic interactions, which hints at a conflictual discourse between opposing ideologies. Our results help reconcile apparent contradictions in recent literature, which found a superposition of homophilic and heterophilic interactions in online political discussions. By disentangling such interactions into the economic and social axes we pave the way for a deeper understanding of opinion dynamics on social media.
1807.06958
Dimitar Nikolov
Dimitar Nikolov and Mounia Lalmas and Alessandro Flammini and Filippo Menczer
Quantifying Biases in Online Information Exposure
25 pages, 10 figures, to appear in the Journal of the Association for Information Science and Technology (JASIST)
JASIST 70 (3): 218-229, 2019
10.1002/asi.24121
null
cs.SI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our consumption of online information is mediated by filtering, ranking, and recommendation algorithms that introduce unintentional biases as they attempt to deliver relevant and engaging content. It has been suggested that our reliance on online technologies such as search engines and social media may limit exposure to diverse points of view and make us vulnerable to manipulation by disinformation. In this paper, we mine a massive dataset of Web traffic to quantify two kinds of bias: (i) homogeneity bias, which is the tendency to consume content from a narrow set of information sources, and (ii) popularity bias, which is the selective exposure to content from top sites. Our analysis reveals different bias levels across several widely used Web platforms. Search exposes users to a diverse set of sources, while social media traffic tends to exhibit high popularity and homogeneity bias. When we focus our analysis on traffic to news sites, we find higher levels of popularity bias, with smaller differences across applications. Overall, our results quantify the extent to which our choices of online systems confine us inside "social bubbles."
[ { "created": "Wed, 18 Jul 2018 14:19:49 GMT", "version": "v1" } ]
2020-10-07
[ [ "Nikolov", "Dimitar", "" ], [ "Lalmas", "Mounia", "" ], [ "Flammini", "Alessandro", "" ], [ "Menczer", "Filippo", "" ] ]
Our consumption of online information is mediated by filtering, ranking, and recommendation algorithms that introduce unintentional biases as they attempt to deliver relevant and engaging content. It has been suggested that our reliance on online technologies such as search engines and social media may limit exposure to diverse points of view and make us vulnerable to manipulation by disinformation. In this paper, we mine a massive dataset of Web traffic to quantify two kinds of bias: (i) homogeneity bias, which is the tendency to consume content from a narrow set of information sources, and (ii) popularity bias, which is the selective exposure to content from top sites. Our analysis reveals different bias levels across several widely used Web platforms. Search exposes users to a diverse set of sources, while social media traffic tends to exhibit high popularity and homogeneity bias. When we focus our analysis on traffic to news sites, we find higher levels of popularity bias, with smaller differences across applications. Overall, our results quantify the extent to which our choices of online systems confine us inside "social bubbles."
2406.02034
Soha Hussein
Soha Hussein, Stephen McCamant, Mike Whalen
Generator-Based Fuzzers with Type-Based Targeted Mutation
Fixing rendering of figure
null
null
null
cs.SE
http://creativecommons.org/licenses/by-sa/4.0/
As with any fuzzer, directing Generator-Based Fuzzers (GBF) to reach particular code targets can increase the fuzzer's effectiveness. In previous work, coverage-guided fuzzers used a mix of static analysis, taint analysis, and constraint-solving approaches to address this problem. However, none of these techniques were particularly crafted for GBF, where input generators are used to construct program inputs. The observation is that input generators carry information about the input structure that is naturally present through the typing composition of the program input. In this paper, we introduce a type-based mutation heuristic, along with constant string lookup, for Java GBF. Our key intuition is that if one can identify which sub-parts (types) of the input will likely influence the branching decision, then focusing on mutating the choices of the generators constructing these types is likely to achieve the desired coverage. We used our technique to fuzz AWSLambda applications. Results compared to a baseline GBF tool show an almost 20\% average improvement in application coverage, and larger improvements when third-party code is included.
[ { "created": "Tue, 4 Jun 2024 07:20:13 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2024 07:32:41 GMT", "version": "v2" } ]
2024-06-13
[ [ "Hussein", "Soha", "" ], [ "McCamant", "Stephen", "" ], [ "Whalen", "Mike", "" ] ]
As with any fuzzer, directing Generator-Based Fuzzers (GBF) to reach particular code targets can increase the fuzzer's effectiveness. In previous work, coverage-guided fuzzers used a mix of static analysis, taint analysis, and constraint-solving approaches to address this problem. However, none of these techniques were particularly crafted for GBF, where input generators are used to construct program inputs. The observation is that input generators carry information about the input structure that is naturally present through the typing composition of the program input. In this paper, we introduce a type-based mutation heuristic, along with constant string lookup, for Java GBF. Our key intuition is that if one can identify which sub-parts (types) of the input will likely influence the branching decision, then focusing on mutating the choices of the generators constructing these types is likely to achieve the desired coverage. We used our technique to fuzz AWSLambda applications. Results compared to a baseline GBF tool show an almost 20\% average improvement in application coverage, and larger improvements when third-party code is included.