Dataset schema (column, type, min/max length or class count):

column          type            min      max
id              stringlengths   9        10
submitter       stringlengths   1        64
authors         stringlengths   4        20.7k
title           stringlengths   4        246
comments        stringlengths   1        523
journal-ref     stringlengths   4        404
doi             stringlengths   11       153
report-no       stringlengths   2        254
categories      stringlengths   5        98
license         stringclasses   9 values
orig_abstract   stringlengths   14       3.35k
versions        listlengths     1        60
update_date     stringlengths   10       10
authors_parsed  listlengths     1        1.35k
abstract        stringlengths   11       3.34k
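The schema above can be exercised with a short sketch (illustrative only: the dict-record shape and the subset of fields checked are assumptions about how this dataset is loaded):

```python
# Validate one record against the length bounds in the schema above.
# Only a few string fields are checked here for brevity.

SCHEMA = {
    "id": (9, 10),
    "submitter": (1, 64),
    "title": (4, 246),
    "update_date": (10, 10),
}

def validate(record):
    """Return the names of string fields whose length falls outside the schema bounds."""
    bad = []
    for field, (lo, hi) in SCHEMA.items():
        value = record.get(field)
        if isinstance(value, str) and not (lo <= len(value) <= hi):
            bad.append(field)
    return bad

record = {
    "id": "2403.05055",
    "submitter": "Yitao Zhu",
    "title": "MUC: Mixture of Uncalibrated Cameras for Robust 3D Human Body Reconstruction",
    "update_date": "2024-03-11",
}
print(validate(record))  # → []
```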
2403.05055
Yitao Zhu
Yitao Zhu, Sheng Wang, Mengjie Xu, Zixu Zhuang, Zhixin Wang, Kaidong Wang, Han Zhang, Qian Wang
MUC: Mixture of Uncalibrated Cameras for Robust 3D Human Body Reconstruction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple cameras can provide multi-view video coverage of a person. Fusing the multi-view data, e.g., for subsequent behavioral analysis, is necessary, yet traditional solutions often rely on camera calibration, and calibrating multiple cameras is non-trivial. In this work, we propose a method to reconstruct the 3D human body from multiple uncalibrated camera views. First, we adopt a pre-trained human body encoder to process each individual camera view, so that a human body model and its parameters can be reconstructed for each view. Next, instead of simply averaging the models across views, we train a network to determine the weights of individual views for fusion, based on the parameters estimated for the joints and hands of the human body as well as the camera positions. Further, we turn to the mesh surface of the human body for dynamic fusion, so that facial expression can be seamlessly integrated into the human body model. Our method demonstrates superior performance in human body reconstruction on two public datasets. More importantly, it can flexibly support ad-hoc deployment of an arbitrary number of cameras, which has significant potential in related applications. We will release the source code upon acceptance of the paper.
[ { "created": "Fri, 8 Mar 2024 05:03:25 GMT", "version": "v1" } ]
2024-03-11
[ [ "Zhu", "Yitao", "" ], [ "Wang", "Sheng", "" ], [ "Xu", "Mengjie", "" ], [ "Zhuang", "Zixu", "" ], [ "Wang", "Zhixin", "" ], [ "Wang", "Kaidong", "" ], [ "Zhang", "Han", "" ], [ "Wang", "Qian", "" ] ]
Multiple cameras can provide multi-view video coverage of a person. Fusing the multi-view data, e.g., for subsequent behavioral analysis, is necessary, yet traditional solutions often rely on camera calibration, and calibrating multiple cameras is non-trivial. In this work, we propose a method to reconstruct the 3D human body from multiple uncalibrated camera views. First, we adopt a pre-trained human body encoder to process each individual camera view, so that a human body model and its parameters can be reconstructed for each view. Next, instead of simply averaging the models across views, we train a network to determine the weights of individual views for fusion, based on the parameters estimated for the joints and hands of the human body as well as the camera positions. Further, we turn to the mesh surface of the human body for dynamic fusion, so that facial expression can be seamlessly integrated into the human body model. Our method demonstrates superior performance in human body reconstruction on two public datasets. More importantly, it can flexibly support ad-hoc deployment of an arbitrary number of cameras, which has significant potential in related applications. We will release the source code upon acceptance of the paper.
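The view-weighting idea in the abstract above can be illustrated with a toy sketch (softmax-weighted averaging of per-view parameter vectors; the scalar confidence scores stand in for the paper's learned weighting network and are an assumption):

```python
import math

# Toy multi-view fusion: each camera view yields a parameter vector
# (e.g., joint estimates) plus a scalar confidence score; a softmax over
# the scores gives fusion weights, so a low-scored outlier view barely
# moves the fused result, unlike a naive mean.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_views(param_vectors, scores):
    """Weighted average of per-view parameter vectors."""
    weights = softmax(scores)
    dim = len(param_vectors[0])
    return [sum(w * v[i] for w, v in zip(weights, param_vectors)) for i in range(dim)]

# Three views; the third is an outlier and gets a low score.
views = [[1.0, 2.0], [1.2, 1.8], [5.0, -3.0]]
scores = [2.0, 2.0, -4.0]
print(fuse_views(views, scores))  # close to the two trusted views
```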
1906.08936
Maofan Yin
Team Rocket, Maofan Yin, Kevin Sekniqi, Robbert van Renesse, and Emin G\"un Sirer
Scalable and Probabilistic Leaderless BFT Consensus through Metastability
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a family of leaderless Byzantine fault tolerance protocols, built around a metastable mechanism via network subsampling. These protocols provide a strong probabilistic safety guarantee in the presence of Byzantine adversaries, while their concurrent and leaderless nature enables them to achieve high throughput and scalability. Unlike blockchains that rely on proof-of-work, they are quiescent and green. Unlike traditional consensus protocols, where one or more nodes typically process a number of bits linear in the total number of nodes per decision, no node processes more than logarithmically many bits. The family does not require accurate knowledge of all participants and exposes new possible tradeoffs and improvements in safety and liveness for building consensus protocols. The paper describes the Snow protocol family, analyzes its guarantees, and describes how it can be used to construct the core of an internet-scale electronic payment system called Avalanche, which is evaluated in a large-scale deployment. Experiments demonstrate that the system can achieve high throughput (3400 tps), provide low confirmation latency (1.35 sec), and scale well compared to existing systems that deliver similar functionality. For our implementation and setup, the bottleneck of the system is in transaction verification.
[ { "created": "Fri, 21 Jun 2019 03:55:19 GMT", "version": "v1" }, { "created": "Mon, 24 Aug 2020 14:54:44 GMT", "version": "v2" } ]
2020-08-25
[ [ "Rocket", "Team", "" ], [ "Yin", "Maofan", "" ], [ "Sekniqi", "Kevin", "" ], [ "van Renesse", "Robbert", "" ], [ "Sirer", "Emin Gün", "" ] ]
This paper introduces a family of leaderless Byzantine fault tolerance protocols, built around a metastable mechanism via network subsampling. These protocols provide a strong probabilistic safety guarantee in the presence of Byzantine adversaries, while their concurrent and leaderless nature enables them to achieve high throughput and scalability. Unlike blockchains that rely on proof-of-work, they are quiescent and green. Unlike traditional consensus protocols, where one or more nodes typically process a number of bits linear in the total number of nodes per decision, no node processes more than logarithmically many bits. The family does not require accurate knowledge of all participants and exposes new possible tradeoffs and improvements in safety and liveness for building consensus protocols. The paper describes the Snow protocol family, analyzes its guarantees, and describes how it can be used to construct the core of an internet-scale electronic payment system called Avalanche, which is evaluated in a large-scale deployment. Experiments demonstrate that the system can achieve high throughput (3400 tps), provide low confirmation latency (1.35 sec), and scale well compared to existing systems that deliver similar functionality. For our implementation and setup, the bottleneck of the system is in transaction verification.
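The metastable subsampling mechanism can be illustrated with a minimal, non-Byzantine simulation (a Slush-like toy with a binary choice; the parameters, thresholds, and simplifications are assumptions, not the Snow family's actual protocol):

```python
import random

# Toy metastable voting: each node holds a binary color; each round every
# node samples k peers and adopts a color if it wins at least alpha of the
# k votes. Repeated subsampling tips the whole network into one color.

def simulate(n=200, k=10, alpha=6, rounds=50, p_init=0.5, seed=0):
    rng = random.Random(seed)
    colors = [1 if rng.random() < p_init else 0 for _ in range(n)]
    for _ in range(rounds):
        for i in range(n):
            ones = sum(colors[j] for j in rng.sample(range(n), k))
            if ones >= alpha:
                colors[i] = 1
            elif k - ones >= alpha:
                colors[i] = 0
    return colors

print(set(simulate()))  # usually collapses to a single color
```

Once every node holds the same color, the state is absorbing: all future samples are unanimous, which is the metastable "tipping" the abstract alludes to.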
1905.02428
Tobias Kaminski
Tobias Kaminski
Integrated Algorithms for HEX-Programs and Applications in Machine Learning
7 pages, submitted for the Doctoral Consortium at the 15th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR 2019)
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper summarizes my doctoral research on evaluation algorithms for HEX-programs, which extend Answer Set Programming with means for interfacing external computations. The focus is on integrating different subprocesses of HEX-evaluation, such as solving and external calls as well as grounding, and on applications of HEX-programs in the area of Machine Learning.
[ { "created": "Tue, 7 May 2019 09:22:36 GMT", "version": "v1" } ]
2019-05-08
[ [ "Kaminski", "Tobias", "" ] ]
This paper summarizes my doctoral research on evaluation algorithms for HEX-programs, which extend Answer Set Programming with means for interfacing external computations. The focus is on integrating different subprocesses of HEX-evaluation, such as solving and external calls as well as grounding, and on applications of HEX-programs in the area of Machine Learning.
1903.03019
Nalin Asanka Gamagedara Arachchilage
Matt Dixon, Nalin Asanka Gamagedara Arachchilage, James Nicholson
Engaging Users with Educational Games: The Case of Phishing
4
CHI '19 Extended Abstracts on Human Factors in Computing Systems Proceedings (CHI 2019), 2019
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phishing continues to be a difficult problem for individuals and organisations. Educational games and simulations have been increasingly acknowledged as enormously powerful teaching tools, yet little work has examined how to engage users with these games. We explore this problem by conducting workshops with 9 younger adults and reporting on their expectations for cybersecurity educational games. We find a disconnect between casual and serious gamers, where casual gamers prefer simple games incorporating humour while serious gamers demand a congruent narrative or storyline. Importantly, both demographics agree that educational games should prioritise gameplay over information provision - i.e. the game should be a game with educational content. We discuss the implications for educational game developers.
[ { "created": "Thu, 7 Mar 2019 16:12:14 GMT", "version": "v1" } ]
2019-03-08
[ [ "Dixon", "Matt", "" ], [ "Arachchilage", "Nalin Asanka Gamagedara", "" ], [ "Nicholson", "James", "" ] ]
Phishing continues to be a difficult problem for individuals and organisations. Educational games and simulations have been increasingly acknowledged as enormously powerful teaching tools, yet little work has examined how to engage users with these games. We explore this problem by conducting workshops with 9 younger adults and reporting on their expectations for cybersecurity educational games. We find a disconnect between casual and serious gamers, where casual gamers prefer simple games incorporating humour while serious gamers demand a congruent narrative or storyline. Importantly, both demographics agree that educational games should prioritise gameplay over information provision - i.e. the game should be a game with educational content. We discuss the implications for educational game developers.
1710.06194
Da Chen
Da Chen, Laurent D. Cohen
A New Coherence-Penalized Minimal Path Model with Application to Retinal Vessel Centerline Delineation
null
null
null
null
cs.CG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a new minimal path model for minimally interactive retinal vessel centerline extraction. The main contribution lies in the construction of a novel coherence-penalized Riemannian metric in a lifted space, which depends on the local geometry of tubularity and on an external scalar-valued reference feature map. The globally minimizing curves associated with the proposed metric tend to pass through sets of retinal vessel segments with low variation of the feature map, and thus can avoid the short-branch-combination and shortcut problems commonly suffered by existing minimal path models in retinal imaging. We validate our model on a series of retinal vessel patches obtained from the DRIVE and IOSTAR datasets, showing that it indeed obtains promising results.
[ { "created": "Tue, 17 Oct 2017 10:23:57 GMT", "version": "v1" } ]
2017-10-18
[ [ "Chen", "Da", "" ], [ "Cohen", "Laurent D.", "" ] ]
In this paper, we propose a new minimal path model for minimally interactive retinal vessel centerline extraction. The main contribution lies in the construction of a novel coherence-penalized Riemannian metric in a lifted space, which depends on the local geometry of tubularity and on an external scalar-valued reference feature map. The globally minimizing curves associated with the proposed metric tend to pass through sets of retinal vessel segments with low variation of the feature map, and thus can avoid the short-branch-combination and shortcut problems commonly suffered by existing minimal path models in retinal imaging. We validate our model on a series of retinal vessel patches obtained from the DRIVE and IOSTAR datasets, showing that it indeed obtains promising results.
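The continuous minimal path models this abstract builds on have a simple discrete analogue: shortest paths on a cost grid (illustrative only; the coherence-penalized lifted metric itself is not modeled here, and the tiny grid is an assumption):

```python
import heapq

# Dijkstra on a 2D cost grid: the minimal path accumulates the cost of
# every cell it enters, so it hugs the low-cost "vessel" cells, the
# discrete analogue of a geodesic under an image-derived metric.

def minimal_path_cost(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Low-cost "vessel" along the top row and right column, high cost elsewhere.
grid = [
    [1, 1, 1, 1],
    [9, 9, 9, 1],
    [9, 9, 9, 1],
]
print(minimal_path_cost(grid, (0, 0), (2, 3)))  # → 6
```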
2202.05895
Farhad Shirani Chaharsooghi
M. Shariatnasab, F. Shirani and Z. Anwar
Privacy Limits in Power-Law Bipartite Networks under Active Fingerprinting Attacks
null
null
null
null
cs.SI cs.DB cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work considers the fundamental privacy limits under active fingerprinting attacks in power-law bipartite networks. The scenario arises naturally in social network analysis, tracking user mobility in wireless networks, and forensics applications, among others. A stochastic growing network generation model -- called the popularity-based model -- is investigated, where the bipartite network is generated iteratively, and in each iteration vertices attract new edges based on their assigned popularity values. It is shown that, with an appropriate choice of initial popularity values, the node degree distribution follows a power-law distribution with arbitrary parameter $\alpha>2$, i.e., the fraction of nodes with degree $d$ is proportional to $d^{-\alpha}$. An active fingerprinting deanonymization attack strategy called the augmented information threshold attack strategy (A-ITS) is proposed, which uses the attacker's knowledge of the node degree distribution along with the concept of information values for deanonymization. Sufficient conditions for the success of the A-ITS, based on network parameters, are derived. It is shown through simulations that the proposed attack significantly outperforms state-of-the-art attack strategies.
[ { "created": "Fri, 11 Feb 2022 20:31:09 GMT", "version": "v1" } ]
2022-02-15
[ [ "Shariatnasab", "M.", "" ], [ "Shirani", "F.", "" ], [ "Anwar", "Z.", "" ] ]
This work considers the fundamental privacy limits under active fingerprinting attacks in power-law bipartite networks. The scenario arises naturally in social network analysis, tracking user mobility in wireless networks, and forensics applications, among others. A stochastic growing network generation model -- called the popularity-based model -- is investigated, where the bipartite network is generated iteratively, and in each iteration vertices attract new edges based on their assigned popularity values. It is shown that, with an appropriate choice of initial popularity values, the node degree distribution follows a power-law distribution with arbitrary parameter $\alpha>2$, i.e., the fraction of nodes with degree $d$ is proportional to $d^{-\alpha}$. An active fingerprinting deanonymization attack strategy called the augmented information threshold attack strategy (A-ITS) is proposed, which uses the attacker's knowledge of the node degree distribution along with the concept of information values for deanonymization. Sufficient conditions for the success of the A-ITS, based on network parameters, are derived. It is shown through simulations that the proposed attack significantly outperforms state-of-the-art attack strategies.
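The stated degree law, fraction of degree-$d$ nodes proportional to $d^{-\alpha}$, can be sanity-checked on the normalized distribution directly (a plain computation on a truncated pmf, not the paper's growth model):

```python
# Normalized truncated power-law pmf p(d) = d^{-alpha} / Z for d = 1..N.
# The signature of a power law: p(d) / p(2d) = 2**alpha for every d.

def power_law_pmf(alpha, n_max):
    weights = [d ** -alpha for d in range(1, n_max + 1)]
    z = sum(weights)
    return [w / z for w in weights]

alpha = 2.5
pmf = power_law_pmf(alpha, 10_000)
print(pmf[0] / pmf[1])  # p(1)/p(2) = 2**2.5 ≈ 5.657
```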
1908.02575
Oscar Correa
Oscar Correa and Jeffrey Chan and Vinh Nguyen
Alternative Blockmodelling
56 pages, 23 figures
null
null
null
cs.SI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many approaches have been proposed to discover clusters within networks. The community-finding field encompasses approaches that try to discover clusters whose nodes are tightly related to each other but loosely related to nodes of other clusters. However, a community configuration is not the only possible latent structure in a graph: core-periphery and hierarchical configurations are also valid structures to discover in a relational dataset. Moreover, a network is not completely explained by knowing only the membership of each node; a high-level view of the inter-cluster relationships is also needed. Blockmodelling techniques deal with both issues. Firstly, blockmodelling allows finding any network configuration besides the well-known community structure. Secondly, a blockmodel is a summary representation of a network that captures not only the membership of nodes but also the relations between clusters. Finally, a unique summary representation of a network is unlikely: networks might hide more than one blockmodel. Therefore, our proposed problem aims to discover a secondary blockmodel representation of a network that is of good quality and dissimilar to a given blockmodel. Our methodology is presented through two approaches, (a) inclusion of cannot-link constraints and (b) dissimilarity between image matrices. Both approaches are based on non-negative matrix factorisation (NMF), which fits the blockmodelling representation. The evaluation of these two approaches considers the quality and dissimilarity of the discovered alternative blockmodel, as these are the requirements of the problem.
[ { "created": "Sat, 27 Jul 2019 06:49:47 GMT", "version": "v1" } ]
2019-08-08
[ [ "Correa", "Oscar", "" ], [ "Chan", "Jeffrey", "" ], [ "Nguyen", "Vinh", "" ] ]
Many approaches have been proposed to discover clusters within networks. The community-finding field encompasses approaches that try to discover clusters whose nodes are tightly related to each other but loosely related to nodes of other clusters. However, a community configuration is not the only possible latent structure in a graph: core-periphery and hierarchical configurations are also valid structures to discover in a relational dataset. Moreover, a network is not completely explained by knowing only the membership of each node; a high-level view of the inter-cluster relationships is also needed. Blockmodelling techniques deal with both issues. Firstly, blockmodelling allows finding any network configuration besides the well-known community structure. Secondly, a blockmodel is a summary representation of a network that captures not only the membership of nodes but also the relations between clusters. Finally, a unique summary representation of a network is unlikely: networks might hide more than one blockmodel. Therefore, our proposed problem aims to discover a secondary blockmodel representation of a network that is of good quality and dissimilar to a given blockmodel. Our methodology is presented through two approaches, (a) inclusion of cannot-link constraints and (b) dissimilarity between image matrices. Both approaches are based on non-negative matrix factorisation (NMF), which fits the blockmodelling representation. The evaluation of these two approaches considers the quality and dissimilarity of the discovered alternative blockmodel, as these are the requirements of the problem.
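The notion of a blockmodel as membership plus an image matrix can be illustrated with a toy computation (the 6-node graph and the hard 2-cluster membership are assumptions for illustration, not the paper's NMF method):

```python
import numpy as np

# Given adjacency X and a hard membership matrix C (one-hot rows), the
# image matrix holds the mean connectivity between each pair of clusters,
# the "summary representation" the abstract refers to.

X = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], float)
C = np.zeros((6, 2))
C[:3, 0] = 1.0  # nodes 0-2 in cluster A
C[3:, 1] = 1.0  # nodes 3-5 in cluster B

sizes = C.sum(axis=0)
image = (C.T @ X @ C) / np.outer(sizes, sizes)
print(image.round(2))  # dense diagonal blocks, sparse off-diagonal
```

A community structure shows up as a diagonally dominant image matrix; a core-periphery structure would instead concentrate density in one row and column, which is why the image matrix, not membership alone, distinguishes configurations.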
1909.06319
Yang Li
Yang Li, Shoaib Akbar, Junier B. Oliva
Flow Models for Arbitrary Conditional Likelihoods
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the dependencies among features of a dataset is at the core of most unsupervised learning tasks. However, a majority of generative modeling approaches are focused solely on the joint distribution $p(x)$ and utilize models where it is intractable to obtain the conditional distribution of some arbitrary subset of features $x_u$ given the rest of the observed covariates $x_o$: $p(x_u \mid x_o)$. Traditional conditional approaches provide a model for a fixed set of covariates conditioned on another fixed set of observed covariates. Instead, in this work we develop a model that is capable of yielding all conditional distributions $p(x_u \mid x_o)$ (for arbitrary $x_u$) via tractable conditional likelihoods. We propose a novel extension of (change of variables based) flow generative models, arbitrary conditioning flow models (AC-Flow), that can be conditioned on arbitrary subsets of observed covariates, which was previously infeasible. We apply AC-Flow to the imputation of features, and also develop a unified platform for both multiple and single imputation by introducing an auxiliary objective that provides a principled single "best guess" for flow models. Extensive empirical evaluations show that our models achieve state-of-the-art performance in both single and multiple imputation across image inpainting and feature imputation in synthetic and real-world datasets. Code is available at https://github.com/lupalab/ACFlow.
[ { "created": "Fri, 13 Sep 2019 16:35:17 GMT", "version": "v1" }, { "created": "Thu, 6 Aug 2020 13:30:33 GMT", "version": "v2" } ]
2020-08-07
[ [ "Li", "Yang", "" ], [ "Akbar", "Shoaib", "" ], [ "Oliva", "Junier B.", "" ] ]
Understanding the dependencies among features of a dataset is at the core of most unsupervised learning tasks. However, a majority of generative modeling approaches are focused solely on the joint distribution $p(x)$ and utilize models where it is intractable to obtain the conditional distribution of some arbitrary subset of features $x_u$ given the rest of the observed covariates $x_o$: $p(x_u \mid x_o)$. Traditional conditional approaches provide a model for a fixed set of covariates conditioned on another fixed set of observed covariates. Instead, in this work we develop a model that is capable of yielding all conditional distributions $p(x_u \mid x_o)$ (for arbitrary $x_u$) via tractable conditional likelihoods. We propose a novel extension of (change of variables based) flow generative models, arbitrary conditioning flow models (AC-Flow), that can be conditioned on arbitrary subsets of observed covariates, which was previously infeasible. We apply AC-Flow to the imputation of features, and also develop a unified platform for both multiple and single imputation by introducing an auxiliary objective that provides a principled single "best guess" for flow models. Extensive empirical evaluations show that our models achieve state-of-the-art performance in both single and multiple imputation across image inpainting and feature imputation in synthetic and real-world datasets. Code is available at https://github.com/lupalab/ACFlow.
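A tractable analogue of "all conditionals from one joint model" is the multivariate Gaussian, where $p(x_u \mid x_o)$ for any split of coordinates follows from the joint covariance (a toy analogue, not AC-Flow itself; in a flow the same role is played by a learned invertible transform):

```python
import numpy as np

# For x ~ N(mu, S) and an arbitrary split into unobserved u / observed o:
#   mean = mu_u + S_uo S_oo^{-1} (x_o - mu_o)
#   cov  = S_uu - S_uo S_oo^{-1} S_ou
# Any u/o split works, mirroring the arbitrary-conditioning idea.

def gaussian_conditional(mu, cov, u_idx, o_idx, x_o):
    mu, cov = np.asarray(mu, float), np.asarray(cov, float)
    S_uu = cov[np.ix_(u_idx, u_idx)]
    S_uo = cov[np.ix_(u_idx, o_idx)]
    S_oo = cov[np.ix_(o_idx, o_idx)]
    shift = np.linalg.solve(S_oo, np.asarray(x_o, float) - mu[o_idx])
    cond_mean = mu[u_idx] + S_uo @ shift
    cond_cov = S_uu - S_uo @ np.linalg.solve(S_oo, S_uo.T)
    return cond_mean, cond_cov

mu = [0.0, 0.0]
cov = [[1.0, 0.8], [0.8, 1.0]]
mean, var = gaussian_conditional(mu, cov, u_idx=[0], o_idx=[1], x_o=[1.0])
print(mean, var)  # mean 0.8, variance 1 - 0.8**2 = 0.36
```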
1301.2683
Josef Urban
Josef Urban
BliStr: The Blind Strategymaker
null
null
null
null
cs.AI cs.LG cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BliStr is a system that automatically develops strategies for E prover on a large set of problems. The main idea is to interleave (i) iterated low-timelimit local search for new strategies on small sets of similar easy problems with (ii) higher-timelimit evaluation of the new strategies on all problems. The accumulated results of the global higher-timelimit runs are used to define and evolve the notion of "similar easy problems", and to control the selection of the next strategy to be improved. The technique was used to significantly strengthen the set of E strategies used by the MaLARea, PS-E, E-MaLeS, and E systems in the CASC@Turing 2012 competition, particularly in the Mizar division. Similar improvement was obtained on the problems created from the Flyspeck corpus.
[ { "created": "Sat, 12 Jan 2013 13:02:21 GMT", "version": "v1" }, { "created": "Wed, 28 May 2014 12:54:41 GMT", "version": "v2" } ]
2014-05-29
[ [ "Urban", "Josef", "" ] ]
BliStr is a system that automatically develops strategies for E prover on a large set of problems. The main idea is to interleave (i) iterated low-timelimit local search for new strategies on small sets of similar easy problems with (ii) higher-timelimit evaluation of the new strategies on all problems. The accumulated results of the global higher-timelimit runs are used to define and evolve the notion of "similar easy problems", and to control the selection of the next strategy to be improved. The technique was used to significantly strengthen the set of E strategies used by the MaLARea, PS-E, E-MaLeS, and E systems in the CASC@Turing 2012 competition, particularly in the Mizar division. Similar improvement was obtained on the problems created from the Flyspeck corpus.
1504.02843
Riccardo Sven Risuleo
Riccardo Sven Risuleo, Marco Molinari, Giulio Bottegal, H{\aa}kan Hjalmarsson, Karl H. Johansson
A benchmark for data-based office modeling: challenges related to CO$_2$ dynamics
14 pages, accepted for publication to IFAC SysId 2015
null
10.1016/j.ifacol.2015.12.304
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a benchmark consisting of a set of synthetic measurements relative to an office environment simulated with the software IDA-ICE. The simulated environment reproduces a laboratory at the KTH-EES Smart Building, equipped with a building management system. The data set contains records collected over a period of several days. The signals include CO$_2$ concentration, mechanical ventilation airflows, air infiltrations, and occupancy. Information on door and window opening is also available. This benchmark is intended for testing data-based modeling techniques. The ultimate goal is the development of models to improve the forecast and control of environmental variables. Among the numerous challenges related to this framework, we point out the problem of occupancy estimation using information on CO$_2$ concentration, which can be seen as a blind identification problem. For benchmarking purposes, we present two different identification approaches: a baseline overparametrization method and a kernel-based method.
[ { "created": "Sat, 11 Apr 2015 06:31:24 GMT", "version": "v1" }, { "created": "Thu, 19 May 2016 09:12:14 GMT", "version": "v2" } ]
2016-05-20
[ [ "Risuleo", "Riccardo Sven", "" ], [ "Molinari", "Marco", "" ], [ "Bottegal", "Giulio", "" ], [ "Hjalmarsson", "Håkan", "" ], [ "Johansson", "Karl H.", "" ] ]
This paper describes a benchmark consisting of a set of synthetic measurements relative to an office environment simulated with the software IDA-ICE. The simulated environment reproduces a laboratory at the KTH-EES Smart Building, equipped with a building management system. The data set contains records collected over a period of several days. The signals include CO$_2$ concentration, mechanical ventilation airflows, air infiltrations, and occupancy. Information on door and window opening is also available. This benchmark is intended for testing data-based modeling techniques. The ultimate goal is the development of models to improve the forecast and control of environmental variables. Among the numerous challenges related to this framework, we point out the problem of occupancy estimation using information on CO$_2$ concentration, which can be seen as a blind identification problem. For benchmarking purposes, we present two different identification approaches: a baseline overparametrization method and a kernel-based method.
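The occupancy-from-CO$_2$ task mentioned at the end can be illustrated with a first-order mass-balance toy (the model form and all coefficients are assumptions, not the benchmark's; in the blind setting the coefficients would be unknown too):

```python
# Toy CO2 mass balance: c[t+1] = c[t] + A*occ[t] - B*(c[t] - C_OUT).
# With known A and B the balance can be inverted exactly for occupancy.

C_OUT = 400.0     # outdoor CO2 (ppm), assumed
A, B = 15.0, 0.1  # ppm per occupant per step / air-exchange rate, assumed

def simulate_co2(occupancy, c0=400.0):
    c = [c0]
    for occ in occupancy:
        c.append(c[-1] + A * occ - B * (c[-1] - C_OUT))
    return c

def estimate_occupancy(co2):
    """Invert the mass balance for the occupancy at each step."""
    return [
        (co2[t + 1] - co2[t] + B * (co2[t] - C_OUT)) / A
        for t in range(len(co2) - 1)
    ]

true_occ = [0, 2, 2, 3, 1, 0]
co2 = simulate_co2(true_occ)
print([round(x) for x in estimate_occupancy(co2)])  # → [0, 2, 2, 3, 1, 0]
```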
1207.0805
Pallavali Radha Krishna Reddy
G. Geethu Lakshmi
Anatomical Structure Segmentation in Liver MRI Images
Withdrawn by author for final modification
null
null
null
cs.CV
http://creativecommons.org/licenses/by/3.0/
Segmentation of medical images is a challenging task owing to their complexity. A standard segmentation problem within Magnetic Resonance Imaging (MRI) is the task of labeling voxels according to their tissue type. Image segmentation provides volumetric quantification of the liver area and thus helps in the diagnosis of disorders such as Hepatitis, Cirrhosis, Jaundice, Hemochromatosis, etc. This work deals with a comparison of segmentation by applying the Level Set Method, the Fuzzy Level Information C-Means Clustering Algorithm, and the Gradient Vector Flow Snake Algorithm. The results are compared using parameters such as the number of pixels correctly classified and the percentage of area segmented.
[ { "created": "Tue, 3 Jul 2012 14:32:20 GMT", "version": "v1" }, { "created": "Fri, 13 Jul 2012 10:48:33 GMT", "version": "v2" }, { "created": "Sat, 30 Mar 2013 05:25:58 GMT", "version": "v3" } ]
2013-04-02
[ [ "Lakshmi", "G. Geethu", "" ] ]
Segmentation of medical images is a challenging task owing to their complexity. A standard segmentation problem within Magnetic Resonance Imaging (MRI) is the task of labeling voxels according to their tissue type. Image segmentation provides volumetric quantification of the liver area and thus helps in the diagnosis of disorders such as Hepatitis, Cirrhosis, Jaundice, Hemochromatosis, etc. This work deals with a comparison of segmentation by applying the Level Set Method, the Fuzzy Level Information C-Means Clustering Algorithm, and the Gradient Vector Flow Snake Algorithm. The results are compared using parameters such as the number of pixels correctly classified and the percentage of area segmented.
2404.01843
Wangguandong Zheng
Wangguandong Zheng, Haifeng Xia, Rui Chen, Ming Shao, Siyu Xia, Zhengming Ding
Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, image-to-3D approaches have achieved significant results with a natural image as input. However, such color-rich inputs are not always available in practical applications, where only sketches may be accessible. Existing sketch-to-3D research suffers from limited applicability due to the lack of color information and multi-view content. To overcome these limitations, this paper proposes a novel generation paradigm, Sketch3D, to generate realistic 3D assets whose shape is aligned with the input sketch and whose color matches the textual description. Concretely, Sketch3D first instantiates the given sketch as a reference image through a shape-preserving generation process. Second, the reference image is leveraged to deduce a coarse 3D Gaussian prior, and multi-view style-consistent guidance images are generated based on renderings of the 3D Gaussians. Finally, three strategies are designed to optimize the 3D Gaussians: structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch similarity optimization with a CLIP-based geometric similarity loss. Extensive visual comparisons and quantitative analysis illustrate the advantage of our Sketch3D in generating realistic 3D assets while preserving consistency with the input.
[ { "created": "Tue, 2 Apr 2024 11:03:24 GMT", "version": "v1" }, { "created": "Sun, 7 Apr 2024 04:17:32 GMT", "version": "v2" } ]
2024-04-09
[ [ "Zheng", "Wangguandong", "" ], [ "Xia", "Haifeng", "" ], [ "Chen", "Rui", "" ], [ "Shao", "Ming", "" ], [ "Xia", "Siyu", "" ], [ "Ding", "Zhengming", "" ] ]
Recently, image-to-3D approaches have achieved significant results with a natural image as input. However, such color-rich inputs are not always available in practical applications, where only sketches may be accessible. Existing sketch-to-3D research suffers from limited applicability due to the lack of color information and multi-view content. To overcome these limitations, this paper proposes a novel generation paradigm, Sketch3D, to generate realistic 3D assets whose shape is aligned with the input sketch and whose color matches the textual description. Concretely, Sketch3D first instantiates the given sketch as a reference image through a shape-preserving generation process. Second, the reference image is leveraged to deduce a coarse 3D Gaussian prior, and multi-view style-consistent guidance images are generated based on renderings of the 3D Gaussians. Finally, three strategies are designed to optimize the 3D Gaussians: structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch similarity optimization with a CLIP-based geometric similarity loss. Extensive visual comparisons and quantitative analysis illustrate the advantage of our Sketch3D in generating realistic 3D assets while preserving consistency with the input.
2306.13773
Stephen Pasteris
Stephen Pasteris, Chris Hicks, Vasilios Mavroudis
Nearest Neighbour with Bandit Feedback
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we adapt the nearest neighbour rule to the contextual bandit problem. Our algorithm handles the fully adversarial setting, in which no assumptions at all are made about the data-generation process. When combined with a sufficiently fast data structure for (perhaps approximate) adaptive nearest neighbour search, such as a navigating net, our algorithm is extremely efficient, having a per-trial running time polylogarithmic in both the number of trials and the number of actions, and taking only quasi-linear space. We give generic regret bounds for our algorithm and further analyse them when applied to the stochastic bandit problem in Euclidean space. We note that our algorithm can also be applied to the online classification problem.
[ { "created": "Fri, 23 Jun 2023 20:09:01 GMT", "version": "v1" }, { "created": "Wed, 2 Aug 2023 20:19:16 GMT", "version": "v2" }, { "created": "Thu, 7 Mar 2024 21:07:35 GMT", "version": "v3" } ]
2024-03-11
[ [ "Pasteris", "Stephen", "" ], [ "Hicks", "Chris", "" ], [ "Mavroudis", "Vasilios", "" ] ]
In this paper we adapt the nearest neighbour rule to the contextual bandit problem. Our algorithm handles the fully adversarial setting in which no assumptions at all are made about the data-generation process. When combined with a sufficiently fast data structure for (perhaps approximate) adaptive nearest neighbour search, such as a navigating net, our algorithm is extremely efficient - having a per-trial running time polylogarithmic in both the number of trials and actions, and taking only quasi-linear space. We give generic regret bounds for our algorithm and further analyse them when applied to the stochastic bandit problem in Euclidean space. We note that our algorithm can also be applied to the online classification problem.
2209.05070
Xiangyu Wang
Anjun Chen, Xiangyu Wang, Shaohao Zhu, Yanxu Li, Jiming Chen, Qi Ye
mmBody Benchmark: 3D Body Reconstruction Dataset and Analysis for Millimeter Wave Radar
Accepted to ACM Multimedia 2022, Project Page: https://chen3110.github.io/mmbody/index.html
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Millimeter Wave (mmWave) Radar is gaining popularity as it can work in adverse environments like smoke, rain, snow, poor lighting, etc. Prior work has explored the possibility of reconstructing 3D skeletons or meshes from the noisy and sparse mmWave Radar signals. However, it is unclear how accurately we can reconstruct the 3D body from the mmWave signals across scenes and how it performs compared with cameras, which are important aspects to consider when either using mmWave radars alone or combining them with cameras. To answer these questions, an automatic 3D body annotation system is first designed and built with multiple sensors to collect a large-scale dataset. The dataset consists of synchronized and calibrated mmWave radar point clouds and RGB(D) images in different scenes and skeleton/mesh annotations for humans in the scenes. With this dataset, we train state-of-the-art methods with inputs from different sensors and test them in various scenarios. The results demonstrate that 1) despite the noise and sparsity of the generated point clouds, the mmWave radar can achieve better reconstruction accuracy than the RGB camera but worse than the depth camera; 2) the reconstruction from the mmWave radar is moderately affected by adverse weather conditions, while the RGB(D) camera is severely affected. Further analysis of the dataset and the results sheds light on improving the reconstruction from the mmWave radar and the combination of signals from different sensors.
[ { "created": "Mon, 12 Sep 2022 08:00:31 GMT", "version": "v1" }, { "created": "Fri, 14 Apr 2023 03:07:03 GMT", "version": "v2" }, { "created": "Thu, 21 Sep 2023 10:11:03 GMT", "version": "v3" } ]
2023-09-22
[ [ "Chen", "Anjun", "" ], [ "Wang", "Xiangyu", "" ], [ "Zhu", "Shaohao", "" ], [ "Li", "Yanxu", "" ], [ "Chen", "Jiming", "" ], [ "Ye", "Qi", "" ] ]
Millimeter Wave (mmWave) Radar is gaining popularity as it can work in adverse environments like smoke, rain, snow, poor lighting, etc. Prior work has explored the possibility of reconstructing 3D skeletons or meshes from the noisy and sparse mmWave Radar signals. However, it is unclear how accurately we can reconstruct the 3D body from the mmWave signals across scenes and how it performs compared with cameras, which are important aspects to consider when either using mmWave radars alone or combining them with cameras. To answer these questions, an automatic 3D body annotation system is first designed and built with multiple sensors to collect a large-scale dataset. The dataset consists of synchronized and calibrated mmWave radar point clouds and RGB(D) images in different scenes and skeleton/mesh annotations for humans in the scenes. With this dataset, we train state-of-the-art methods with inputs from different sensors and test them in various scenarios. The results demonstrate that 1) despite the noise and sparsity of the generated point clouds, the mmWave radar can achieve better reconstruction accuracy than the RGB camera but worse than the depth camera; 2) the reconstruction from the mmWave radar is moderately affected by adverse weather conditions, while the RGB(D) camera is severely affected. Further analysis of the dataset and the results sheds light on improving the reconstruction from the mmWave radar and the combination of signals from different sensors.
2202.09892
Alec Farid
Michelle Ho and Alec Farid and Anirudha Majumdar
Towards a Framework for Comparing the Complexity of Robotic Tasks
null
null
null
null
cs.RO cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are motivated by the problem of comparing the complexity of one robotic task relative to another. To this end, we define a notion of reduction that formalizes the following intuition: Task 1 reduces to Task 2 if we can efficiently transform any policy that solves Task 2 into a policy that solves Task 1. We further define a quantitative measure of the relative complexity between any two tasks for a given robot. We prove useful properties of our notion of reduction (e.g., reflexivity, transitivity, and antisymmetry) and relative complexity measure (e.g., nonnegativity and monotonicity). In addition, we propose practical algorithms for estimating the relative complexity measure. We illustrate our framework for comparing robotic tasks using (i) examples where one can analytically establish reductions, and (ii) reinforcement learning examples where the proposed algorithm can estimate the relative complexity between tasks.
[ { "created": "Sun, 20 Feb 2022 19:12:24 GMT", "version": "v1" }, { "created": "Sun, 29 May 2022 21:54:55 GMT", "version": "v2" }, { "created": "Fri, 24 Jun 2022 16:47:02 GMT", "version": "v3" } ]
2022-06-27
[ [ "Ho", "Michelle", "" ], [ "Farid", "Alec", "" ], [ "Majumdar", "Anirudha", "" ] ]
We are motivated by the problem of comparing the complexity of one robotic task relative to another. To this end, we define a notion of reduction that formalizes the following intuition: Task 1 reduces to Task 2 if we can efficiently transform any policy that solves Task 2 into a policy that solves Task 1. We further define a quantitative measure of the relative complexity between any two tasks for a given robot. We prove useful properties of our notion of reduction (e.g., reflexivity, transitivity, and antisymmetry) and relative complexity measure (e.g., nonnegativity and monotonicity). In addition, we propose practical algorithms for estimating the relative complexity measure. We illustrate our framework for comparing robotic tasks using (i) examples where one can analytically establish reductions, and (ii) reinforcement learning examples where the proposed algorithm can estimate the relative complexity between tasks.
2210.02843
Runmin Cong
Runmin Cong, Qinwei Lin, Chen Zhang, Chongyi Li, Xiaochun Cao, Qingming Huang, and Yao Zhao
CIR-Net: Cross-modality Interaction and Refinement for RGB-D Salient Object Detection
Accepted by IEEE Transactions on Image Processing 2022, 16 pages, 11 figures
null
10.1109/TIP.2022.3216198
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Focusing on the issue of how to effectively capture and utilize cross-modality information in the RGB-D salient object detection (SOD) task, we present a convolutional neural network (CNN) model, named CIR-Net, based on novel cross-modality interaction and refinement. For the cross-modality interaction, 1) a progressive attention guided integration unit is proposed to sufficiently integrate RGB-D feature representations in the encoder stage, and 2) a convergence aggregation structure is proposed, which flows the RGB and depth decoding features into the corresponding RGB-D decoding streams via an importance gated fusion unit in the decoder stage. For the cross-modality refinement, we insert a refinement middleware structure between the encoder and the decoder, in which the RGB, depth, and RGB-D encoder features are further refined by successively using a self-modality attention refinement unit and a cross-modality weighting refinement unit. Finally, with the gradually refined features, we predict the saliency map in the decoder stage. Extensive experiments on six popular RGB-D SOD benchmarks demonstrate that our network outperforms the state-of-the-art saliency detectors both qualitatively and quantitatively.
[ { "created": "Thu, 6 Oct 2022 11:59:19 GMT", "version": "v1" } ]
2022-11-23
[ [ "Cong", "Runmin", "" ], [ "Lin", "Qinwei", "" ], [ "Zhang", "Chen", "" ], [ "Li", "Chongyi", "" ], [ "Cao", "Xiaochun", "" ], [ "Huang", "Qingming", "" ], [ "Zhao", "Yao", "" ] ]
Focusing on the issue of how to effectively capture and utilize cross-modality information in the RGB-D salient object detection (SOD) task, we present a convolutional neural network (CNN) model, named CIR-Net, based on novel cross-modality interaction and refinement. For the cross-modality interaction, 1) a progressive attention guided integration unit is proposed to sufficiently integrate RGB-D feature representations in the encoder stage, and 2) a convergence aggregation structure is proposed, which flows the RGB and depth decoding features into the corresponding RGB-D decoding streams via an importance gated fusion unit in the decoder stage. For the cross-modality refinement, we insert a refinement middleware structure between the encoder and the decoder, in which the RGB, depth, and RGB-D encoder features are further refined by successively using a self-modality attention refinement unit and a cross-modality weighting refinement unit. Finally, with the gradually refined features, we predict the saliency map in the decoder stage. Extensive experiments on six popular RGB-D SOD benchmarks demonstrate that our network outperforms the state-of-the-art saliency detectors both qualitatively and quantitatively.
2403.18791
Tianfu Wang
Tianfu Wang, Guosheng Hu, Hongguang Wang
Object Pose Estimation via the Aggregation of Diffusion Features
Accepted to CVPR2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating the pose of objects from images is a crucial task of 3D scene understanding, and recent approaches have shown promising results on very large benchmarks. However, these methods experience a significant performance drop when dealing with unseen objects. We believe that this results from the limited generalizability of image features. To address this problem, we conduct an in-depth analysis of the features of diffusion models, e.g. Stable Diffusion, which hold substantial potential for modeling unseen objects. Based on this analysis, we then introduce these diffusion features for object pose estimation. To achieve this, we propose three distinct architectures that can effectively capture and aggregate diffusion features of different granularity, greatly improving the generalizability of object pose estimation. Our approach outperforms the state-of-the-art methods by a considerable margin on three popular benchmark datasets, LM, O-LM, and T-LESS. In particular, our method achieves higher accuracy than the previous best methods on unseen objects: 98.2% vs. 93.5% on Unseen LM, 85.9% vs. 76.3% on Unseen O-LM, showing the strong generalizability of our method. Our code is released at https://github.com/Tianfu18/diff-feats-pose.
[ { "created": "Wed, 27 Mar 2024 17:35:24 GMT", "version": "v1" }, { "created": "Sat, 1 Jun 2024 15:25:47 GMT", "version": "v2" } ]
2024-06-04
[ [ "Wang", "Tianfu", "" ], [ "Hu", "Guosheng", "" ], [ "Wang", "Hongguang", "" ] ]
Estimating the pose of objects from images is a crucial task of 3D scene understanding, and recent approaches have shown promising results on very large benchmarks. However, these methods experience a significant performance drop when dealing with unseen objects. We believe that this results from the limited generalizability of image features. To address this problem, we conduct an in-depth analysis of the features of diffusion models, e.g. Stable Diffusion, which hold substantial potential for modeling unseen objects. Based on this analysis, we then introduce these diffusion features for object pose estimation. To achieve this, we propose three distinct architectures that can effectively capture and aggregate diffusion features of different granularity, greatly improving the generalizability of object pose estimation. Our approach outperforms the state-of-the-art methods by a considerable margin on three popular benchmark datasets, LM, O-LM, and T-LESS. In particular, our method achieves higher accuracy than the previous best methods on unseen objects: 98.2% vs. 93.5% on Unseen LM, 85.9% vs. 76.3% on Unseen O-LM, showing the strong generalizability of our method. Our code is released at https://github.com/Tianfu18/diff-feats-pose.
1205.4778
Matthias W\"ahlisch
Matthias W\"ahlisch, Thomas C. Schmidt, Markus Vahlenkamp
Backscatter from the Data Plane --- Threats to Stability and Security in Information-Centric Networking
15 pages
Computer Networks, Vol. 57, No. 16, pp. 3192-3206, Elsevier, Nov. 2013
10.1016/j.comnet.2013.07.009
null
cs.NI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information-centric networking proposals attract much attention in the ongoing search for a future communication paradigm of the Internet. Replacing the host-to-host connectivity by a data-oriented publish/subscribe service eases content distribution and authentication by concept, while eliminating threats from unwanted traffic at an end host as are common in today's Internet. However, current approaches to content routing heavily rely on data-driven protocol events and thereby introduce a strong coupling of the control to the data plane in the underlying routing infrastructure. In this paper, threats to the stability and security of the content distribution system are analyzed in theory and practical experiments. We derive relations between state resources and the performance of routers and demonstrate how this coupling can be misused in practice. We discuss new attack vectors present in its current state of development, as well as possibilities and limitations to mitigate them.
[ { "created": "Tue, 22 May 2012 00:24:13 GMT", "version": "v1" }, { "created": "Sun, 2 Sep 2012 22:22:41 GMT", "version": "v2" } ]
2013-11-12
[ [ "Wählisch", "Matthias", "" ], [ "Schmidt", "Thomas C.", "" ], [ "Vahlenkamp", "Markus", "" ] ]
Information-centric networking proposals attract much attention in the ongoing search for a future communication paradigm of the Internet. Replacing the host-to-host connectivity by a data-oriented publish/subscribe service eases content distribution and authentication by concept, while eliminating threats from unwanted traffic at an end host as are common in today's Internet. However, current approaches to content routing heavily rely on data-driven protocol events and thereby introduce a strong coupling of the control to the data plane in the underlying routing infrastructure. In this paper, threats to the stability and security of the content distribution system are analyzed in theory and practical experiments. We derive relations between state resources and the performance of routers and demonstrate how this coupling can be misused in practice. We discuss new attack vectors present in its current state of development, as well as possibilities and limitations to mitigate them.
2301.08245
Pierluigi Zama Ramirez
Pierluigi Zama Ramirez, Alex Costanzino, Fabio Tosi, Matteo Poggi, Samuele Salti, Stefano Mattoccia, Luigi Di Stefano
Booster: a Benchmark for Depth from Images of Specular and Transparent Surfaces
Extension of the paper "Open Challenges in Deep Stereo: the Booster Dataset" presented at CVPR 2022. Accepted at TPAMI
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating depth from images nowadays yields outstanding results, both in terms of in-domain accuracy and generalization. However, we identify two main challenges that remain open in this field: dealing with non-Lambertian materials and effectively processing high-resolution images. To this end, we propose a novel dataset that includes accurate and dense ground-truth labels at high resolution, featuring scenes containing several specular and transparent surfaces. Our acquisition pipeline leverages a novel deep space-time stereo framework, enabling easy and accurate labeling with sub-pixel precision. The dataset is composed of 606 samples collected in 85 different scenes; each sample includes both a high-resolution pair (12 Mpx) as well as an unbalanced stereo pair (Left: 12 Mpx, Right: 1.1 Mpx), typical of modern mobile devices that mount sensors with different resolutions. Additionally, we provide manually annotated material segmentation masks and 15K unlabeled samples. The dataset is composed of a train set and two test sets, the latter devoted to the evaluation of stereo and monocular depth estimation networks. Our experiments highlight the open challenges and future research directions in this field.
[ { "created": "Thu, 19 Jan 2023 18:59:28 GMT", "version": "v1" }, { "created": "Mon, 9 Oct 2023 17:58:14 GMT", "version": "v2" }, { "created": "Tue, 30 Jan 2024 14:02:58 GMT", "version": "v3" } ]
2024-01-31
[ [ "Ramirez", "Pierluigi Zama", "" ], [ "Costanzino", "Alex", "" ], [ "Tosi", "Fabio", "" ], [ "Poggi", "Matteo", "" ], [ "Salti", "Samuele", "" ], [ "Mattoccia", "Stefano", "" ], [ "Di Stefano", "Luigi", "" ] ]
Estimating depth from images nowadays yields outstanding results, both in terms of in-domain accuracy and generalization. However, we identify two main challenges that remain open in this field: dealing with non-Lambertian materials and effectively processing high-resolution images. To this end, we propose a novel dataset that includes accurate and dense ground-truth labels at high resolution, featuring scenes containing several specular and transparent surfaces. Our acquisition pipeline leverages a novel deep space-time stereo framework, enabling easy and accurate labeling with sub-pixel precision. The dataset is composed of 606 samples collected in 85 different scenes; each sample includes both a high-resolution pair (12 Mpx) as well as an unbalanced stereo pair (Left: 12 Mpx, Right: 1.1 Mpx), typical of modern mobile devices that mount sensors with different resolutions. Additionally, we provide manually annotated material segmentation masks and 15K unlabeled samples. The dataset is composed of a train set and two test sets, the latter devoted to the evaluation of stereo and monocular depth estimation networks. Our experiments highlight the open challenges and future research directions in this field.
2011.06198
Eric Le Ferrand
\'Eric Le Ferrand, Steven Bird, Laurent Besacier
Enabling Interactive Transcription in an Indigenous Community
inproceedings Coling 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel transcription workflow which combines spoken term detection and human-in-the-loop, together with a pilot experiment. This work is grounded in an almost zero-resource scenario where only a few terms have so far been identified, involving two endangered languages. We show that in the early stages of transcription, when the available data is insufficient to train a robust ASR system, it is possible to take advantage of the transcription of a small number of isolated words in order to bootstrap the transcription of a speech collection.
[ { "created": "Thu, 12 Nov 2020 04:41:35 GMT", "version": "v1" } ]
2020-11-13
[ [ "Ferrand", "Éric Le", "" ], [ "Bird", "Steven", "" ], [ "Besacier", "Laurent", "" ] ]
We propose a novel transcription workflow which combines spoken term detection and human-in-the-loop, together with a pilot experiment. This work is grounded in an almost zero-resource scenario where only a few terms have so far been identified, involving two endangered languages. We show that in the early stages of transcription, when the available data is insufficient to train a robust ASR system, it is possible to take advantage of the transcription of a small number of isolated words in order to bootstrap the transcription of a speech collection.
2107.12940
Mark Koren
Mark Koren and Ahmed Nassar and Mykel J. Kochenderfer
Finding Failures in High-Fidelity Simulation using Adaptive Stress Testing and the Backward Algorithm
Accepted to IROS 2021
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Validating the safety of autonomous systems generally requires the use of high-fidelity simulators that adequately capture the variability of real-world scenarios. However, it is generally not feasible to exhaustively search the space of simulation scenarios for failures. Adaptive stress testing (AST) is a method that uses reinforcement learning to find the most likely failure of a system. AST with a deep reinforcement learning solver has been shown to be effective in finding failures across a range of different systems. This approach generally involves running many simulations, which can be very expensive when using a high-fidelity simulator. To improve efficiency, we present a method that first finds failures in a low-fidelity simulator. It then uses the backward algorithm, which trains a deep neural network policy using a single expert demonstration, to adapt the low-fidelity failures to high-fidelity. We have created a series of autonomous vehicle validation case studies that represent some of the ways low-fidelity and high-fidelity simulators can differ, such as time discretization. We demonstrate in a variety of case studies that this new AST approach is able to find failures with significantly fewer high-fidelity simulation steps than are needed when just running AST directly in high-fidelity. As a proof of concept, we also demonstrate AST on NVIDIA's DriveSim simulator, an industry state-of-the-art high-fidelity simulator for finding failures in autonomous vehicles.
[ { "created": "Tue, 27 Jul 2021 16:54:04 GMT", "version": "v1" } ]
2021-07-28
[ [ "Koren", "Mark", "" ], [ "Nassar", "Ahmed", "" ], [ "Kochenderfer", "Mykel J.", "" ] ]
Validating the safety of autonomous systems generally requires the use of high-fidelity simulators that adequately capture the variability of real-world scenarios. However, it is generally not feasible to exhaustively search the space of simulation scenarios for failures. Adaptive stress testing (AST) is a method that uses reinforcement learning to find the most likely failure of a system. AST with a deep reinforcement learning solver has been shown to be effective in finding failures across a range of different systems. This approach generally involves running many simulations, which can be very expensive when using a high-fidelity simulator. To improve efficiency, we present a method that first finds failures in a low-fidelity simulator. It then uses the backward algorithm, which trains a deep neural network policy using a single expert demonstration, to adapt the low-fidelity failures to high-fidelity. We have created a series of autonomous vehicle validation case studies that represent some of the ways low-fidelity and high-fidelity simulators can differ, such as time discretization. We demonstrate in a variety of case studies that this new AST approach is able to find failures with significantly fewer high-fidelity simulation steps than are needed when just running AST directly in high-fidelity. As a proof of concept, we also demonstrate AST on NVIDIA's DriveSim simulator, an industry state-of-the-art high-fidelity simulator for finding failures in autonomous vehicles.
1910.05522
Hassan Khosravi
Hassan Khosravi, Kirsty Kitto, Joseph Jay Williams
RiPPLE: A Crowdsourced Adaptive Platform for Recommendation of Learning Activities
To be published by the Journal of Learning Analytics
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a platform called RiPPLE (Recommendation in Personalised Peer-Learning Environments) that recommends personalized learning activities to students based on their knowledge state from a pool of crowdsourced learning activities that are generated by educators and the students themselves. RiPPLE integrates insights from crowdsourcing, learning sciences, and adaptive learning, aiming to narrow the gap between these large bodies of research while providing a practical platform-based implementation that instructors can easily use in their courses. This paper provides a design overview of RiPPLE, which can be employed as a standalone tool or embedded into any learning management system (LMS) or online platform that supports the Learning Tools Interoperability (LTI) standard. The platform has been evaluated based on a pilot in an introductory course with 453 students at The University of Queensland. Initial results suggest that the use of the RiPPLE platform led to measurable learning gains and that students perceived the platform as beneficially supporting their learning.
[ { "created": "Sat, 12 Oct 2019 07:42:52 GMT", "version": "v1" } ]
2019-10-15
[ [ "Khosravi", "Hassan", "" ], [ "Kitto", "Kirsty", "" ], [ "Williams", "Joseph Jay", "" ] ]
This paper presents a platform called RiPPLE (Recommendation in Personalised Peer-Learning Environments) that recommends personalized learning activities to students based on their knowledge state from a pool of crowdsourced learning activities that are generated by educators and the students themselves. RiPPLE integrates insights from crowdsourcing, learning sciences, and adaptive learning, aiming to narrow the gap between these large bodies of research while providing a practical platform-based implementation that instructors can easily use in their courses. This paper provides a design overview of RiPPLE, which can be employed as a standalone tool or embedded into any learning management system (LMS) or online platform that supports the Learning Tools Interoperability (LTI) standard. The platform has been evaluated based on a pilot in an introductory course with 453 students at The University of Queensland. Initial results suggest that the use of the RiPPLE platform led to measurable learning gains and that students perceived the platform as beneficially supporting their learning.
2302.10351
Georgios Kissas
Jacob H. Seidman, Georgios Kissas, George J. Pappas, Paris Perdikaris
Variational Autoencoding Neural Operators
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Unsupervised learning with functional data is an emerging paradigm of machine learning research with applications to computer vision, climate modeling and physical systems. A natural way of modeling functional data is by learning operators between infinite dimensional spaces, leading to discretization invariant representations that scale independently of the sample grid resolution. Here we present Variational Autoencoding Neural Operators (VANO), a general strategy for making a large class of operator learning architectures act as variational autoencoders. For this purpose, we provide a novel rigorous mathematical formulation of the variational objective in function spaces for training. VANO first maps an input function to a distribution over a latent space using a parametric encoder and then decodes a sample from the latent distribution to reconstruct the input, as in classic variational autoencoders. We test VANO with different model set-ups and architecture choices for a variety of benchmarks. We start from a simple Gaussian random field where we can analytically track what the model learns and progressively transition to more challenging benchmarks including modeling phase separation in Cahn-Hilliard systems and real world satellite data for measuring Earth surface deformation.
[ { "created": "Mon, 20 Feb 2023 22:34:43 GMT", "version": "v1" } ]
2023-02-22
[ [ "Seidman", "Jacob H.", "" ], [ "Kissas", "Georgios", "" ], [ "Pappas", "George J.", "" ], [ "Perdikaris", "Paris", "" ] ]
Unsupervised learning with functional data is an emerging paradigm of machine learning research with applications to computer vision, climate modeling and physical systems. A natural way of modeling functional data is by learning operators between infinite dimensional spaces, leading to discretization invariant representations that scale independently of the sample grid resolution. Here we present Variational Autoencoding Neural Operators (VANO), a general strategy for making a large class of operator learning architectures act as variational autoencoders. For this purpose, we provide a novel rigorous mathematical formulation of the variational objective in function spaces for training. VANO first maps an input function to a distribution over a latent space using a parametric encoder and then decodes a sample from the latent distribution to reconstruct the input, as in classic variational autoencoders. We test VANO with different model set-ups and architecture choices for a variety of benchmarks. We start from a simple Gaussian random field where we can analytically track what the model learns and progressively transition to more challenging benchmarks including modeling phase separation in Cahn-Hilliard systems and real world satellite data for measuring Earth surface deformation.
2305.14215
Ziru Chen
Chang-You Tai, Ziru Chen, Tianshu Zhang, Xiang Deng and Huan Sun
Exploring Chain-of-Thought Style Prompting for Text-to-SQL
EMNLP 2023 main; long paper
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-context learning with large language models (LLMs) has recently caught increasing attention due to its superior few-shot performance on various tasks. However, its performance on text-to-SQL parsing still has much room for improvement. In this paper, we hypothesize that a crucial aspect of LLMs to improve for text-to-SQL parsing is their multi-step reasoning ability. Thus, we systematically study how to enhance LLMs' reasoning ability through chain of thought (CoT) style prompting, including the original chain-of-thought prompting (Wei et al., 2022b) and least-to-most prompting (Zhou et al., 2023). Our experiments demonstrate that iterative prompting as in Zhou et al. (2023) may be unnecessary for text-to-SQL parsing, and using detailed reasoning steps tends to have more error propagation issues. Based on these findings, we propose a new CoT-style prompting method for text-to-SQL parsing. It brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively, compared to the standard prompting method without reasoning steps; 2.4 and 1.5 point absolute gains, compared to the least-to-most prompting method.
[ { "created": "Tue, 23 May 2023 16:32:36 GMT", "version": "v1" }, { "created": "Fri, 27 Oct 2023 15:21:38 GMT", "version": "v2" } ]
2023-10-30
[ [ "Tai", "Chang-You", "" ], [ "Chen", "Ziru", "" ], [ "Zhang", "Tianshu", "" ], [ "Deng", "Xiang", "" ], [ "Sun", "Huan", "" ] ]
In-context learning with large language models (LLMs) has recently caught increasing attention due to its superior few-shot performance on various tasks. However, its performance on text-to-SQL parsing still has much room for improvement. In this paper, we hypothesize that a crucial aspect of LLMs to improve for text-to-SQL parsing is their multi-step reasoning ability. Thus, we systematically study how to enhance LLMs' reasoning ability through chain of thought (CoT) style prompting, including the original chain-of-thought prompting (Wei et al., 2022b) and least-to-most prompting (Zhou et al., 2023). Our experiments demonstrate that iterative prompting as in Zhou et al. (2023) may be unnecessary for text-to-SQL parsing, and using detailed reasoning steps tends to have more error propagation issues. Based on these findings, we propose a new CoT-style prompting method for text-to-SQL parsing. It brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively, compared to the standard prompting method without reasoning steps; 2.4 and 1.5 point absolute gains, compared to the least-to-most prompting method.
2206.10697
Simona Maggio
Simona Maggio, Victor Bouvier and L\'eo Dreyfus-Schmidt
Performance Prediction Under Dataset Shift
Published at ICPR
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
ML models deployed in production often have to face unknown domain changes, fundamentally different from their training settings. Performance prediction models carry out the crucial task of measuring the impact of these changes on model performance. We study the generalization capabilities of various performance prediction models to new domains by learning on generated synthetic perturbations. Empirical validation on a benchmark of ten tabular datasets shows that models based upon state-of-the-art shift detection metrics are not expressive enough to generalize to unseen domains, while Error Predictors bring a consistent improvement in performance prediction under shift. We additionally propose a natural and effortless uncertainty estimation of the predicted accuracy that ensures reliable use of performance predictors. Our implementation is available at https://github.com/dataiku-research/performance_prediction_under_shift.
[ { "created": "Tue, 21 Jun 2022 19:40:58 GMT", "version": "v1" } ]
2022-06-23
[ [ "Maggio", "Simona", "" ], [ "Bouvier", "Victor", "" ], [ "Dreyfus-Schmidt", "Léo", "" ] ]
ML models deployed in production often have to face unknown domain changes, fundamentally different from their training settings. Performance prediction models carry out the crucial task of measuring the impact of these changes on model performance. We study the generalization capabilities of various performance prediction models to new domains by learning on generated synthetic perturbations. Empirical validation on a benchmark of ten tabular datasets shows that models based upon state-of-the-art shift detection metrics are not expressive enough to generalize to unseen domains, while Error Predictors bring a consistent improvement in performance prediction under shift. We additionally propose a natural and effortless uncertainty estimation of the predicted accuracy that ensures reliable use of performance predictors. Our implementation is available at https://github.com/dataiku-research/performance_prediction_under_shift.
1810.13195
Yacine Ouzrout
Thtiya Manakitsirisuthi, Yacine Ouzrout (LIESP), Abdelaziz Bouras (LIESP)
A multi-agent system for managing the product lifecycle sustainability
null
International Conference on Software, Knowledge and Application, Oct 2009, Fez, Morocco. pp.8, 2009
null
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The internationally competitive market drives ever shorter product life cycles and product development processes, with improvements in time, cost and quality, while also increasing waste generation. Product lifecycle sustainability can reduce waste, conserve resources, use recycled materials, design products for easy disassembly and avoid hazardous materials. This paper proposes a knowledge management architecture, based on a multi-agent system, which focuses on "sustainability" in order to manage knowledge in each stage of the product lifecycle, and particularly in the recovery process. The aim of this research work is to link a decision-making system, based on the agents' knowledge about sustainability (environmental norms, rules...), with a PLM (Product Lifecycle Management) system. The software agents will help the decision makers in each stage of the lifecycle and make them take into account the environmental impact of their decisions.
[ { "created": "Wed, 31 Oct 2018 10:17:33 GMT", "version": "v1" } ]
2018-11-01
[ [ "Manakitsirisuthi", "Thtiya", "", "LIESP" ], [ "Ouzrout", "Yacine", "", "LIESP" ], [ "Bouras", "Abdelaziz", "", "LIESP" ] ]
The internationally competitive market drives ever shorter product life cycles and product development processes, with improvements in time, cost and quality, while also increasing waste generation. Product lifecycle sustainability can reduce waste, conserve resources, use recycled materials, design products for easy disassembly and avoid hazardous materials. This paper proposes a knowledge management architecture, based on a multi-agent system, which focuses on "sustainability" in order to manage knowledge in each stage of the product lifecycle, and particularly in the recovery process. The aim of this research work is to link a decision-making system, based on the agents' knowledge about sustainability (environmental norms, rules...), with a PLM (Product Lifecycle Management) system. The software agents will help the decision makers in each stage of the lifecycle and make them take into account the environmental impact of their decisions.
1309.1516
Shih-Chun Lin
Shih-Chun Lin and Cheng-Liang Lin
On Secrecy Capacity of Fast Fading MIMOME Wiretap Channels With Statistical CSIT
submitted to IEEE Transactions on Wireless Communications
null
10.1109/TWC.2014.041714.11654
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider secure transmissions in ergodic Rayleigh fast-faded multiple-input multiple-output multiple-antenna-eavesdropper (MIMOME) wiretap channels with only statistical channel state information at the transmitter (CSIT). When the legitimate receiver has at least as many antennas as the eavesdropper, we prove the first MIMOME secrecy capacity with partial CSIT by establishing a new secrecy capacity upper bound. The key step is to form an MIMOME degraded channel by dividing the legitimate receiver's channel matrix into two submatrices and setting one of the submatrices to be the same as the eavesdropper's channel matrix. Next, under the total power constraint over all transmit antennas, we analytically solve the channel-input covariance matrix optimization problem to fully characterize the MIMOME secrecy capacity. Typically, MIMOME optimization problems are non-concave. However, thanks to the proposed degraded channel, we can transform the stochastic MIMOME optimization problem into a Schur-concave one and then find its solution. Besides the total power constraint, we also investigate the secrecy capacity when the transmitter is subject to the practical per-antenna power constraint. The corresponding optimization problem is even more difficult since it is not Schur-concave. Under the two power constraints considered, the corresponding MIMOME secrecy capacities can both scale with the signal-to-noise ratio (SNR) when the difference between the numbers of antennas at the legitimate receiver and the eavesdropper is large enough. However, when the legitimate receiver and eavesdropper have a single antenna each, such SNR scalings do not exist in either case.
[ { "created": "Fri, 6 Sep 2013 01:32:11 GMT", "version": "v1" } ]
2014-05-13
[ [ "Lin", "Shih-Chun", "" ], [ "Lin", "Cheng-Liang", "" ] ]
In this paper, we consider secure transmissions in ergodic Rayleigh fast-faded multiple-input multiple-output multiple-antenna-eavesdropper (MIMOME) wiretap channels with only statistical channel state information at the transmitter (CSIT). When the legitimate receiver has at least as many antennas as the eavesdropper, we prove the first MIMOME secrecy capacity with partial CSIT by establishing a new secrecy capacity upper bound. The key step is to form an MIMOME degraded channel by dividing the legitimate receiver's channel matrix into two submatrices and setting one of the submatrices to be the same as the eavesdropper's channel matrix. Next, under the total power constraint over all transmit antennas, we analytically solve the channel-input covariance matrix optimization problem to fully characterize the MIMOME secrecy capacity. Typically, MIMOME optimization problems are non-concave. However, thanks to the proposed degraded channel, we can transform the stochastic MIMOME optimization problem into a Schur-concave one and then find its solution. Besides the total power constraint, we also investigate the secrecy capacity when the transmitter is subject to the practical per-antenna power constraint. The corresponding optimization problem is even more difficult since it is not Schur-concave. Under the two power constraints considered, the corresponding MIMOME secrecy capacities can both scale with the signal-to-noise ratio (SNR) when the difference between the numbers of antennas at the legitimate receiver and the eavesdropper is large enough. However, when the legitimate receiver and eavesdropper have a single antenna each, such SNR scalings do not exist in either case.
2309.05388
Seong Hun Lee
Seong Hun Lee, Javier Civera
Robust Single Rotation Averaging Revisited
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we propose a novel method for robust single rotation averaging that can efficiently handle an extremely large fraction of outliers. Our approach is to minimize the total truncated least unsquared deviations (TLUD) cost of geodesic distances. The proposed algorithm consists of three steps: First, we consider each input rotation as a potential initial solution and choose the one that yields the least sum of truncated chordal deviations. Next, we obtain the inlier set using the initial solution and compute its chordal $L_2$-mean. Finally, starting from this estimate, we iteratively compute the geodesic $L_1$-mean of the inliers using the Weiszfeld algorithm on $SO(3)$. An extensive evaluation shows that our method is robust against up to 99% outliers given a sufficient number of accurate inliers, outperforming the current state of the art.
[ { "created": "Mon, 11 Sep 2023 11:35:17 GMT", "version": "v1" }, { "created": "Thu, 1 Feb 2024 22:49:08 GMT", "version": "v2" }, { "created": "Mon, 26 Feb 2024 23:10:27 GMT", "version": "v3" }, { "created": "Wed, 28 Feb 2024 12:14:48 GMT", "version": "v4" } ]
2024-02-29
[ [ "Lee", "Seong Hun", "" ], [ "Civera", "Javier", "" ] ]
In this work, we propose a novel method for robust single rotation averaging that can efficiently handle an extremely large fraction of outliers. Our approach is to minimize the total truncated least unsquared deviations (TLUD) cost of geodesic distances. The proposed algorithm consists of three steps: First, we consider each input rotation as a potential initial solution and choose the one that yields the least sum of truncated chordal deviations. Next, we obtain the inlier set using the initial solution and compute its chordal $L_2$-mean. Finally, starting from this estimate, we iteratively compute the geodesic $L_1$-mean of the inliers using the Weiszfeld algorithm on $SO(3)$. An extensive evaluation shows that our method is robust against up to 99% outliers given a sufficient number of accurate inliers, outperforming the current state of the art.
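The final step of the pipeline described in this abstract, the geodesic $L_1$-mean of the inliers via Weiszfeld iterations on $SO(3)$, can be sketched in NumPy. This is an illustrative reimplementation of that standard step only, not the authors' code; function names and tolerances are assumptions.

```python
import numpy as np

def log_so3(R):
    """Rotation matrix -> axis-angle vector (matrix logarithm on SO(3))."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2.0 * np.sin(theta))

def exp_so3(v):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(v)
    if theta < 1e-10:
        return np.eye(3)
    k = v / theta
    K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def weiszfeld_so3(rotations, R0, iters=20, eps=1e-9):
    """Geodesic L1-mean of rotations via Weiszfeld iterations on SO(3),
    started from an initial estimate R0 (e.g. the chordal L2-mean)."""
    R = R0.copy()
    for _ in range(iters):
        # Tangent vectors from the current estimate to each input rotation.
        vs = [log_so3(R.T @ Ri) for Ri in rotations]
        norms = [max(np.linalg.norm(v), eps) for v in vs]  # guard singularities
        step = sum(v / n for v, n in zip(vs, norms)) / sum(1.0 / n for n in norms)
        R = R @ exp_so3(step)
        if np.linalg.norm(step) < 1e-12:
            break
    return R
```

For collinear rotations about one axis, the iterate converges to the middle rotation, the 1-D geometric median, which matches the intended robustness of the $L_1$ objective.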
2203.13883
Sara Abdali
Sara Abdali, Sina shaham, Bhaskar Krishnamachari
Multi-modal Misinformation Detection: Approaches, Challenges and Opportunities
null
null
null
null
cs.LG cs.AI cs.CV cs.CY cs.MM cs.SI
http://creativecommons.org/licenses/by-sa/4.0/
As social media platforms are evolving from text-based forums into multi-modal environments, the nature of misinformation in social media is also transforming accordingly. Taking advantage of the fact that visual modalities such as images and videos are more favorable and attractive to the users and textual contents are sometimes skimmed carelessly, misinformation spreaders have recently targeted contextual connections between the modalities e.g., text and image. Hence many researchers have developed automatic techniques for detecting possible cross-modal discordance in web-based content. We analyze, categorize and identify existing approaches in addition to challenges and shortcomings they face in order to unearth new research opportunities in the field of multi-modal misinformation detection.
[ { "created": "Fri, 25 Mar 2022 19:45:33 GMT", "version": "v1" }, { "created": "Fri, 1 Apr 2022 21:03:13 GMT", "version": "v2" }, { "created": "Tue, 26 Jul 2022 21:55:37 GMT", "version": "v3" }, { "created": "Tue, 23 Jan 2024 03:54:48 GMT", "version": "v4" }, { "created": "Wed, 24 Jan 2024 01:50:22 GMT", "version": "v5" }, { "created": "Wed, 27 Mar 2024 23:27:58 GMT", "version": "v6" } ]
2024-03-29
[ [ "Abdali", "Sara", "" ], [ "shaham", "Sina", "" ], [ "Krishnamachari", "Bhaskar", "" ] ]
As social media platforms are evolving from text-based forums into multi-modal environments, the nature of misinformation in social media is also transforming accordingly. Taking advantage of the fact that visual modalities such as images and videos are more favorable and attractive to the users and textual contents are sometimes skimmed carelessly, misinformation spreaders have recently targeted contextual connections between the modalities e.g., text and image. Hence many researchers have developed automatic techniques for detecting possible cross-modal discordance in web-based content. We analyze, categorize and identify existing approaches in addition to challenges and shortcomings they face in order to unearth new research opportunities in the field of multi-modal misinformation detection.
2306.10724
Hamed Hemati
Hamed Hemati, Vincenzo Lomonaco, Davide Bacciu, Damian Borth
Partial Hypernetworks for Continual Learning
Accepted to the 2nd Conference on Lifelong Learning Agents (CoLLAs), 2023
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Hypernetworks mitigate forgetting in continual learning (CL) by generating task-dependent weights and penalizing weight changes at a meta-model level. Unfortunately, generating all weights is not only computationally expensive for larger architectures, but also, it is not well understood whether generating all model weights is necessary. Inspired by latent replay methods in CL, we propose partial weight generation for the final layers of a model using hypernetworks while freezing the initial layers. With this objective, we first answer the question of how many layers can be frozen without compromising the final performance. Through several experiments, we empirically show that the number of layers that can be frozen is proportional to the distributional similarity in the CL stream. Then, to demonstrate the effectiveness of hypernetworks, we show that noisy streams can significantly impact the performance of latent replay methods, leading to increased forgetting when features from noisy experiences are replayed with old samples. In contrast, partial hypernetworks are more robust to noise by maintaining accuracy on previous experiences. Finally, we conduct experiments on the split CIFAR-100 and TinyImagenet benchmarks and compare different versions of partial hypernetworks to latent replay methods. We conclude that partial weight generation using hypernetworks is a promising solution to the problem of forgetting in neural networks. It can provide an effective balance between computation and final test accuracy in CL streams.
[ { "created": "Mon, 19 Jun 2023 06:49:10 GMT", "version": "v1" } ]
2023-06-21
[ [ "Hemati", "Hamed", "" ], [ "Lomonaco", "Vincenzo", "" ], [ "Bacciu", "Davide", "" ], [ "Borth", "Damian", "" ] ]
Hypernetworks mitigate forgetting in continual learning (CL) by generating task-dependent weights and penalizing weight changes at a meta-model level. Unfortunately, generating all weights is not only computationally expensive for larger architectures, but also, it is not well understood whether generating all model weights is necessary. Inspired by latent replay methods in CL, we propose partial weight generation for the final layers of a model using hypernetworks while freezing the initial layers. With this objective, we first answer the question of how many layers can be frozen without compromising the final performance. Through several experiments, we empirically show that the number of layers that can be frozen is proportional to the distributional similarity in the CL stream. Then, to demonstrate the effectiveness of hypernetworks, we show that noisy streams can significantly impact the performance of latent replay methods, leading to increased forgetting when features from noisy experiences are replayed with old samples. In contrast, partial hypernetworks are more robust to noise by maintaining accuracy on previous experiences. Finally, we conduct experiments on the split CIFAR-100 and TinyImagenet benchmarks and compare different versions of partial hypernetworks to latent replay methods. We conclude that partial weight generation using hypernetworks is a promising solution to the problem of forgetting in neural networks. It can provide an effective balance between computation and final test accuracy in CL streams.
1202.1458
Adam Williamson
Adam R. Williamson, Tsung-Yi Chen, and Richard D. Wesel
A Rate-Compatible Sphere-Packing Analysis of Feedback Coding with Limited Retransmissions
To be published at the 2012 IEEE International Symposium on Information Theory, Cambridge, MA, USA. Updated to incorporate reviewers' comments and add new figures
null
10.1109/ISIT.2012.6284061
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work by Polyanskiy et al. and Chen et al. has excited new interest in using feedback to approach capacity with low latency. Polyanskiy showed that feedback identifying the first symbol at which decoding is successful allows capacity to be approached with surprisingly low latency. This paper uses Chen's rate-compatible sphere-packing (RCSP) analysis to study what happens when symbols must be transmitted in packets, as with a traditional hybrid ARQ system, and limited to relatively few (six or fewer) incremental transmissions. Numerical optimizations find the series of progressively growing cumulative block lengths that enable RCSP to approach capacity with the minimum possible latency. RCSP analysis shows that five incremental transmissions are sufficient to achieve 92% of capacity with an average block length of fewer than 101 symbols on the AWGN channel with SNR of 2.0 dB. The RCSP analysis provides a decoding error trajectory that specifies the decoding error rate for each cumulative block length. Though RCSP is an idealization, an example tail-biting convolutional code matches the RCSP decoding error trajectory and achieves 91% of capacity with an average block length of 102 symbols on the AWGN channel with SNR of 2.0 dB. We also show how RCSP analysis can be used in cases where packets have deadlines associated with them (leading to an outage probability).
[ { "created": "Tue, 7 Feb 2012 16:31:54 GMT", "version": "v1" }, { "created": "Mon, 21 May 2012 01:19:20 GMT", "version": "v2" } ]
2016-11-15
[ [ "Williamson", "Adam R.", "" ], [ "Chen", "Tsung-Yi", "" ], [ "Wesel", "Richard D.", "" ] ]
Recent work by Polyanskiy et al. and Chen et al. has excited new interest in using feedback to approach capacity with low latency. Polyanskiy showed that feedback identifying the first symbol at which decoding is successful allows capacity to be approached with surprisingly low latency. This paper uses Chen's rate-compatible sphere-packing (RCSP) analysis to study what happens when symbols must be transmitted in packets, as with a traditional hybrid ARQ system, and limited to relatively few (six or fewer) incremental transmissions. Numerical optimizations find the series of progressively growing cumulative block lengths that enable RCSP to approach capacity with the minimum possible latency. RCSP analysis shows that five incremental transmissions are sufficient to achieve 92% of capacity with an average block length of fewer than 101 symbols on the AWGN channel with SNR of 2.0 dB. The RCSP analysis provides a decoding error trajectory that specifies the decoding error rate for each cumulative block length. Though RCSP is an idealization, an example tail-biting convolutional code matches the RCSP decoding error trajectory and achieves 91% of capacity with an average block length of 102 symbols on the AWGN channel with SNR of 2.0 dB. We also show how RCSP analysis can be used in cases where packets have deadlines associated with them (leading to an outage probability).
1803.08634
Zhifei Mao
Zhifei Mao, Yuming Jiang, Xiaoqiang Di, and Yordanos Woldeyohannes
Joint Head Selection and Airtime Allocation for Data Dissemination in Mobile Social Networks
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile social networks (MSNs) enable people with similar interests to interact without Internet access. By forming a temporary group, users can disseminate their data to other interested users in proximity with short-range communication technologies. However, due to user mobility, the airtime available for users in the same group to disseminate data is limited. In addition, for practical considerations, a star network topology among users in the group is expected. For the former, unfair airtime allocation among the users will undermine their willingness to participate in MSNs. For the latter, a group head is required to connect other users. These two problems have to be properly addressed to enable real implementation and adoption of MSNs. To this end, we propose a Nash bargaining-based joint head selection and airtime allocation scheme for data dissemination within the group. Specifically, the bargaining game of joint head selection and airtime allocation is first formulated. Then, Nash bargaining solution (NBS) based optimization problems are proposed for a homogeneous case and a more general heterogeneous case. For both cases, the existence of a solution to the optimization problem is proved, which guarantees Pareto optimality and proportional fairness. Next, an algorithm for joint head selection and airtime allocation, allowing distributed implementation, is introduced. Finally, numerical results are presented to evaluate the performance, validate intuitions and derive insights of the proposed scheme.
[ { "created": "Fri, 23 Mar 2018 01:53:34 GMT", "version": "v1" }, { "created": "Tue, 8 Jan 2019 21:51:50 GMT", "version": "v2" } ]
2019-01-10
[ [ "Mao", "Zhifei", "" ], [ "Jiang", "Yuming", "" ], [ "Di", "Xiaoqiang", "" ], [ "Woldeyohannes", "Yordanos", "" ] ]
Mobile social networks (MSNs) enable people with similar interests to interact without Internet access. By forming a temporary group, users can disseminate their data to other interested users in proximity with short-range communication technologies. However, due to user mobility, the airtime available for users in the same group to disseminate data is limited. In addition, for practical considerations, a star network topology among users in the group is expected. For the former, unfair airtime allocation among the users will undermine their willingness to participate in MSNs. For the latter, a group head is required to connect other users. These two problems have to be properly addressed to enable real implementation and adoption of MSNs. To this end, we propose a Nash bargaining-based joint head selection and airtime allocation scheme for data dissemination within the group. Specifically, the bargaining game of joint head selection and airtime allocation is first formulated. Then, Nash bargaining solution (NBS) based optimization problems are proposed for a homogeneous case and a more general heterogeneous case. For both cases, the existence of a solution to the optimization problem is proved, which guarantees Pareto optimality and proportional fairness. Next, an algorithm for joint head selection and airtime allocation, allowing distributed implementation, is introduced. Finally, numerical results are presented to evaluate the performance, validate intuitions and derive insights of the proposed scheme.
1709.10237
Biswadip Dey
Kevin S. Galloway and Biswadip Dey
Beacon-referenced Mutual Pursuit in Three Dimensions
null
null
null
null
cs.SY cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by station-keeping applications in various unmanned settings, this paper introduces a steering control law for a pair of agents operating in the vicinity of a fixed beacon in a three-dimensional environment. This feedback law is a modification of the previously studied three-dimensional constant bearing (CB) pursuit law, in the sense that it incorporates an additional term to allocate attention to the beacon. We investigate the behavior of the closed-loop dynamics for a two agent mutual pursuit system in which each agent employs the beacon-referenced CB pursuit law with regards to the other agent and a stationary beacon. Under certain assumptions on the associated control parameters, we demonstrate that this problem admits circling equilibria wherein the agents move on circular orbits with a common radius, in planes perpendicular to a common axis passing through the beacon. As the common radius and distances from the beacon are determined by choice of parameters in the feedback law, this approach provides a means to engineer desired formations in a three-dimensional setting.
[ { "created": "Fri, 29 Sep 2017 04:52:57 GMT", "version": "v1" } ]
2017-10-02
[ [ "Galloway", "Kevin S.", "" ], [ "Dey", "Biswadip", "" ] ]
Motivated by station-keeping applications in various unmanned settings, this paper introduces a steering control law for a pair of agents operating in the vicinity of a fixed beacon in a three-dimensional environment. This feedback law is a modification of the previously studied three-dimensional constant bearing (CB) pursuit law, in the sense that it incorporates an additional term to allocate attention to the beacon. We investigate the behavior of the closed-loop dynamics for a two agent mutual pursuit system in which each agent employs the beacon-referenced CB pursuit law with regards to the other agent and a stationary beacon. Under certain assumptions on the associated control parameters, we demonstrate that this problem admits circling equilibria wherein the agents move on circular orbits with a common radius, in planes perpendicular to a common axis passing through the beacon. As the common radius and distances from the beacon are determined by choice of parameters in the feedback law, this approach provides a means to engineer desired formations in a three-dimensional setting.
1302.7082
Meena Kabilan
A.Meena, K.Raja
K Means Segmentation of Alzheimer's Disease in PET scan datasets: An implementation
International Joint Conference on Advances in Signal Processing and Information Technology, SPIT2012
LNICST, ISSN: 1867-8211, pp. 158-162, 2012
null
null
cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Positron Emission Tomography (PET) scan images require expertise in segmentation, where clustering algorithms play an important role in the automation process. Algorithm optimization is evaluated based on the performance, quality and number of clusters extracted. This paper studies the commonly used K-means clustering algorithm and discusses a brief list of toolboxes for reproducing and extending works presented in medical image analysis. This work is implemented using the AForge.NET framework in a Windows environment and MATrix LABoratory (MATLAB 7.0.1).
[ { "created": "Thu, 28 Feb 2013 04:50:31 GMT", "version": "v1" } ]
2013-03-01
[ [ "Meena", "A.", "" ], [ "Raja", "K.", "" ] ]
Positron Emission Tomography (PET) scan images require expertise in segmentation, where clustering algorithms play an important role in the automation process. Algorithm optimization is evaluated based on the performance, quality and number of clusters extracted. This paper studies the commonly used K-means clustering algorithm and discusses a brief list of toolboxes for reproducing and extending works presented in medical image analysis. This work is implemented using the AForge.NET framework in a Windows environment and MATrix LABoratory (MATLAB 7.0.1).
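The K-means clustering step this abstract studies can be illustrated with a minimal NumPy sketch of Lloyd's algorithm on pixel intensities. This is a generic illustration, not the AForge.NET/MATLAB implementation the paper uses; the quantile initialization is an assumption made to keep the run deterministic.

```python
import numpy as np

def kmeans_intensity(pixels, k=3, iters=100):
    """Lloyd's K-means on a 1-D array of pixel intensities.

    Centers start on evenly spaced quantiles (deterministic);
    returns (labels, cluster centers)."""
    pixels = np.asarray(pixels, dtype=float)
    centers = np.quantile(pixels, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Recompute centers; keep the old center if a cluster goes empty.
        new_centers = np.array([pixels[labels == j].mean()
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

On a PET slice, the same call would run on the flattened intensity array, with the resulting labels reshaped back to the image grid to form the segmentation.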
2204.12039
Yuqing Liu
Yuqing Liu, Qi Jia, Jian Zhang, Xin Fan, Shanshe Wang, Siwei Ma, Wen Gao
Learning Weighting Map for Bit-Depth Expansion within a Rational Range
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bit-depth expansion (BDE) is one of the emerging technologies for displaying high bit-depth (HBD) images from low bit-depth (LBD) sources. Existing BDE methods have no unified solution for various BDE situations, and directly learn a mapping for each pixel from the LBD image to the desired value in the HBD image, which may change the given high-order bits and lead to a huge deviation from the ground truth. In this paper, we design a bit restoration network (BRNet) to learn a weight for each pixel, which indicates the ratio of the replenished value within a rational range, yielding an accurate solution without modifying the given high-order bit information. To make the network adaptive to any bit-depth degradation, we investigate the issue from an optimization perspective and train the network under a progressive training strategy for better performance. Moreover, we employ the Wasserstein distance as a visual quality indicator to evaluate the difference in color distribution between the restored image and the ground truth. Experimental results show our method can restore colorful images with fewer artifacts and false contours, and outperforms state-of-the-art methods with higher PSNR/SSIM results and a lower Wasserstein distance. The source code will be made available at https://github.com/yuqing-liu-dut/bit-depth-expansion
[ { "created": "Tue, 26 Apr 2022 02:27:39 GMT", "version": "v1" } ]
2022-04-27
[ [ "Liu", "Yuqing", "" ], [ "Jia", "Qi", "" ], [ "Zhang", "Jian", "" ], [ "Fan", "Xin", "" ], [ "Wang", "Shanshe", "" ], [ "Ma", "Siwei", "" ], [ "Gao", "Wen", "" ] ]
Bit-depth expansion (BDE) is one of the emerging technologies for displaying high bit-depth (HBD) images from low bit-depth (LBD) sources. Existing BDE methods have no unified solution for various BDE situations, and directly learn a mapping for each pixel from the LBD image to the desired value in the HBD image, which may change the given high-order bits and lead to a huge deviation from the ground truth. In this paper, we design a bit restoration network (BRNet) to learn a weight for each pixel, which indicates the ratio of the replenished value within a rational range, yielding an accurate solution without modifying the given high-order bit information. To make the network adaptive to any bit-depth degradation, we investigate the issue from an optimization perspective and train the network under a progressive training strategy for better performance. Moreover, we employ the Wasserstein distance as a visual quality indicator to evaluate the difference in color distribution between the restored image and the ground truth. Experimental results show our method can restore colorful images with fewer artifacts and false contours, and outperforms state-of-the-art methods with higher PSNR/SSIM results and a lower Wasserstein distance. The source code will be made available at https://github.com/yuqing-liu-dut/bit-depth-expansion
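The formulation described in this abstract, keeping the given high-order bits and filling the missing low-order bits as a weighted fraction of their maximum value, can be sketched as follows. This is a toy illustration of that arithmetic with externally supplied weights, not the BRNet model that predicts them; the function name and bit widths are assumptions.

```python
import numpy as np

def expand_bit_depth(lbd, weights, low_bits=4, high_bits=8):
    """Expand low-bit-depth pixels to a higher bit depth.

    The given high-order bits are preserved by shifting; each pixel's
    missing low-order bits are filled with weight * (2**shift - 1),
    i.e. a ratio of the replenished value within its rational range."""
    shift = high_bits - low_bits
    fill_max = (1 << shift) - 1  # largest value the low-order bits can hold
    fill = np.rint(np.clip(weights, 0.0, 1.0) * fill_max).astype(np.int64)
    return (lbd.astype(np.int64) << shift) + fill
```

Because the fill term is bounded by `fill_max`, the expanded value can never alter the shifted high-order bits, which is the property the abstract emphasizes.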
1006.0386
Haitham Rashwan
Haitham Rashwan, Ernst M. Gabidulin, Bahram Honary
A Smart Approach for GPT Cryptosystem Based on Rank Codes
5 pages. to appear in Proceedings of IEEE ISIT2010
null
10.1109/ISIT.2010.5513549
#1223: ISIT 2010
cs.IT cs.CR math.IT
http://creativecommons.org/licenses/by/3.0/
The concept of the public-key cryptosystem was pioneered by McEliece's cryptosystem. The public-key cryptosystem based on rank codes was presented in 1991 by Gabidulin-Paramonov-Trejtakov (GPT). The use of rank codes in cryptographic applications is advantageous since it is practically impossible to utilize combinatorial decoding. This has enabled the use of public keys of a smaller size. Respective structural attacks against this system were proposed by Gibson and recently by Overbeck. Overbeck's attacks break many versions of the GPT cryptosystem and turn out to be either polynomial or exponential, depending on the parameters of the cryptosystem. In this paper, we introduce a new approach, called the Smart approach, which is based on a proper choice of the distortion matrix X. The Smart approach allows for withstanding all known attacks even if the column scrambler matrix P is defined over the base field Fq.
[ { "created": "Wed, 2 Jun 2010 14:18:25 GMT", "version": "v1" } ]
2016-11-17
[ [ "Rashwan", "Haitham", "" ], [ "Gabidulin", "Ernst M.", "" ], [ "Honary", "Bahram", "" ] ]
The concept of a code-based public-key cryptosystem was pioneered by McEliece's cryptosystem. The public-key cryptosystem based on rank codes was presented in 1991 by Gabidulin-Paramonov-Trejtakov (GPT). The use of rank codes in cryptographic applications is advantageous since it is practically impossible to utilize combinatoric decoding. This has enabled the use of public keys of a smaller size. Respective structural attacks against this system were proposed by Gibson and recently by Overbeck. Overbeck's attacks break many versions of the GPT cryptosystem and turn out to be either polynomial or exponential depending on the parameters of the cryptosystem. In this paper, we introduce a new approach, called the Smart approach, which is based on a proper choice of the distortion matrix X. The Smart approach allows for withstanding all known attacks even if the column scrambler matrix P is chosen over the base field Fq.
2311.10832
Elaheh Jafarigol
Elaheh Jafarigol, Theodore Trafalis, Talayeh Razzaghi, Mona Zamankhani
Exploring Machine Learning Models for Federated Learning: A Review of Approaches, Performance, and Limitations
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
In the growing world of artificial intelligence, federated learning is a distributed learning framework designed to preserve the privacy of individuals' data. Federated learning lays the groundwork for collaborative research in areas where the data is sensitive. Federated learning has several implications for real-world problems. In times of crisis, when real-time decision-making is critical, federated learning allows multiple entities to work collectively without sharing sensitive data. This distributed approach enables us to leverage information from multiple sources and gain more diverse insights. This paper is a systematic review of the literature on privacy-preserving machine learning over the last few years, conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specifically, we present an extensive review of supervised/unsupervised machine learning algorithms, ensemble methods, meta-heuristic approaches, blockchain technology, and reinforcement learning used in the framework of federated learning, in addition to an overview of federated learning applications. This paper reviews the literature on the components of federated learning and its applications in the last few years. The main purpose of this work is to provide researchers and practitioners with a comprehensive overview of federated learning from the machine learning point of view. A discussion of some open problems and future research directions in federated learning is also provided.
[ { "created": "Fri, 17 Nov 2023 19:23:21 GMT", "version": "v1" } ]
2023-11-21
[ [ "Jafarigol", "Elaheh", "" ], [ "Trafalis", "Theodore", "" ], [ "Razzaghi", "Talayeh", "" ], [ "Zamankhani", "Mona", "" ] ]
In the growing world of artificial intelligence, federated learning is a distributed learning framework designed to preserve the privacy of individuals' data. Federated learning lays the groundwork for collaborative research in areas where the data is sensitive. Federated learning has several implications for real-world problems. In times of crisis, when real-time decision-making is critical, federated learning allows multiple entities to work collectively without sharing sensitive data. This distributed approach enables us to leverage information from multiple sources and gain more diverse insights. This paper is a systematic review of the literature on privacy-preserving machine learning over the last few years, conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specifically, we present an extensive review of supervised/unsupervised machine learning algorithms, ensemble methods, meta-heuristic approaches, blockchain technology, and reinforcement learning used in the framework of federated learning, in addition to an overview of federated learning applications. This paper reviews the literature on the components of federated learning and its applications in the last few years. The main purpose of this work is to provide researchers and practitioners with a comprehensive overview of federated learning from the machine learning point of view. A discussion of some open problems and future research directions in federated learning is also provided.
2207.12710
Christoffer Loeffler
Christoffer Loeffler, Kion Fallah, Stefano Fenu, Dario Zanca, Bjoern Eskofier, Christopher John Rozell, Christopher Mutschler
Active Learning of Ordinal Embeddings: A User Study on Football Data
23 pages, 17 figures
Transactions on Machine Learning Research 04/2023 https://openreview.net/forum?id=oq3tx5kinu
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans innately measure distance between instances in an unlabeled dataset using an unknown similarity function. Distance metrics can only serve as a proxy for similarity in information retrieval of similar instances. Learning a good similarity function from human annotations improves the quality of retrievals. This work uses deep metric learning to learn these user-defined similarity functions from few annotations for a large football trajectory dataset. We adapt an entropy-based active learning method with recent work from triplet mining to collect easy-to-answer but still informative annotations from human participants and use them to train a deep convolutional network that generalizes to unseen samples. Our user study shows that our approach improves the quality of the information retrieval compared to a previous deep metric learning approach that relies on a Siamese network. Specifically, we shed light on the strengths and weaknesses of passive sampling heuristics and active learners alike by analyzing the participants' response efficacy. To this end, we collect accuracy, algorithmic time complexity, the participants' fatigue and time-to-response, qualitative self-assessment and statements, as well as the effects of mixed-expertise annotators and their consistency on model performance and transfer-learning.
[ { "created": "Tue, 26 Jul 2022 07:55:23 GMT", "version": "v1" }, { "created": "Thu, 10 Nov 2022 09:49:18 GMT", "version": "v2" } ]
2023-04-25
[ [ "Loeffler", "Christoffer", "" ], [ "Fallah", "Kion", "" ], [ "Fenu", "Stefano", "" ], [ "Zanca", "Dario", "" ], [ "Eskofier", "Bjoern", "" ], [ "Rozell", "Christopher John", "" ], [ "Mutschler", "Christopher", "" ] ]
Humans innately measure distance between instances in an unlabeled dataset using an unknown similarity function. Distance metrics can only serve as a proxy for similarity in information retrieval of similar instances. Learning a good similarity function from human annotations improves the quality of retrievals. This work uses deep metric learning to learn these user-defined similarity functions from few annotations for a large football trajectory dataset. We adapt an entropy-based active learning method with recent work from triplet mining to collect easy-to-answer but still informative annotations from human participants and use them to train a deep convolutional network that generalizes to unseen samples. Our user study shows that our approach improves the quality of the information retrieval compared to a previous deep metric learning approach that relies on a Siamese network. Specifically, we shed light on the strengths and weaknesses of passive sampling heuristics and active learners alike by analyzing the participants' response efficacy. To this end, we collect accuracy, algorithmic time complexity, the participants' fatigue and time-to-response, qualitative self-assessment and statements, as well as the effects of mixed-expertise annotators and their consistency on model performance and transfer-learning.
1808.07910
Nicolas Ford
Nicolas Ford, Daniel Duckworth, Mohammad Norouzi, and George E. Dahl
The Importance of Generation Order in Language Modeling
null
null
null
null
cs.LG cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural language models are a critical component of state-of-the-art systems for machine translation, summarization, audio transcription, and other tasks. These language models are almost universally autoregressive in nature, generating sentences one token at a time from left to right. This paper studies the influence of token generation order on model quality via a novel two-pass language model that produces partially-filled sentence "templates" and then fills in missing tokens. We compare various strategies for structuring these two passes and observe a surprisingly large variation in model quality. We find the most effective strategy generates function words in the first pass followed by content words in the second. We believe these experimental results justify a more extensive investigation of generation order for neural language models.
[ { "created": "Thu, 23 Aug 2018 19:17:24 GMT", "version": "v1" } ]
2018-08-27
[ [ "Ford", "Nicolas", "" ], [ "Duckworth", "Daniel", "" ], [ "Norouzi", "Mohammad", "" ], [ "Dahl", "George E.", "" ] ]
Neural language models are a critical component of state-of-the-art systems for machine translation, summarization, audio transcription, and other tasks. These language models are almost universally autoregressive in nature, generating sentences one token at a time from left to right. This paper studies the influence of token generation order on model quality via a novel two-pass language model that produces partially-filled sentence "templates" and then fills in missing tokens. We compare various strategies for structuring these two passes and observe a surprisingly large variation in model quality. We find the most effective strategy generates function words in the first pass followed by content words in the second. We believe these experimental results justify a more extensive investigation of generation order for neural language models.
2007.05906
Khawar Islam Mr
Khawar Islam, Uzma Afzal
Framework for Passenger Seat Availability Using Face Detection in Passenger Bus
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Advancements in Intelligent Transportation Systems (ITS) improve passenger travel by providing information systems for bus arrival time and for counting the number of passengers and buses in cities. Passengers still face bus waiting and seat unavailability issues, which have adverse effects on traffic management and the controlling authority. We propose a Face Detection based Framework (FDF) to determine passenger seat availability in a camera-equipped bus through face detection, which is based on background subtraction, to count empty, filled, and total seats. FDF has an integrated smartphone Passenger Application (PA) to identify the nearest bus stop. We evaluate FDF in a live test environment and results show that it gives 90% accuracy. We believe our results have the potential to address traffic management concerns and assist passengers in saving their valuable time.
[ { "created": "Sun, 12 Jul 2020 04:31:28 GMT", "version": "v1" } ]
2020-07-14
[ [ "Islam", "Khawar", "" ], [ "Afzal", "Uzma", "" ] ]
Advancements in Intelligent Transportation Systems (ITS) improve passenger travel by providing information systems for bus arrival time and for counting the number of passengers and buses in cities. Passengers still face bus waiting and seat unavailability issues, which have adverse effects on traffic management and the controlling authority. We propose a Face Detection based Framework (FDF) to determine passenger seat availability in a camera-equipped bus through face detection, which is based on background subtraction, to count empty, filled, and total seats. FDF has an integrated smartphone Passenger Application (PA) to identify the nearest bus stop. We evaluate FDF in a live test environment and results show that it gives 90% accuracy. We believe our results have the potential to address traffic management concerns and assist passengers in saving their valuable time.
1205.5055
Matthew Anderson
Matthew Anderson, Maciej Brodowicz, Hartmut Kaiser, Bryce Adelstein-Lelbach, and Thomas Sterling
Neutron Star Evolutions using Tabulated Equations of State with a New Execution Model
9 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:1110.1131
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The addition of nuclear and neutrino physics to general relativistic fluid codes allows for a more realistic description of hot nuclear matter in neutron star and black hole systems. This additional microphysics requires that each processor have access to large tables of data, such as equations of state, and in large simulations the memory required to store these tables locally can become excessive unless an alternative execution model is used. In this work we present relativistic fluid evolutions of a neutron star obtained using a message driven multi-threaded execution model known as ParalleX. These neutron star simulations would require substantial memory overhead dedicated entirely to the equation of state table if using a more traditional execution model. We introduce a ParalleX component based on Futures for accessing large tables of data, including out-of-core sized tables, which does not require substantial memory overhead and effectively hides any increased network latency.
[ { "created": "Tue, 22 May 2012 20:46:11 GMT", "version": "v1" } ]
2012-05-24
[ [ "Anderson", "Matthew", "" ], [ "Brodowicz", "Maciej", "" ], [ "Kaiser", "Hartmut", "" ], [ "Adelstein-Lelbach", "Bryce", "" ], [ "Sterling", "Thomas", "" ] ]
The addition of nuclear and neutrino physics to general relativistic fluid codes allows for a more realistic description of hot nuclear matter in neutron star and black hole systems. This additional microphysics requires that each processor have access to large tables of data, such as equations of state, and in large simulations the memory required to store these tables locally can become excessive unless an alternative execution model is used. In this work we present relativistic fluid evolutions of a neutron star obtained using a message driven multi-threaded execution model known as ParalleX. These neutron star simulations would require substantial memory overhead dedicated entirely to the equation of state table if using a more traditional execution model. We introduce a ParalleX component based on Futures for accessing large tables of data, including out-of-core sized tables, which does not require substantial memory overhead and effectively hides any increased network latency.
2202.08901
Anubrata Das
Li Shi, Nilavra Bhattacharya, Anubrata Das, Matthew Lease, Jacek Gwidzka
The Effects of Interactive AI Design on User Behavior: An Eye-tracking Study of Fact-checking COVID-19 Claims
null
Published in ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR 2022), March 14 --- 18, 2022, Regensburg, Germany
10.1145/3498366.3505786
null
cs.HC cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
We conducted a lab-based eye-tracking study to investigate how the interactivity of an AI-powered fact-checking system affects user interactions, such as dwell time, attention, and mental resources involved in using the system. A within-subject experiment was conducted, where participants used an interactive and a non-interactive version of a mock AI fact-checking system and rated their perceived correctness of COVID-19 related claims. We collected web-page interactions, eye-tracking data, and mental workload using NASA-TLX. We found that the presence of the affordance of interactively manipulating the AI system's prediction parameters affected users' dwell times, and eye-fixations on AOIs, but not mental workload. In the interactive system, participants spent the most time evaluating claims' correctness, followed by reading news. This promising result shows a positive role of interactivity in a mixed-initiative AI-powered system.
[ { "created": "Thu, 17 Feb 2022 21:08:57 GMT", "version": "v1" }, { "created": "Mon, 14 Mar 2022 20:47:34 GMT", "version": "v2" } ]
2022-03-16
[ [ "Shi", "Li", "" ], [ "Bhattacharya", "Nilavra", "" ], [ "Das", "Anubrata", "" ], [ "Lease", "Matthew", "" ], [ "Gwidzka", "Jacek", "" ] ]
We conducted a lab-based eye-tracking study to investigate how the interactivity of an AI-powered fact-checking system affects user interactions, such as dwell time, attention, and mental resources involved in using the system. A within-subject experiment was conducted, where participants used an interactive and a non-interactive version of a mock AI fact-checking system and rated their perceived correctness of COVID-19 related claims. We collected web-page interactions, eye-tracking data, and mental workload using NASA-TLX. We found that the presence of the affordance of interactively manipulating the AI system's prediction parameters affected users' dwell times, and eye-fixations on AOIs, but not mental workload. In the interactive system, participants spent the most time evaluating claims' correctness, followed by reading news. This promising result shows a positive role of interactivity in a mixed-initiative AI-powered system.
2404.09842
Tao Wu
Tao Wu, Mengqi Cao, Ziteng Gao, Gangshan Wu, Limin Wang
STMixer: A One-Stage Sparse Action Detector
Extended version of the paper arXiv:2303.15879 presented at CVPR 2023. Accepted by TPAMI 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Traditional video action detectors typically adopt the two-stage pipeline, where a person detector is first employed to generate actor boxes and then 3D RoIAlign is used to extract actor-specific features for classification. This detection paradigm requires multi-stage training and inference, and the feature sampling is constrained inside the box, failing to effectively leverage richer context information outside. Recently, a few query-based action detectors have been proposed to predict action instances in an end-to-end manner. However, they still lack adaptability in feature sampling and decoding, thus suffering from the issues of inferior performance or slower convergence. In this paper, we propose two core designs for a more flexible one-stage sparse action detector. First, we present a query-based adaptive feature sampling module, which endows the detector with the flexibility of mining a group of discriminative features from the entire spatio-temporal domain. Second, we devise a decoupled feature mixing module, which dynamically attends to and mixes video features along the spatial and temporal dimensions respectively for better feature decoding. Based on these designs, we instantiate two detection pipelines, that is, STMixer-K for keyframe action detection and STMixer-T for action tubelet detection. Without bells and whistles, our STMixer detectors obtain state-of-the-art results on five challenging spatio-temporal action detection benchmarks for keyframe action detection or action tube detection.
[ { "created": "Mon, 15 Apr 2024 14:52:02 GMT", "version": "v1" } ]
2024-04-16
[ [ "Wu", "Tao", "" ], [ "Cao", "Mengqi", "" ], [ "Gao", "Ziteng", "" ], [ "Wu", "Gangshan", "" ], [ "Wang", "Limin", "" ] ]
Traditional video action detectors typically adopt the two-stage pipeline, where a person detector is first employed to generate actor boxes and then 3D RoIAlign is used to extract actor-specific features for classification. This detection paradigm requires multi-stage training and inference, and the feature sampling is constrained inside the box, failing to effectively leverage richer context information outside. Recently, a few query-based action detectors have been proposed to predict action instances in an end-to-end manner. However, they still lack adaptability in feature sampling and decoding, thus suffering from the issues of inferior performance or slower convergence. In this paper, we propose two core designs for a more flexible one-stage sparse action detector. First, we present a query-based adaptive feature sampling module, which endows the detector with the flexibility of mining a group of discriminative features from the entire spatio-temporal domain. Second, we devise a decoupled feature mixing module, which dynamically attends to and mixes video features along the spatial and temporal dimensions respectively for better feature decoding. Based on these designs, we instantiate two detection pipelines, that is, STMixer-K for keyframe action detection and STMixer-T for action tubelet detection. Without bells and whistles, our STMixer detectors obtain state-of-the-art results on five challenging spatio-temporal action detection benchmarks for keyframe action detection or action tube detection.
2209.05243
Christofer Fellicious
Christofer Fellicious, Stewart Sentanoe, Michael Granitzer, Hans P. Reiser
SmartKex: Machine Learning Assisted SSH Keys Extraction From The Heap Dump
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Digital forensics is the process of extracting, preserving, and documenting evidence in digital devices. A commonly used method in digital forensics is to extract data from the main memory of a digital device. However, the main challenge is identifying the important data to be extracted. Several pieces of crucial information reside in the main memory, like usernames, passwords, and cryptographic keys such as SSH session keys. In this paper, we propose SmartKex, a machine-learning assisted method to extract session keys from heap memory snapshots of an OpenSSH process. In addition, we release an openly available dataset and the corresponding toolchain for creating additional data. Finally, we compare SmartKex with naive brute-force methods and empirically show that SmartKex can extract the session keys with high accuracy and high throughput. With the provided resources, we intend to strengthen the research on the intersection between digital forensics, cybersecurity, and machine learning.
[ { "created": "Mon, 12 Sep 2022 13:36:54 GMT", "version": "v1" }, { "created": "Tue, 13 Sep 2022 08:50:56 GMT", "version": "v2" } ]
2022-09-14
[ [ "Fellicious", "Christofer", "" ], [ "Sentanoe", "Stewart", "" ], [ "Granitzer", "Michael", "" ], [ "Reiser", "Hans P.", "" ] ]
Digital forensics is the process of extracting, preserving, and documenting evidence in digital devices. A commonly used method in digital forensics is to extract data from the main memory of a digital device. However, the main challenge is identifying the important data to be extracted. Several pieces of crucial information reside in the main memory, like usernames, passwords, and cryptographic keys such as SSH session keys. In this paper, we propose SmartKex, a machine-learning assisted method to extract session keys from heap memory snapshots of an OpenSSH process. In addition, we release an openly available dataset and the corresponding toolchain for creating additional data. Finally, we compare SmartKex with naive brute-force methods and empirically show that SmartKex can extract the session keys with high accuracy and high throughput. With the provided resources, we intend to strengthen the research on the intersection between digital forensics, cybersecurity, and machine learning.
1311.2495
Moritz Hardt
Moritz Hardt and Eric Price
The Noisy Power Method: A Meta Algorithm with Applications
NIPS 2014
null
null
null
cs.DS cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a new robust convergence analysis of the well-known power method for computing the dominant singular vectors of a matrix that we call the noisy power method. Our result characterizes the convergence behavior of the algorithm when a significant amount of noise is introduced after each matrix-vector multiplication. The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis. Our general analysis subsumes several existing ad-hoc convergence bounds and resolves a number of open problems in multiple applications including streaming PCA and privacy-preserving singular vector computation.
[ { "created": "Mon, 11 Nov 2013 16:47:25 GMT", "version": "v1" }, { "created": "Mon, 15 Sep 2014 19:17:32 GMT", "version": "v2" }, { "created": "Mon, 8 Dec 2014 21:53:05 GMT", "version": "v3" }, { "created": "Tue, 3 Feb 2015 23:43:37 GMT", "version": "v4" } ]
2015-02-05
[ [ "Hardt", "Moritz", "" ], [ "Price", "Eric", "" ] ]
We provide a new robust convergence analysis of the well-known power method for computing the dominant singular vectors of a matrix that we call the noisy power method. Our result characterizes the convergence behavior of the algorithm when a significant amount of noise is introduced after each matrix-vector multiplication. The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis. Our general analysis subsumes several existing ad-hoc convergence bounds and resolves a number of open problems in multiple applications including streaming PCA and privacy-preserving singular vector computation.
1905.08526
Yukiko Yamauchi
Yukiko Yamauchi and Masafumi Yamashita
Coding theory for noiseless channels realized by anonymous oblivious mobile robots
null
null
null
null
cs.DC cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an information transmission scheme by a swarm of anonymous oblivious mobile robots on a graph. The swarm of robots travels from a sender vertex to a receiver vertex to transmit a symbol generated at the sender. The codeword for a symbol is a pair of an initial configuration at the sender and a set of terminal configurations at the receiver. The set of such codewords forms a code. We analyze the performance of the proposed scheme in terms of its code size and transmission delay. We first demonstrate that a lower bound of the transmission delay depends on the size of the swarm, and the code size is upper bounded by an exponential in the size of the swarm. We then give two algorithms for a swarm of a fixed size. The first algorithm realizes a near optimal code size with a large transmission delay. The second algorithm realizes an optimal transmission delay with a smaller code size. We then consider information transmission by swarms of different sizes and present upper bounds of the expected swarm size by the two algorithms. We also present lower bounds by Shannon's lemma and the noiseless coding theorem.
[ { "created": "Tue, 21 May 2019 10:02:49 GMT", "version": "v1" } ]
2019-05-22
[ [ "Yamauchi", "Yukiko", "" ], [ "Yamashita", "Masafumi", "" ] ]
We propose an information transmission scheme by a swarm of anonymous oblivious mobile robots on a graph. The swarm of robots travels from a sender vertex to a receiver vertex to transmit a symbol generated at the sender. The codeword for a symbol is a pair of an initial configuration at the sender and a set of terminal configurations at the receiver. The set of such codewords forms a code. We analyze the performance of the proposed scheme in terms of its code size and transmission delay. We first demonstrate that a lower bound of the transmission delay depends on the size of the swarm, and the code size is upper bounded by an exponential in the size of the swarm. We then give two algorithms for a swarm of a fixed size. The first algorithm realizes a near optimal code size with a large transmission delay. The second algorithm realizes an optimal transmission delay with a smaller code size. We then consider information transmission by swarms of different sizes and present upper bounds of the expected swarm size by the two algorithms. We also present lower bounds by Shannon's lemma and the noiseless coding theorem.
2406.15888
Khai Le-Duc
Khai Le-Duc, Khai-Nguyen Nguyen, Long Vo-Dang, Truong-Son Hy
Real-time Speech Summarization for Medical Conversations
Interspeech 2024
null
null
null
cs.CL cs.AI cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
In doctor-patient conversations, identifying medically relevant information is crucial, posing the need for conversation summarization. In this work, we propose the first deployable real-time speech summarization system for real-world applications in industry, which generates a local summary after every N speech utterances within a conversation and a global summary after the end of a conversation. Our system could enhance user experience from a business standpoint, while also reducing computational costs from a technical perspective. Secondly, we present VietMed-Sum which, to our knowledge, is the first speech summarization dataset for medical conversations. Thirdly, we are the first to utilize LLM and human annotators collaboratively to create gold standard and synthetic summaries for medical conversation summarization. Finally, we present baseline results of state-of-the-art models on VietMed-Sum. All code, data (English-translated and Vietnamese) and models are available online: https://github.com/leduckhai/MultiMed
[ { "created": "Sat, 22 Jun 2024 16:37:51 GMT", "version": "v1" } ]
2024-06-25
[ [ "Le-Duc", "Khai", "" ], [ "Nguyen", "Khai-Nguyen", "" ], [ "Vo-Dang", "Long", "" ], [ "Hy", "Truong-Son", "" ] ]
In doctor-patient conversations, identifying medically relevant information is crucial, posing the need for conversation summarization. In this work, we propose the first deployable real-time speech summarization system for real-world applications in industry, which generates a local summary after every N speech utterances within a conversation and a global summary after the end of a conversation. Our system could enhance user experience from a business standpoint, while also reducing computational costs from a technical perspective. Secondly, we present VietMed-Sum which, to our knowledge, is the first speech summarization dataset for medical conversations. Thirdly, we are the first to utilize LLM and human annotators collaboratively to create gold standard and synthetic summaries for medical conversation summarization. Finally, we present baseline results of state-of-the-art models on VietMed-Sum. All code, data (English-translated and Vietnamese) and models are available online: https://github.com/leduckhai/MultiMed
2304.11718
Ihab Bendidi
Ihab Bendidi, Adrien Bardes, Ethan Cohen, Alexis Lamiable, Guillaume Bollot, Auguste Genovesio
No Free Lunch in Self Supervised Representation Learning
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Self-supervised representation learning in computer vision relies heavily on hand-crafted image transformations to learn meaningful and invariant features. However, few extensive explorations of the impact of transformation design have been conducted in the literature. In particular, the dependence of downstream performance on transformation design has been established, but not studied in depth. In this work, we explore this relationship, its impact on a domain other than natural images, and show that designing the transformations can be viewed as a form of supervision. First, we demonstrate that not only do transformations have an effect on downstream performance and the relevance of clustering, but also that each category in a supervised dataset can be impacted in a different way. Following this, we explore the impact of transformation design on microscopy images, a domain where the difference between classes is more subtle and fuzzy than in natural images. In this case, we observe a greater impact on downstream task performance. Finally, we demonstrate that transformation design can be leveraged as a form of supervision, as careful selection of these by a domain expert can lead to a drastic increase in performance on a given downstream task.
[ { "created": "Sun, 23 Apr 2023 18:14:19 GMT", "version": "v1" } ]
2023-04-25
[ [ "Bendidi", "Ihab", "" ], [ "Bardes", "Adrien", "" ], [ "Cohen", "Ethan", "" ], [ "Lamiable", "Alexis", "" ], [ "Bollot", "Guillaume", "" ], [ "Genovesio", "Auguste", "" ] ]
Self-supervised representation learning in computer vision relies heavily on hand-crafted image transformations to learn meaningful and invariant features. However, few extensive explorations of the impact of transformation design have been conducted in the literature. In particular, the dependence of downstream performance on transformation design has been established, but not studied in depth. In this work, we explore this relationship, its impact on a domain other than natural images, and show that designing the transformations can be viewed as a form of supervision. First, we demonstrate that not only do transformations have an effect on downstream performance and relevance of clustering, but also that each category in a supervised dataset can be impacted in a different way. Following this, we explore the impact of transformation design on microscopy images, a domain where the difference between classes is more subtle and fuzzy than in natural images. In this case, we observe a greater impact on downstream task performance. Finally, we demonstrate that transformation design can be leveraged as a form of supervision, as careful selection of these transformations by a domain expert can lead to a drastic increase in performance on a given downstream task.
2107.04683
Isma\"el Jecker
Isma\"el Jecker, Nicolas Mazzocchi, Petra Wolf
Decomposing Permutation Automata
null
null
null
null
cs.FL
http://creativecommons.org/licenses/by-nc-nd/4.0/
A deterministic finite automaton (DFA) A is composite if its language can be decomposed into an intersection of languages of smaller DFAs. Otherwise, A is prime. This notion of primality was introduced by Kupferman and Mosheiff in 2013, and while they proved that we can decide whether a DFA is composite, the precise complexity of this problem is still open, with a doubly-exponential gap between the upper and lower bounds. In this work, we focus on permutation DFAs, i.e., those for which the transition monoid is a group. We provide an NP algorithm to decide whether a permutation DFA is composite, and show that the difficulty of this problem comes from the number of non-accepting states of the instance: we give a fixed-parameter tractable algorithm with the number of rejecting states as the parameter. Moreover, we investigate the class of commutative permutation DFAs. Their structural properties allow us to decide compositionality in NLOGSPACE, and even in LOGSPACE if the alphabet size is fixed. Despite this low complexity, we show that complex behaviors still arise in this class: we provide a family of composite DFAs each requiring polynomially many factors with respect to its size. We also consider the variant of the problem that asks whether a DFA is k-factor composite, that is, decomposable into k smaller DFAs, for some given integer k. We show that, for commutative permutation DFAs, restricting the number of factors makes the decision computationally harder, and yields a problem with tight bounds: it is NP-complete. Finally, we show that in general, this problem is in PSPACE, and it is in LOGSPACE for DFAs with a singleton alphabet.
[ { "created": "Fri, 9 Jul 2021 21:20:39 GMT", "version": "v1" } ]
2021-07-13
[ [ "Jecker", "Ismaël", "" ], [ "Mazzocchi", "Nicolas", "" ], [ "Wolf", "Petra", "" ] ]
A deterministic finite automaton (DFA) A is composite if its language can be decomposed into an intersection of languages of smaller DFAs. Otherwise, A is prime. This notion of primality was introduced by Kupferman and Mosheiff in 2013, and while they proved that we can decide whether a DFA is composite, the precise complexity of this problem is still open, with a doubly-exponential gap between the upper and lower bounds. In this work, we focus on permutation DFAs, i.e., those for which the transition monoid is a group. We provide an NP algorithm to decide whether a permutation DFA is composite, and show that the difficulty of this problem comes from the number of non-accepting states of the instance: we give a fixed-parameter tractable algorithm with the number of rejecting states as the parameter. Moreover, we investigate the class of commutative permutation DFAs. Their structural properties allow us to decide compositionality in NLOGSPACE, and even in LOGSPACE if the alphabet size is fixed. Despite this low complexity, we show that complex behaviors still arise in this class: we provide a family of composite DFAs each requiring polynomially many factors with respect to its size. We also consider the variant of the problem that asks whether a DFA is k-factor composite, that is, decomposable into k smaller DFAs, for some given integer k. We show that, for commutative permutation DFAs, restricting the number of factors makes the decision computationally harder, and yields a problem with tight bounds: it is NP-complete. Finally, we show that in general, this problem is in PSPACE, and it is in LOGSPACE for DFAs with a singleton alphabet.
2104.10319
Frederico Araujo
Frederico Araujo and Dhilung Kirat and Xiaokui Shu and Teryl Taylor and Jiyong Jang
Evidential Cyber Threat Hunting
5 pages, SDM AI4CS 2021
In Proceedings of the 2021 SIAM AI/ML for Cybersecurity Workshop (AI4CS)
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
A formal cyber reasoning framework for automating the threat hunting process is described. The new cyber reasoning methodology introduces an operational semantics that operates over three subspaces -- knowledge, hypothesis, and action -- to enable human-machine co-creation of threat hypotheses and protective recommendations. An implementation of this framework shows that the approach is practical and can be used to generalize evidence-based multi-criteria threat investigations.
[ { "created": "Wed, 21 Apr 2021 02:38:29 GMT", "version": "v1" } ]
2021-04-22
[ [ "Araujo", "Frederico", "" ], [ "Kirat", "Dhilung", "" ], [ "Shu", "Xiaokui", "" ], [ "Taylor", "Teryl", "" ], [ "Jang", "Jiyong", "" ] ]
A formal cyber reasoning framework for automating the threat hunting process is described. The new cyber reasoning methodology introduces an operational semantics that operates over three subspaces -- knowledge, hypothesis, and action -- to enable human-machine co-creation of threat hypotheses and protective recommendations. An implementation of this framework shows that the approach is practical and can be used to generalize evidence-based multi-criteria threat investigations.
2009.06724
Mourad Oulghelou
M. Oulghelou, C. Beghein, C. Allery
Data-Driven Optimization Approach for Inverse Problems : Application to Turbulent Mixed-Convection Flows
null
null
null
null
cs.CE physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimal control of turbulent mixed-convection flows has attracted considerable attention from researchers. Numerical algorithms such as Genetic Algorithms (GAs) are powerful tools that allow one to perform global optimization. These algorithms are of particular interest in complex optimization problems where cost functionals may lack smoothness and regularity. In turbulent flow optimization, the hybridization of GAs with high-fidelity Computational Fluid Dynamics (CFD) is extremely demanding in terms of computational time and memory storage. Thus, alternative approaches aiming to alleviate these requirements are of great interest. Nowadays, data-driven approaches have gained attention due to their potential in predicting flow solutions based only on preexisting data. In the present paper, we propose a near-real-time data-driven genetic algorithm (DDGA) for inverse parameter identification problems involving turbulent flows. In this optimization framework, the parametrized flow data are used in their reduced form, obtained by Proper Orthogonal Decomposition (POD), and solution prediction is made by interpolating the temporal and spatial POD subspaces through a recently developed Riemannian barycentric interpolation. The validation of the proposed optimization approach is carried out on the parameter identification problem of the turbulent mixed-convection flow in a cavity. The objective is to determine the inflow temperature and inflow velocity corresponding to a given temperature distribution in a restricted area of the spatial domain. The results show that the proposed genetic programming optimization framework is able to deliver good approximations of the optimal solutions within less than two minutes.
[ { "created": "Thu, 10 Sep 2020 18:08:18 GMT", "version": "v1" }, { "created": "Thu, 24 Sep 2020 07:38:26 GMT", "version": "v2" } ]
2020-09-25
[ [ "Oulghelou", "M.", "" ], [ "Beghein", "C.", "" ], [ "Allery", "C.", "" ] ]
Optimal control of turbulent mixed-convection flows has attracted considerable attention from researchers. Numerical algorithms such as Genetic Algorithms (GAs) are powerful tools that allow one to perform global optimization. These algorithms are of particular interest in complex optimization problems where cost functionals may lack smoothness and regularity. In turbulent flow optimization, the hybridization of GAs with high-fidelity Computational Fluid Dynamics (CFD) is extremely demanding in terms of computational time and memory storage. Thus, alternative approaches aiming to alleviate these requirements are of great interest. Nowadays, data-driven approaches have gained attention due to their potential in predicting flow solutions based only on preexisting data. In the present paper, we propose a near-real-time data-driven genetic algorithm (DDGA) for inverse parameter identification problems involving turbulent flows. In this optimization framework, the parametrized flow data are used in their reduced form, obtained by Proper Orthogonal Decomposition (POD), and solution prediction is made by interpolating the temporal and spatial POD subspaces through a recently developed Riemannian barycentric interpolation. The validation of the proposed optimization approach is carried out on the parameter identification problem of the turbulent mixed-convection flow in a cavity. The objective is to determine the inflow temperature and inflow velocity corresponding to a given temperature distribution in a restricted area of the spatial domain. The results show that the proposed genetic programming optimization framework is able to deliver good approximations of the optimal solutions within less than two minutes.
1903.09054
Dan Wang
Dan Wang, Xu Chen
An Optimal Stable Selective Model Inversion for Nonminimum-phase Systems
We are withdrawing this draft. Some technical issues need resolving
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stably inverting a dynamic system model is the foundation of numerous servo designs. Existing inversion techniques have provided accurate model approximations that are often highly effective in feedforward controls. However, when the inverse is implemented in a feedback system, additional considerations are needed for assuring causality, closed-loop stability, and robustness. In pursuit of bridging the gap between the best model matching and a robust feedback performance under closed-loop constraints, this paper provides a modern review of frequency-domain model inversion techniques and a new treatment of unstable zeros. We first provide a pole-zero-map-based intuitive inverse tuning for motion control systems. Then for general nonminimum-phase and unstable systems, we propose an optimal inversion algorithm that can attain model accuracy at the frequency regions of interest and meanwhile constrain noise amplification elsewhere to guarantee system robustness. The design goals are achieved by a multi-objective H infinity formulation and all-pass factorization that consider model matching, causality of transfer functions, frequency-domain gain constraints, and factorization of unstable system modes in a unified scheme. The proposed algorithm is validated on motion control systems and complex high-order systems.
[ { "created": "Thu, 21 Mar 2019 15:24:10 GMT", "version": "v1" }, { "created": "Fri, 15 Nov 2019 19:10:16 GMT", "version": "v2" } ]
2019-11-19
[ [ "Wang", "Dan", "" ], [ "Chen", "Xu", "" ] ]
Stably inverting a dynamic system model is the foundation of numerous servo designs. Existing inversion techniques have provided accurate model approximations that are often highly effective in feedforward controls. However, when the inverse is implemented in a feedback system, additional considerations are needed for assuring causality, closed-loop stability, and robustness. In pursuit of bridging the gap between the best model matching and a robust feedback performance under closed-loop constraints, this paper provides a modern review of frequency-domain model inversion techniques and a new treatment of unstable zeros. We first provide a pole-zero-map-based intuitive inverse tuning for motion control systems. Then for general nonminimum-phase and unstable systems, we propose an optimal inversion algorithm that can attain model accuracy at the frequency regions of interest and meanwhile constrain noise amplification elsewhere to guarantee system robustness. The design goals are achieved by a multi-objective H infinity formulation and all-pass factorization that consider model matching, causality of transfer functions, frequency-domain gain constraints, and factorization of unstable system modes in a unified scheme. The proposed algorithm is validated on motion control systems and complex high-order systems.
1808.09729
Wouter Meulemans
Thom Castermans, Mereke van Garderen, Wouter Meulemans, Martin N\"ollenburg and Xiaoru Yuan
Short Plane Supports for Spatial Hypergraphs
Appears in the Proceedings of the 26th International Symposium on Graph Drawing and Network Visualization (GD 2018)
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A graph $G=(V,E)$ is a support of a hypergraph $H=(V,S)$ if every hyperedge induces a connected subgraph in $G$. Supports are used for certain types of hypergraph visualizations. In this paper we consider visualizing spatial hypergraphs, where each vertex has a fixed location in the plane. This is the case, e.g., when modeling set systems of geospatial locations as hypergraphs. By applying established aesthetic quality criteria we are interested in finding supports that yield plane straight-line drawings with minimum total edge length on the input point set $V$. We first show, from a theoretical point of view, that the problem is NP-hard already under rather mild conditions, as well as negative approximability results. Therefore, the main focus of the paper lies on practical heuristic algorithms as well as an exact, ILP-based approach for computing short plane supports. We report results from computational experiments that investigate the effect of requiring planarity and acyclicity on the resulting support length. Further, we evaluate the performance and trade-offs between solution quality and speed of several heuristics relative to each other and compared to optimal solutions.
[ { "created": "Wed, 29 Aug 2018 11:12:55 GMT", "version": "v1" } ]
2018-08-30
[ [ "Castermans", "Thom", "" ], [ "van Garderen", "Mereke", "" ], [ "Meulemans", "Wouter", "" ], [ "Nöllenburg", "Martin", "" ], [ "Yuan", "Xiaoru", "" ] ]
A graph $G=(V,E)$ is a support of a hypergraph $H=(V,S)$ if every hyperedge induces a connected subgraph in $G$. Supports are used for certain types of hypergraph visualizations. In this paper we consider visualizing spatial hypergraphs, where each vertex has a fixed location in the plane. This is the case, e.g., when modeling set systems of geospatial locations as hypergraphs. By applying established aesthetic quality criteria we are interested in finding supports that yield plane straight-line drawings with minimum total edge length on the input point set $V$. We first show, from a theoretical point of view, that the problem is NP-hard already under rather mild conditions, as well as negative approximability results. Therefore, the main focus of the paper lies on practical heuristic algorithms as well as an exact, ILP-based approach for computing short plane supports. We report results from computational experiments that investigate the effect of requiring planarity and acyclicity on the resulting support length. Further, we evaluate the performance and trade-offs between solution quality and speed of several heuristics relative to each other and compared to optimal solutions.
2205.04093
Abhinav Ramesh Kashyap
Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min-Yen Kan, Roger Zimmermann, Soujanya Poria
So Different Yet So Alike! Constrained Unsupervised Text Style Transfer
Accepted to ACL 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Automatic transfer of text between domains has become popular in recent times. One of its aims is to preserve the semantic content of text being translated from source to target domain. However, it does not explicitly maintain other attributes between the source and translated text, e.g., text length and descriptiveness. Maintaining constraints in transfer has several downstream applications, including data augmentation and de-biasing. We introduce a method for such constrained unsupervised text style transfer by introducing two complementary losses to the generative adversarial network (GAN) family of models. Unlike the competing losses used in GANs, we introduce cooperative losses where the discriminator and the generator cooperate and reduce the same loss. The first is a contrastive loss and the second is a classification loss, aiming to regularize the latent space further and bring similar sentences across domains closer together. We demonstrate that such training retains lexical, syntactic, and domain-specific constraints between domains for multiple benchmark datasets, including ones where more than one attribute change. We show that the complementary cooperative losses improve text quality, according to both automated and human evaluation measures.
[ { "created": "Mon, 9 May 2022 07:46:40 GMT", "version": "v1" } ]
2022-05-10
[ [ "Kashyap", "Abhinav Ramesh", "" ], [ "Hazarika", "Devamanyu", "" ], [ "Kan", "Min-Yen", "" ], [ "Zimmermann", "Roger", "" ], [ "Poria", "Soujanya", "" ] ]
Automatic transfer of text between domains has become popular in recent times. One of its aims is to preserve the semantic content of text being translated from source to target domain. However, it does not explicitly maintain other attributes between the source and translated text, e.g., text length and descriptiveness. Maintaining constraints in transfer has several downstream applications, including data augmentation and de-biasing. We introduce a method for such constrained unsupervised text style transfer by introducing two complementary losses to the generative adversarial network (GAN) family of models. Unlike the competing losses used in GANs, we introduce cooperative losses where the discriminator and the generator cooperate and reduce the same loss. The first is a contrastive loss and the second is a classification loss, aiming to regularize the latent space further and bring similar sentences across domains closer together. We demonstrate that such training retains lexical, syntactic, and domain-specific constraints between domains for multiple benchmark datasets, including ones where more than one attribute change. We show that the complementary cooperative losses improve text quality, according to both automated and human evaluation measures.
2009.08511
Sudipta Banerjee
Sudipta Banerjee and Arun Ross
Smartphone Camera De-identification while Preserving Biometric Utility
null
Proc. of 10th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Tampa, USA), September 2019
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
The principle of Photo Response Non Uniformity (PRNU) is often exploited to deduce the identity of the smartphone device whose camera or sensor was used to acquire a certain image. In this work, we design an algorithm that perturbs a face image acquired using a smartphone camera such that (a) sensor-specific details pertaining to the smartphone camera are suppressed (sensor anonymization); (b) the sensor pattern of a different device is incorporated (sensor spoofing); and (c) biometric matching using the perturbed image is not affected (biometric utility). We employ a simple approach utilizing Discrete Cosine Transform to achieve the aforementioned objectives. Experiments conducted on the MICHE-I and OULU-NPU datasets, which contain periocular and facial data acquired using 12 smartphone cameras, demonstrate the efficacy of the proposed de-identification algorithm on three different PRNU-based sensor identification schemes. This work has application in sensor forensics and personal privacy.
[ { "created": "Thu, 17 Sep 2020 19:48:43 GMT", "version": "v1" } ]
2020-09-21
[ [ "Banerjee", "Sudipta", "" ], [ "Ross", "Arun", "" ] ]
The principle of Photo Response Non Uniformity (PRNU) is often exploited to deduce the identity of the smartphone device whose camera or sensor was used to acquire a certain image. In this work, we design an algorithm that perturbs a face image acquired using a smartphone camera such that (a) sensor-specific details pertaining to the smartphone camera are suppressed (sensor anonymization); (b) the sensor pattern of a different device is incorporated (sensor spoofing); and (c) biometric matching using the perturbed image is not affected (biometric utility). We employ a simple approach utilizing Discrete Cosine Transform to achieve the aforementioned objectives. Experiments conducted on the MICHE-I and OULU-NPU datasets, which contain periocular and facial data acquired using 12 smartphone cameras, demonstrate the efficacy of the proposed de-identification algorithm on three different PRNU-based sensor identification schemes. This work has application in sensor forensics and personal privacy.
2002.02545
Lichen Wang
Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, Yun Fu
Contradictory Structure Learning for Semi-supervised Domain Adaptation
8 pages without citations
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current adversarial adaptation methods attempt to align the cross-domain features, whereas two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain. To solve these challenges, we propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures (UODA). UODA consists of a generator and two classifiers (i.e., the source-scattering classifier and the target-clustering classifier), which are trained for contradictory purposes. The target-clustering classifier attempts to cluster the target features to improve intra-class density and enlarge inter-class divergence. Meanwhile, the source-scattering classifier is designed to scatter the source features to enhance the decision boundary's smoothness. Through the alternation of source-feature expansion and target-feature clustering procedures, the target features are well-enclosed within the dilated boundary of the corresponding source features. This strategy makes the cross-domain features precisely aligned against the source bias simultaneously. Moreover, to overcome model collapse through training, we progressively update the measurement of the features' distance and their representation via an adversarial training paradigm. Extensive experiments on the benchmarks of DomainNet and Office-home datasets demonstrate the superiority of our approach over the state-of-the-art methods.
[ { "created": "Thu, 6 Feb 2020 22:58:20 GMT", "version": "v1" }, { "created": "Sun, 14 Feb 2021 19:58:09 GMT", "version": "v2" } ]
2021-02-16
[ [ "Qin", "Can", "" ], [ "Wang", "Lichen", "" ], [ "Ma", "Qianqian", "" ], [ "Yin", "Yu", "" ], [ "Wang", "Huan", "" ], [ "Fu", "Yun", "" ] ]
Current adversarial adaptation methods attempt to align the cross-domain features, whereas two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain. To solve these challenges, we propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures (UODA). UODA consists of a generator and two classifiers (i.e., the source-scattering classifier and the target-clustering classifier), which are trained for contradictory purposes. The target-clustering classifier attempts to cluster the target features to improve intra-class density and enlarge inter-class divergence. Meanwhile, the source-scattering classifier is designed to scatter the source features to enhance the decision boundary's smoothness. Through the alternation of source-feature expansion and target-feature clustering procedures, the target features are well-enclosed within the dilated boundary of the corresponding source features. This strategy makes the cross-domain features precisely aligned against the source bias simultaneously. Moreover, to overcome model collapse through training, we progressively update the measurement of the features' distance and their representation via an adversarial training paradigm. Extensive experiments on the benchmarks of DomainNet and Office-home datasets demonstrate the superiority of our approach over the state-of-the-art methods.
2207.00288
Miguel Suau
Miguel Suau, Jinke He, Mustafa Mert \c{C}elikok, Matthijs T. J. Spaan, Frans A. Oliehoek
Distributed Influence-Augmented Local Simulators for Parallel MARL in Large Networked Systems
null
null
null
null
cs.LG cs.MA
http://creativecommons.org/licenses/by/4.0/
Due to its high sample complexity, simulation is, as of today, critical for the successful application of reinforcement learning. Many real-world problems, however, exhibit overly complex dynamics, which makes their full-scale simulation computationally slow. In this paper, we show how to decompose large networked systems of many agents into multiple local components such that we can build separate simulators that run independently and in parallel. To monitor the influence that the different local components exert on one another, each of these simulators is equipped with a learned model that is periodically trained on real trajectories. Our empirical results reveal that distributing the simulation among different processes not only makes it possible to train large multi-agent systems in just a few hours but also helps mitigate the negative effects of simultaneous learning.
[ { "created": "Fri, 1 Jul 2022 09:33:33 GMT", "version": "v1" }, { "created": "Fri, 1 Mar 2024 08:36:33 GMT", "version": "v2" } ]
2024-03-04
[ [ "Suau", "Miguel", "" ], [ "He", "Jinke", "" ], [ "Çelikok", "Mustafa Mert", "" ], [ "Spaan", "Matthijs T. J.", "" ], [ "Oliehoek", "Frans A.", "" ] ]
Due to its high sample complexity, simulation is, as of today, critical for the successful application of reinforcement learning. Many real-world problems, however, exhibit overly complex dynamics, which makes their full-scale simulation computationally slow. In this paper, we show how to decompose large networked systems of many agents into multiple local components such that we can build separate simulators that run independently and in parallel. To monitor the influence that the different local components exert on one another, each of these simulators is equipped with a learned model that is periodically trained on real trajectories. Our empirical results reveal that distributing the simulation among different processes not only makes it possible to train large multi-agent systems in just a few hours but also helps mitigate the negative effects of simultaneous learning.
1901.02636
Jianan Zhang
Jianan Zhang, Hyang-Won Lee, Eytan Modiano
On the Robustness of Distributed Computing Networks
International Conference on the Design of Reliable Communication Networks (DRCN)
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic flows in a distributed computing network require both transmission and processing, and can be interdicted by removing either communication or computation resources. We study the robustness of a distributed computing network under the failures of communication links and computation nodes. We define cut metrics that measure the connectivity, and show a non-zero gap between the maximum flow and the minimum cut. Moreover, we study a network flow interdiction problem that minimizes the maximum flow by removing communication and computation resources within a given budget. We develop mathematical programs to compute the optimal interdiction, and polynomial-time approximation algorithms that achieve near-optimal interdiction in simulation.
[ { "created": "Wed, 9 Jan 2019 08:38:38 GMT", "version": "v1" }, { "created": "Mon, 14 Jan 2019 10:47:18 GMT", "version": "v2" }, { "created": "Fri, 26 Nov 2021 10:26:06 GMT", "version": "v3" } ]
2021-11-29
[ [ "Zhang", "Jianan", "" ], [ "Lee", "Hyang-Won", "" ], [ "Modiano", "Eytan", "" ] ]
Traffic flows in a distributed computing network require both transmission and processing, and can be interdicted by removing either communication or computation resources. We study the robustness of a distributed computing network under the failures of communication links and computation nodes. We define cut metrics that measure the connectivity, and show a non-zero gap between the maximum flow and the minimum cut. Moreover, we study a network flow interdiction problem that minimizes the maximum flow by removing communication and computation resources within a given budget. We develop mathematical programs to compute the optimal interdiction, and polynomial-time approximation algorithms that achieve near-optimal interdiction in simulation.
2107.05563
Adnan Aijaz
Adnan Aijaz
Infrastructure-less Wireless Connectivity for Mobile Robotic Systems in Logistics: Why Bluetooth Mesh Networking is Important?
To appear in IEEE ETFA 2021
null
null
null
cs.RO cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile robots have disrupted the material handling industry, which is witnessing radical changes. The requirement for enhanced automation across various industry segments often entails mobile robotic systems operating in logistics facilities with little/no infrastructure. In such environments, out-of-the-box, low-cost robotic solutions are desirable. Wireless connectivity plays a crucial role in successful operation of such mobile robotic systems. A wireless mesh network of mobile robots is an attractive solution; however, a number of system-level challenges create unique and stringent service requirements. The focus of this paper is the role of Bluetooth mesh technology, which is the latest addition to the Internet-of-Things (IoT) connectivity landscape, in addressing the challenges of infrastructure-less connectivity for mobile robotic systems. It articulates the key system-level design challenges from communication, control, cooperation, coverage, security, and navigation/localization perspectives, and explores different capabilities of Bluetooth mesh technology for such challenges. It also provides performance insights through real-world experimental evaluation of Bluetooth mesh while investigating its differentiating features against competing solutions.
[ { "created": "Mon, 12 Jul 2021 16:34:04 GMT", "version": "v1" } ]
2021-07-13
[ [ "Aijaz", "Adnan", "" ] ]
Mobile robots have disrupted the material handling industry, which is witnessing radical changes. The requirement for enhanced automation across various industry segments often entails mobile robotic systems operating in logistics facilities with little/no infrastructure. In such environments, out-of-the-box, low-cost robotic solutions are desirable. Wireless connectivity plays a crucial role in successful operation of such mobile robotic systems. A wireless mesh network of mobile robots is an attractive solution; however, a number of system-level challenges create unique and stringent service requirements. The focus of this paper is the role of Bluetooth mesh technology, which is the latest addition to the Internet-of-Things (IoT) connectivity landscape, in addressing the challenges of infrastructure-less connectivity for mobile robotic systems. It articulates the key system-level design challenges from communication, control, cooperation, coverage, security, and navigation/localization perspectives, and explores different capabilities of Bluetooth mesh technology for such challenges. It also provides performance insights through real-world experimental evaluation of Bluetooth mesh while investigating its differentiating features against competing solutions.
1711.08191
Alberto Molinari
Laura Bozzelli, Alberto Molinari, Angelo Montanari, Adriano Peron, Pietro Sala
Interval vs. Point Temporal Logic Model Checking: an Expressiveness Comparison
null
ACM Trans. Comput. Logic 20 (2018) 4:1-4:31
10.1145/3281028
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, model checking with interval temporal logics has been emerging as a viable alternative to model checking with standard point-based temporal logics, such as LTL, CTL, CTL*, and the like. The behavior of the system is modeled by means of (finite) Kripke structures, as usual. However, while temporal logics which are interpreted "point-wise" describe how the system evolves state-by-state, and predicate properties of system states, those which are interpreted "interval-wise" express properties of computation stretches, spanning a sequence of states. A proposition letter is assumed to hold over a computation stretch (interval) if and only if it holds over each component state (homogeneity assumption). A natural question arises: is there any advantage in replacing points by intervals as the primary temporal entities, or is it just a matter of taste? In this paper, we study the expressiveness of Halpern and Shoham's interval temporal logic (HS) in model checking, in comparison with those of LTL, CTL, and CTL*. To this end, we consider three semantic variants of HS: the state-based one, introduced by Montanari et al., that allows time to branch both in the past and in the future, the computation-tree-based one, that allows time to branch in the future only, and the trace-based variant, that disallows time to branch. These variants are compared among themselves and to the aforementioned standard logics, getting a complete picture. In particular, we show that HS with trace-based semantics is equivalent to LTL (but at least exponentially more succinct), HS with computation-tree-based semantics is equivalent to finitary CTL*, and HS with state-based semantics is incomparable with all of them (LTL, CTL, and CTL*).
[ { "created": "Wed, 22 Nov 2017 09:33:35 GMT", "version": "v1" }, { "created": "Mon, 24 Sep 2018 14:50:28 GMT", "version": "v2" } ]
2019-02-07
[ [ "Bozzelli", "Laura", "" ], [ "Molinari", "Alberto", "" ], [ "Montanari", "Angelo", "" ], [ "Peron", "Adriano", "" ], [ "Sala", "Pietro", "" ] ]
In recent years, model checking with interval temporal logics has been emerging as a viable alternative to model checking with standard point-based temporal logics, such as LTL, CTL, CTL*, and the like. The behavior of the system is modeled by means of (finite) Kripke structures, as usual. However, while temporal logics which are interpreted "point-wise" describe how the system evolves state-by-state, and predicate properties of system states, those which are interpreted "interval-wise" express properties of computation stretches, spanning a sequence of states. A proposition letter is assumed to hold over a computation stretch (interval) if and only if it holds over each component state (homogeneity assumption). A natural question arises: is there any advantage in replacing points by intervals as the primary temporal entities, or is it just a matter of taste? In this paper, we study the expressiveness of Halpern and Shoham's interval temporal logic (HS) in model checking, in comparison with those of LTL, CTL, and CTL*. To this end, we consider three semantic variants of HS: the state-based one, introduced by Montanari et al., that allows time to branch both in the past and in the future, the computation-tree-based one, that allows time to branch in the future only, and the trace-based variant, that disallows time to branch. These variants are compared among themselves and to the aforementioned standard logics, getting a complete picture. In particular, we show that HS with trace-based semantics is equivalent to LTL (but at least exponentially more succinct), HS with computation-tree-based semantics is equivalent to finitary CTL*, and HS with state-based semantics is incomparable with all of them (LTL, CTL, and CTL*).
2306.02317
Alexandra Antonova
Alexandra Antonova, Evelina Bakhturina, Boris Ginsburg
SpellMapper: A non-autoregressive neural spellchecker for ASR customization with candidate retrieval based on n-gram mappings
Accepted by INTERSPEECH 2023
null
null
null
cs.CL cs.AI cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contextual spelling correction models are an alternative to shallow fusion to improve automatic speech recognition (ASR) quality given user vocabulary. To deal with large user vocabularies, most of these models include candidate retrieval mechanisms, usually based on minimum edit distance between fragments of the ASR hypothesis and user phrases. However, the edit-distance approach is slow, non-trainable, and may have low recall as it relies only on common letters. We propose: 1) a novel algorithm for candidate retrieval, based on misspelled n-gram mappings, which gives up to 90% recall with just the top 10 candidates on Spoken Wikipedia; 2) a non-autoregressive neural model based on BERT architecture, where the initial transcript and ten candidates are combined into one input. The experiments on Spoken Wikipedia show 21.4% word error rate improvement compared to a baseline ASR system.
[ { "created": "Sun, 4 Jun 2023 10:00:12 GMT", "version": "v1" } ]
2023-06-06
[ [ "Antonova", "Alexandra", "" ], [ "Bakhturina", "Evelina", "" ], [ "Ginsburg", "Boris", "" ] ]
Contextual spelling correction models are an alternative to shallow fusion to improve automatic speech recognition (ASR) quality given user vocabulary. To deal with large user vocabularies, most of these models include candidate retrieval mechanisms, usually based on minimum edit distance between fragments of the ASR hypothesis and user phrases. However, the edit-distance approach is slow, non-trainable, and may have low recall as it relies only on common letters. We propose: 1) a novel algorithm for candidate retrieval, based on misspelled n-gram mappings, which gives up to 90% recall with just the top 10 candidates on Spoken Wikipedia; 2) a non-autoregressive neural model based on BERT architecture, where the initial transcript and ten candidates are combined into one input. The experiments on Spoken Wikipedia show 21.4% word error rate improvement compared to a baseline ASR system.
2301.05499
Vidit Vidit
Vidit Vidit, Martin Engilberge, Mathieu Salzmann
CLIP the Gap: A Single Domain Generalization Approach for Object Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Single Domain Generalization (SDG) tackles the problem of training a model on a single source domain so that it generalizes to any unseen target domain. While this has been well studied for image classification, the literature on SDG object detection remains almost non-existent. To address the challenges of simultaneously learning robust object localization and representation, we propose to leverage a pre-trained vision-language model to introduce semantic domain concepts via textual prompts. We achieve this via a semantic augmentation strategy acting on the features extracted by the detector backbone, as well as a text-based classification loss. Our experiments evidence the benefits of our approach, outperforming by 10% the only existing SDG object detection method, Single-DGOD [49], on their own diverse weather-driving benchmark.
[ { "created": "Fri, 13 Jan 2023 12:01:18 GMT", "version": "v1" }, { "created": "Mon, 6 Mar 2023 13:35:22 GMT", "version": "v2" } ]
2023-03-07
[ [ "Vidit", "Vidit", "" ], [ "Engilberge", "Martin", "" ], [ "Salzmann", "Mathieu", "" ] ]
Single Domain Generalization (SDG) tackles the problem of training a model on a single source domain so that it generalizes to any unseen target domain. While this has been well studied for image classification, the literature on SDG object detection remains almost non-existent. To address the challenges of simultaneously learning robust object localization and representation, we propose to leverage a pre-trained vision-language model to introduce semantic domain concepts via textual prompts. We achieve this via a semantic augmentation strategy acting on the features extracted by the detector backbone, as well as a text-based classification loss. Our experiments evidence the benefits of our approach, outperforming by 10% the only existing SDG object detection method, Single-DGOD [49], on their own diverse weather-driving benchmark.
2305.16333
Zhuangqun Huang
Zhuangqun Huang, Gil Keren, Ziran Jiang, Shashank Jain, David Goss-Grubbs, Nelson Cheng, Farnaz Abtahi, Duc Le, David Zhang, Antony D'Avirro, Ethan Campbell-Taylor, Jessie Salas, Irina-Elena Veliche, Xi Chen
Text Generation with Speech Synthesis for ASR Data Augmentation
null
null
null
null
cs.CL cs.AI cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Aiming at reducing the reliance on expensive human annotations, data synthesis for Automatic Speech Recognition (ASR) has remained an active area of research. While prior work mainly focuses on synthetic speech generation for ASR data augmentation, its combination with text generation methods is considerably less explored. In this work, we explore text augmentation for ASR using large-scale pre-trained neural networks, and systematically compare those to traditional text augmentation methods. The generated synthetic texts are then converted to synthetic speech using a text-to-speech (TTS) system and added to the ASR training data. In experiments conducted on three datasets, we find that neural models achieve 9%-15% relative WER improvement and outperform traditional methods. We conclude that text augmentation, particularly through modern neural approaches, is a viable tool for improving the accuracy of ASR systems.
[ { "created": "Mon, 22 May 2023 18:45:20 GMT", "version": "v1" } ]
2023-05-29
[ [ "Huang", "Zhuangqun", "" ], [ "Keren", "Gil", "" ], [ "Jiang", "Ziran", "" ], [ "Jain", "Shashank", "" ], [ "Goss-Grubbs", "David", "" ], [ "Cheng", "Nelson", "" ], [ "Abtahi", "Farnaz", "" ], [ "Le", "Duc", "" ], [ "Zhang", "David", "" ], [ "D'Avirro", "Antony", "" ], [ "Campbell-Taylor", "Ethan", "" ], [ "Salas", "Jessie", "" ], [ "Veliche", "Irina-Elena", "" ], [ "Chen", "Xi", "" ] ]
Aiming at reducing the reliance on expensive human annotations, data synthesis for Automatic Speech Recognition (ASR) has remained an active area of research. While prior work mainly focuses on synthetic speech generation for ASR data augmentation, its combination with text generation methods is considerably less explored. In this work, we explore text augmentation for ASR using large-scale pre-trained neural networks, and systematically compare those to traditional text augmentation methods. The generated synthetic texts are then converted to synthetic speech using a text-to-speech (TTS) system and added to the ASR training data. In experiments conducted on three datasets, we find that neural models achieve 9%-15% relative WER improvement and outperform traditional methods. We conclude that text augmentation, particularly through modern neural approaches, is a viable tool for improving the accuracy of ASR systems.
2207.10635
Connor Wagaman
S\'ilvia Casacuberta, Michael Shoemate, Salil Vadhan, Connor Wagaman
Widespread Underestimation of Sensitivity in Differentially Private Libraries and How to Fix It
Full version of the paper presented at ACM CCS 2022 and TPDP 2022
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
We identify a new class of vulnerabilities in implementations of differential privacy. Specifically, they arise when computing basic statistics such as sums, thanks to discrepancies between the implemented arithmetic using finite data types (namely, ints or floats) and idealized arithmetic over the reals or integers. These discrepancies cause the sensitivity of the implemented statistics (i.e., how much one individual's data can affect the result) to be much larger than the sensitivity we expect. Consequently, essentially all differential privacy libraries fail to introduce enough noise to meet the requirements of differential privacy, and we show that this may be exploited in realistic attacks that can extract individual-level information from private query systems. In addition to presenting these vulnerabilities, we also provide a number of solutions, which modify or constrain the way in which the sum is implemented in order to recover the idealized or near-idealized bounds on sensitivity.
[ { "created": "Thu, 21 Jul 2022 17:45:25 GMT", "version": "v1" }, { "created": "Thu, 10 Nov 2022 18:51:08 GMT", "version": "v2" } ]
2022-11-11
[ [ "Casacuberta", "Sílvia", "" ], [ "Shoemate", "Michael", "" ], [ "Vadhan", "Salil", "" ], [ "Wagaman", "Connor", "" ] ]
We identify a new class of vulnerabilities in implementations of differential privacy. Specifically, they arise when computing basic statistics such as sums, thanks to discrepancies between the implemented arithmetic using finite data types (namely, ints or floats) and idealized arithmetic over the reals or integers. These discrepancies cause the sensitivity of the implemented statistics (i.e., how much one individual's data can affect the result) to be much larger than the sensitivity we expect. Consequently, essentially all differential privacy libraries fail to introduce enough noise to meet the requirements of differential privacy, and we show that this may be exploited in realistic attacks that can extract individual-level information from private query systems. In addition to presenting these vulnerabilities, we also provide a number of solutions, which modify or constrain the way in which the sum is implemented in order to recover the idealized or near-idealized bounds on sensitivity.
2001.11819
Dan Piponi
Dan Piponi, Dave Moore, Joshua V. Dillon
Joint Distributions for TensorFlow Probability
Based on extended abstract submitted to PROBPROG 2020
null
null
null
cs.PL cs.LG stat.CO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central tenet of probabilistic programming is that a model is specified exactly once in a canonical representation which is usable by inference algorithms. We describe JointDistributions, a family of declarative representations of directed graphical models in TensorFlow Probability.
[ { "created": "Wed, 22 Jan 2020 01:00:35 GMT", "version": "v1" } ]
2020-02-03
[ [ "Piponi", "Dan", "" ], [ "Moore", "Dave", "" ], [ "Dillon", "Joshua V.", "" ] ]
A central tenet of probabilistic programming is that a model is specified exactly once in a canonical representation which is usable by inference algorithms. We describe JointDistributions, a family of declarative representations of directed graphical models in TensorFlow Probability.
2103.13267
Iretiayo Akinola
Iretiayo Akinola, Zizhao Wang, and Peter Allen
CLAMGen: Closed-Loop Arm Motion Generation via Multi-view Vision-Based RL
null
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a vision-based reinforcement learning (RL) approach for closed-loop trajectory generation in an arm reaching problem. Arm trajectory generation is a fundamental robotics problem which entails finding collision-free paths to move the robot's body (e.g. arm) in order to satisfy a goal (e.g. place end-effector at a point). While classical methods typically require the model of the environment to solve a planning, search or optimization problem, learning-based approaches hold the promise of directly mapping from observations to robot actions. However, learning a collision-avoidance policy using RL remains a challenge for various reasons, including, but not limited to, partial observability, poor exploration, low sample efficiency, and learning instabilities. To address these challenges, we present a residual-RL method that leverages a greedy goal-reaching RL policy as the base to improve exploration, and the base policy is augmented with residual state-action values and residual actions learned from images to avoid obstacles. Furthermore, we introduce novel learning objectives and techniques to improve 3D understanding from multiple image views and sample efficiency of our algorithm. Compared to RL baselines, our method achieves superior performance in terms of success rate.
[ { "created": "Wed, 24 Mar 2021 15:33:03 GMT", "version": "v1" } ]
2021-03-25
[ [ "Akinola", "Iretiayo", "" ], [ "Wang", "Zizhao", "" ], [ "Allen", "Peter", "" ] ]
We propose a vision-based reinforcement learning (RL) approach for closed-loop trajectory generation in an arm reaching problem. Arm trajectory generation is a fundamental robotics problem which entails finding collision-free paths to move the robot's body (e.g. arm) in order to satisfy a goal (e.g. place end-effector at a point). While classical methods typically require the model of the environment to solve a planning, search or optimization problem, learning-based approaches hold the promise of directly mapping from observations to robot actions. However, learning a collision-avoidance policy using RL remains a challenge for various reasons, including, but not limited to, partial observability, poor exploration, low sample efficiency, and learning instabilities. To address these challenges, we present a residual-RL method that leverages a greedy goal-reaching RL policy as the base to improve exploration, and the base policy is augmented with residual state-action values and residual actions learned from images to avoid obstacles. Furthermore, we introduce novel learning objectives and techniques to improve 3D understanding from multiple image views and sample efficiency of our algorithm. Compared to RL baselines, our method achieves superior performance in terms of success rate.
2011.08315
Omid Ardakanian
Omid Hajihassani, Omid Ardakanian, Hamzeh Khazaei
Anonymizing Sensor Data on the Edge: A Representation Learning and Transformation Approach
25 pages, 11 figures; Title updated
ACM Transactions on Internet of Things 3 (2022) 1-26
10.1145/3485820
null
cs.LG cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The abundance of data collected by sensors in Internet of Things (IoT) devices, and the success of deep neural networks in uncovering hidden patterns in time series data have led to mounting privacy concerns. This is because private and sensitive information can be potentially learned from sensor data by applications that have access to this data. In this paper, we aim to examine the tradeoff between utility and privacy loss by learning low-dimensional representations that are useful for data obfuscation. We propose deterministic and probabilistic transformations in the latent space of a variational autoencoder to synthesize time series data such that intrusive inferences are prevented while desired inferences can still be made with sufficient accuracy. In the deterministic case, we use a linear transformation to move the representation of input data in the latent space such that the reconstructed data is likely to have the same public attribute but a different private attribute than the original input data. In the probabilistic case, we apply the linear transformation to the latent representation of input data with some probability. We compare our technique with autoencoder-based anonymization techniques and additionally show that it can anonymize data in real time on resource-constrained edge devices.
[ { "created": "Mon, 16 Nov 2020 22:32:30 GMT", "version": "v1" }, { "created": "Tue, 17 Aug 2021 22:21:43 GMT", "version": "v2" }, { "created": "Fri, 27 Aug 2021 21:11:42 GMT", "version": "v3" } ]
2022-06-02
[ [ "Hajihassani", "Omid", "" ], [ "Ardakanian", "Omid", "" ], [ "Khazaei", "Hamzeh", "" ] ]
The abundance of data collected by sensors in Internet of Things (IoT) devices, and the success of deep neural networks in uncovering hidden patterns in time series data have led to mounting privacy concerns. This is because private and sensitive information can be potentially learned from sensor data by applications that have access to this data. In this paper, we aim to examine the tradeoff between utility and privacy loss by learning low-dimensional representations that are useful for data obfuscation. We propose deterministic and probabilistic transformations in the latent space of a variational autoencoder to synthesize time series data such that intrusive inferences are prevented while desired inferences can still be made with sufficient accuracy. In the deterministic case, we use a linear transformation to move the representation of input data in the latent space such that the reconstructed data is likely to have the same public attribute but a different private attribute than the original input data. In the probabilistic case, we apply the linear transformation to the latent representation of input data with some probability. We compare our technique with autoencoder-based anonymization techniques and additionally show that it can anonymize data in real time on resource-constrained edge devices.
2203.14695
Daniel Russo
Daniel Russo
Recruiting Software Engineers on Prolific
null
The 1st International Workshop on Recruiting Participants for Empirical Software Engineering, May 17, 2022, Pittsburgh, PA, USA
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recruiting participants for software engineering research has been a primary concern of the human factors community. This is particularly true for quantitative investigations that require a minimum sample size not to be statistically underpowered. Traditional data collection techniques, such as mailing lists, are of questionable validity due to self-selection biases. The introduction of crowdsourcing platforms allows researchers to select informants with the exact requirements foreseen by the study design, gather data in a concise time frame, compensate their work with fair hourly pay, and most importantly, have a high degree of control over the entire data collection process. This experience report discusses our experience conducting sample studies using Prolific, an academic crowdsourcing platform. Topics discussed are the type of studies, selection processes, and power computation.
[ { "created": "Mon, 28 Mar 2022 12:49:27 GMT", "version": "v1" } ]
2022-03-29
[ [ "Russo", "Daniel", "" ] ]
Recruiting participants for software engineering research has been a primary concern of the human factors community. This is particularly true for quantitative investigations that require a minimum sample size not to be statistically underpowered. Traditional data collection techniques, such as mailing lists, are of questionable validity due to self-selection biases. The introduction of crowdsourcing platforms allows researchers to select informants with the exact requirements foreseen by the study design, gather data in a concise time frame, compensate their work with fair hourly pay, and most importantly, have a high degree of control over the entire data collection process. This experience report discusses our experience conducting sample studies using Prolific, an academic crowdsourcing platform. Topics discussed are the type of studies, selection processes, and power computation.
2402.15944
Shuang Li
Gang Li, Qiuwei Li, Shuang Li, and Wu Angela Li
On A Class of Greedy Sparse Recovery Algorithms -- A High Dimensional Approach
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/publicdomain/zero/1.0/
Sparse signal recovery deals with finding the sparsest solution of an under-determined linear system $x = Qs$. In this paper, we propose a novel greedy approach to addressing the challenges from such a problem. Such an approach is based on a characterization of solutions to the system, which allows us to work on the sparse recovery in the $s$-space directly with a given measure. With an $l_2$-based measure, two OMP-type algorithms are proposed, which significantly outperform the classical OMP algorithm in terms of recovery accuracy while maintaining comparable computational complexity. An $l_1$-based algorithm, denoted as $\text{Alg}_{GBP}$ (greedy basis pursuit) algorithm, is derived. Such an algorithm significantly outperforms the classical BP algorithm. A CoSaMP-type algorithm is also proposed to further enhance the performance of the two proposed OMP-type algorithms. The superior performance of our proposed algorithms is demonstrated through extensive numerical simulations using synthetic data as well as video signals, highlighting their potential for various applications in compressed sensing and signal processing.
[ { "created": "Sun, 25 Feb 2024 01:05:39 GMT", "version": "v1" } ]
2024-02-27
[ [ "Li", "Gang", "" ], [ "Li", "Qiuwei", "" ], [ "Li", "Shuang", "" ], [ "Li", "Wu Angela", "" ] ]
Sparse signal recovery deals with finding the sparsest solution of an under-determined linear system $x = Qs$. In this paper, we propose a novel greedy approach to addressing the challenges from such a problem. Such an approach is based on a characterization of solutions to the system, which allows us to work on the sparse recovery in the $s$-space directly with a given measure. With an $l_2$-based measure, two OMP-type algorithms are proposed, which significantly outperform the classical OMP algorithm in terms of recovery accuracy while maintaining comparable computational complexity. An $l_1$-based algorithm, denoted as $\text{Alg}_{GBP}$ (greedy basis pursuit) algorithm, is derived. Such an algorithm significantly outperforms the classical BP algorithm. A CoSaMP-type algorithm is also proposed to further enhance the performance of the two proposed OMP-type algorithms. The superior performance of our proposed algorithms is demonstrated through extensive numerical simulations using synthetic data as well as video signals, highlighting their potential for various applications in compressed sensing and signal processing.
2405.15182
Peihua Mai
Peihua Mai, Ran Yan, Yan Pang
RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation
22 pages
null
null
null
cs.CR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) allows multiple devices to train a model collaboratively without sharing their data. Despite its benefits, FL is vulnerable to privacy leakage and poisoning attacks. To address the privacy concern, secure aggregation (SecAgg) is often used to obtain the aggregation of gradients on the server without inspecting individual user updates. Unfortunately, existing defense strategies against poisoning attacks rely on the analysis of local updates in plaintext, making them incompatible with SecAgg. To reconcile the conflicts, we propose a robust federated learning framework against poisoning attacks (RFLPA) based on the SecAgg protocol. Our framework computes the cosine similarity between local updates and server updates to conduct robust aggregation. Furthermore, we leverage verifiable packed Shamir secret sharing to achieve reduced communication cost of $O(M+N)$ per user, and design a novel dot-product aggregation algorithm to resolve the issue of increased information leakage. Our experimental results show that RFLPA significantly reduces communication and computation overhead by over $75\%$ compared to the state-of-the-art method, BREA, while maintaining competitive accuracy.
[ { "created": "Fri, 24 May 2024 03:31:10 GMT", "version": "v1" } ]
2024-05-27
[ [ "Mai", "Peihua", "" ], [ "Yan", "Ran", "" ], [ "Pang", "Yan", "" ] ]
Federated learning (FL) allows multiple devices to train a model collaboratively without sharing their data. Despite its benefits, FL is vulnerable to privacy leakage and poisoning attacks. To address the privacy concern, secure aggregation (SecAgg) is often used to obtain the aggregation of gradients on the server without inspecting individual user updates. Unfortunately, existing defense strategies against poisoning attacks rely on the analysis of local updates in plaintext, making them incompatible with SecAgg. To reconcile the conflicts, we propose a robust federated learning framework against poisoning attacks (RFLPA) based on the SecAgg protocol. Our framework computes the cosine similarity between local updates and server updates to conduct robust aggregation. Furthermore, we leverage verifiable packed Shamir secret sharing to achieve reduced communication cost of $O(M+N)$ per user, and design a novel dot-product aggregation algorithm to resolve the issue of increased information leakage. Our experimental results show that RFLPA significantly reduces communication and computation overhead by over $75\%$ compared to the state-of-the-art method, BREA, while maintaining competitive accuracy.
1909.06344
Paul Emmerich
Paul Emmerich, Simon Ellmann, Fabian Bonk, Alex Egger, Esa\'u Garc\'ia S\'anchez-Torija, Thomas G\"unzel, Sebastian Di Luzio, Alexandru Obada, Maximilian Stadlmeier, Sebastian Voit, Georg Carle
The Case for Writing Network Drivers in High-Level Programming Languages
null
ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS 2019), 2019
null
null
cs.NI cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drivers are written in C or restricted subsets of C++ on all production-grade server, desktop, and mobile operating systems. They account for 66% of the code in Linux, but 39 out of 40 security bugs related to memory safety found in Linux in 2017 are located in drivers. These bugs could have been prevented by using high-level languages for drivers. We present user space drivers for the Intel ixgbe 10 Gbit/s network cards implemented in Rust, Go, C#, Java, OCaml, Haskell, Swift, JavaScript, and Python written from scratch in idiomatic style for the respective languages. We quantify costs and benefits of using these languages: High-level languages are safer (fewer bugs, more safety checks), but run-time safety checks reduce throughput and garbage collection leads to latency spikes. Out-of-order CPUs mitigate the cost of safety checks: Our Rust driver executes 63% more instructions per packet but is only 4% slower than a reference C implementation. Go's garbage collector keeps latencies below 100 $\mu$s even under heavy load. Other languages fare worse, but their unique properties make for an interesting case study. All implementations are available as free and open source at https://github.com/ixy-languages/ixy-languages.
[ { "created": "Fri, 13 Sep 2019 17:41:43 GMT", "version": "v1" } ]
2019-09-16
[ [ "Emmerich", "Paul", "" ], [ "Ellmann", "Simon", "" ], [ "Bonk", "Fabian", "" ], [ "Egger", "Alex", "" ], [ "Sánchez-Torija", "Esaú García", "" ], [ "Günzel", "Thomas", "" ], [ "Di Luzio", "Sebastian", "" ], [ "Obada", "Alexandru", "" ], [ "Stadlmeier", "Maximilian", "" ], [ "Voit", "Sebastian", "" ], [ "Carle", "Georg", "" ] ]
Drivers are written in C or restricted subsets of C++ on all production-grade server, desktop, and mobile operating systems. They account for 66% of the code in Linux, but 39 out of 40 security bugs related to memory safety found in Linux in 2017 are located in drivers. These bugs could have been prevented by using high-level languages for drivers. We present user space drivers for the Intel ixgbe 10 Gbit/s network cards implemented in Rust, Go, C#, Java, OCaml, Haskell, Swift, JavaScript, and Python written from scratch in idiomatic style for the respective languages. We quantify costs and benefits of using these languages: High-level languages are safer (fewer bugs, more safety checks), but run-time safety checks reduce throughput and garbage collection leads to latency spikes. Out-of-order CPUs mitigate the cost of safety checks: Our Rust driver executes 63% more instructions per packet but is only 4% slower than a reference C implementation. Go's garbage collector keeps latencies below 100 $\mu$s even under heavy load. Other languages fare worse, but their unique properties make for an interesting case study. All implementations are available as free and open source at https://github.com/ixy-languages/ixy-languages.
1604.04772
Thejaka Kanewala
Thejaka Amila Kanewala, Marcin Zalewski and Andrew Lumsdaine
Abstract Graph Machine
10 pages, including Appendix and References
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An Abstract Graph Machine (AGM) is an abstract model for distributed memory parallel stabilizing graph algorithms. A stabilizing algorithm starts from a particular initial state and goes through a series of different state changes until it converges. The AGM adds work dependency to the stabilizing algorithm. The work is processed within the processing function. All processes in the system execute the same processing function. Before feeding work into the processing function, work is ordered using a strict weak ordering relation. The strict weak ordering relation divides work into equivalence classes; hence work within a single equivalence class can be processed in parallel, but work in different equivalence classes must be executed in the order the equivalence classes appear. The paper presents the AGM model, its semantics, and AGM models for several existing distributed memory parallel graph algorithms.
[ { "created": "Sat, 16 Apr 2016 16:26:29 GMT", "version": "v1" }, { "created": "Thu, 28 Apr 2016 19:13:02 GMT", "version": "v2" } ]
2016-04-29
[ [ "Kanewala", "Thejaka Amila", "" ], [ "Zalewski", "Marcin", "" ], [ "Lumsdaine", "Andrew", "" ] ]
An Abstract Graph Machine (AGM) is an abstract model for distributed memory parallel stabilizing graph algorithms. A stabilizing algorithm starts from a particular initial state and goes through a series of different state changes until it converges. The AGM adds work dependency to the stabilizing algorithm. The work is processed within the processing function. All processes in the system execute the same processing function. Before feeding work into the processing function, work is ordered using a strict weak ordering relation. The strict weak ordering relation divides work into equivalence classes; hence work within a single equivalence class can be processed in parallel, but work in different equivalence classes must be executed in the order the equivalence classes appear. The paper presents the AGM model, its semantics, and AGM models for several existing distributed memory parallel graph algorithms.
2408.02623
Duc Manh Nguyen Dang
Duc Manh Nguyen Dang, Viet Hang Duong, Jia Ching Wang, Nhan Bui Duc
YOWOv3: An Efficient and Generalized Framework for Human Action Detection and Recognition
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a new framework called YOWOv3, which is an improved version of YOWOv2, designed specifically for the task of Human Action Detection and Recognition. This framework is designed to facilitate extensive experimentation with different configurations and supports easy customization of various components within the model, reducing the effort required to understand and modify the code. YOWOv3 demonstrates its superior performance compared to YOWOv2 on two widely used datasets for Human Action Detection and Recognition: UCF101-24 and AVAv2.2. Specifically, the predecessor model YOWOv2 achieves an mAP of 85.2% and 20.3% on UCF101-24 and AVAv2.2, respectively, with 109.7M parameters and 53.6 GFLOPs. In contrast, our model, YOWOv3, with only 59.8M parameters and 39.8 GFLOPs, achieves an mAP of 88.33% and 20.31% on UCF101-24 and AVAv2.2, respectively. The results demonstrate that YOWOv3 significantly reduces the number of parameters and GFLOPs while still achieving comparable performance.
[ { "created": "Mon, 5 Aug 2024 16:48:03 GMT", "version": "v1" }, { "created": "Fri, 9 Aug 2024 00:17:51 GMT", "version": "v2" } ]
2024-08-12
[ [ "Dang", "Duc Manh Nguyen", "" ], [ "Duong", "Viet Hang", "" ], [ "Wang", "Jia Ching", "" ], [ "Duc", "Nhan Bui", "" ] ]
In this paper, we propose a new framework called YOWOv3, which is an improved version of YOWOv2, designed specifically for the task of Human Action Detection and Recognition. This framework is designed to facilitate extensive experimentation with different configurations and supports easy customization of various components within the model, reducing the effort required to understand and modify the code. YOWOv3 demonstrates its superior performance compared to YOWOv2 on two widely used datasets for Human Action Detection and Recognition: UCF101-24 and AVAv2.2. Specifically, the predecessor model YOWOv2 achieves an mAP of 85.2% and 20.3% on UCF101-24 and AVAv2.2, respectively, with 109.7M parameters and 53.6 GFLOPs. In contrast, our model, YOWOv3, with only 59.8M parameters and 39.8 GFLOPs, achieves an mAP of 88.33% and 20.31% on UCF101-24 and AVAv2.2, respectively. The results demonstrate that YOWOv3 significantly reduces the number of parameters and GFLOPs while still achieving comparable performance.
2209.07529
Haeyong Kang
Haeyong Kang, Jaehong Yoon, Sultan Rizky Hikmawan Madjid, Sung Ju Hwang, Chang D. Yoo
On the Soft-Subnetwork for Few-shot Class Incremental Learning
The Eleventh International Conference on Learning Representations (ICLR, 2023)
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Inspired by Regularized Lottery Ticket Hypothesis (RLTH), which hypothesizes that there exist smooth (non-binary) subnetworks within a dense network that achieve the competitive performance of the dense network, we propose a few-shot class incremental learning (FSCIL) method referred to as \emph{Soft-SubNetworks (SoftNet)}. Our objective is to learn a sequence of sessions incrementally, where each session only includes a few training instances per class while preserving the knowledge of the previously learned ones. SoftNet jointly learns the model weights and adaptive non-binary soft masks at a base training session in which each mask consists of the major and minor subnetwork; the former aims to minimize catastrophic forgetting during training, and the latter aims to avoid overfitting to a few samples in each new training session. We provide comprehensive empirical validations demonstrating that our SoftNet effectively tackles the few-shot incremental learning problem by surpassing the performance of state-of-the-art baselines over benchmark datasets.
[ { "created": "Thu, 15 Sep 2022 04:54:02 GMT", "version": "v1" }, { "created": "Wed, 1 Mar 2023 12:21:06 GMT", "version": "v2" } ]
2023-03-02
[ [ "Kang", "Haeyong", "" ], [ "Yoon", "Jaehong", "" ], [ "Madjid", "Sultan Rizky Hikmawan", "" ], [ "Hwang", "Sung Ju", "" ], [ "Yoo", "Chang D.", "" ] ]
Inspired by Regularized Lottery Ticket Hypothesis (RLTH), which hypothesizes that there exist smooth (non-binary) subnetworks within a dense network that achieve the competitive performance of the dense network, we propose a few-shot class incremental learning (FSCIL) method referred to as \emph{Soft-SubNetworks (SoftNet)}. Our objective is to learn a sequence of sessions incrementally, where each session only includes a few training instances per class while preserving the knowledge of the previously learned ones. SoftNet jointly learns the model weights and adaptive non-binary soft masks at a base training session in which each mask consists of the major and minor subnetwork; the former aims to minimize catastrophic forgetting during training, and the latter aims to avoid overfitting to a few samples in each new training session. We provide comprehensive empirical validations demonstrating that our SoftNet effectively tackles the few-shot incremental learning problem by surpassing the performance of state-of-the-art baselines over benchmark datasets.
2212.02997
Evangelos Ververas
Evangelos Ververas, Polydefkis Gkagkos, Jiankang Deng, Michail Christos Doukas, Jia Guo, Stefanos Zafeiriou
3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views
17 pages, 13 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Developing gaze estimation models that generalize well to unseen domains and in-the-wild conditions remains a challenge with no known best solution. This is mostly due to the difficulty of acquiring ground truth data that cover the distribution of faces, head poses, and environments that exist in the real world. Most recent methods attempt to close the gap between specific source and target domains using domain adaptation. In this work, we propose to train general gaze estimation models which can be directly employed in novel environments without adaptation. To do so, we leverage the observation that head, body, and hand pose estimation benefit from revising them as dense 3D coordinate prediction, and similarly express gaze estimation as regression of dense 3D eye meshes. To close the gap between image domains, we create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the scene, and design a multi-view supervision framework to balance their effect during training. We test our method in the task of gaze generalization, in which we demonstrate improvement of up to 30% compared to state-of-the-art when no ground truth data are available, and up to 10% when they are. The project material are available for research purposes at https://github.com/Vagver/3DGazeNet.
[ { "created": "Tue, 6 Dec 2022 14:15:17 GMT", "version": "v1" }, { "created": "Tue, 28 Mar 2023 15:57:15 GMT", "version": "v2" }, { "created": "Tue, 12 Dec 2023 13:39:34 GMT", "version": "v3" } ]
2023-12-15
[ [ "Ververas", "Evangelos", "" ], [ "Gkagkos", "Polydefkis", "" ], [ "Deng", "Jiankang", "" ], [ "Doukas", "Michail Christos", "" ], [ "Guo", "Jia", "" ], [ "Zafeiriou", "Stefanos", "" ] ]
Developing gaze estimation models that generalize well to unseen domains and in-the-wild conditions remains a challenge with no known best solution. This is mostly due to the difficulty of acquiring ground truth data that cover the distribution of faces, head poses, and environments that exist in the real world. Most recent methods attempt to close the gap between specific source and target domains using domain adaptation. In this work, we propose to train general gaze estimation models which can be directly employed in novel environments without adaptation. To do so, we leverage the observation that head, body, and hand pose estimation benefit from revising them as dense 3D coordinate prediction, and similarly express gaze estimation as regression of dense 3D eye meshes. To close the gap between image domains, we create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the scene, and design a multi-view supervision framework to balance their effect during training. We test our method in the task of gaze generalization, in which we demonstrate improvement of up to 30% compared to state-of-the-art when no ground truth data are available, and up to 10% when they are. The project material are available for research purposes at https://github.com/Vagver/3DGazeNet.
2010.08056
Yudi Dong
Yudi Dong and Yu-Dong Yao
IoT Platform for COVID-19 Prevention and Control: A Survey
12 pages; Submitted to IEEE Internet of Things Journal
IEEE Access 2021
10.1109/ACCESS.2021.3068276
null
cs.HC cs.AI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a result of the worldwide transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), coronavirus disease 2019 (COVID-19) has evolved into an unprecedented pandemic. Currently, with pharmaceutical treatments and vaccines unavailable, this novel coronavirus has a great impact on public health, human society, and the global economy, which is likely to last for many years. One of the lessons learned from the COVID-19 pandemic is that it is desirable to implement a long-term system with non-pharmaceutical interventions for preventing and controlling new infectious diseases. The Internet of things (IoT) platform is a preferred means of achieving this goal, due to its ubiquitous sensing ability and seamless connectivity. IoT technology is changing our lives through smart healthcare, smart homes, and smart cities, which aim to build a more convenient and intelligent community. This paper presents how the IoT could be incorporated into the epidemic prevention and control system. Specifically, we demonstrate a potential fog-cloud combined IoT platform that can be used in the systematic and intelligent COVID-19 prevention and control, which involves five interventions including COVID-19 Symptom Diagnosis, Quarantine Monitoring, Contact Tracing & Social Distancing, COVID-19 Outbreak Forecasting, and SARS-CoV-2 Mutation Tracking. We investigate and review the state-of-the-art literature on these five interventions to present the capabilities of IoT in countering the current COVID-19 pandemic or future infectious disease epidemics.
[ { "created": "Thu, 15 Oct 2020 22:43:03 GMT", "version": "v1" }, { "created": "Thu, 29 Oct 2020 18:04:14 GMT", "version": "v2" } ]
2021-03-26
[ [ "Dong", "Yudi", "" ], [ "Yao", "Yu-Dong", "" ] ]
As a result of the worldwide transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), coronavirus disease 2019 (COVID-19) has evolved into an unprecedented pandemic. Currently, with pharmaceutical treatments and vaccines unavailable, this novel coronavirus has a great impact on public health, human society, and the global economy, which is likely to last for many years. One of the lessons learned from the COVID-19 pandemic is that it is desirable to implement a long-term system with non-pharmaceutical interventions for preventing and controlling new infectious diseases. The Internet of things (IoT) platform is a preferred means of achieving this goal, due to its ubiquitous sensing ability and seamless connectivity. IoT technology is changing our lives through smart healthcare, smart homes, and smart cities, which aim to build a more convenient and intelligent community. This paper presents how the IoT could be incorporated into the epidemic prevention and control system. Specifically, we demonstrate a potential fog-cloud combined IoT platform that can be used in the systematic and intelligent COVID-19 prevention and control, which involves five interventions including COVID-19 Symptom Diagnosis, Quarantine Monitoring, Contact Tracing & Social Distancing, COVID-19 Outbreak Forecasting, and SARS-CoV-2 Mutation Tracking. We investigate and review the state-of-the-art literature on these five interventions to present the capabilities of IoT in countering the current COVID-19 pandemic or future infectious disease epidemics.
2003.05746
Camille Bourgaux
Meghyn Bienvenu and Camille Bourgaux
Querying and Repairing Inconsistent Prioritized Knowledge Bases: Complexity Analysis and Links with Abstract Argumentation
This is an extended version of a paper appearing at the 17th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020). This version corrects the statement of Theorem 43 (missing hypothesis). 27 pages
null
null
null
cs.LO cs.AI cs.DB
http://creativecommons.org/licenses/by/4.0/
In this paper, we explore the issue of inconsistency handling over prioritized knowledge bases (KBs), which consist of an ontology, a set of facts, and a priority relation between conflicting facts. In the database setting, a closely related scenario has been studied and led to the definition of three different notions of optimal repairs (global, Pareto, and completion) of a prioritized inconsistent database. After transferring the notions of globally-, Pareto- and completion-optimal repairs to our setting, we study the data complexity of the core reasoning tasks: query entailment under inconsistency-tolerant semantics based upon optimal repairs, existence of a unique optimal repair, and enumeration of all optimal repairs. Our results provide a nearly complete picture of the data complexity of these tasks for ontologies formulated in common DL-Lite dialects. The second contribution of our work is to clarify the relationship between optimal repairs and different notions of extensions for (set-based) argumentation frameworks. Among our results, we show that Pareto-optimal repairs correspond precisely to stable extensions (and often also to preferred extensions), and we propose a novel semantics for prioritized KBs which is inspired by grounded extensions and enjoys favourable computational properties. Our study also yields some results of independent interest concerning preference-based argumentation frameworks.
[ { "created": "Thu, 12 Mar 2020 12:38:37 GMT", "version": "v1" }, { "created": "Mon, 29 Jun 2020 16:15:30 GMT", "version": "v2" }, { "created": "Fri, 7 Jun 2024 06:42:55 GMT", "version": "v3" } ]
2024-06-10
[ [ "Bienvenu", "Meghyn", "" ], [ "Bourgaux", "Camille", "" ] ]
In this paper, we explore the issue of inconsistency handling over prioritized knowledge bases (KBs), which consist of an ontology, a set of facts, and a priority relation between conflicting facts. In the database setting, a closely related scenario has been studied and led to the definition of three different notions of optimal repairs (global, Pareto, and completion) of a prioritized inconsistent database. After transferring the notions of globally-, Pareto- and completion-optimal repairs to our setting, we study the data complexity of the core reasoning tasks: query entailment under inconsistency-tolerant semantics based upon optimal repairs, existence of a unique optimal repair, and enumeration of all optimal repairs. Our results provide a nearly complete picture of the data complexity of these tasks for ontologies formulated in common DL-Lite dialects. The second contribution of our work is to clarify the relationship between optimal repairs and different notions of extensions for (set-based) argumentation frameworks. Among our results, we show that Pareto-optimal repairs correspond precisely to stable extensions (and often also to preferred extensions), and we propose a novel semantics for prioritized KBs which is inspired by grounded extensions and enjoys favourable computational properties. Our study also yields some results of independent interest concerning preference-based argumentation frameworks.
2403.00157
Ziqin Chen
Ziqin Chen and Yongqiang Wang
Privacy-Preserving Distributed Optimization and Learning
Accepted as a chapter in the Encyclopedia of Systems and Control Engineering published by Elsevier
null
null
null
cs.LG cs.CR cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed optimization and learning has recently garnered great attention due to its wide applications in sensor networks, smart grids, machine learning, and so forth. Despite rapid development, existing distributed optimization and learning algorithms require each agent to exchange messages with its neighbors, which may expose sensitive information and raise significant privacy concerns. In this survey paper, we overview privacy-preserving distributed optimization and learning methods. We first discuss cryptography, differential privacy, and other techniques that can be used for privacy preservation and indicate their pros and cons for privacy protection in distributed optimization and learning. We believe that among these approaches, differential privacy is most promising due to its low computational and communication complexities, which are extremely appealing for modern learning based applications with high dimensions of optimization variables. We then introduce several differential-privacy algorithms that can simultaneously ensure privacy and optimization accuracy. Moreover, we provide example applications in several machine learning problems to confirm the real-world effectiveness of these algorithms. Finally, we highlight some challenges in this research domain and discuss future directions.
[ { "created": "Thu, 29 Feb 2024 22:18:05 GMT", "version": "v1" } ]
2024-03-04
[ [ "Chen", "Ziqin", "" ], [ "Wang", "Yongqiang", "" ] ]
Distributed optimization and learning has recently garnered great attention due to its wide applications in sensor networks, smart grids, machine learning, and so forth. Despite rapid development, existing distributed optimization and learning algorithms require each agent to exchange messages with its neighbors, which may expose sensitive information and raise significant privacy concerns. In this survey paper, we overview privacy-preserving distributed optimization and learning methods. We first discuss cryptography, differential privacy, and other techniques that can be used for privacy preservation and indicate their pros and cons for privacy protection in distributed optimization and learning. We believe that among these approaches, differential privacy is most promising due to its low computational and communication complexities, which are extremely appealing for modern learning based applications with high dimensions of optimization variables. We then introduce several differential-privacy algorithms that can simultaneously ensure privacy and optimization accuracy. Moreover, we provide example applications in several machine learning problems to confirm the real-world effectiveness of these algorithms. Finally, we highlight some challenges in this research domain and discuss future directions.
2305.16793
Xikun Jiang
Xikun Jiang, Chenhao Ying, Lei Li, Boris D\"udder, Haiqin Wu, Haiming Jin and Yuan Luo
Incentive Mechanism for Uncertain Tasks under Differential Privacy
null
null
null
null
cs.GT cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile crowd sensing (MCS) has emerged as an increasingly popular sensing paradigm due to its cost-effectiveness. This approach relies on platforms to outsource tasks to participating workers when prompted by task publishers. Although incentive mechanisms have been devised to foster widespread participation in MCS, most of them focus only on static tasks (i.e., tasks for which the timing and type are known in advance) and do not protect the privacy of worker bids. In a dynamic and resource-constrained environment, tasks are often uncertain (i.e., the platform lacks a priori knowledge about the tasks) and worker bids may be vulnerable to inference attacks. This paper presents HERALD*, an incentive mechanism that addresses these issues through the use of uncertainty and hidden bids. Theoretical analysis reveals that HERALD* satisfies a range of critical criteria, including truthfulness, individual rationality, differential privacy, low computational complexity, and low social cost. These properties are then corroborated through a series of evaluations.
[ { "created": "Fri, 26 May 2023 10:15:02 GMT", "version": "v1" }, { "created": "Wed, 6 Mar 2024 15:16:51 GMT", "version": "v2" } ]
2024-03-07
[ [ "Jiang", "Xikun", "" ], [ "Ying", "Chenhao", "" ], [ "Li", "Lei", "" ], [ "Düdder", "Boris", "" ], [ "Wu", "Haiqin", "" ], [ "Jin", "Haiming", "" ], [ "Luo", "Yuan", "" ] ]
Mobile crowd sensing (MCS) has emerged as an increasingly popular sensing paradigm due to its cost-effectiveness. This approach relies on platforms to outsource tasks to participating workers when prompted by task publishers. Although incentive mechanisms have been devised to foster widespread participation in MCS, most of them focus only on static tasks (i.e., tasks for which the timing and type are known in advance) and do not protect the privacy of worker bids. In a dynamic and resource-constrained environment, tasks are often uncertain (i.e., the platform lacks a priori knowledge about the tasks) and worker bids may be vulnerable to inference attacks. This paper presents HERALD*, an incentive mechanism that addresses these issues through the use of uncertainty and hidden bids. Theoretical analysis reveals that HERALD* satisfies a range of critical criteria, including truthfulness, individual rationality, differential privacy, low computational complexity, and low social cost. These properties are then corroborated through a series of evaluations.
2007.00211
Marc Law
Marc T. Law and Jos Stam
Ultrahyperbolic Representation Learning
NeurIPS 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In machine learning, data is usually represented in a (flat) Euclidean space where distances between points are along straight lines. Researchers have recently considered more exotic (non-Euclidean) Riemannian manifolds such as hyperbolic space which is well suited for tree-like data. In this paper, we propose a representation living on a pseudo-Riemannian manifold of constant nonzero curvature. It is a generalization of hyperbolic and spherical geometries where the nondegenerate metric tensor need not be positive definite. We provide the necessary learning tools in this geometry and extend gradient-based optimization techniques. More specifically, we provide closed-form expressions for distances via geodesics and define a descent direction to minimize some objective function. Our novel framework is applied to graph representations.
[ { "created": "Wed, 1 Jul 2020 03:49:24 GMT", "version": "v1" }, { "created": "Fri, 23 Oct 2020 15:15:44 GMT", "version": "v2" }, { "created": "Mon, 26 Oct 2020 15:56:04 GMT", "version": "v3" }, { "created": "Wed, 28 Oct 2020 14:10:44 GMT", "version": "v4" }, { "created": "Mon, 11 Jan 2021 02:49:43 GMT", "version": "v5" } ]
2021-01-12
[ [ "Law", "Marc T.", "" ], [ "Stam", "Jos", "" ] ]
In machine learning, data is usually represented in a (flat) Euclidean space where distances between points are along straight lines. Researchers have recently considered more exotic (non-Euclidean) Riemannian manifolds such as hyperbolic space which is well suited for tree-like data. In this paper, we propose a representation living on a pseudo-Riemannian manifold of constant nonzero curvature. It is a generalization of hyperbolic and spherical geometries where the nondegenerate metric tensor need not be positive definite. We provide the necessary learning tools in this geometry and extend gradient-based optimization techniques. More specifically, we provide closed-form expressions for distances via geodesics and define a descent direction to minimize some objective function. Our novel framework is applied to graph representations.
1404.5828
Dimitra Panagou
Dimitra Panagou
Motion planning and Collision Avoidance using Non-Gradient Vector Fields. Technical Report
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel feedback method for the motion planning of unicycle robots in environments with static obstacles, along with an extension to distributed planning and coordination in multi-robot systems. The method employs a family of 2-dimensional analytic vector fields, whose integral curves exhibit various patterns depending on the value of a parameter lambda. More specifically, for an a priori known value of lambda, the vector field has a unique singular point of dipole type and can be used to steer the unicycle to a goal configuration. Furthermore, for the unique value of lambda for which the vector field has a continuum of singular points, the integral curves are used to define flows around obstacles. An almost global feedback motion plan can then be constructed by suitably blending attractive and repulsive vector fields in a static obstacle environment. The method does not suffer from the appearance of sinks (stable nodes) away from the goal point. Compared to other similar methods which are free of local minima, the proposed approach does not require any parameter tuning to render the desired convergence properties. The paper also addresses the extension of the method to the distributed coordination and control of multiple robots, where each robot needs to navigate to a goal configuration while avoiding collisions with the remaining robots, and while using local information only. More specifically, based on the results which apply to the single-robot case, a motion coordination protocol is presented which guarantees the safety of the multi-robot system and the almost global convergence of the robots to their goal configurations. The efficacy of the proposed methodology is demonstrated via simulation results in static and dynamic environments.
[ { "created": "Wed, 23 Apr 2014 14:12:36 GMT", "version": "v1" }, { "created": "Thu, 24 Apr 2014 15:55:14 GMT", "version": "v2" }, { "created": "Mon, 13 Oct 2014 20:55:13 GMT", "version": "v3" } ]
2014-10-22
[ [ "Panagou", "Dimitra", "" ] ]
This paper presents a novel feedback method for the motion planning of unicycle robots in environments with static obstacles, along with an extension to distributed planning and coordination in multi-robot systems. The method employs a family of 2-dimensional analytic vector fields, whose integral curves exhibit various patterns depending on the value of a parameter lambda. More specifically, for an a priori known value of lambda, the vector field has a unique singular point of dipole type and can be used to steer the unicycle to a goal configuration. Furthermore, for the unique value of lambda for which the vector field has a continuum of singular points, the integral curves are used to define flows around obstacles. An almost global feedback motion plan can then be constructed by suitably blending attractive and repulsive vector fields in a static obstacle environment. The method does not suffer from the appearance of sinks (stable nodes) away from the goal point. Compared to other similar methods which are free of local minima, the proposed approach does not require any parameter tuning to render the desired convergence properties. The paper also addresses the extension of the method to the distributed coordination and control of multiple robots, where each robot needs to navigate to a goal configuration while avoiding collisions with the remaining robots, and while using local information only. More specifically, based on the results which apply to the single-robot case, a motion coordination protocol is presented which guarantees the safety of the multi-robot system and the almost global convergence of the robots to their goal configurations. The efficacy of the proposed methodology is demonstrated via simulation results in static and dynamic environments.
2210.05714
Chenguang Huang
Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard
Visual Language Maps for Robot Navigation
Accepted at the 2023 IEEE International Conference on Robotics and Automation (ICRA). Project page: https://vlmaps.github.io
null
null
null
cs.RO cs.AI cs.CL cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Grounding language to the visual observations of a navigating agent can be performed using off-the-shelf visual-language models pretrained on Internet-scale data (e.g., image captions). While this is useful for matching images to natural language descriptions of object goals, it remains disjoint from the process of mapping the environment, so that it lacks the spatial precision of classic geometric maps. To address this problem, we propose VLMaps, a spatial map representation that directly fuses pretrained visual-language features with a 3D reconstruction of the physical world. VLMaps can be autonomously built from video feed on robots using standard exploration approaches and enables natural language indexing of the map without additional labeled data. Specifically, when combined with large language models (LLMs), VLMaps can be used to (i) translate natural language commands into a sequence of open-vocabulary navigation goals (which, beyond prior work, can be spatial by construction, e.g., "in between the sofa and TV" or "three meters to the right of the chair") directly localized in the map, and (ii) can be shared among multiple robots with different embodiments to generate new obstacle maps on-the-fly (by using a list of obstacle categories). Extensive experiments carried out in simulated and real world environments show that VLMaps enable navigation according to more complex language instructions than existing methods. Videos are available at https://vlmaps.github.io.
[ { "created": "Tue, 11 Oct 2022 18:13:20 GMT", "version": "v1" }, { "created": "Thu, 13 Oct 2022 09:37:38 GMT", "version": "v2" }, { "created": "Mon, 17 Oct 2022 14:46:08 GMT", "version": "v3" }, { "created": "Wed, 8 Mar 2023 10:30:41 GMT", "version": "v4" } ]
2023-03-09
[ [ "Huang", "Chenguang", "" ], [ "Mees", "Oier", "" ], [ "Zeng", "Andy", "" ], [ "Burgard", "Wolfram", "" ] ]
Grounding language to the visual observations of a navigating agent can be performed using off-the-shelf visual-language models pretrained on Internet-scale data (e.g., image captions). While this is useful for matching images to natural language descriptions of object goals, it remains disjoint from the process of mapping the environment, so that it lacks the spatial precision of classic geometric maps. To address this problem, we propose VLMaps, a spatial map representation that directly fuses pretrained visual-language features with a 3D reconstruction of the physical world. VLMaps can be autonomously built from video feed on robots using standard exploration approaches and enables natural language indexing of the map without additional labeled data. Specifically, when combined with large language models (LLMs), VLMaps can be used to (i) translate natural language commands into a sequence of open-vocabulary navigation goals (which, beyond prior work, can be spatial by construction, e.g., "in between the sofa and TV" or "three meters to the right of the chair") directly localized in the map, and (ii) can be shared among multiple robots with different embodiments to generate new obstacle maps on-the-fly (by using a list of obstacle categories). Extensive experiments carried out in simulated and real world environments show that VLMaps enable navigation according to more complex language instructions than existing methods. Videos are available at https://vlmaps.github.io.
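The natural-language indexing step can be sketched with stand-in embeddings. This is a hypothetical simplification, not the VLMaps implementation: the real system fuses features from a pretrained visual-language encoder into a 3D reconstruction, whereas here a 2D grid of random vectors and a toy `embed_text` lookup stand in for both, purely to show the cosine-similarity localization mechanics.

```python
import numpy as np

# Toy sketch of map indexing: every map cell stores a visual-language
# feature vector, and a text query is matched to cells by cosine
# similarity.  Embeddings are random stand-ins for a real encoder.
rng = np.random.default_rng(0)
D, H, W = 16, 8, 8
vlmap = rng.normal(size=(H, W, D))          # per-cell fused features

# hypothetical text encoder: a fixed random vector per known word
vocab = {"sofa": rng.normal(size=D), "tv": rng.normal(size=D)}
vlmap[2, 3] = vocab["sofa"] + 0.01 * rng.normal(size=D)  # plant a "sofa" cell

def localize(query):
    q = vocab[query]
    sims = vlmap @ q / (np.linalg.norm(vlmap, axis=-1) * np.linalg.norm(q))
    return np.unravel_index(np.argmax(sims), (H, W))     # best-matching cell
```

With a real encoder, the same argmax-over-similarity lookup is what turns an open-vocabulary goal like "sofa" into map coordinates.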
2405.13238
Peng Liu
Peng Liu, Nian Wang, Cong Xu, Ming Zhao, Bin Wang, Yi Ren
Enhancing User Interest based on Stream Clustering and Memory Networks in Large-Scale Recommender Systems
null
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommender Systems (RSs) provide personalized recommendation services based on user interest and are widely used on various platforms. However, many users have sparse interest due to a lack of consumption behaviors, which leads to poor recommendation results for them. This problem is widespread in large-scale RSs and is particularly difficult to address. To solve this problem, we propose a novel solution named User Interest Enhancement (UIE), which enhances user interest, including the user profile and user history behavior sequences, using enhancement vectors and a personalized enhancement vector generated from different perspectives based on stream clustering and memory networks. UIE not only remarkably improves model performance for users with sparse interest but also significantly enhances model performance for other users. UIE is an end-to-end solution that is easy to implement on top of a ranking model. Moreover, we extend our solution and apply similar methods to long-tail items, which also achieves excellent improvements. Furthermore, we conduct extensive offline and online experiments in a large-scale industrial RS. The results demonstrate that our model outperforms other models remarkably, especially for users with sparse interest. To date, UIE has been fully deployed in multiple large-scale RSs and has achieved remarkable improvements.
[ { "created": "Tue, 21 May 2024 22:53:00 GMT", "version": "v1" }, { "created": "Sun, 26 May 2024 23:18:53 GMT", "version": "v2" } ]
2024-05-28
[ [ "Liu", "Peng", "" ], [ "Wang", "Nian", "" ], [ "Xu", "Cong", "" ], [ "Zhao", "Ming", "" ], [ "Wang", "Bin", "" ], [ "Ren", "Yi", "" ] ]
Recommender Systems (RSs) provide personalized recommendation services based on user interest and are widely used on various platforms. However, many users have sparse interest due to a lack of consumption behaviors, which leads to poor recommendation results for them. This problem is widespread in large-scale RSs and is particularly difficult to address. To solve this problem, we propose a novel solution named User Interest Enhancement (UIE), which enhances user interest, including the user profile and user history behavior sequences, using enhancement vectors and a personalized enhancement vector generated from different perspectives based on stream clustering and memory networks. UIE not only remarkably improves model performance for users with sparse interest but also significantly enhances model performance for other users. UIE is an end-to-end solution that is easy to implement on top of a ranking model. Moreover, we extend our solution and apply similar methods to long-tail items, which also achieves excellent improvements. Furthermore, we conduct extensive offline and online experiments in a large-scale industrial RS. The results demonstrate that our model outperforms other models remarkably, especially for users with sparse interest. To date, UIE has been fully deployed in multiple large-scale RSs and has achieved remarkable improvements.
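The stream-clustering side of this idea can be sketched in a few lines. This is a hypothetical simplification, not the paper's architecture: the memory networks, the personalized enhancement vector, and all production details are omitted, and the class and parameter names are invented here. It only shows how an online clustering of user embeddings can supply an enhancement vector for a sparse-interest user.

```python
import numpy as np

class StreamClusterEnhancer:
    """Sketch of a UIE-style enhancement lookup (illustrative only).

    Maintains cluster centroids over a stream of user embeddings; a sparse
    user's interest vector is enhanced by mixing in the centroid of the
    cluster it falls into -- a proxy for similar but denser users.
    """
    def __init__(self, k, dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.normal(size=(k, dim))
        self.lr = lr

    def update(self, emb):
        # online k-means: move the nearest centroid toward the new embedding
        i = np.argmin(((self.centroids - emb) ** 2).sum(axis=1))
        self.centroids[i] += self.lr * (emb - self.centroids[i])
        return i

    def enhance(self, emb, alpha=0.5):
        # blend the user embedding with its cluster centroid
        i = np.argmin(((self.centroids - emb) ** 2).sum(axis=1))
        return (1 - alpha) * emb + alpha * self.centroids[i]
```

In a real system the blend weight would itself be learned end-to-end with the ranking model rather than fixed, as the abstract's "end-to-end solution" wording suggests.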
1804.04076
Faraz Saeedan
Faraz Saeedan, Nicolas Weber, Michael Goesele, Stefan Roth
Detail-Preserving Pooling in Deep Networks
To appear at CVPR 2018
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most convolutional neural networks use some method for gradually downscaling the size of the hidden layers. This is commonly referred to as pooling, and is applied to reduce the number of parameters, improve invariance to certain distortions, and increase the receptive field size. Since pooling by nature is a lossy process, it is crucial that each such layer maintains the portion of the activations that is most important for the network's discriminability. Yet, simple maximization or averaging over blocks, max or average pooling, or plain downsampling in the form of strided convolutions are the standard. In this paper, we aim to leverage recent results on image downscaling for the purposes of deep learning. Inspired by the human visual system, which focuses on local spatial changes, we propose detail-preserving pooling (DPP), an adaptive pooling method that magnifies spatial changes and preserves important structural detail. Importantly, its parameters can be learned jointly with the rest of the network. We analyze some of its theoretical properties and show its empirical benefits on several datasets and networks, where DPP consistently outperforms previous pooling approaches.
[ { "created": "Wed, 11 Apr 2018 16:28:11 GMT", "version": "v1" } ]
2018-04-13
[ [ "Saeedan", "Faraz", "" ], [ "Weber", "Nicolas", "" ], [ "Goesele", "Michael", "" ], [ "Roth", "Stefan", "" ] ]
Most convolutional neural networks use some method for gradually downscaling the size of the hidden layers. This is commonly referred to as pooling, and is applied to reduce the number of parameters, improve invariance to certain distortions, and increase the receptive field size. Since pooling by nature is a lossy process, it is crucial that each such layer maintains the portion of the activations that is most important for the network's discriminability. Yet, simple maximization or averaging over blocks, max or average pooling, or plain downsampling in the form of strided convolutions are the standard. In this paper, we aim to leverage recent results on image downscaling for the purposes of deep learning. Inspired by the human visual system, which focuses on local spatial changes, we propose detail-preserving pooling (DPP), an adaptive pooling method that magnifies spatial changes and preserves important structural detail. Importantly, its parameters can be learned jointly with the rest of the network. We analyze some of its theoretical properties and show its empirical benefits on several datasets and networks, where DPP consistently outperforms previous pooling approaches.
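The detail-magnifying idea can be illustrated with a rough numpy sketch. This is not the paper's exact formulation: DPP's weighting is an inverse-bilateral function with parameters learned jointly with the network, whereas here `lam` is a hand-set stand-in for the learned exponent and the reference value is just the block mean.

```python
import numpy as np

def dpp_pool(x, block=2, lam=1.0, eps=1e-6):
    """Detail-preserving-style pooling sketch (illustrative, not the paper's exact form).

    Each output value is a weighted average of its block, with weights that
    grow with the deviation from the block mean, so high-contrast detail
    dominates the pooled result.  lam = 0 recovers plain average pooling.
    """
    h, w = x.shape
    assert h % block == 0 and w % block == 0
    # gather each block's entries along the last axis
    blocks = x.reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(h // block, w // block, -1)
    mean = blocks.mean(axis=-1, keepdims=True)
    weights = (np.abs(blocks - mean) ** 2 + eps) ** lam
    return (weights * blocks).sum(-1) / weights.sum(-1)
```

On a block like [0, 0, 0, 10], average pooling returns 2.5, while the detail-weighted version pulls the output toward the outlying high-contrast value, which is the behavior the paper's learned pooling exploits.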
2310.19124
Theodoros Plessas
Evangelia Panourgia (Athens University of Economics and Business), Theodoros Plessas (Athens University of Economics and Business), Ilias Balampanis (Athens University of Economics and Business), Diomidis Spinellis (Athens University of Economics and Business, Delft University of Technology)
Good Tools are Half the Work: Tool Usage in Deep Learning Projects
null
null
null
null
cs.SE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rising popularity of deep learning (DL) methods and techniques has invigorated interest in the topic of SE4DL (Software Engineering for Deep Learning), the application of software engineering (SE) practices to deep learning software. Despite the novel engineering challenges brought on by the data-driven and non-deterministic paradigm of DL software, little work has been invested into developing DL-targeted SE tools. On the other hand, tools tackling non-SE issues specific to DL are actively used and referred to under the umbrella term "MLOps (Machine Learning Operations) tools". Nevertheless, the available literature supports the utility of conventional SE tooling in DL software development. Building upon previous mining software repositories (MSR) research on tool usage in open-source software projects, we identify conventional and MLOps tools adopted in popular applied DL projects that use Python as the main programming language. About 63\% of the GitHub repositories we examined contained at least one conventional SE tool. Software construction tools are the most widely adopted, while the opposite applies to management and maintenance tools. Relatively few MLOps tools were found to be in use, with only 20 tools out of a sample of 74 used in at least one repository. The majority of them were open-source rather than proprietary. One of these tools, TensorBoard, was found to be adopted in about half of the repositories in our study. Consequently, the widespread use of conventional SE tooling demonstrates its relevance to DL software. Further research is recommended on the adoption of MLOps tooling, focusing on the relevance of particular tool types, the development of required tools, as well as ways to promote the use of already available tools.
[ { "created": "Sun, 29 Oct 2023 19:21:33 GMT", "version": "v1" }, { "created": "Tue, 28 May 2024 16:13:22 GMT", "version": "v2" } ]
2024-05-29
[ [ "Panourgia", "Evangelia", "", "Athens University of Economics and Business" ], [ "Plessas", "Theodoros", "", "Athens University of Economics and Business" ], [ "Balampanis", "Ilias", "", "Athens University of Economics and Business" ], [ "Spinellis", "Diomidis", "", "Athens University of Economics and Business, Delft University of Technology" ] ]
The rising popularity of deep learning (DL) methods and techniques has invigorated interest in the topic of SE4DL (Software Engineering for Deep Learning), the application of software engineering (SE) practices to deep learning software. Despite the novel engineering challenges brought on by the data-driven and non-deterministic paradigm of DL software, little work has been invested into developing DL-targeted SE tools. On the other hand, tools tackling non-SE issues specific to DL are actively used and referred to under the umbrella term "MLOps (Machine Learning Operations) tools". Nevertheless, the available literature supports the utility of conventional SE tooling in DL software development. Building upon previous mining software repositories (MSR) research on tool usage in open-source software projects, we identify conventional and MLOps tools adopted in popular applied DL projects that use Python as the main programming language. About 63\% of the GitHub repositories we examined contained at least one conventional SE tool. Software construction tools are the most widely adopted, while the opposite applies to management and maintenance tools. Relatively few MLOps tools were found to be in use, with only 20 tools out of a sample of 74 used in at least one repository. The majority of them were open-source rather than proprietary. One of these tools, TensorBoard, was found to be adopted in about half of the repositories in our study. Consequently, the widespread use of conventional SE tooling demonstrates its relevance to DL software. Further research is recommended on the adoption of MLOps tooling, focusing on the relevance of particular tool types, the development of required tools, as well as ways to promote the use of already available tools.
2408.08258
Hossein Jafarinia
Hossein Jafarinia, Alireza Alipanah, Danial Hamdi, Saeed Razavi, Nahal Mirzaie, Mohammad Hossein Rohban
Snuffy: Efficient Whole Slide Image Classifier
Accepted for ECCV 2024
null
null
null
cs.CV cs.AI cs.LG cs.NE eess.IV
http://creativecommons.org/licenses/by/4.0/
Whole Slide Image (WSI) classification with multiple instance learning (MIL) in digital pathology faces significant computational challenges. Current methods mostly rely on extensive self-supervised learning (SSL) for satisfactory performance, requiring long training periods and considerable computational resources. At the same time, forgoing pre-training degrades performance due to domain shifts from natural images to WSIs. We introduce the \textbf{\textit{Snuffy}} architecture, a novel MIL-pooling method based on sparse transformers that mitigates performance loss with limited pre-training and enables continual few-shot pre-training as a competitive option. Our sparsity pattern is tailored for pathology and is theoretically proven to be a universal approximator with the tightest probabilistic sharp bound on the number of layers for sparse transformers, to date. We demonstrate Snuffy's effectiveness on the CAMELYON16 and TCGA Lung cancer datasets, achieving superior WSI and patch-level accuracies. The code is available at \url{https://github.com/jafarinia/snuffy}.
[ { "created": "Thu, 15 Aug 2024 16:59:15 GMT", "version": "v1" } ]
2024-08-16
[ [ "Jafarinia", "Hossein", "" ], [ "Alipanah", "Alireza", "" ], [ "Hamdi", "Danial", "" ], [ "Razavi", "Saeed", "" ], [ "Mirzaie", "Nahal", "" ], [ "Rohban", "Mohammad Hossein", "" ] ]
Whole Slide Image (WSI) classification with multiple instance learning (MIL) in digital pathology faces significant computational challenges. Current methods mostly rely on extensive self-supervised learning (SSL) for satisfactory performance, requiring long training periods and considerable computational resources. At the same time, forgoing pre-training degrades performance due to domain shifts from natural images to WSIs. We introduce the \textbf{\textit{Snuffy}} architecture, a novel MIL-pooling method based on sparse transformers that mitigates performance loss with limited pre-training and enables continual few-shot pre-training as a competitive option. Our sparsity pattern is tailored for pathology and is theoretically proven to be a universal approximator with the tightest probabilistic sharp bound on the number of layers for sparse transformers, to date. We demonstrate Snuffy's effectiveness on the CAMELYON16 and TCGA Lung cancer datasets, achieving superior WSI and patch-level accuracies. The code is available at \url{https://github.com/jafarinia/snuffy}.
2209.00650
Fran\c{c}ois Renaville
Fran\c{c}ois Renaville, Fabienne Prosmans, Isabelle Gilles
Analyse fonctionnelle de l'outil de gestion de planning LibStaffer
17 pages, in French
Cahiers de la Documentation (2022)1/2, 6-17
null
null
cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
LibStaffer is a staff scheduling tool for libraries provided by Springshare. In this article, we present the analysis that we implemented to determine the potential of LibStaffer in order to simplify staff scheduling in library branches with various configurations, as well as in transversal units with a focus on public services. As a starting point, we focused on the questions available in Philippe Lenepveu and Marc Maisonneuve's work on scheduling tools for libraries. We then enriched the initial list with questions that emerged among several colleagues. After a two-month LibStaffer trial period, we were able to answer those questions and were convinced to take out a subscription to the tool. -- LibStaffer est un outil de gestion de planning de service pour les biblioth\`eques propos\'e par Springshare. Dans cet article, nous exposons l'analyse que nous avons mise en {\oe}uvre pour d\'eterminer le potentiel de LibStaffer en mati\`ere de simplification de la gestion de planning d'accueil, sur des implantations de configurations diverses, ainsi que d'activit\'es transversales de service au public. Nous nous sommes fond\'es sur les questions \'etablies par Philippe Lenepveu et Marc Maisonneuve, dans leur ouvrage consacr\'e aux logiciels de gestion de planning pour les biblioth\`eques, et les avons enrichies par les interrogations de plusieurs coll\`egues. B\'en\'eficier d'un test de LibStaffer, de pr\`es de deux mois, nous a permis de r\'epondre \`a ces questions et nous a convaincus de prendre une souscription \`a l'outil.
[ { "created": "Tue, 30 Aug 2022 06:39:24 GMT", "version": "v1" } ]
2022-09-05
[ [ "Renaville", "François", "" ], [ "Prosmans", "Fabienne", "" ], [ "Gilles", "Isabelle", "" ] ]
LibStaffer is a staff scheduling tool for libraries provided by Springshare. In this article, we present the analysis that we implemented to determine the potential of LibStaffer in order to simplify staff scheduling in library branches with various configurations, as well as in transversal units with a focus on public services. As a starting point, we focused on the questions available in Philippe Lenepveu and Marc Maisonneuve's work on scheduling tools for libraries. We then enriched the initial list with questions that emerged among several colleagues. After a two-month LibStaffer trial period, we were able to answer those questions and were convinced to take out a subscription to the tool. -- LibStaffer est un outil de gestion de planning de service pour les biblioth\`eques propos\'e par Springshare. Dans cet article, nous exposons l'analyse que nous avons mise en {\oe}uvre pour d\'eterminer le potentiel de LibStaffer en mati\`ere de simplification de la gestion de planning d'accueil, sur des implantations de configurations diverses, ainsi que d'activit\'es transversales de service au public. Nous nous sommes fond\'es sur les questions \'etablies par Philippe Lenepveu et Marc Maisonneuve, dans leur ouvrage consacr\'e aux logiciels de gestion de planning pour les biblioth\`eques, et les avons enrichies par les interrogations de plusieurs coll\`egues. B\'en\'eficier d'un test de LibStaffer, de pr\`es de deux mois, nous a permis de r\'epondre \`a ces questions et nous a convaincus de prendre une souscription \`a l'outil.
1901.06637
Bin Li
Bin Li, Zesong Fei, and Yan Zhang
UAV Communications for 5G and Beyond: Recent Advances and Future Trends
53 pages, 9 figures
null
10.1109/JIOT.2018.2887086
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Providing ubiquitous connectivity to diverse device types is the key challenge for 5G and beyond 5G (B5G). Unmanned aerial vehicles (UAVs) are expected to be an important component of the upcoming wireless networks that can potentially facilitate wireless broadcast and support high-rate transmissions. Compared to communications with fixed infrastructure, UAVs have salient attributes, such as flexible deployment, strong line-of-sight (LoS) connection links, and additional design degrees of freedom afforded by their controlled mobility. In this paper, a comprehensive survey on UAV communication towards 5G/B5G wireless networks is presented. We first briefly introduce essential background and the space-air-ground integrated networks, as well as discuss related research challenges faced by the emerging integrated network architecture. We then provide an exhaustive review of various 5G techniques based on UAV platforms, which we categorize by different domains, including the physical layer, the network layer, and joint communication, computing and caching. In addition, a great number of open research problems are outlined and identified as possible future research directions.
[ { "created": "Sun, 20 Jan 2019 07:53:08 GMT", "version": "v1" } ]
2019-01-23
[ [ "Li", "Bin", "" ], [ "Fei", "Zesong", "" ], [ "Zhang", "Yan", "" ] ]
Providing ubiquitous connectivity to diverse device types is the key challenge for 5G and beyond 5G (B5G). Unmanned aerial vehicles (UAVs) are expected to be an important component of the upcoming wireless networks that can potentially facilitate wireless broadcast and support high-rate transmissions. Compared to communications with fixed infrastructure, UAVs have salient attributes, such as flexible deployment, strong line-of-sight (LoS) connection links, and additional design degrees of freedom afforded by their controlled mobility. In this paper, a comprehensive survey on UAV communication towards 5G/B5G wireless networks is presented. We first briefly introduce essential background and the space-air-ground integrated networks, as well as discuss related research challenges faced by the emerging integrated network architecture. We then provide an exhaustive review of various 5G techniques based on UAV platforms, which we categorize by different domains, including the physical layer, the network layer, and joint communication, computing and caching. In addition, a great number of open research problems are outlined and identified as possible future research directions.
1912.06986
Ziyi Wang
Ziyi Wang, Zhaohao Wang, Yansong Xu, Bi Wu, and Weisheng Zhao
Erase-hidden and Drivability-improved Magnetic Non-Volatile Flip-Flops with NAND-SPIN Devices
This article has been accepted in a future issue of IEEE Transactions on Nanotechnology: Regular Papers
null
10.1109/TNANO.2020.2999751
null
cs.ET eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-volatile flip-flops (NVFFs) using power gating techniques promise to overcome the soaring leakage power consumption issue with the scaling of CMOS technology. Magnetic tunnel junction (MTJ) is a good candidate for constructing the NVFF thanks to its low power, high speed, good CMOS compatibility, etc. In this paper, we propose a novel magnetic NVFF based on an emerging memory device called NAND-SPIN. The data writing of NAND-SPIN is achieved by successively applying two unidirectional currents, which respectively generate the spin orbit torque (SOT) and spin transfer torque (STT) for erase and programming operations. This characteristic allows us to design an erase-hidden and drivability-improved magnetic NVFF. Furthermore, more design flexibility could be obtained since the backup operation of the proposed NVFF is not limited by the inherent slave latch. Simulation results show that our proposed NVFF achieves performance improvement in terms of power, delay and area, compared with conventional slave-latch-driven SOT-NVFF designs.
[ { "created": "Sun, 15 Dec 2019 05:59:33 GMT", "version": "v1" }, { "created": "Thu, 18 Jun 2020 03:17:51 GMT", "version": "v2" } ]
2020-07-15
[ [ "Wang", "Ziyi", "" ], [ "Wang", "Zhaohao", "" ], [ "Xu", "Yansong", "" ], [ "Wu", "Bi", "" ], [ "Zhao", "Weisheng", "" ] ]
Non-volatile flip-flops (NVFFs) using power gating techniques promise to overcome the soaring leakage power consumption issue with the scaling of CMOS technology. Magnetic tunnel junction (MTJ) is a good candidate for constructing the NVFF thanks to its low power, high speed, good CMOS compatibility, etc. In this paper, we propose a novel magnetic NVFF based on an emerging memory device called NAND-SPIN. The data writing of NAND-SPIN is achieved by successively applying two unidirectional currents, which respectively generate the spin orbit torque (SOT) and spin transfer torque (STT) for erase and programming operations. This characteristic allows us to design an erase-hidden and drivability-improved magnetic NVFF. Furthermore, more design flexibility could be obtained since the backup operation of the proposed NVFF is not limited by the inherent slave latch. Simulation results show that our proposed NVFF achieves performance improvement in terms of power, delay and area, compared with conventional slave-latch-driven SOT-NVFF designs.
1008.3845
Kalyana Babu Nakshatrala
S. Srinivasan and K. B. Nakshatrala
A stabilized mixed formulation for unsteady Brinkman equation based on the method of horizontal lines
null
null
null
null
cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a stabilized mixed formulation for the unsteady Brinkman equation. The formulation is systematically derived based on the variational multiscale formalism and the method of horizontal lines. The derivation does not need the assumption that the fine-scale variables do not depend on time, which is the case with the conventional derivation of multiscale stabilized formulations for transient mixed problems. An expression for the stabilization parameter is obtained in terms of a bubble function, and appropriate bubble functions for various finite elements are also presented. Under the proposed formulation, equal-order interpolation for the velocity and pressure (which is computationally the most convenient) is stable. Representative numerical results are presented to illustrate the performance of the proposed formulation. Spatial and temporal convergence studies are also performed, and the proposed formulation performs well in both.
[ { "created": "Thu, 15 Jul 2010 21:47:04 GMT", "version": "v1" }, { "created": "Tue, 30 Nov 2010 06:18:59 GMT", "version": "v2" } ]
2010-12-01
[ [ "Srinivasan", "S.", "" ], [ "Nakshatrala", "K. B.", "" ] ]
In this paper, we present a stabilized mixed formulation for the unsteady Brinkman equation. The formulation is systematically derived based on the variational multiscale formalism and the method of horizontal lines. The derivation does not need the assumption that the fine-scale variables do not depend on time, which is the case with the conventional derivation of multiscale stabilized formulations for transient mixed problems. An expression for the stabilization parameter is obtained in terms of a bubble function, and appropriate bubble functions for various finite elements are also presented. Under the proposed formulation, equal-order interpolation for the velocity and pressure (which is computationally the most convenient) is stable. Representative numerical results are presented to illustrate the performance of the proposed formulation. Spatial and temporal convergence studies are also performed, and the proposed formulation performs well in both.
2206.08918
Nikos Zarifis
Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
Learning a Single Neuron with Adversarial Label Noise via Gradient Descent
null
null
null
null
cs.LG cs.DS math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the fundamental problem of learning a single neuron, i.e., a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations $\sigma:\mathbb{R}\to\mathbb{R}$, with respect to the $L_2^2$-loss in the presence of adversarial label noise. Specifically, we are given labeled examples from a distribution $D$ on $(\mathbf{x}, y)\in\mathbb{R}^d \times \mathbb{R}$ such that there exists $\mathbf{w}^\ast\in\mathbb{R}^d$ achieving $F(\mathbf{w}^\ast)=\epsilon$, where $F(\mathbf{w})=\mathbf{E}_{(\mathbf{x},y)\sim D}[(\sigma(\mathbf{w}\cdot \mathbf{x})-y)^2]$. The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w})=C\, \epsilon$ with high probability, where $C>1$ is a universal constant. As our main contribution, we give efficient constant-factor approximate learners for a broad class of distributions (including log-concave distributions) and activation functions. Concretely, for the class of isotropic log-concave distributions, we obtain the following important corollaries: For the logistic activation, we obtain the first polynomial-time constant factor approximation (even under the Gaussian distribution). Our algorithm has sample complexity $\widetilde{O}(d/\epsilon)$, which is tight within polylogarithmic factors. For the ReLU activation, we give an efficient algorithm with sample complexity $\tilde{O}(d\, \mathrm{polylog}(1/\epsilon))$. Prior to our work, the best known constant-factor approximate learner had sample complexity $\tilde{\Omega}(d/\epsilon)$. In both of these settings, our algorithms are simple, performing gradient-descent on the (regularized) $L_2^2$-loss. The correctness of our algorithms relies on novel structural results that we establish, showing that (essentially all) stationary points of the underlying non-convex loss are approximately optimal.
[ { "created": "Fri, 17 Jun 2022 17:55:43 GMT", "version": "v1" } ]
2022-06-20
[ [ "Diakonikolas", "Ilias", "" ], [ "Kontonis", "Vasilis", "" ], [ "Tzamos", "Christos", "" ], [ "Zarifis", "Nikos", "" ] ]
We study the fundamental problem of learning a single neuron, i.e., a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations $\sigma:\mathbb{R}\to\mathbb{R}$, with respect to the $L_2^2$-loss in the presence of adversarial label noise. Specifically, we are given labeled examples from a distribution $D$ on $(\mathbf{x}, y)\in\mathbb{R}^d \times \mathbb{R}$ such that there exists $\mathbf{w}^\ast\in\mathbb{R}^d$ achieving $F(\mathbf{w}^\ast)=\epsilon$, where $F(\mathbf{w})=\mathbf{E}_{(\mathbf{x},y)\sim D}[(\sigma(\mathbf{w}\cdot \mathbf{x})-y)^2]$. The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w})=C\, \epsilon$ with high probability, where $C>1$ is a universal constant. As our main contribution, we give efficient constant-factor approximate learners for a broad class of distributions (including log-concave distributions) and activation functions. Concretely, for the class of isotropic log-concave distributions, we obtain the following important corollaries: For the logistic activation, we obtain the first polynomial-time constant factor approximation (even under the Gaussian distribution). Our algorithm has sample complexity $\widetilde{O}(d/\epsilon)$, which is tight within polylogarithmic factors. For the ReLU activation, we give an efficient algorithm with sample complexity $\tilde{O}(d\, \mathrm{polylog}(1/\epsilon))$. Prior to our work, the best known constant-factor approximate learner had sample complexity $\tilde{\Omega}(d/\epsilon)$. In both of these settings, our algorithms are simple, performing gradient-descent on the (regularized) $L_2^2$-loss. The correctness of our algorithms relies on novel structural results that we establish, showing that (essentially all) stationary points of the underlying non-convex loss are approximately optimal.
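The setting can be sketched numerically. This is not the paper's algorithm or guarantees, just plain gradient descent on the empirical $L_2^2$ loss for a ReLU neuron under label corruption; the 5% corruption model, the Gaussian marginals, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))              # isotropic Gaussian examples
y = np.maximum(X @ w_star, 0.0)          # clean ReLU-neuron labels
flip = rng.random(n) < 0.05              # corrupt 5% of labels (adversarial-style noise)
y = y + flip * rng.normal(scale=5.0, size=n)

def loss(w):
    # empirical L2^2 loss of the ReLU neuron parameterized by w
    return np.mean((np.maximum(X @ w, 0.0) - y) ** 2)

w = 0.1 * rng.normal(size=d)             # small random init (avoids the dead point w = 0)
loss0 = loss(w)
for _ in range(500):
    z = X @ w
    grad = 2.0 * X.T @ ((np.maximum(z, 0.0) - y) * (z > 0)) / n
    w = w - 0.05 * grad
```

After training, the loss approaches the noise floor set by the corrupted labels, consistent with the paper's message that stationary points of this non-convex loss tend to be approximately optimal.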
1906.05015
Ming Zhu
Ming Zhu, Xiao-Yang Liu, and Anwar Walid
Deep Reinforcement Learning for Unmanned Aerial Vehicle-Assisted Vehicular Networks
28 pages
null
null
null
cs.LG cs.AI cs.RO cs.SY eess.SY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned aerial vehicles (UAVs) are envisioned to complement the 5G communication infrastructure in future smart cities. Hot spots easily appear in road intersections, where effective communication among vehicles is challenging. UAVs may serve as relays with the advantages of low price, easy deployment, line-of-sight links, and flexible mobility. In this paper, we study a UAV-assisted vehicular network where the UAV jointly adjusts its transmission control (power and channel) and 3D flight to maximize the total throughput. First, we formulate a Markov decision process (MDP) problem by modeling the mobility of the UAV/vehicles and the state transitions. Second, we solve the target problem using a deep reinforcement learning method, namely, the deep deterministic policy gradient (DDPG), and propose three solutions with different control objectives. Deep reinforcement learning methods obtain the optimal policy through interactions with the environment without knowing the environment variables. Considering that the environment variables in our problem are unknown and unmeasurable, we choose a deep reinforcement learning method to solve it. Moreover, considering the energy consumption of 3D flight, we extend the proposed solutions to maximize the total throughput per unit energy. To encourage or discourage the UAV's mobility according to its prediction, the DDPG framework is modified so that the UAV adjusts its learning rate automatically. Third, in a simplified model with a small state space and action space, we verify the optimality of the proposed algorithms. Compared with two baseline schemes, we demonstrate the effectiveness of the proposed algorithms in a realistic model.
[ { "created": "Wed, 12 Jun 2019 09:12:50 GMT", "version": "v1" }, { "created": "Sun, 23 Jun 2019 08:58:02 GMT", "version": "v2" }, { "created": "Mon, 1 Jul 2019 12:25:15 GMT", "version": "v3" }, { "created": "Mon, 8 Jul 2019 16:00:25 GMT", "version": "v4" }, { "created": "Tue, 9 Jul 2019 01:47:49 GMT", "version": "v5" }, { "created": "Fri, 12 Jul 2019 13:49:35 GMT", "version": "v6" }, { "created": "Sat, 27 Jul 2019 03:50:11 GMT", "version": "v7" }, { "created": "Mon, 12 Aug 2019 10:50:40 GMT", "version": "v8" }, { "created": "Tue, 3 Sep 2019 14:29:54 GMT", "version": "v9" }, { "created": "Wed, 4 Mar 2020 02:18:20 GMT", "version": "v10" }, { "created": "Sat, 11 Feb 2023 10:54:11 GMT", "version": "v11" }, { "created": "Tue, 14 Feb 2023 11:53:43 GMT", "version": "v12" } ]
2023-02-22
[ [ "Zhu", "Ming", "" ], [ "Liu", "Xiao-Yang", "" ], [ "Walid", "Anwar", "" ] ]
Unmanned aerial vehicles (UAVs) are envisioned to complement the 5G communication infrastructure in future smart cities. Hot spots easily appear in road intersections, where effective communication among vehicles is challenging. UAVs may serve as relays with the advantages of low price, easy deployment, line-of-sight links, and flexible mobility. In this paper, we study a UAV-assisted vehicular network where the UAV jointly adjusts its transmission control (power and channel) and 3D flight to maximize the total throughput. First, we formulate a Markov decision process (MDP) problem by modeling the mobility of the UAV/vehicles and the state transitions. Secondly, we solve the target problem using a deep reinforcement learning method, namely, the deep deterministic policy gradient (DDPG), and propose three solutions with different control objectives. Deep reinforcement learning methods obtain the optimal policy through the interactions with the environment without knowing the environment variables. Considering that environment variables in our problem are unknown and unmeasurable, we choose a deep reinforcement learning method to solve it. Moreover, considering the energy consumption of 3D flight, we extend the proposed solutions to maximize the total throughput per unit energy. To encourage or discourage the UAV's mobility according to its prediction, the DDPG framework is modified, where the UAV adjusts its learning rate automatically. Thirdly, in a simplified model with small state space and action space, we verify the optimality of proposed algorithms. Comparing with two baseline schemes, we demonstrate the effectiveness of proposed algorithms in a realistic model.
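The paper solves its control problem with DDPG; the MDP layer it builds on can be illustrated with a deliberately tiny stand-in. Everything here is a hypothetical toy — a 1-D line of UAV positions with a made-up throughput model — solved exactly by value iteration rather than by the paper's deep RL method, just to show the state/action/reward structure an MDP formulation entails.

```python
import numpy as np

# Toy stand-in: UAV hovers at one of 5 positions on a line; the traffic
# hotspot (road intersection) is at position 2. Reward = throughput,
# modeled (hypothetically) as decaying with distance to the hotspot.
positions = np.arange(5)
hotspot = 2
actions = [-1, 0, +1]                  # fly left / hover / fly right
gamma = 0.9

def throughput(pos):
    return 1.0 / (1.0 + abs(pos - hotspot))

def step(pos, a):                      # deterministic state transition
    return int(np.clip(pos + a, 0, 4))

# Value iteration on the finite MDP.
V = np.zeros(5)
for _ in range(200):
    V = np.array([max(throughput(step(p, a)) + gamma * V[step(p, a)]
                      for a in actions) for p in positions])

# Greedy policy: the UAV steers toward the hotspot and hovers there.
policy = [actions[int(np.argmax([throughput(step(p, a)) + gamma * V[step(p, a)]
                                 for a in actions]))] for p in positions]
```

At the fixed point, hovering over the hotspot yields $V = 1/(1-\gamma) = 10$; the real paper replaces this enumerable state space with continuous 3-D flight and transmission control, which is what motivates DDPG.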
1912.06119
Caglar Tunc
Caglar Tunc and Shivendra Panwar
Optimal Transmission Policies for Energy Harvesting Age of Information Systems with Battery Recovery
Submitted for publication
null
null
null
cs.IT cs.NI eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider an energy harvesting information update system where a sensor is allowed to choose a transmission mode for each transmission, where each mode consists of a transmission power-error pair. We also incorporate the battery phenomenon called battery recovery effect where a battery replenishes the deliverable energy if kept idle after discharge. For an energy-limited age of information (AoI) system, this phenomenon gives rise to the interesting trade-off of recovering energy after transmissions, at the cost of increased AoI. Considering two metrics, namely peak-age hitting probability and average age as the worst-case and average performance indicators, respectively, we propose a framework that formulates the optimal transmission scheme selection problem as a Markov Decision Process (MDP). We show that the gains obtained by considering both battery dynamics and adjustable transmission power together are much higher than the sum gain achieved if they are considered separately. We also propose a simple methodology to optimize the system performance taking into account worst-case and average performances jointly.
[ { "created": "Thu, 12 Dec 2019 18:34:17 GMT", "version": "v1" } ]
2019-12-13
[ [ "Tunc", "Caglar", "" ], [ "Panwar", "Shivendra", "" ] ]
We consider an energy harvesting information update system where a sensor is allowed to choose a transmission mode for each transmission, where each mode consists of a transmission power-error pair. We also incorporate the battery phenomenon called battery recovery effect where a battery replenishes the deliverable energy if kept idle after discharge. For an energy-limited age of information (AoI) system, this phenomenon gives rise to the interesting trade-off of recovering energy after transmissions, at the cost of increased AoI. Considering two metrics, namely peak-age hitting probability and average age as the worst-case and average performance indicators, respectively, we propose a framework that formulates the optimal transmission scheme selection problem as a Markov Decision Process (MDP). We show that the gains obtained by considering both battery dynamics and adjustable transmission power together are much higher than the sum gain achieved if they are considered separately. We also propose a simple methodology to optimize the system performance taking into account worst-case and average performances jointly.
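The trade-off at the heart of this abstract — recovering battery energy after a transmission at the cost of a higher age of information — can be sketched with a small Monte Carlo simulation. The success probabilities and recovery length below are hypothetical, not values from the paper, and the simulation compares two fixed policies rather than solving the MDP.

```python
import random

random.seed(1)
SLOTS = 200_000

def avg_age(p_success, recover_slots):
    """Time-average AoI when every transmission discharges the battery
    and the sensor must then idle `recover_slots` slots to recover."""
    age, total, idle = 1, 0, 0
    for _ in range(SLOTS):
        total += age
        if idle > 0:                   # battery recovering: cannot transmit
            idle -= 1
            age += 1
        else:                          # transmit; success resets the age
            age = 1 if random.random() < p_success else age + 1
            idle = recover_slots
    return total / SLOTS

# Aggressive mode: transmit every slot at low power (high error rate).
aggressive = avg_age(0.5, 0)
# Conservative mode: higher power, fewer errors, one recovery slot.
conservative = avg_age(0.9, 1)
```

Even though the conservative mode transmits only every other slot, its lower error rate gives a smaller average age on these toy numbers — the kind of interaction between battery dynamics and transmission power that the paper's MDP framework optimizes over.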
cs/0007038
Konstantinos Georgatos
Konstantinos Georgatos
Modal Logics for Topological Spaces
25 pages, extended abstract of PhD dissertation
null
null
null
cs.LO cs.AI
null
In this thesis we shall present two logical systems, MP and MP, for the purpose of reasoning about knowledge and effort. These logical systems will be interpreted in a spatial context and therefore, the abstract concepts of knowledge and effort will be defined by concrete mathematical concepts.
[ { "created": "Wed, 26 Jul 2000 18:41:17 GMT", "version": "v1" } ]
2007-05-23
[ [ "Georgatos", "Konstantinos", "" ] ]
In this thesis we shall present two logical systems, MP and MP, for the purpose of reasoning about knowledge and effort. These logical systems will be interpreted in a spatial context and therefore, the abstract concepts of knowledge and effort will be defined by concrete mathematical concepts.
2202.10450
Ignacio Carlucho
Reuth Mirsky and Ignacio Carlucho and Arrasy Rahman and Elliot Fosong and William Macke and Mohan Sridharan and Peter Stone and Stefano V. Albrecht
A Survey of Ad Hoc Teamwork Research
European Conference on Multi-Agent Systems (EUMAS), 2022
null
null
null
cs.MA cs.AI
http://creativecommons.org/licenses/by/4.0/
Ad hoc teamwork is the research problem of designing agents that can collaborate with new teammates without prior coordination. This survey makes a two-fold contribution: First, it provides a structured description of the different facets of the ad hoc teamwork problem. Second, it discusses the progress that has been made in the field so far, and identifies the immediate and long-term open problems that need to be addressed in ad hoc teamwork.
[ { "created": "Wed, 16 Feb 2022 18:16:27 GMT", "version": "v1" }, { "created": "Mon, 15 Aug 2022 11:09:23 GMT", "version": "v2" }, { "created": "Tue, 16 Aug 2022 16:40:01 GMT", "version": "v3" } ]
2022-08-17
[ [ "Mirsky", "Reuth", "" ], [ "Carlucho", "Ignacio", "" ], [ "Rahman", "Arrasy", "" ], [ "Fosong", "Elliot", "" ], [ "Macke", "William", "" ], [ "Sridharan", "Mohan", "" ], [ "Stone", "Peter", "" ], [ "Albrecht", "Stefano V.", "" ] ]
Ad hoc teamwork is the research problem of designing agents that can collaborate with new teammates without prior coordination. This survey makes a two-fold contribution: First, it provides a structured description of the different facets of the ad hoc teamwork problem. Second, it discusses the progress that has been made in the field so far, and identifies the immediate and long-term open problems that need to be addressed in ad hoc teamwork.
2210.04883
Nicolas Dufour
Nicolas Dufour, David Picard, Vicky Kalogeiton
SCAM! Transferring humans between images with Semantic Cross Attention Modulation
Accepted at ECCV 2022
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A large body of recent work targets semantically conditioned image generation. Most such methods focus on the narrower task of pose transfer and ignore the more challenging task of subject transfer that consists in not only transferring the pose but also the appearance and background. In this work, we introduce SCAM (Semantic Cross Attention Modulation), a system that encodes rich and diverse information in each semantic region of the image (including foreground and background), thus achieving precise generation with emphasis on fine details. This is enabled by the Semantic Attention Transformer Encoder that extracts multiple latent vectors for each semantic region, and the corresponding generator that exploits these multiple latents by using semantic cross attention modulation. It is trained only using a reconstruction setup, while subject transfer is performed at test time. Our analysis shows that our proposed architecture is successful at encoding the diversity of appearance in each semantic region. Extensive experiments on the iDesigner and CelebAMask-HD datasets show that SCAM outperforms SEAN and SPADE; moreover, it sets the new state of the art on subject transfer.
[ { "created": "Mon, 10 Oct 2022 17:54:47 GMT", "version": "v1" } ]
2022-10-11
[ [ "Dufour", "Nicolas", "" ], [ "Picard", "David", "" ], [ "Kalogeiton", "Vicky", "" ] ]
A large body of recent work targets semantically conditioned image generation. Most such methods focus on the narrower task of pose transfer and ignore the more challenging task of subject transfer that consists in not only transferring the pose but also the appearance and background. In this work, we introduce SCAM (Semantic Cross Attention Modulation), a system that encodes rich and diverse information in each semantic region of the image (including foreground and background), thus achieving precise generation with emphasis on fine details. This is enabled by the Semantic Attention Transformer Encoder that extracts multiple latent vectors for each semantic region, and the corresponding generator that exploits these multiple latents by using semantic cross attention modulation. It is trained only using a reconstruction setup, while subject transfer is performed at test time. Our analysis shows that our proposed architecture is successful at encoding the diversity of appearance in each semantic region. Extensive experiments on the iDesigner and CelebAMask-HD datasets show that SCAM outperforms SEAN and SPADE; moreover, it sets the new state of the art on subject transfer.
2407.10114
Roni Goldshmidt
Roni Goldshmidt, Miriam Horovicz
TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
As large language models (LLMs) become increasingly prevalent in critical applications, the need for interpretable AI has grown. We introduce TokenSHAP, a novel method for interpreting LLMs by attributing importance to individual tokens or substrings within input prompts. This approach adapts Shapley values from cooperative game theory to natural language processing, offering a rigorous framework for understanding how different parts of an input contribute to a model's response. TokenSHAP leverages Monte Carlo sampling for computational efficiency, providing interpretable, quantitative measures of token importance. We demonstrate its efficacy across diverse prompts and LLM architectures, showing consistent improvements over existing baselines in alignment with human judgments, faithfulness to model behavior, and consistency. Our method's ability to capture nuanced interactions between tokens provides valuable insights into LLM behavior, enhancing model transparency, improving prompt engineering, and aiding in the development of more reliable AI systems. TokenSHAP represents a significant step towards the necessary interpretability for responsible AI deployment, contributing to the broader goal of creating more transparent, accountable, and trustworthy AI systems.
[ { "created": "Sun, 14 Jul 2024 08:07:50 GMT", "version": "v1" }, { "created": "Mon, 22 Jul 2024 08:59:07 GMT", "version": "v2" } ]
2024-07-23
[ [ "Goldshmidt", "Roni", "" ], [ "Horovicz", "Miriam", "" ] ]
As large language models (LLMs) become increasingly prevalent in critical applications, the need for interpretable AI has grown. We introduce TokenSHAP, a novel method for interpreting LLMs by attributing importance to individual tokens or substrings within input prompts. This approach adapts Shapley values from cooperative game theory to natural language processing, offering a rigorous framework for understanding how different parts of an input contribute to a model's response. TokenSHAP leverages Monte Carlo sampling for computational efficiency, providing interpretable, quantitative measures of token importance. We demonstrate its efficacy across diverse prompts and LLM architectures, showing consistent improvements over existing baselines in alignment with human judgments, faithfulness to model behavior, and consistency. Our method's ability to capture nuanced interactions between tokens provides valuable insights into LLM behavior, enhancing model transparency, improving prompt engineering, and aiding in the development of more reliable AI systems. TokenSHAP represents a significant step towards the necessary interpretability for responsible AI deployment, contributing to the broader goal of creating more transparent, accountable, and trustworthy AI systems.
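The core estimator the abstract describes — Monte Carlo permutation sampling of Shapley values over input tokens — can be sketched in a few lines. The value function below is a hypothetical toy sentiment scorer standing in for an actual LLM response metric; it is not TokenSHAP's implementation.

```python
import random

random.seed(0)
tokens = ["The", "movie", "was", "not", "good"]

def value(subset):
    """Hypothetical stand-in for an LLM response score: 'good' scores
    +1, but 'not' flips its sign; other tokens are irrelevant."""
    s = set(subset)
    if "good" not in s:
        return 0.0
    return -1.0 if "not" in s else 1.0

def shapley_mc(tokens, value, samples=5000):
    """Monte Carlo Shapley: average each token's marginal contribution
    over random orderings in which tokens are added to the prompt."""
    phi = {t: 0.0 for t in tokens}
    for _ in range(samples):
        perm = tokens[:]
        random.shuffle(perm)
        kept, prev = [], value([])
        for t in perm:
            kept.append(t)
            cur = value(kept)
            phi[t] += cur - prev
            prev = cur
    return {t: v / samples for t, v in phi.items()}

phi = shapley_mc(tokens, value)
```

On this toy game the estimates match the exact Shapley values: irrelevant tokens get 0, "good" averages to 0 (its effect is cancelled half the time by "not"), and "not" carries the full attribution of −1; the per-permutation contributions telescope, so the estimates always satisfy the efficiency axiom.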
1104.3469
Yoo Chung
Yoo Chung and Dongman Lee
Probabilistic Analysis of Loss in Interface Adapter Chaining
20 pages, 2 figures
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interface adapters allow applications written for one interface to be reused with another interface without having to rewrite application code, and chaining interface adapters can significantly reduce the development effort required to create the adapters. However, interface adapters will often be unable to convert interfaces perfectly, so there must be a way to analyze the loss from interface adapter chains in order to improve the quality of interface adaptation. This paper describes a probabilistic approach to analyzing loss in interface adapter chains, which not only models whether a method can be adapted but also how well methods can be adapted. We also show that probabilistic optimal adapter chaining is an NP-complete problem, so we describe a greedy algorithm which can construct an optimal interface adapter chain with exponential time in the worst case.
[ { "created": "Mon, 18 Apr 2011 13:07:00 GMT", "version": "v1" } ]
2011-04-19
[ [ "Chung", "Yoo", "" ], [ "Lee", "Dongman", "" ] ]
Interface adapters allow applications written for one interface to be reused with another interface without having to rewrite application code, and chaining interface adapters can significantly reduce the development effort required to create the adapters. However, interface adapters will often be unable to convert interfaces perfectly, so there must be a way to analyze the loss from interface adapter chains in order to improve the quality of interface adaptation. This paper describes a probabilistic approach to analyzing loss in interface adapter chains, which not only models whether a method can be adapted but also how well methods can be adapted. We also show that probabilistic optimal adapter chaining is an NP-complete problem, so we describe a greedy algorithm which can construct an optimal interface adapter chain with exponential time in the worst case.
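The probabilistic model in this abstract — each adapter carries, per method, a probability of correct adaptation, and probabilities multiply along a chain — can be sketched as below. The adapters, method names, and probabilities are hypothetical, and per-hop losses are assumed independent.

```python
from functools import reduce

# Hypothetical adapters: for each method of the target interface, the
# probability that the adapter converts it correctly
# (1.0 = lossless, 0.0 = the method cannot be adapted at all).
adapter_x = {"play": 1.0, "pause": 0.8, "seek": 0.5}
adapter_y = {"play": 0.9, "pause": 1.0, "seek": 0.0}

def chain(*adapters):
    """Compose adapters; assuming independent per-hop losses, the
    per-method adaptation probabilities multiply along the chain."""
    methods = set.intersection(*(set(a) for a in adapters))
    return {m: reduce(lambda p, a: p * a[m], adapters, 1.0) for m in methods}

def score(adapted):
    """Mean adaptation probability: one simple scalar quality measure
    a chain-construction algorithm could greedily maximize."""
    return sum(adapted.values()) / len(adapted)

c = chain(adapter_x, adapter_y)
```

Scoring candidate chains this way is what an optimal chaining algorithm must optimize over; the paper shows the optimization itself is NP-complete, which is why its greedy construction can take exponential time in the worst case.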
1607.03305
Martin Cadik
Martin Cadik and Jan Vasicek and Michal Hradis and Filip Radenovic and Ondrej Chum
Camera Elevation Estimation from a Single Mountain Landscape Photograph
null
In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 30.1-30.12. BMVA Press, September 2015
10.5244/C.29.30
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work addresses the problem of camera elevation estimation from a single photograph in an outdoor environment. We introduce a new benchmark dataset of one-hundred thousand images with annotated camera elevation called Alps100K. We propose and experimentally evaluate two automatic data-driven approaches to camera elevation estimation: one based on convolutional neural networks, the other on local features. To compare the proposed methods to human performance, an experiment with 100 subjects is conducted. The experimental results show that both proposed approaches outperform humans and that the best result is achieved by their combination.
[ { "created": "Tue, 12 Jul 2016 10:47:51 GMT", "version": "v1" } ]
2016-07-13
[ [ "Cadik", "Martin", "" ], [ "Vasicek", "Jan", "" ], [ "Hradis", "Michal", "" ], [ "Radenovic", "Filip", "" ], [ "Chum", "Ondrej", "" ] ]
This work addresses the problem of camera elevation estimation from a single photograph in an outdoor environment. We introduce a new benchmark dataset of one-hundred thousand images with annotated camera elevation called Alps100K. We propose and experimentally evaluate two automatic data-driven approaches to camera elevation estimation: one based on convolutional neural networks, the other on local features. To compare the proposed methods to human performance, an experiment with 100 subjects is conducted. The experimental results show that both proposed approaches outperform humans and that the best result is achieved by their combination.
2206.09068
Sukesh Adiga Vasudeva
Sukesh Adiga V, Jose Dolz, Herve Lombaert
Attention-based Dynamic Subspace Learners for Medical Image Analysis
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning similarity is a key aspect in medical image analysis, particularly in recommendation systems or in uncovering the interpretation of anatomical data in images. Most existing methods learn such similarities in the embedding space over image sets using a single metric learner. Images, however, have a variety of object attributes such as color, shape, or artifacts. Encoding such attributes using a single metric learner is inadequate and may fail to generalize. Instead, multiple learners could focus on separate aspects of these attributes in subspaces of an overarching embedding. This, however, implies the number of learners to be found empirically for each new dataset. This work, Dynamic Subspace Learners, proposes to dynamically exploit multiple learners by removing the need of knowing apriori the number of learners and aggregating new subspace learners during training. Furthermore, the visual interpretability of such subspace learning is enforced by integrating an attention module into our method. This integrated attention mechanism provides a visual insight of discriminative image features that contribute to the clustering of image sets and a visual explanation of the embedding features. The benefits of our attention-based dynamic subspace learners are evaluated in the application of image clustering, image retrieval, and weakly supervised segmentation. Our method achieves competitive results with the performances of multiple learners baselines and significantly outperforms the classification network in terms of clustering and retrieval scores on three different public benchmark datasets. Moreover, our attention maps offer a proxy-labels, which improves the segmentation accuracy up to 15% in Dice scores when compared to state-of-the-art interpretation techniques.
[ { "created": "Sat, 18 Jun 2022 00:44:40 GMT", "version": "v1" } ]
2022-06-22
[ [ "Adiga", "Sukesh", "V" ], [ "Dolz", "Jose", "" ], [ "Lombaert", "Herve", "" ] ]
Learning similarity is a key aspect in medical image analysis, particularly in recommendation systems or in uncovering the interpretation of anatomical data in images. Most existing methods learn such similarities in the embedding space over image sets using a single metric learner. Images, however, have a variety of object attributes such as color, shape, or artifacts. Encoding such attributes using a single metric learner is inadequate and may fail to generalize. Instead, multiple learners could focus on separate aspects of these attributes in subspaces of an overarching embedding. This, however, implies that the number of learners must be found empirically for each new dataset. This work, Dynamic Subspace Learners, proposes to dynamically exploit multiple learners by removing the need to know the number of learners a priori and by aggregating new subspace learners during training. Furthermore, the visual interpretability of such subspace learning is enforced by integrating an attention module into our method. This integrated attention mechanism provides a visual insight of discriminative image features that contribute to the clustering of image sets and a visual explanation of the embedding features. The benefits of our attention-based dynamic subspace learners are evaluated in the application of image clustering, image retrieval, and weakly supervised segmentation. Our method achieves competitive results with the performance of multiple-learner baselines and significantly outperforms the classification network in terms of clustering and retrieval scores on three different public benchmark datasets. Moreover, our attention maps offer proxy labels, which improve the segmentation accuracy by up to 15% in Dice scores when compared to state-of-the-art interpretation techniques.