Dataset schema (column name, feature type, min-max length or class count):

  id              stringlengths   9 - 10
  submitter       stringlengths   1 - 64
  authors         stringlengths   4 - 20.7k
  title           stringlengths   4 - 246
  comments        stringlengths   1 - 523
  journal-ref     stringlengths   4 - 404
  doi             stringlengths   11 - 153
  report-no       stringlengths   2 - 254
  categories      stringlengths   5 - 98
  license         stringclasses   9 values
  orig_abstract   stringlengths   14 - 3.35k
  versions        listlengths     1 - 60
  update_date     stringlengths   10 - 10
  authors_parsed  listlengths     1 - 1.35k
  abstract        stringlengths   11 - 3.34k
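The records below follow this schema. As a quick, hedged sketch of how such a dump could be loaded and inspected with the Hugging Face datasets library (the file name arxiv.jsonl is an assumption, not part of the dataset):

from datasets import load_dataset

# Load a JSON-lines dump whose rows follow the schema above.
ds = load_dataset("json", data_files="arxiv.jsonl", split="train")
row = ds[0]
print(row["id"], row["title"])
# authors_parsed holds [last, first, suffix] triples; print the last names.
print([last for last, first, suffix in row["authors_parsed"]])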
1702.04645
Benjamin Chiêm
Benjamin Chiêm, Andine Havelange, Paul Van Dooren
A parallel implementation of the Synchronised Louvain method
20 pages
null
null
null
cs.DS cs.DC cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Community detection in networks is a highly active and important field of research with applications in many areas. However, as the amount of data to be processed keeps growing, existing algorithms need to be adapted to very large graphs. The objective of this project was to parallelise the Synchronised Louvain Method, a community detection algorithm developed by Arnaud Browet, in order to improve its performance in terms of computation time and thus detect communities in very large graphs faster. To reach this goal, we used the OpenMP API to parallelise the algorithm and then carried out performance tests. We studied the computation time and speedup of the parallelised algorithm and were able to bring out some qualitative trends. We obtained a substantial speedup compared with the theoretical prediction of Amdahl's law. In conclusion, using the parallel implementation of Browet's algorithm on large graphs seems to give good results, both in terms of computation time and speedup. Further tests should be carried out in order to obtain more quantitative results.
[ { "created": "Wed, 15 Feb 2017 15:09:15 GMT", "version": "v1" } ]
2017-02-16
[ [ "Chiêm", "Benjamin", "" ], [ "Havelange", "Andine", "" ], [ "Van Dooren", "Paul", "" ] ]
Community detection in networks is a highly active and important field of research with applications in many areas. However, as the amount of data to be processed keeps growing, existing algorithms need to be adapted to very large graphs. The objective of this project was to parallelise the Synchronised Louvain Method, a community detection algorithm developed by Arnaud Browet, in order to improve its performance in terms of computation time and thus detect communities in very large graphs faster. To reach this goal, we used the OpenMP API to parallelise the algorithm and then carried out performance tests. We studied the computation time and speedup of the parallelised algorithm and were able to bring out some qualitative trends. We obtained a substantial speedup compared with the theoretical prediction of Amdahl's law. In conclusion, using the parallel implementation of Browet's algorithm on large graphs seems to give good results, both in terms of computation time and speedup. Further tests should be carried out in order to obtain more quantitative results.
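The project parallelises the Louvain node-move phase with OpenMP in C/C++; purely to illustrate the synchronised pattern (every node evaluates its best move against a frozen partition, then all moves are applied at once), here is a minimal Python sketch. The toy graph, the thread pool, and the simplified gain (link count into each neighbouring community rather than true modularity gain) are assumptions, not the authors' code.

from concurrent.futures import ThreadPoolExecutor
from collections import Counter

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
community = {v: v for v in graph}            # start from singleton communities

def best_move(v):
    # Count links from v into each neighbouring community, using a frozen
    # snapshot of the partition (this is the "synchronised" part).
    counts = Counter(community[u] for u in graph[v])
    target, _ = counts.most_common(1)[0]
    return v, target

with ThreadPoolExecutor(max_workers=4) as pool:
    moves = list(pool.map(best_move, graph))  # node moves computed in parallel

community.update(dict(moves))                # all moves applied together
print(community)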
1606.08135
Xu Zhiqiang
Bing Gao, Zhiqiang Xu
Phaseless Recovery using Gauss-Newton Method
16 pages
IEEE Trans. Signal Processing. VOL. 65, NO. 22, NOVEMBER 15, 2017
10.1109/TSP.2017.2742981
null
cs.IT math.IT math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we develop a concrete algorithm for phase retrieval, which we refer to as the Gauss-Newton algorithm. In short, this algorithm starts with a good initial estimate, obtained by a modified spectral method, and then updates the iterate via Gauss-Newton iteration steps. We prove that a re-sampled version of this algorithm converges quadratically to the solution in the real case, with the number of random measurements being nearly minimal. Numerical experiments also show that the Gauss-Newton method performs better than the other algorithms.
[ { "created": "Mon, 27 Jun 2016 06:22:45 GMT", "version": "v1" }, { "created": "Fri, 8 Jul 2016 06:40:37 GMT", "version": "v2" }, { "created": "Fri, 9 Feb 2018 01:24:31 GMT", "version": "v3" } ]
2018-02-12
[ [ "Gao", "Bing", "" ], [ "Xu", "Zhiqiang", "" ] ]
In this paper, we develop a concrete algorithm for phase retrieval, which we refer to as the Gauss-Newton algorithm. In short, this algorithm starts with a good initial estimate, obtained by a modified spectral method, and then updates the iterate via Gauss-Newton iteration steps. We prove that a re-sampled version of this algorithm converges quadratically to the solution in the real case, with the number of random measurements being nearly minimal. Numerical experiments also show that the Gauss-Newton method performs better than the other algorithms.
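To make the Gauss-Newton step concrete, here is a minimal numpy sketch for the real case (the random data, the initialisation, which merely stands in for the spectral method, and the fixed iteration count are assumptions): measurements are y_i = (a_i^T x)^2, the residual is r(x)_i = (a_i^T x)^2 - y_i, and each step solves the linearised least-squares problem with Jacobian rows 2 (a_i^T x) a_i^T.

import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 120                               # signal dimension, measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2                        # phaseless (squared) measurements

x = x_true + 0.3 * rng.standard_normal(n)    # stand-in for the spectral init
for _ in range(10):
    z = A @ x
    r = z ** 2 - y                           # residual
    J = 2 * z[:, None] * A                   # Gauss-Newton Jacobian
    d, *_ = np.linalg.lstsq(J, -r, rcond=None)
    x = x + d

# recovery is only defined up to a global sign
print(min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)))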
2312.16183
Luca Pantea
Milena Kapralova, Luca Pantea and Andrei Blahovici
LightGCN: Evaluated and Enhanced
Accepted at NeurIPS'23 Workshop on New in ML; 3 pages, 2 figures, 3 tables
null
null
null
cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper analyses LightGCN in the context of graph recommendation algorithms. Although Graph Convolutional Networks were initially designed for graph classification, their non-linear operations are not always essential. LightGCN enables linear propagation of embeddings, enhancing performance. We reproduce the original findings, assess LightGCN's robustness on diverse datasets and metrics, and explore Graph Diffusion as an augmentation of signal propagation in LightGCN.
[ { "created": "Sun, 17 Dec 2023 15:18:18 GMT", "version": "v1" } ]
2023-12-29
[ [ "Kapralova", "Milena", "" ], [ "Pantea", "Luca", "" ], [ "Blahovici", "Andrei", "" ] ]
This paper analyses LightGCN in the context of graph recommendation algorithms. Although Graph Convolutional Networks were initially designed for graph classification, their non-linear operations are not always essential. LightGCN enables linear propagation of embeddings, enhancing performance. We reproduce the original findings, assess LightGCN's robustness on diverse datasets and metrics, and explore Graph Diffusion as an augmentation of signal propagation in LightGCN.
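For reference, LightGCN's propagation rule is purely linear: embeddings are repeatedly multiplied by the symmetrically normalised adjacency, with no non-linearity or feature transform, and the per-layer outputs are combined (averaged here). A minimal numpy sketch with an assumed toy graph and sizes:

import numpy as np

A = np.array([[0, 1, 1, 0],                  # toy interaction graph
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))         # D^{-1/2} A D^{-1/2}

E = np.random.default_rng(0).standard_normal((4, 8))   # layer-0 embeddings
layers = [E]
for _ in range(3):                           # K = 3 propagation layers
    layers.append(A_norm @ layers[-1])       # linear propagation only
final = np.mean(layers, axis=0)              # combine layers by averaging
print(final.shape)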
1805.04157
Stephen Bonner
Nik Khadijah Nik Aznan, Stephen Bonner, Jason D. Connolly, Noura Al Moubayed, Toby P. Breckon
On the Classification of SSVEP-Based Dry-EEG Signals via Convolutional Neural Networks
Accepted as a full paper at the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018)
null
10.1109/SMC.2018.00631
null
cs.HC eess.SP q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel Convolutional Neural Network (CNN) approach for the classification of raw dry-EEG signals without any data pre-processing. To illustrate the effectiveness of our approach, we utilise the Steady State Visual Evoked Potential (SSVEP) paradigm as our use case. SSVEP can be utilised to allow people with severe physical disabilities such as Complete Locked-In Syndrome or Amyotrophic Lateral Sclerosis to be aided via BCI applications, as it requires only that the subject fixate upon the sensory stimuli of interest. Here we utilise SSVEP flicker frequencies between 10 and 30 Hz, which we record as subject cortical waveforms via the dry-EEG headset. Our proposed end-to-end CNN allows us to automatically and accurately classify SSVEP stimulation directly from the dry-EEG waveforms. Our CNN architecture utilises a common SSVEP Convolutional Unit (SCU), comprising a 1D convolutional layer, batch normalization and max pooling. Furthermore, we compare several deep learning neural network variants with our primary CNN architecture, in addition to traditional machine learning classification approaches. Experimental evaluation shows our CNN architecture to be significantly better than competing approaches, achieving a classification accuracy of 96% whilst demonstrating superior cross-subject performance and even being able to generalise well to unseen subjects whose data is entirely absent from the training process.
[ { "created": "Thu, 10 May 2018 20:10:02 GMT", "version": "v1" }, { "created": "Thu, 2 Aug 2018 09:47:21 GMT", "version": "v2" } ]
2019-01-23
[ [ "Aznan", "Nik Khadijah Nik", "" ], [ "Bonner", "Stephen", "" ], [ "Connolly", "Jason D.", "" ], [ "Moubayed", "Noura Al", "" ], [ "Breckon", "Toby P.", "" ] ]
In this paper, we propose a novel Convolutional Neural Network (CNN) approach for the classification of raw dry-EEG signals without any data pre-processing. To illustrate the effectiveness of our approach, we utilise the Steady State Visual Evoked Potential (SSVEP) paradigm as our use case. SSVEP can be utilised to allow people with severe physical disabilities such as Complete Locked-In Syndrome or Amyotrophic Lateral Sclerosis to be aided via BCI applications, as it requires only that the subject fixate upon the sensory stimuli of interest. Here we utilise SSVEP flicker frequencies between 10 and 30 Hz, which we record as subject cortical waveforms via the dry-EEG headset. Our proposed end-to-end CNN allows us to automatically and accurately classify SSVEP stimulation directly from the dry-EEG waveforms. Our CNN architecture utilises a common SSVEP Convolutional Unit (SCU), comprising a 1D convolutional layer, batch normalization and max pooling. Furthermore, we compare several deep learning neural network variants with our primary CNN architecture, in addition to traditional machine learning classification approaches. Experimental evaluation shows our CNN architecture to be significantly better than competing approaches, achieving a classification accuracy of 96% whilst demonstrating superior cross-subject performance and even being able to generalise well to unseen subjects whose data is entirely absent from the training process.
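A rough PyTorch sketch of the SCU block as described (a 1D convolution, batch normalisation, and max pooling); the channel counts, kernel size, and pooling width below are illustrative guesses, not the paper's hyperparameters:

import torch
import torch.nn as nn

class SCU(nn.Module):
    # SSVEP Convolutional Unit: Conv1d -> BatchNorm1d -> MaxPool1d.
    def __init__(self, in_ch=16, out_ch=32, kernel=11, pool=2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=kernel, padding=kernel // 2),
            nn.BatchNorm1d(out_ch),
            nn.MaxPool1d(pool),
        )

    def forward(self, x):                    # x: (batch, channels, time)
        return self.block(x)

x = torch.randn(4, 16, 512)                  # 4 trials, 16 electrodes, 512 samples
print(SCU()(x).shape)                        # torch.Size([4, 32, 256])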
2111.10663
Xingqin Lin
Xingqin Lin, Mingzhe Chen, Henrik Rydén, Jaeseong Jeong, Heunchul Lee, Mårten Sundberg, Roy Timo, Hazhir S. Razaghi, H. Vincent Poor
Fueling the Next Quantum Leap in Cellular Networks: Embracing AI in 5G Evolution towards 6G
7 pages, 5 figures, 1 table; submitted for publication
null
null
null
cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cellular networks, such as 5G systems, are becoming increasingly complex in order to support various deployment scenarios and applications. Embracing artificial intelligence (AI) in 5G evolution is critical to managing the complexity and fueling the next quantum leap in 6G cellular networks. In this article, we share our experience and best practices in applying AI in cellular networks. We first present a primer on the state of the art of AI in cellular networks, including basic concepts and recent key advances. Then we discuss 3GPP standardization aspects and share various design rationales influencing standardization. We also present case studies with real network data to showcase how AI can improve network performance and enable network automation.
[ { "created": "Sat, 20 Nov 2021 19:16:58 GMT", "version": "v1" } ]
2021-11-23
[ [ "Lin", "Xingqin", "" ], [ "Chen", "Mingzhe", "" ], [ "Rydén", "Henrik", "" ], [ "Jeong", "Jaeseong", "" ], [ "Lee", "Heunchul", "" ], [ "Sundberg", "Mårten", "" ], [ "Timo", "Roy", "" ], [ "Razaghi", "Hazhir S.", "" ], [ "Poor", "H. Vincent", "" ] ]
Cellular networks, such as 5G systems, are becoming increasingly complex in order to support various deployment scenarios and applications. Embracing artificial intelligence (AI) in 5G evolution is critical to managing the complexity and fueling the next quantum leap in 6G cellular networks. In this article, we share our experience and best practices in applying AI in cellular networks. We first present a primer on the state of the art of AI in cellular networks, including basic concepts and recent key advances. Then we discuss 3GPP standardization aspects and share various design rationales influencing standardization. We also present case studies with real network data to showcase how AI can improve network performance and enable network automation.
2212.05410
Junwei Su
Junwei Su
ABC: Aggregation before Communication, a Communication Reduction Framework for Distributed Graph Neural Network Training and Effective Partition
null
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
Graph Neural Networks (GNNs) are a family of neural models tailored for graph-structured data that have shown superior performance in learning representations for such data. However, training GNNs on large graphs remains challenging, and a promising direction is distributed GNN training, which partitions the input graph and distributes the workload across multiple machines. The key bottleneck of existing distributed GNN training frameworks is the cross-machine communication induced by the dependency on the graph data and the aggregation operator of GNNs. In this paper, we study the communication complexity of distributed GNN training and propose a simple lossless communication reduction method, termed the Aggregation before Communication (ABC) method. The ABC method exploits the permutation-invariant property of GNN layers and leads to a paradigm in which vertex-cut partitioning is proved to admit superior communication performance to the currently popular paradigm (edge-cut). In addition, we show that the new partition paradigm is particularly well suited to dynamic graphs, where it is infeasible to control edge placement due to the unknown stochastic nature of the graph-changing process.
[ { "created": "Sun, 11 Dec 2022 04:54:01 GMT", "version": "v1" } ]
2022-12-13
[ [ "Su", "Junwei", "" ] ]
Graph Neural Networks (GNNs) are a family of neural models tailored for graph-structured data that have shown superior performance in learning representations for such data. However, training GNNs on large graphs remains challenging, and a promising direction is distributed GNN training, which partitions the input graph and distributes the workload across multiple machines. The key bottleneck of existing distributed GNN training frameworks is the cross-machine communication induced by the dependency on the graph data and the aggregation operator of GNNs. In this paper, we study the communication complexity of distributed GNN training and propose a simple lossless communication reduction method, termed the Aggregation before Communication (ABC) method. The ABC method exploits the permutation-invariant property of GNN layers and leads to a paradigm in which vertex-cut partitioning is proved to admit superior communication performance to the currently popular paradigm (edge-cut). In addition, we show that the new partition paradigm is particularly well suited to dynamic graphs, where it is infeasible to control edge placement due to the unknown stochastic nature of the graph-changing process.
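The core trick is small enough to show in a few lines. Because sum aggregation is permutation-invariant, a machine can pre-aggregate all of its local neighbours of a remote vertex into one partial sum and ship that single vector, instead of shipping every neighbour feature. A minimal numpy sketch with an assumed toy partition and features:

import numpy as np

feat = {v: np.ones(4) * v for v in range(6)}    # toy vertex features
# Vertices 0, 1, 2 live on machine A; vertex 5 lives on machine B and has
# neighbours 0, 1, 2 on machine A.
neighbours_of_5_on_a = [0, 1, 2]

# Naive plan: send three feature vectors across the machine boundary.
# ABC plan: send ONE locally pre-aggregated partial sum (lossless for
# permutation-invariant aggregators such as sum).
partial = sum(feat[v] for v in neighbours_of_5_on_a)
print(partial)                                   # machine B adds its own partials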
1606.03639
Mohamed Mohamed M.Sc.
Mohamed Seif, Tamer Elbatt, and Karim G. Seddik
Sparse Spectrum Sensing in Infrastructure-less Cognitive Radio Networks via Binary Consensus Algorithms
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compressive Sensing has been utilized in Cognitive Radio Networks (CRNs) to exploit the sparse nature of the occupation of the primary users. In addition, distributed spectrum sensing has been proposed to tackle wireless channel problems, such as node or link failures, rather than the common (centralized) approach to spectrum sensing. In this paper, we propose a distributed spectrum sensing framework based on consensus algorithms in which secondary user (SU) nodes exchange their binary decisions to reach global decisions without a fusion center coordinating the sensing process. Each SU shares its decision with its neighbors, and at every new iteration each SU takes a new decision based on its current decision and the decisions it receives from its neighbors; in the next iteration, each SU shares this new decision with its neighbors. We show via simulations that the detection performance can approach that of majority-rule fusion-center-based CRNs.
[ { "created": "Sat, 11 Jun 2016 22:49:53 GMT", "version": "v1" } ]
2016-06-14
[ [ "Seif", "Mohamed", "" ], [ "Elbatt", "Tamer", "" ], [ "Seddik", "Karim G.", "" ] ]
Compressive Sensing has been utilized in Cognitive Radio Networks (CRNs) to exploit the sparse nature of the occupation of the primary users. In addition, distributed spectrum sensing has been proposed to tackle wireless channel problems, such as node or link failures, rather than the common (centralized) approach to spectrum sensing. In this paper, we propose a distributed spectrum sensing framework based on consensus algorithms in which secondary user (SU) nodes exchange their binary decisions to reach global decisions without a fusion center coordinating the sensing process. Each SU shares its decision with its neighbors, and at every new iteration each SU takes a new decision based on its current decision and the decisions it receives from its neighbors; in the next iteration, each SU shares this new decision with its neighbors. We show via simulations that the detection performance can approach that of majority-rule fusion-center-based CRNs.
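A minimal sketch of the consensus update just described (the topology and initial decisions are assumptions): at every iteration, each SU synchronously replaces its decision with the majority vote over its own and its neighbours' current decisions.

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
decision = {0: 1, 1: 0, 2: 1, 3: 1}          # initial local sensing decisions

for _ in range(5):
    snapshot = dict(decision)                # all SUs update synchronously
    for u, nbrs in adj.items():
        votes = [snapshot[u]] + [snapshot[v] for v in nbrs]
        decision[u] = int(sum(votes) > len(votes) / 2)

print(decision)                              # settles on the majority view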
2110.09069
Prashanth Amireddy
Prashanth Amireddy and Chetan Sai Digumarthi
Diameter constrained Steiner tree and related problems
13 pages, 4 figures
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a dynamic programming solution for finding the minimum cost of a diameter-constrained Steiner tree in the case of directed graphs. We then show a simple reduction from the undirected version to the directed version that yields an algorithm of similar complexity, i.e., FPT in the number of terminal vertices. Other natural variants of constrained Steiner trees are defined by imposing constraints on the min-degree and size of the Steiner tree, and some polynomial-time reductions among these problems are proven. To the best of our knowledge, these fairly simple reductions are not present in the literature prior to our work.
[ { "created": "Mon, 18 Oct 2021 07:31:16 GMT", "version": "v1" }, { "created": "Tue, 19 Oct 2021 07:43:04 GMT", "version": "v2" } ]
2021-10-20
[ [ "Amireddy", "Prashanth", "" ], [ "Digumarthi", "Chetan Sai", "" ] ]
We give a dynamic programming solution for finding the minimum cost of a diameter-constrained Steiner tree in the case of directed graphs. We then show a simple reduction from the undirected version to the directed version that yields an algorithm of similar complexity, i.e., FPT in the number of terminal vertices. Other natural variants of constrained Steiner trees are defined by imposing constraints on the min-degree and size of the Steiner tree, and some polynomial-time reductions among these problems are proven. To the best of our knowledge, these fairly simple reductions are not present in the literature prior to our work.
1907.06399
Vanessa Peña-Araya
Vanessa Peña-Araya, Emmanuel Pietriga, Anastasia Bezerianos
A Comparison of Visualizations for Identifying Correlation over Space and Time
Accepted for presentation at IEEE VIS 2019, to be held October 20-25 in Vancouver, Canada; will be published in a special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG)
null
10.1109/TVCG.2019.2934807
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Observing the relationship between two or more variables over space and time is essential in many domains. For instance, looking, for different countries, at the evolution of both the life expectancy at birth and the fertility rate will give an overview of their demographics. The choice of visual representation for such multivariate data is key to enabling analysts to extract patterns and trends. Prior work has compared geo-temporal visualization techniques for a single thematic variable that evolves over space and time, or for two variables at a specific point in time. But how effective visualization techniques are at communicating correlation between two variables that evolve over space and time remains to be investigated. We report on a study comparing three techniques that are representative of different strategies to visualize geo-temporal multivariate data: either juxtaposing all locations for a given time step, or juxtaposing all time steps for a given location; and encoding thematic attributes either using symbols overlaid on top of map features, or using visual channels of the map features themselves. Participants performed a series of tasks that required them to identify if two variables were correlated over time and if there was a pattern in their evolution. Tasks varied in granularity for both dimensions: time (all time steps, a subrange of steps, one step only) and space (all locations, locations in a subregion, one location only). Our results show that a visualization's effectiveness depends strongly on the task to be carried out. Based on these findings we present a set of design guidelines about geo-temporal visualization techniques for communicating correlation.
[ { "created": "Mon, 15 Jul 2019 09:49:47 GMT", "version": "v1" }, { "created": "Tue, 15 Oct 2019 13:16:26 GMT", "version": "v2" } ]
2019-10-16
[ [ "Peña-Araya", "Vanessa", "" ], [ "Pietriga", "Emmanuel", "" ], [ "Bezerianos", "Anastasia", "" ] ]
Observing the relationship between two or more variables over space and time is essential in many domains. For instance, looking, for different countries, at the evolution of both the life expectancy at birth and the fertility rate will give an overview of their demographics. The choice of visual representation for such multivariate data is key to enabling analysts to extract patterns and trends. Prior work has compared geo-temporal visualization techniques for a single thematic variable that evolves over space and time, or for two variables at a specific point in time. But how effective visualization techniques are at communicating correlation between two variables that evolve over space and time remains to be investigated. We report on a study comparing three techniques that are representative of different strategies to visualize geo-temporal multivariate data: either juxtaposing all locations for a given time step, or juxtaposing all time steps for a given location; and encoding thematic attributes either using symbols overlaid on top of map features, or using visual channels of the map features themselves. Participants performed a series of tasks that required them to identify if two variables were correlated over time and if there was a pattern in their evolution. Tasks varied in granularity for both dimensions: time (all time steps, a subrange of steps, one step only) and space (all locations, locations in a subregion, one location only). Our results show that a visualization's effectiveness depends strongly on the task to be carried out. Based on these findings we present a set of design guidelines about geo-temporal visualization techniques for communicating correlation.
2302.12422
Chen Wang
Chen Wang, Linxi Fan, Jiankai Sun, Ruohan Zhang, Li Fei-Fei, Danfei Xu, Yuke Zhu, Anima Anandkumar
MimicPlay: Long-Horizon Imitation Learning by Watching Human Play
7th Conference on Robot Learning (CoRL 2023 oral presentation)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imitation learning from human demonstrations is a promising paradigm for teaching robots manipulation skills in the real world. However, learning complex long-horizon tasks often requires an unattainable number of demonstrations. To reduce the high data requirement, we resort to human play data - video sequences of people freely interacting with the environment using their hands. Even with different morphologies, we hypothesize that human play data contain rich and salient information about physical interactions that can readily facilitate robot policy learning. Motivated by this, we introduce a hierarchical learning framework named MimicPlay that learns latent plans from human play data to guide low-level visuomotor control trained on a small number of teleoperated demonstrations. With systematic evaluations of 14 long-horizon manipulation tasks in the real world, we show that MimicPlay outperforms state-of-the-art imitation learning methods in task success rate, generalization ability, and robustness to disturbances. Code and videos are available at https://mimic-play.github.io
[ { "created": "Fri, 24 Feb 2023 02:54:15 GMT", "version": "v1" }, { "created": "Fri, 13 Oct 2023 05:44:37 GMT", "version": "v2" } ]
2023-10-16
[ [ "Wang", "Chen", "" ], [ "Fan", "Linxi", "" ], [ "Sun", "Jiankai", "" ], [ "Zhang", "Ruohan", "" ], [ "Fei-Fei", "Li", "" ], [ "Xu", "Danfei", "" ], [ "Zhu", "Yuke", "" ], [ "Anandkumar", "Anima", "" ] ]
Imitation learning from human demonstrations is a promising paradigm for teaching robots manipulation skills in the real world. However, learning complex long-horizon tasks often requires an unattainable number of demonstrations. To reduce the high data requirement, we resort to human play data - video sequences of people freely interacting with the environment using their hands. Even with different morphologies, we hypothesize that human play data contain rich and salient information about physical interactions that can readily facilitate robot policy learning. Motivated by this, we introduce a hierarchical learning framework named MimicPlay that learns latent plans from human play data to guide low-level visuomotor control trained on a small number of teleoperated demonstrations. With systematic evaluations of 14 long-horizon manipulation tasks in the real world, we show that MimicPlay outperforms state-of-the-art imitation learning methods in task success rate, generalization ability, and robustness to disturbances. Code and videos are available at https://mimic-play.github.io
2206.06934
Tyler Cody
Tyler Cody
A Layered Reference Model for Penetration Testing with Reinforcement Learning and Attack Graphs
null
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers key challenges to using reinforcement learning (RL) with attack graphs to automate penetration testing in real-world applications from a systems perspective. RL approaches to automated penetration testing are actively being developed, but there is no consensus view on the representation of computer networks with which RL should be interacting. Moreover, there are significant open challenges to how those representations can be grounded to the real networks where RL solution methods are applied. This paper elaborates on representation and grounding using the topical challenges of interacting with real networks in real time, emulating realistic adversary behavior, and handling unstable, evolving networks. These challenges are both practical and mathematical, and they directly concern the reliability and dependability of penetration testing systems. This paper proposes a layered reference model to help organize related research and engineering efforts. The presented layered reference model contrasts with traditional models of attack graph workflows because it is not scoped to a sequential, feed-forward generation and analysis process, but to broader aspects of lifecycle and continuous deployment. Researchers and practitioners can use the presented layered reference model as a first-principles outline to help orient the systems engineering of their penetration testing systems.
[ { "created": "Tue, 14 Jun 2022 15:53:33 GMT", "version": "v1" } ]
2022-06-15
[ [ "Cody", "Tyler", "" ] ]
This paper considers key challenges to using reinforcement learning (RL) with attack graphs to automate penetration testing in real-world applications from a systems perspective. RL approaches to automated penetration testing are actively being developed, but there is no consensus view on the representation of computer networks with which RL should be interacting. Moreover, there are significant open challenges to how those representations can be grounded to the real networks where RL solution methods are applied. This paper elaborates on representation and grounding using the topical challenges of interacting with real networks in real time, emulating realistic adversary behavior, and handling unstable, evolving networks. These challenges are both practical and mathematical, and they directly concern the reliability and dependability of penetration testing systems. This paper proposes a layered reference model to help organize related research and engineering efforts. The presented layered reference model contrasts with traditional models of attack graph workflows because it is not scoped to a sequential, feed-forward generation and analysis process, but to broader aspects of lifecycle and continuous deployment. Researchers and practitioners can use the presented layered reference model as a first-principles outline to help orient the systems engineering of their penetration testing systems.
1802.09808
Marian-Andrei Rizoiu
Marian-Andrei Rizoiu and Timothy Graham and Rui Zhang and Yifei Zhang and Robert Ackland and Lexing Xie
#DebateNight: The Role and Influence of Socialbots on Twitter During the 1st 2016 U.S. Presidential Debate
null
12th International AAAI Conference on Web and Social Media (ICWSM 2018)
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Serious concerns have been raised about the role of 'socialbots' in manipulating public opinion and influencing the outcome of elections by retweeting partisan content to increase its reach. Here we analyze the role and influence of socialbots on Twitter by determining how they contribute to retweet diffusions. We collect a large dataset of tweets during the 1st U.S. Presidential Debate in 2016 (#DebateNight) and we analyze its 1.5 million users from three perspectives: user influence, political behavior (partisanship and engagement) and botness. First, we define a measure of user influence based on the user's active contributions to information diffusions, i.e. their tweets and retweets. Given that Twitter does not expose the retweet structure - it associates all retweets with the original tweet - we model the latent diffusion structure using only tweet time and user features, and we implement a novel, scalable approach to estimate influence over all possible unfoldings. Next, we use partisan hashtag analysis to quantify user political polarization and engagement. Finally, we use the BotOrNot API to measure user botness (the likelihood of being a bot). We build a two-dimensional "polarization map" that allows for a nuanced analysis of the interplay between botness, partisanship and influence. We find that not only are social bots more active on Twitter - starting more retweet cascades and retweeting more - but they are also 2.5 times more influential than humans, and more politically engaged. Moreover, pro-Republican bots are both more influential and more politically engaged than their pro-Democrat counterparts. However, we caution against blanket statements that software designed to appear human dominates political debates. We find that many highly influential Twitter users are in fact pro-Democrat and that most pro-Republican users are mid-influential and likely to be human (low botness).
[ { "created": "Tue, 27 Feb 2018 10:23:46 GMT", "version": "v1" }, { "created": "Tue, 10 Apr 2018 12:36:39 GMT", "version": "v2" }, { "created": "Thu, 17 May 2018 01:35:13 GMT", "version": "v3" } ]
2018-05-18
[ [ "Rizoiu", "Marian-Andrei", "" ], [ "Graham", "Timothy", "" ], [ "Zhang", "Rui", "" ], [ "Zhang", "Yifei", "" ], [ "Ackland", "Robert", "" ], [ "Xie", "Lexing", "" ] ]
Serious concerns have been raised about the role of 'socialbots' in manipulating public opinion and influencing the outcome of elections by retweeting partisan content to increase its reach. Here we analyze the role and influence of socialbots on Twitter by determining how they contribute to retweet diffusions. We collect a large dataset of tweets during the 1st U.S. Presidential Debate in 2016 (#DebateNight) and we analyze its 1.5 million users from three perspectives: user influence, political behavior (partisanship and engagement) and botness. First, we define a measure of user influence based on the user's active contributions to information diffusions, i.e. their tweets and retweets. Given that Twitter does not expose the retweet structure - it associates all retweets with the original tweet - we model the latent diffusion structure using only tweet time and user features, and we implement a novel, scalable approach to estimate influence over all possible unfoldings. Next, we use partisan hashtag analysis to quantify user political polarization and engagement. Finally, we use the BotOrNot API to measure user botness (the likelihood of being a bot). We build a two-dimensional "polarization map" that allows for a nuanced analysis of the interplay between botness, partisanship and influence. We find that not only are social bots more active on Twitter - starting more retweet cascades and retweeting more - but they are also 2.5 times more influential than humans, and more politically engaged. Moreover, pro-Republican bots are both more influential and more politically engaged than their pro-Democrat counterparts. However, we caution against blanket statements that software designed to appear human dominates political debates. We find that many highly influential Twitter users are in fact pro-Democrat and that most pro-Republican users are mid-influential and likely to be human (low botness).
1712.05667
Thomas Franssen
Rodrigo Costas, Jeroen van Honk, Thomas Franssen
Scholars on Twitter: who and how many are they?
Conference Paper, International Conference on Scientometrics and Informetrics, 2017, Wuhan, China
null
null
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a novel methodology for identifying scholars with a Twitter account. By combining bibliometric data from Web of Science and Twitter users identified by Altmetric.com, we have obtained the largest set of individual scholars matched with Twitter users to date. Our methodology consists of a combination of matching algorithms, considering different linguistic elements of both author names and Twitter names, followed by a rule-based scoring system that weights the common occurrence of several elements related to the names, individual characteristics, and activities of both the Twitter users and the scholars matched. Our results indicate that about 2% of the overall population of scholars in the Web of Science is active on Twitter. By domain, we find a strong presence of researchers from the Social Sciences and the Humanities. Natural Sciences is the domain with the lowest share of scholars on Twitter. Researchers on Twitter also tend to be younger than those that are not on Twitter. As this is a bibliometric-based approach, it is important to highlight the reliance of the method on the number of publications produced and tweeted by the scholars; thus, the share of scholars on Twitter ranges between 1% and 5% depending on their level of productivity. Further research is suggested in order to improve and expand the methodology.
[ { "created": "Fri, 15 Dec 2017 13:42:01 GMT", "version": "v1" } ]
2017-12-18
[ [ "Costas", "Rodrigo", "" ], [ "van Honk", "Jeroen", "" ], [ "Franssen", "Thomas", "" ] ]
In this paper we present a novel methodology for identifying scholars with a Twitter account. By combining bibliometric data from Web of Science and Twitter users identified by Altmetric.com, we have obtained the largest set of individual scholars matched with Twitter users to date. Our methodology consists of a combination of matching algorithms, considering different linguistic elements of both author names and Twitter names, followed by a rule-based scoring system that weights the common occurrence of several elements related to the names, individual characteristics, and activities of both the Twitter users and the scholars matched. Our results indicate that about 2% of the overall population of scholars in the Web of Science is active on Twitter. By domain, we find a strong presence of researchers from the Social Sciences and the Humanities. Natural Sciences is the domain with the lowest share of scholars on Twitter. Researchers on Twitter also tend to be younger than those that are not on Twitter. As this is a bibliometric-based approach, it is important to highlight the reliance of the method on the number of publications produced and tweeted by the scholars; thus, the share of scholars on Twitter ranges between 1% and 5% depending on their level of productivity. Further research is suggested in order to improve and expand the methodology.
2208.09439
Soham Deshmukh
Soham Deshmukh, Charles Lee
Adapting Task-Oriented Dialogue Models for Email Conversations
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Intent detection is a key part of any Natural Language Understanding (NLU) system of a conversational assistant. Detecting the correct intent is essential yet difficult for email conversations, where multiple directives and intents are present. In such settings, conversation context can become a key disambiguating factor for detecting the user's request from the assistant. One prominent way of incorporating context is modeling past conversation history, as task-oriented dialogue models do. However, the long-form nature of email conversations restricts direct usage of the latest advances in task-oriented dialogue models. In this paper, we therefore provide an effective transfer learning framework (EMToD) that allows the latest developments in dialogue models to be adapted for long-form conversations. We show that the proposed EMToD framework improves intent detection performance over pre-trained language models by 45% and over pre-trained dialogue models by 30% for task-oriented email conversations. Additionally, the modular nature of the proposed framework allows plug-and-play for any future developments in both pre-trained language and task-oriented dialogue models.
[ { "created": "Fri, 19 Aug 2022 16:41:34 GMT", "version": "v1" } ]
2022-08-22
[ [ "Deshmukh", "Soham", "" ], [ "Lee", "Charles", "" ] ]
Intent detection is a key part of any Natural Language Understanding (NLU) system of a conversational assistant. Detecting the correct intent is essential yet difficult for email conversations, where multiple directives and intents are present. In such settings, conversation context can become a key disambiguating factor for detecting the user's request from the assistant. One prominent way of incorporating context is modeling past conversation history, as task-oriented dialogue models do. However, the long-form nature of email conversations restricts direct usage of the latest advances in task-oriented dialogue models. In this paper, we therefore provide an effective transfer learning framework (EMToD) that allows the latest developments in dialogue models to be adapted for long-form conversations. We show that the proposed EMToD framework improves intent detection performance over pre-trained language models by 45% and over pre-trained dialogue models by 30% for task-oriented email conversations. Additionally, the modular nature of the proposed framework allows plug-and-play for any future developments in both pre-trained language and task-oriented dialogue models.
1409.5557
Eric Tramel
Eric W. Tramel and Santhosh Kumar and Andrei Giurgiu and Andrea Montanari
Statistical Estimation: From Denoising to Sparse Regression and Hidden Cliques
Chapter of "Statistical Physics, Optimization, Inference, and Message-Passing Algorithms", Eds.: F. Krzakala, F. Ricci-Tersenghi, L. Zdeborova, R. Zecchina, E. W. Tramel, L. F. Cugliandolo (Oxford University Press, to appear)
null
null
null
cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
These notes review six lectures given by Prof. Andrea Montanari on the topic of statistical estimation for linear models. The first two lectures cover the principles of signal recovery from linear measurements in terms of minimax risk. Subsequent lectures demonstrate the application of these principles to several practical problems in science and engineering. Specifically, these topics include denoising of error-laden signals, recovery of compressively sensed signals, reconstruction of low-rank matrices, and also the discovery of hidden cliques within large networks.
[ { "created": "Fri, 19 Sep 2014 09:00:20 GMT", "version": "v1" } ]
2014-09-22
[ [ "Tramel", "Eric W.", "" ], [ "Kumar", "Santhosh", "" ], [ "Giurgiu", "Andrei", "" ], [ "Montanari", "Andrea", "" ] ]
These notes review six lectures given by Prof. Andrea Montanari on the topic of statistical estimation for linear models. The first two lectures cover the principles of signal recovery from linear measurements in terms of minimax risk. Subsequent lectures demonstrate the application of these principles to several practical problems in science and engineering. Specifically, these topics include denoising of error-laden signals, recovery of compressively sensed signals, reconstruction of low-rank matrices, and also the discovery of hidden cliques within large networks.
1812.06799
Warit Sirichotedumrong
Warit Sirichotedumrong, Tatsuya Chuman, Hitoshi Kiya
Grayscale-Based Image Encryption Considering Color Sub-sampling Operation for Encryption-then-Compression Systems
Accepted in 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE 2018). arXiv admin note: text overlap with arXiv:1810.13067
null
null
null
cs.CR cs.MM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new grayscale-based block scrambling image encryption scheme is presented to enhance the security of Encryption-then-Compression (EtC) systems, which are used to securely transmit images through an untrusted channel provider. The proposed scheme enables the use of a smaller block size and a larger number of blocks than the conventional scheme. Images encrypted using the proposed scheme include less color information due to the use of grayscale images even when the original image has three color channels. These features enhance security against various attacks, such as jigsaw puzzle solver and brute-force attacks. Moreover, it allows the use of color sub-sampling, which can improve the compression performance, although the encrypted images have no color information. In an experiment, encrypted images were uploaded to and then downloaded from Facebook and Twitter, and the results demonstrated that the proposed scheme is effective for EtC systems, while maintaining a high compression performance.
[ { "created": "Fri, 14 Dec 2018 05:15:25 GMT", "version": "v1" } ]
2018-12-18
[ [ "Sirichotedumrong", "Warit", "" ], [ "Chuman", "Tatsuya", "" ], [ "Kiya", "Hitoshi", "" ] ]
A new grayscale-based block scrambling image encryption scheme is presented to enhance the security of Encryption-then-Compression (EtC) systems, which are used to securely transmit images through an untrusted channel provider. The proposed scheme enables the use of a smaller block size and a larger number of blocks than the conventional scheme. Images encrypted using the proposed scheme include less color information due to the use of grayscale images even when the original image has three color channels. These features enhance security against various attacks, such as jigsaw puzzle solver and brute-force attacks. Moreover, it allows the use of color sub-sampling, which can improve the compression performance, although the encrypted images have no color information. In an experiment, encrypted images were uploaded to and then downloaded from Facebook and Twitter, and the results demonstrated that the proposed scheme is effective for EtC systems, while maintaining a high compression performance.
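As a toy illustration of the block-scrambling step alone (the block size, the image, and the key-derived permutation are assumptions; the actual EtC scheme involves further per-block transformations): the grayscale image is tiled into blocks and the blocks are permuted with a secret order.

import numpy as np

rng = np.random.default_rng(42)              # the seed plays the role of a key
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
B = 4                                        # block size

blocks = [img[r:r + B, c:c + B] for r in range(0, 8, B) for c in range(0, 8, B)]
order = rng.permutation(len(blocks))         # secret block permutation
shuffled = [blocks[i] for i in order]

rows = [np.hstack(shuffled[i:i + 2]) for i in range(0, 4, 2)]
scrambled = np.vstack(rows)                  # same size, blocks rearranged
print(scrambled.shape)                       # (8, 8)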
2108.06981
Umberto Cerruti
Umberto Cerruti
One Time Pad and the Short Key Dream
30 pages, 7 figures. To appear in a forthcoming volume of the book series Collectio Ciphrarum, http://www.aracneeditrice.it/aracneweb/index.php/collana.html?col=CIPH
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
This is a survey on the One Time Pad (OTP) and its derivatives, from its origins to modern times. OTP, if used correctly, is (the only) cryptographic code that no computing power, present or future, can break. Naturally, the discussion shifts to the creation of long random sequences, starting from short ones, which can be easily shared. We could call it the Short Key Dream. Many problems inevitably arise, which affect many fields of computer science, mathematics and knowledge in general. This work presents a vast bibliography that includes fundamental classical works and current papers on randomness, pseudorandom number generators, compressibility, unpredictability and more.
[ { "created": "Mon, 16 Aug 2021 09:19:42 GMT", "version": "v1" } ]
2021-08-17
[ [ "Cerruti", "Umberto", "" ] ]
This is a survey on the One Time Pad (OTP) and its derivatives, from its origins to modern times. OTP, if used correctly, is (the only) cryptographic code that no computing power, present or future, can break. Naturally, the discussion shifts to the creation of long random sequences, starting from short ones, which can be easily shared. We could call it the Short Key Dream. Many problems inevitably arise, which affect many fields of computer science, mathematics and knowledge in general. This work presents a vast bibliography that includes fundamental classical works and current papers on randomness, pseudorandom number generators, compressibility, unpredictability and more.
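For reference, the pad itself is a few lines; a minimal Python sketch (here the key comes from a CSPRNG for demonstration, whereas a true OTP demands a truly random key, as long as the message and never reused):

import secrets

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))      # one-time key, same length

cipher = bytes(m ^ k for m, k in zip(message, key))
plain = bytes(c ^ k for c, k in zip(cipher, key))   # XOR again to decrypt
assert plain == message
print(cipher.hex())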
2208.03320
Kalifou René Traoré
Kalifou René Traoré, Andrés Camero, Xiao Xiang Zhu
HPO: We won't get fooled again
Accepted at the Late-Breaking Workshop Track of AutoML Conference 2022
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Hyperparameter optimization (HPO) is a well-studied research field. However, the effects and interactions of the components in an HPO pipeline are not yet well investigated. We therefore ask: can the landscape of HPO be biased by the pipeline used to evaluate individual configurations? To address this question, we propose to analyze the effect of the HPO pipeline on HPO problems using fitness landscape analysis. In particular, we study the DS-2019 HPO benchmark data set, looking for patterns that could indicate evaluation pipeline malfunction and relating them to HPO performance. Our main findings are: (i) in most instances, large groups of diverse hyperparameter configurations yield the same poor performance, most likely associated with majority-class prediction models; (ii) in these cases, a weakened correlation between the observed fitness and the average fitness in the neighborhood is observed, potentially making the deployment of local-search-based HPO strategies harder. Finally, we conclude that the HPO pipeline definition might negatively affect the HPO landscape.
[ { "created": "Thu, 4 Aug 2022 22:00:42 GMT", "version": "v1" } ]
2022-08-09
[ [ "Traoré", "Kalifou René", "" ], [ "Camero", "Andrés", "" ], [ "Zhu", "Xiao Xiang", "" ] ]
Hyperparameter optimization (HPO) is a well-studied research field. However, the effects and interactions of the components in an HPO pipeline are not yet well investigated. We therefore ask: can the landscape of HPO be biased by the pipeline used to evaluate individual configurations? To address this question, we propose to analyze the effect of the HPO pipeline on HPO problems using fitness landscape analysis. In particular, we study the DS-2019 HPO benchmark data set, looking for patterns that could indicate evaluation pipeline malfunction and relating them to HPO performance. Our main findings are: (i) in most instances, large groups of diverse hyperparameter configurations yield the same poor performance, most likely associated with majority-class prediction models; (ii) in these cases, a weakened correlation between the observed fitness and the average fitness in the neighborhood is observed, potentially making the deployment of local-search-based HPO strategies harder. Finally, we conclude that the HPO pipeline definition might negatively affect the HPO landscape.
1310.6011
Yue Lu
Pier Luigi Dragotti and Yue M. Lu
On Sparse Representation in Fourier and Local Bases
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the classical problem of finding the sparse representation of a signal in a pair of bases. When both bases are orthogonal, it is known that the sparse representation is unique when the sparsity $K$ of the signal satisfies $K<1/\mu(D)$, where $\mu(D)$ is the mutual coherence of the dictionary. Furthermore, the sparse representation can be obtained in polynomial time by Basis Pursuit (BP), when $K<0.91/\mu(D)$. Therefore, there is a gap between the unicity condition and the one required to use the polynomial-complexity BP formulation. For the case of general dictionaries, it is also well known that finding the sparse representation under the only constraint of unicity is NP-hard. In this paper, we introduce, for the case of Fourier and canonical bases, a polynomial complexity algorithm that finds all the possible $K$-sparse representations of a signal under the weaker condition that $K<\sqrt{2} /\mu(D)$. Consequently, when $K<1/\mu(D)$, the proposed algorithm solves the unique sparse representation problem for this structured dictionary in polynomial time. We further show that the same method can be extended to many other pairs of bases, one of which must have local atoms. Examples include the union of Fourier and local Fourier bases, the union of discrete cosine transform and canonical bases, and the union of random Gaussian and canonical bases.
[ { "created": "Tue, 22 Oct 2013 18:56:34 GMT", "version": "v1" }, { "created": "Fri, 30 May 2014 14:56:13 GMT", "version": "v2" } ]
2014-06-02
[ [ "Dragotti", "Pier Luigi", "" ], [ "Lu", "Yue M.", "" ] ]
We consider the classical problem of finding the sparse representation of a signal in a pair of bases. When both bases are orthogonal, it is known that the sparse representation is unique when the sparsity $K$ of the signal satisfies $K<1/\mu(D)$, where $\mu(D)$ is the mutual coherence of the dictionary. Furthermore, the sparse representation can be obtained in polynomial time by Basis Pursuit (BP), when $K<0.91/\mu(D)$. Therefore, there is a gap between the unicity condition and the one required to use the polynomial-complexity BP formulation. For the case of general dictionaries, it is also well known that finding the sparse representation under the only constraint of unicity is NP-hard. In this paper, we introduce, for the case of Fourier and canonical bases, a polynomial complexity algorithm that finds all the possible $K$-sparse representations of a signal under the weaker condition that $K<\sqrt{2} /\mu(D)$. Consequently, when $K<1/\mu(D)$, the proposed algorithm solves the unique sparse representation problem for this structured dictionary in polynomial time. We further show that the same method can be extended to many other pairs of bases, one of which must have local atoms. Examples include the union of Fourier and local Fourier bases, the union of discrete cosine transform and canonical bases, and the union of random Gaussian and canonical bases.
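A small numerical check of the coherence quantity behind these bounds (a minimal sketch; it only verifies that mu(D) = 1/sqrt(n) for the union of the Fourier and canonical bases, and does not implement the paper's algorithm):

import numpy as np

n = 16
F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # orthonormal Fourier basis
D = np.hstack([F, np.eye(n)])                # union dictionary [F, I]

G = np.abs(D.conj().T @ D)                   # pairwise column inner products
np.fill_diagonal(G, 0)                       # ignore self inner products
mu = G.max()
print(mu, 1 / np.sqrt(n))                    # both equal 0.25 for n = 16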
2212.07903
Juntao Jiang
Juntao Jiang, Yuan Niu, Yi Tao
The First IEEE UV2022 Mathematical Modelling Competition: Backgrounds and Problems
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Economic growth, people's health, and urban development face challenges in the post-epidemic era. How to promote high-quality and sustainable urban development, improve citizens' sense of happiness, and solve problems in city management has become a heated and crucial topic. Mathematical modeling is a research method that uses mathematical symbols to express practical problems, establish mathematical models, and then propose solutions. The 1$^{st}$ IEEE UV2022 Mathematical Modelling Competition is a satellite activity of the 6$^{th}$ IEEE International Conference on Universal Village, which expects participants to use mathematical modeling methods for practical problems and provide guidelines for sustainable social progress. This short paper introduces the background of the competition and publishes the problems to be solved.
[ { "created": "Thu, 15 Dec 2022 15:37:17 GMT", "version": "v1" }, { "created": "Sat, 8 Jul 2023 20:11:26 GMT", "version": "v2" } ]
2023-07-11
[ [ "Jiang", "Juntao", "" ], [ "Niu", "Yuan", "" ], [ "Tao", "Yi", "" ] ]
Economic growth, people's health, and urban development face challenges in the post-epidemic era. How to promote high-quality and sustainable urban development, improve citizens' sense of happiness, and solve problems in city management has become a heated and crucial topic. Mathematical modeling is a research method that uses mathematical symbols to express practical problems, establish mathematical models, and then propose solutions. The 1$^{st}$ IEEE UV2022 Mathematical Modelling Competition is a satellite activity of the 6$^{th}$ IEEE International Conference on Universal Village, which expects participants to use mathematical modeling methods for practical problems and provide guidelines for sustainable social progress. This short paper introduces the background of the competition and publishes the problems to be solved.
1803.08605
Minxian Xu
Minxian Xu, Adel Nadjaran Toosi, Rajkumar Buyya
iBrownout: An Integrated Approach for Managing Energy and Brownout in Container-based Clouds
14 pages, in press
IEEE Transactions on Sustainable Computing, 2018
10.1109/TSUSC.2018.2808493
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Energy consumption of Cloud data centers has been a major concern of many researchers, and one of the reasons for the huge energy consumption of Clouds lies in the inefficient utilization of computing resources. Besides energy consumption, another challenge for data centers is unexpected loads, which lead to overloads and performance degradation. Compared with VM consolidation and Dynamic Voltage Frequency Scaling, which cannot function well when the whole data center is overloaded, brownout has shown to be a promising technique to handle both overloads and energy consumption by dynamically deactivating optional application components, which are also identified as containers/microservices. In this work, we propose an integrated approach to manage energy consumption and brownout in container-based cloud data centers. We also evaluate our proposed scheduling policies with real traces in a prototype system. The results show that our approach reduces energy consumption by about 40%, 20% and 10% compared with the approach without power-saving techniques, the brownout-overbooking approach, and the auto-scaling approach, respectively, while ensuring Quality of Service.
[ { "created": "Thu, 22 Mar 2018 23:03:34 GMT", "version": "v1" } ]
2018-03-26
[ [ "Xu", "Minxian", "" ], [ "Toosi", "Adel Nadjaran", "" ], [ "Buyya", "Rajkumar", "" ] ]
Energy consumption of Cloud data centers has been a major concern of many researchers, and one of the reasons for the huge energy consumption of Clouds lies in the inefficient utilization of computing resources. Besides energy consumption, another challenge for data centers is unexpected loads, which lead to overloads and performance degradation. Compared with VM consolidation and Dynamic Voltage Frequency Scaling, which cannot function well when the whole data center is overloaded, brownout has shown to be a promising technique to handle both overloads and energy consumption by dynamically deactivating optional application components, which are also identified as containers/microservices. In this work, we propose an integrated approach to manage energy consumption and brownout in container-based cloud data centers. We also evaluate our proposed scheduling policies with real traces in a prototype system. The results show that our approach reduces energy consumption by about 40%, 20% and 10% compared with the approach without power-saving techniques, the brownout-overbooking approach, and the auto-scaling approach, respectively, while ensuring Quality of Service.
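A toy sketch of a brownout controller in the spirit described above (the thresholds, the utilisation trace, and the optional component names are assumptions): when utilisation exceeds an overload threshold, optional containers/microservices are deactivated, and they are restored once load drops.

OPTIONAL = ["recommender", "related-items", "ads"]   # hypothetical components
OVERLOAD, RESTORE = 0.85, 0.60

def brownout_step(utilisation, n_active):
    # Return how many optional components stay active after one step.
    if utilisation > OVERLOAD and n_active > 0:
        return n_active - 1                  # shed one optional component
    if utilisation < RESTORE and n_active < len(OPTIONAL):
        return n_active + 1                  # recover when load is low
    return n_active

n = len(OPTIONAL)
for u in [0.5, 0.9, 0.95, 0.9, 0.55, 0.5]:   # simulated utilisation trace
    n = brownout_step(u, n)
    print(f"util={u:.2f} active={OPTIONAL[:n]}")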
1601.03483
Renato Cordeiro De Amorim
Renato Cordeiro de Amorim
A survey on feature weighting based K-Means algorithms
Journal of Classification (to appear)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a real-world data set there is always the possibility, rather high in our opinion, that different features may have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm there is. The first K-Means based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since but there has not been, to our knowledge, a survey integrating empirical evidence of cluster recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analysing some of the most popular, or innovative, feature weighting mechanisms based in K-Means.
[ { "created": "Tue, 22 Sep 2015 08:46:39 GMT", "version": "v1" } ]
2016-01-15
[ [ "de Amorim", "Renato Cordeiro", "" ] ]
In a real-world data set there is always the possibility, rather high in our opinion, that different features may have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm there is. The first K-Means-based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since, but there has not been, to our knowledge, a survey integrating empirical evidence of cluster recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analysing some of the most popular, or innovative, feature weighting mechanisms based on K-Means.
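As a concrete illustration of the feature-weighting idea the survey covers, below is a sketch of attribute-weighted K-Means in the style of the classic W-K-Means update, where weights are recomputed from per-feature within-cluster dispersions. It is a generic sketch under the usual exponent-beta formulation, not a faithful reproduction of any specific surveyed algorithm.

```python
import numpy as np

def weighted_kmeans(X, k, beta=2.0, n_iter=20, seed=0):
    """Attribute-weighted K-Means sketch: after each assignment/center update,
    feature weights are recomputed from per-feature within-cluster dispersions,
    w_v proportional to D_v^(-1/(beta-1)), a W-K-Means-style update."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    n, m = X.shape
    centers = X[rng.choice(n, size=k, replace=False)].copy()
    w = np.full(m, 1.0 / m)                       # weights sum to 1
    for _ in range(n_iter):
        # Assign points using weighted squared Euclidean distances.
        d = (((X[:, None, :] - centers[None, :, :]) ** 2) * w**beta).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):                        # update centers
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
        # Per-feature dispersion over all clusters, then the weight update.
        D = ((X - centers[labels]) ** 2).sum(axis=0)
        w = np.maximum(D, 1e-12) ** (-1.0 / (beta - 1.0))
        w /= w.sum()
    return labels, centers, w

# Toy check: the third feature is pure noise, so it should get a low weight.
rng = np.random.default_rng(1)
A = rng.normal([0, 0, 0], [0.1, 0.1, 2.0], size=(50, 3))
B = rng.normal([3, 3, 0], [0.1, 0.1, 2.0], size=(50, 3))
labels, centers, w = weighted_kmeans(np.vstack([A, B]), k=2)
print(np.round(w, 3))   # the noisy feature receives the smallest weight
```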
2212.01748
Brae Webb
Mark Utting, Brae J. Webb, Ian J. Hayes
Differential Testing of a Verification Framework for Compiler Optimizations (Experience Paper)
8 pages, 6 figures
null
null
null
cs.LO cs.PL cs.SE
http://creativecommons.org/licenses/by/4.0/
We want to verify the correctness of optimization phases in the GraalVM compiler, which consist of many thousands of lines of complex Java code performing sophisticated graph transformations. We have built high-level models of the data structures and operations of the code using the Isabelle/HOL theorem prover, and can formally verify the correctness of those high-level operations. But the remaining challenge is: how can we be sure that those high-level operations accurately reflect what the Java code is doing? This paper addresses that issue by applying several different kinds of differential testing to validate that the formal model and the Java code have the same semantics. Many of these validation techniques should be applicable to other projects that are building formal models of real-world code.
[ { "created": "Sun, 4 Dec 2022 06:14:55 GMT", "version": "v1" } ]
2022-12-06
[ [ "Utting", "Mark", "" ], [ "Webb", "Brae J.", "" ], [ "Hayes", "Ian J.", "" ] ]
We want to verify the correctness of optimization phases in the GraalVM compiler, which consist of many thousands of lines of complex Java code performing sophisticated graph transformations. We have built high-level models of the data structures and operations of the code using the Isabelle/HOL theorem prover, and can formally verify the correctness of those high-level operations. But the remaining challenge is: how can we be sure that those high-level operations accurately reflect what the Java code is doing? This paper addresses that issue by applying several different kinds of differential testing to validate that the formal model and the Java code have the same semantics. Many of these validation techniques should be applicable to other projects that are building formal models of real-world code.
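The differential-testing idea is easy to illustrate: run two implementations of the same operation on many random inputs and flag any disagreement. In the paper the two sides are the Isabelle/HOL model and the GraalVM Java code; the sketch below substitutes two toy Python functions and shows only the harness structure.

```python
import random

# Toy stand-ins for the two sides of a differential test: a reference "model"
# and an "optimized" implementation of the same operation (doubling an
# integer). Both names are illustrative only.

def model_semantics(x):
    return x + x            # specification-level definition

def optimized_impl(x):
    return x << 1           # optimized rewrite under test

def differential_test(n_cases=10_000, seed=42):
    rng = random.Random(seed)
    for _ in range(n_cases):
        x = rng.randint(-2**31, 2**31 - 1)
        expected, actual = model_semantics(x), optimized_impl(x)
        assert expected == actual, f"mismatch on input {x}: {expected} != {actual}"
    print(f"{n_cases} random cases agree")

differential_test()
```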
2401.10037
Ido Zuckerman
Ido Zuckerman, Nicole Werner, Jonathan Kouchly, Emma Huston, Shannon DiMarco, Paul DiMusto, Shlomi Laufer
Depth Over RGB: Automatic Evaluation of Open Surgery Skills Using Depth Camera
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Purpose: In this paper, we present a novel approach to the automatic evaluation of open surgery skills using depth cameras. This work is intended to show that depth cameras achieve results similar to RGB cameras, which are the common choice in the automatic evaluation of open surgery skills. Moreover, depth cameras offer advantages such as robustness to lighting variations and camera positioning, simplified data compression, and enhanced privacy, making them a promising alternative to RGB cameras. Methods: Experts and novice surgeons completed two simulators of open suturing. We focused on hand and tool detection, and action segmentation in suturing procedures. YOLOv8 was used for tool detection in RGB and depth videos. Furthermore, UVAST and MSTCN++ were used for action segmentation. Our study includes the collection and annotation of a dataset recorded with Azure Kinect. Results: We demonstrated that using depth cameras in object detection and action segmentation achieves results comparable to RGB cameras. Furthermore, we analyzed 3D hand path length, revealing significant differences between experts and novice surgeons and emphasizing the potential of depth cameras in capturing surgical skills. We also investigated the influence of camera angles on measurement accuracy, highlighting the advantages of 3D cameras in providing a more accurate representation of hand movements. Conclusion: Our research contributes to advancing the field of surgical skill assessment by leveraging depth cameras for more reliable and privacy-preserving evaluations. The findings suggest that depth cameras can be valuable in assessing surgical skills and provide a foundation for future research in this area.
[ { "created": "Thu, 18 Jan 2024 15:00:28 GMT", "version": "v1" } ]
2024-01-19
[ [ "Zuckerman", "Ido", "" ], [ "Werner", "Nicole", "" ], [ "Kouchly", "Jonathan", "" ], [ "Huston", "Emma", "" ], [ "DiMarco", "Shannon", "" ], [ "DiMusto", "Paul", "" ], [ "Laufer", "Shlomi", "" ] ]
Purpose: In this paper, we present a novel approach to the automatic evaluation of open surgery skills using depth cameras. This work is intended to show that depth cameras achieve results similar to RGB cameras, which are the common choice in the automatic evaluation of open surgery skills. Moreover, depth cameras offer advantages such as robustness to lighting variations and camera positioning, simplified data compression, and enhanced privacy, making them a promising alternative to RGB cameras. Methods: Experts and novice surgeons completed two simulators of open suturing. We focused on hand and tool detection, and action segmentation in suturing procedures. YOLOv8 was used for tool detection in RGB and depth videos. Furthermore, UVAST and MSTCN++ were used for action segmentation. Our study includes the collection and annotation of a dataset recorded with Azure Kinect. Results: We demonstrated that using depth cameras in object detection and action segmentation achieves results comparable to RGB cameras. Furthermore, we analyzed 3D hand path length, revealing significant differences between experts and novice surgeons and emphasizing the potential of depth cameras in capturing surgical skills. We also investigated the influence of camera angles on measurement accuracy, highlighting the advantages of 3D cameras in providing a more accurate representation of hand movements. Conclusion: Our research contributes to advancing the field of surgical skill assessment by leveraging depth cameras for more reliable and privacy-preserving evaluations. The findings suggest that depth cameras can be valuable in assessing surgical skills and provide a foundation for future research in this area.
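One of the measures mentioned, 3D hand path length, is straightforward to compute from per-frame hand positions; the sketch below shows the usual formulation as a sum of consecutive Euclidean distances. The tracking pipeline that produces the positions is not reproduced here.

```python
import numpy as np

def path_length(points):
    """Total 3D path length of a hand trajectory: sum of Euclidean distances
    between consecutive (x, y, z) samples. `points` has shape (T, 3)."""
    points = np.asarray(points, dtype=float)
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

# A depth camera yields (x, y, z) per frame, so the z component contributes
# to the measured length; an RGB camera would only give a 2D projection.
traj = np.array([[0.0, 0.0, 0.5], [0.1, 0.0, 0.5], [0.1, 0.1, 0.6]])
print(path_length(traj))  # 0.1 + sqrt(0.01 + 0.01) ~= 0.2414
```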
1907.07833
Fernando Granha Jeronimo
Vedat Levi Alev, Fernando Granha Jeronimo, Madhur Tulsiani
Approximating Constraint Satisfaction Problems on High-Dimensional Expanders
null
null
null
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of approximately solving constraint satisfaction problems with arity $k > 2$ ($k$-CSPs) on instances satisfying certain expansion properties, when viewed as hypergraphs. Random instances of $k$-CSPs, which are also highly expanding, are well-known to be hard to approximate using known algorithmic techniques (and are widely believed to be hard to approximate in polynomial time). However, we show that this is not necessarily the case for instances where the hypergraph is a high-dimensional expander. We consider the spectral definition of high-dimensional expansion used by Dinur and Kaufman [FOCS 2017] to construct certain primitives related to PCPs. They measure the expansion in terms of a parameter $\gamma$ which is the analogue of the second singular value for expanding graphs. Extending the results by Barak, Raghavendra and Steurer [FOCS 2011] for 2-CSPs, we show that if an instance of MAX k-CSP over alphabet $[q]$ is a high-dimensional expander with parameter $\gamma$, then it is possible to approximate the maximum fraction of satisfiable constraints up to an additive error $\epsilon$ using $q^{O(k)} \cdot (k/\epsilon)^{O(1)}$ levels of the sum-of-squares SDP hierarchy, provided $\gamma \leq \epsilon^{O(1)} \cdot (1/(kq))^{O(k)}$. Based on our analysis, we also suggest a notion of threshold-rank for hypergraphs, which can be used to extend the results for approximating 2-CSPs on low threshold-rank graphs. We show that if an instance of MAX k-CSP has threshold rank $r$ for a threshold $\tau = (\epsilon/k)^{O(1)} \cdot (1/q)^{O(k)}$, then it is possible to approximately solve the instance up to additive error $\epsilon$, using $r \cdot q^{O(k)} \cdot (k/\epsilon)^{O(1)}$ levels of the sum-of-squares hierarchy. As in the case of graphs, high-dimensional expanders (with sufficiently small $\gamma$) have threshold rank 1 according to our definition.
[ { "created": "Thu, 18 Jul 2019 01:31:52 GMT", "version": "v1" } ]
2019-07-19
[ [ "Alev", "Vedat Levi", "" ], [ "Jeronimo", "Fernando Granha", "" ], [ "Tulsiani", "Madhur", "" ] ]
We consider the problem of approximately solving constraint satisfaction problems with arity $k > 2$ ($k$-CSPs) on instances satisfying certain expansion properties, when viewed as hypergraphs. Random instances of $k$-CSPs, which are also highly expanding, are well-known to be hard to approximate using known algorithmic techniques (and are widely believed to be hard to approximate in polynomial time). However, we show that this is not necessarily the case for instances where the hypergraph is a high-dimensional expander. We consider the spectral definition of high-dimensional expansion used by Dinur and Kaufman [FOCS 2017] to construct certain primitives related to PCPs. They measure the expansion in terms of a parameter $\gamma$ which is the analogue of the second singular value for expanding graphs. Extending the results by Barak, Raghavendra and Steurer [FOCS 2011] for 2-CSPs, we show that if an instance of MAX k-CSP over alphabet $[q]$ is a high-dimensional expander with parameter $\gamma$, then it is possible to approximate the maximum fraction of satisfiable constraints up to an additive error $\epsilon$ using $q^{O(k)} \cdot (k/\epsilon)^{O(1)}$ levels of the sum-of-squares SDP hierarchy, provided $\gamma \leq \epsilon^{O(1)} \cdot (1/(kq))^{O(k)}$. Based on our analysis, we also suggest a notion of threshold-rank for hypergraphs, which can be used to extend the results for approximating 2-CSPs on low threshold-rank graphs. We show that if an instance of MAX k-CSP has threshold rank $r$ for a threshold $\tau = (\epsilon/k)^{O(1)} \cdot (1/q)^{O(k)}$, then it is possible to approximately solve the instance up to additive error $\epsilon$, using $r \cdot q^{O(k)} \cdot (k/\epsilon)^{O(1)}$ levels of the sum-of-squares hierarchy. As in the case of graphs, high-dimensional expanders (with sufficiently small $\gamma$) have threshold rank 1 according to our definition.
2404.11016
Jiayang Li
Jiayang Li, Junjun Jiang, Pengwei Liang and Jiayi Ma
MaeFuse: Transferring Omni Features with Pretrained Masked Autoencoders for Infrared and Visible Image Fusion via Guided Training
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
In this research, we introduce MaeFuse, a novel autoencoder model designed for infrared and visible image fusion (IVIF). Existing approaches to image fusion often rely on training combined with downstream tasks to obtain high-level visual information, which is effective in emphasizing target objects and delivering impressive results in visual quality and task-specific applications. MaeFuse, however, deviates from the norm. Instead of being driven by downstream tasks, our model utilizes a pretrained encoder from Masked Autoencoders (MAE), which facilitates omni-feature extraction for low-level reconstruction and high-level vision tasks, to obtain perception-friendly features at low cost. In order to eliminate the domain gap between different modal features and the block effect caused by the MAE encoder, we further develop a guided training strategy. This strategy is meticulously crafted to ensure that the fusion layer seamlessly adjusts to the feature space of the encoder, gradually enhancing the fusion effect. It facilitates the comprehensive integration of feature vectors from both infrared and visible modalities, preserving the rich details inherent in each. MaeFuse not only introduces a novel perspective in the realm of fusion techniques but also stands out with impressive performance across various public datasets.
[ { "created": "Wed, 17 Apr 2024 02:47:39 GMT", "version": "v1" } ]
2024-04-18
[ [ "Li", "Jiayang", "" ], [ "Jiang", "Junjun", "" ], [ "Liang", "Pengwei", "" ], [ "Ma", "Jiayi", "" ] ]
In this research, we introduce MaeFuse, a novel autoencoder model designed for infrared and visible image fusion (IVIF). Existing approaches to image fusion often rely on training combined with downstream tasks to obtain high-level visual information, which is effective in emphasizing target objects and delivering impressive results in visual quality and task-specific applications. MaeFuse, however, deviates from the norm. Instead of being driven by downstream tasks, our model utilizes a pretrained encoder from Masked Autoencoders (MAE), which facilitates omni-feature extraction for low-level reconstruction and high-level vision tasks, to obtain perception-friendly features at low cost. In order to eliminate the domain gap between different modal features and the block effect caused by the MAE encoder, we further develop a guided training strategy. This strategy is meticulously crafted to ensure that the fusion layer seamlessly adjusts to the feature space of the encoder, gradually enhancing the fusion effect. It facilitates the comprehensive integration of feature vectors from both infrared and visible modalities, preserving the rich details inherent in each. MaeFuse not only introduces a novel perspective in the realm of fusion techniques but also stands out with impressive performance across various public datasets.
1710.05781
Zhuang Chijie
Chijie Zhuang, Yong Zhang, Xin Zhou, Rong Zeng, Jinliang He and Lei Liu
A Fast Tree Algorithm for Electric Field Calculation in Electrical Discharge Simulations
null
null
10.1109/TMAG.2017.2756991
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The simulation of electrical discharges has been attracting a great deal of attention. In such simulations, the electric field computation dominates the computational time. In this paper, we propose a fast tree algorithm that reduces the time complexity from $O(N^2)$ (direct summation) to $O(N\log N)$. The implementation details are discussed and the time complexity is analyzed. A rigorous error estimation shows that the error of the tree algorithm decays exponentially with the number of truncation terms and can be controlled adaptively. Numerical examples are presented to validate the accuracy and efficiency of the algorithm.
[ { "created": "Mon, 16 Oct 2017 15:24:12 GMT", "version": "v1" } ]
2018-07-04
[ [ "Zhuang", "Chijie", "" ], [ "Zhang", "Yong", "" ], [ "Zhou", "Xin", "" ], [ "Zeng", "Rong", "" ], [ "He", "Jinliang", "" ], [ "Liu", "Lei", "" ] ]
The simulation of electrical discharges has been attracting a great deal of attention. In such simulations, the electric field computation dominates the computational time. In this paper, we propose a fast tree algorithm that reduces the time complexity from $O(N^2)$ (direct summation) to $O(N\log N)$. The implementation details are discussed and the time complexity is analyzed. A rigorous error estimation shows that the error of the tree algorithm decays exponentially with the number of truncation terms and can be controlled adaptively. Numerical examples are presented to validate the accuracy and efficiency of the algorithm.
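The abstract does not spell out the algorithm, but the general tree-code idea behind the $O(N\log N)$ complexity can be sketched as a Barnes-Hut-style evaluation in 1D: well-separated groups of charges are replaced by a single aggregate term (here only the leading, monopole term; the paper's error bound concerns the number of truncation terms retained). This is a generic illustration, not the authors' method or its adaptive error control.

```python
import numpy as np

# Generic 1D Barnes-Hut-style tree code: approximate E(x) = sum_j q_j/(x - x_j)
# by replacing well-separated groups of charges with their total charge placed
# at the charge-weighted center (monopole term only).

class Node:
    def __init__(self, xs, qs, lo, hi):
        self.lo, self.hi = lo, hi
        self.q = qs.sum()
        self.center = (xs * qs).sum() / self.q if self.q != 0 else 0.5 * (lo + hi)
        self.left = self.right = None
        if len(xs) > 1 and hi - lo > 1e-12:
            mid = 0.5 * (lo + hi)
            mask = xs < mid
            if mask.any():
                self.left = Node(xs[mask], qs[mask], lo, mid)
            if (~mask).any():
                self.right = Node(xs[~mask], qs[~mask], mid, hi)

def field(node, x, theta=0.5):
    if node is None or node.q == 0:
        return 0.0
    size, dist = node.hi - node.lo, abs(x - node.center)
    is_leaf = node.left is None and node.right is None
    if is_leaf or (dist > 0 and size / dist < theta):
        return node.q / (x - node.center) if x != node.center else 0.0
    return field(node.left, x, theta) + field(node.right, x, theta)

rng = np.random.default_rng(0)
xs, qs = rng.random(1000), rng.random(1000)
root = Node(xs, qs, 0.0, 1.0)
print(field(root, 2.0))                                  # tree estimate
print(sum(q / (2.0 - x) for x, q in zip(xs, qs)))        # direct sum check
```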
2312.11285
Xijun Wang
Decheng Liu, Xijun Wang, Chunlei Peng, Nannan Wang, Ruiming Hu, Xinbo Gao
Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model
Accepted by AAAI 2024
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial attacks involve adding perturbations to the source image to cause misclassification by the target model, which demonstrates the potential of attacking face recognition models. Existing adversarial face image generation methods still cannot achieve satisfactory performance because of low transferability and high detectability. In this paper, we propose a unified framework, Adv-Diffusion, that can generate imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space, which utilizes the strong inpainting capabilities of the latent diffusion model to generate realistic adversarial images. Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings. The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness. Extensive qualitative and quantitative experiments on the public FFHQ and CelebA-HQ datasets prove the proposed method achieves superior performance compared with the state-of-the-art methods without an extra generative model training process. The source code is available at https://github.com/kopper-xdu/Adv-Diffusion.
[ { "created": "Mon, 18 Dec 2023 15:25:23 GMT", "version": "v1" }, { "created": "Thu, 28 Dec 2023 07:58:00 GMT", "version": "v2" } ]
2023-12-29
[ [ "Liu", "Decheng", "" ], [ "Wang", "Xijun", "" ], [ "Peng", "Chunlei", "" ], [ "Wang", "Nannan", "" ], [ "Hu", "Ruiming", "" ], [ "Gao", "Xinbo", "" ] ]
Adversarial attacks involve adding perturbations to the source image to cause misclassification by the target model, which demonstrates the potential of attacking face recognition models. Existing adversarial face image generation methods still cannot achieve satisfactory performance because of low transferability and high detectability. In this paper, we propose a unified framework, Adv-Diffusion, that can generate imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space, which utilizes the strong inpainting capabilities of the latent diffusion model to generate realistic adversarial images. Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings. The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness. Extensive qualitative and quantitative experiments on the public FFHQ and CelebA-HQ datasets prove the proposed method achieves superior performance compared with the state-of-the-art methods without an extra generative model training process. The source code is available at https://github.com/kopper-xdu/Adv-Diffusion.
2404.13432
Andrey Rudenko
Irina Rudenko, Andrey Rudenko, Achim J. Lilienthal, Kai O. Arras, Barbara Bruno
The Child Factor in Child-Robot Interaction: Discovering the Impact of Developmental Stage and Individual Characteristics
Pre-print submitted to the International Journal of Social Robotics, accepted March 2024
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Social robots, owing to their embodied physical presence in human spaces and the ability to directly interact with the users and their environment, have a great potential to support children in various activities in education, healthcare and daily life. Child-Robot Interaction (CRI), as any domain involving children, inevitably faces the major challenge of designing generalized strategies to work with unique, turbulent and very diverse individuals. Addressing this challenging endeavor requires combining the standpoint of the robot-centered perspective, i.e. what robots technically can and are best positioned to do, with that of the child-centered perspective, i.e. what children may gain from the robot and how the robot should act to best support them in reaching the goals of the interaction. This article aims to help researchers bridge the two perspectives and proposes to address the development of CRI scenarios with insights from child psychology and child development theories. To that end, we review the outcomes of CRI studies, outline common trends and challenges, and identify two key factors from child psychology that impact child-robot interactions, especially from a long-term perspective: developmental stage and individual characteristics. For both of them, we discuss prospective experiment designs which support building naturally engaging and sustainable interactions.
[ { "created": "Sat, 20 Apr 2024 17:44:07 GMT", "version": "v1" } ]
2024-04-23
[ [ "Rudenko", "Irina", "" ], [ "Rudenko", "Andrey", "" ], [ "Lilienthal", "Achim J.", "" ], [ "Arras", "Kai O.", "" ], [ "Bruno", "Barbara", "" ] ]
Social robots, owing to their embodied physical presence in human spaces and the ability to directly interact with the users and their environment, have a great potential to support children in various activities in education, healthcare and daily life. Child-Robot Interaction (CRI), as any domain involving children, inevitably faces the major challenge of designing generalized strategies to work with unique, turbulent and very diverse individuals. Addressing this challenging endeavor requires combining the standpoint of the robot-centered perspective, i.e. what robots technically can and are best positioned to do, with that of the child-centered perspective, i.e. what children may gain from the robot and how the robot should act to best support them in reaching the goals of the interaction. This article aims to help researchers bridge the two perspectives and proposes to address the development of CRI scenarios with insights from child psychology and child development theories. To that end, we review the outcomes of CRI studies, outline common trends and challenges, and identify two key factors from child psychology that impact child-robot interactions, especially from a long-term perspective: developmental stage and individual characteristics. For both of them, we discuss prospective experiment designs which support building naturally engaging and sustainable interactions.
2002.09320
Adnan Darwiche
Adnan Darwiche
An Advance on Variable Elimination with Applications to Tensor-Based Computation
To Appear in Proceedings of the European Conference on Artificial Intelligence (ECAI), Spain, 2020
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present new results on the classical algorithm of variable elimination, which underlies many algorithms, including those for probabilistic inference. The results relate to exploiting functional dependencies, allowing one to perform inference and learning efficiently on models that have very large treewidth. The highlight of the advance is that it works with standard (dense) factors, without the need for sparse factors or techniques based on knowledge compilation that are commonly utilized. This is significant as it permits a direct implementation of the improved variable elimination algorithm using tensors and their operations, leading to extremely efficient implementations especially when learning model parameters. Moreover, the proposed technique does not require knowledge of the specific functional dependencies, only that they exist, and so can be used when learning these dependencies. We illustrate the efficacy of our proposed algorithm by compiling Bayesian network queries into tensor graphs and then learning their parameters from labeled data using a standard tool for tensor computation.
[ { "created": "Fri, 21 Feb 2020 14:17:44 GMT", "version": "v1" } ]
2020-04-21
[ [ "Darwiche", "Adnan", "" ] ]
We present new results on the classical algorithm of variable elimination, which underlies many algorithms, including those for probabilistic inference. The results relate to exploiting functional dependencies, allowing one to perform inference and learning efficiently on models that have very large treewidth. The highlight of the advance is that it works with standard (dense) factors, without the need for sparse factors or techniques based on knowledge compilation that are commonly utilized. This is significant as it permits a direct implementation of the improved variable elimination algorithm using tensors and their operations, leading to extremely efficient implementations especially when learning model parameters. Moreover, the proposed technique does not require knowledge of the specific functional dependencies, only that they exist, and so can be used when learning these dependencies. We illustrate the efficacy of our proposed algorithm by compiling Bayesian network queries into tensor graphs and then learning their parameters from labeled data using a standard tool for tensor computation.
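The correspondence between variable elimination and tensor contraction that the paper builds on is easy to illustrate: each factor is a dense array and eliminating a variable is a sum over one index. The sketch below shows the baseline view on a toy chain network; the paper's improvement for functional dependencies is not reproduced here.

```python
import numpy as np

# Standard variable elimination as tensor contraction: a chain Bayesian
# network A -> B -> C with CPTs stored as dense arrays. Eliminating A and
# then B to get the marginal over C is a single einsum.

p_a = np.array([0.6, 0.4])                      # P(A)
p_b_a = np.array([[0.7, 0.3], [0.2, 0.8]])      # P(B|A), rows indexed by A
p_c_b = np.array([[0.9, 0.1], [0.5, 0.5]])      # P(C|B), rows indexed by B

# Marginal P(C) = sum_{a,b} P(a) P(b|a) P(c|b).
p_c = np.einsum("a,ab,bc->c", p_a, p_b_a, p_c_b)
print(p_c, p_c.sum())   # [0.7 0.3] 1.0 -- a valid distribution over C
```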
1907.08228
Indira Sen
Indira Sen, Fabian Floeck, Katrin Weller, Bernd Weiss, Claudia Wagner
TED-On: A Total Error Framework for Digital Traces of Human Behavior on Online Platforms
20 pages, 2 figures, Longer version of paper set to appear in Public Opinion Quarterly. Updating terminology
null
null
null
cs.CY cs.HC cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
People's activities and opinions recorded as digital traces online, especially on social media and other web-based platforms, offer increasingly informative pictures of the public. They promise to allow inferences about populations beyond the users of the platforms on which the traces are recorded, representing real potential for the Social Sciences and a complement to survey-based research. But the use of digital traces brings its own complexities and new error sources to the research enterprise. Recently, researchers have begun to discuss the errors that can occur when digital traces are used to learn about humans and social phenomena. This article synthesizes this discussion and proposes a systematic way to categorize potential errors, inspired by the Total Survey Error (TSE) Framework developed for survey methodology. We introduce a conceptual framework to diagnose, understand, and document errors that may occur in studies based on such digital traces. While there are clear parallels to the well-known error sources in the TSE framework, the new "Total Error Framework for Digital Traces of Human Behavior on Online Platforms" (TED-On) identifies several types of error that are specific to the use of digital traces. By providing a standard vocabulary to describe these errors, the proposed framework is intended to advance communication and research concerning the use of digital traces in scientific social research.
[ { "created": "Thu, 18 Jul 2019 18:18:48 GMT", "version": "v1" }, { "created": "Thu, 26 Sep 2019 16:42:26 GMT", "version": "v2" }, { "created": "Thu, 5 Dec 2019 18:03:44 GMT", "version": "v3" }, { "created": "Thu, 3 Jun 2021 16:52:11 GMT", "version": "v4" } ]
2021-06-04
[ [ "Sen", "Indira", "" ], [ "Floeck", "Fabian", "" ], [ "Weller", "Katrin", "" ], [ "Weiss", "Bernd", "" ], [ "Wagner", "Claudia", "" ] ]
People's activities and opinions recorded as digital traces online, especially on social media and other web-based platforms, offer increasingly informative pictures of the public. They promise to allow inferences about populations beyond the users of the platforms on which the traces are recorded, representing real potential for the Social Sciences and a complement to survey-based research. But the use of digital traces brings its own complexities and new error sources to the research enterprise. Recently, researchers have begun to discuss the errors that can occur when digital traces are used to learn about humans and social phenomena. This article synthesizes this discussion and proposes a systematic way to categorize potential errors, inspired by the Total Survey Error (TSE) Framework developed for survey methodology. We introduce a conceptual framework to diagnose, understand, and document errors that may occur in studies based on such digital traces. While there are clear parallels to the well-known error sources in the TSE framework, the new "Total Error Framework for Digital Traces of Human Behavior on Online Platforms" (TED-On) identifies several types of error that are specific to the use of digital traces. By providing a standard vocabulary to describe these errors, the proposed framework is intended to advance communication and research concerning the use of digital traces in scientific social research.
2005.02061
Roman Walch
Alexandros Bampoulidis and Alessandro Bruni and Lukas Helminger and Daniel Kales and Christian Rechberger and Roman Walch
Privately Connecting Mobility to Infectious Diseases via Applied Cryptography
Accepted at PoPETs 2022
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has shown that cell phone mobility data has the unique potential to create accurate models for human mobility and consequently the spread of infectious diseases. While prior studies have exclusively relied on a mobile network operator's subscribers' aggregated data in modelling disease dynamics, it may be preferable to contemplate aggregated mobility data of infected individuals only. Clearly, naively linking mobile phone data with health records would violate privacy by either allowing to track mobility patterns of infected individuals, leak information on who is infected, or both. This work aims to develop a solution that reports the aggregated mobile phone location data of infected individuals while still maintaining compliance with privacy expectations. To achieve privacy, we use homomorphic encryption, validation techniques derived from zero-knowledge proofs, and differential privacy. Our protocol's open-source implementation can process eight million subscribers in 70 minutes.
[ { "created": "Tue, 5 May 2020 10:59:30 GMT", "version": "v1" }, { "created": "Tue, 20 Oct 2020 14:31:57 GMT", "version": "v2" }, { "created": "Mon, 11 Jan 2021 10:27:37 GMT", "version": "v3" }, { "created": "Mon, 13 Jun 2022 11:19:49 GMT", "version": "v4" } ]
2022-06-14
[ [ "Bampoulidis", "Alexandros", "" ], [ "Bruni", "Alessandro", "" ], [ "Helminger", "Lukas", "" ], [ "Kales", "Daniel", "" ], [ "Rechberger", "Christian", "" ], [ "Walch", "Roman", "" ] ]
Recent work has shown that cell phone mobility data has the unique potential to create accurate models for human mobility and consequently the spread of infectious diseases. While prior studies have exclusively relied on a mobile network operator's subscribers' aggregated data in modelling disease dynamics, it may be preferable to contemplate aggregated mobility data of infected individuals only. Clearly, naively linking mobile phone data with health records would violate privacy by either allowing to track mobility patterns of infected individuals, leak information on who is infected, or both. This work aims to develop a solution that reports the aggregated mobile phone location data of infected individuals while still maintaining compliance with privacy expectations. To achieve privacy, we use homomorphic encryption, validation techniques derived from zero-knowledge proofs, and differential privacy. Our protocol's open-source implementation can process eight million subscribers in 70 minutes.
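Of the three privacy ingredients named, the differential-privacy layer is the simplest to illustrate: the sketch below applies the Laplace mechanism to per-region aggregate counts. The epsilon and sensitivity values are hypothetical, and the homomorphic-encryption and zero-knowledge parts of the protocol are not shown.

```python
import numpy as np

def dp_counts(counts, epsilon=1.0, sensitivity=1):
    """Laplace mechanism: add Laplace(sensitivity/epsilon) noise to each
    aggregated count, so that adding or removing one individual changes the
    released output distribution only slightly."""
    rng = np.random.default_rng()
    scale = sensitivity / epsilon
    noisy = counts + rng.laplace(0.0, scale, size=len(counts))
    return np.clip(np.round(noisy), 0, None)   # rounding/clipping is post-processing

region_counts = np.array([120, 45, 8, 0])   # toy per-region aggregates
print(dp_counts(region_counts, epsilon=0.5))
```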
2302.00868
Hojeong Lee
Hojeong Lee, Minseon Gwak, Kawon Lee, Minjeong Kim, Joseph Konan and Ojas Bhargave
Speech Enhancement for Virtual Meetings on Cellular Networks
null
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
We study speech enhancement using deep learning (DL) for virtual meetings on cellular devices, where the transmitted speech has background noise and transmission loss that affect speech quality. Since the Deep Noise Suppression (DNS) Challenge dataset does not contain such practical disturbances, we collect a transmitted DNS (t-DNS) dataset using Zoom Meetings over the T-Mobile network. We select two baseline models: Demucs and FullSubNet. Demucs is an end-to-end model that takes time-domain inputs and outputs time-domain denoised speech, while FullSubNet takes time-frequency-domain inputs and outputs the energy ratio of the target speech in the inputs. The goal of this project is to enhance the speech transmitted over cellular networks using deep learning models.
[ { "created": "Thu, 2 Feb 2023 04:35:48 GMT", "version": "v1" }, { "created": "Thu, 16 Feb 2023 17:12:35 GMT", "version": "v2" } ]
2023-02-17
[ [ "Lee", "Hojeong", "" ], [ "Gwak", "Minseon", "" ], [ "Lee", "Kawon", "" ], [ "Kim", "Minjeong", "" ], [ "Konan", "Joseph", "" ], [ "Bhargave", "Ojas", "" ] ]
We study speech enhancement using deep learning (DL) for virtual meetings on cellular devices, where the transmitted speech has background noise and transmission loss that affect speech quality. Since the Deep Noise Suppression (DNS) Challenge dataset does not contain such practical disturbances, we collect a transmitted DNS (t-DNS) dataset using Zoom Meetings over the T-Mobile network. We select two baseline models: Demucs and FullSubNet. Demucs is an end-to-end model that takes time-domain inputs and outputs time-domain denoised speech, while FullSubNet takes time-frequency-domain inputs and outputs the energy ratio of the target speech in the inputs. The goal of this project is to enhance the speech transmitted over cellular networks using deep learning models.
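The energy-ratio target mentioned for FullSubNet can be illustrated by constructing an ideal-ratio-mask training target from a clean/noise pair in the time-frequency domain. The sketch below uses scipy's STFT on toy signals; exact mask definitions vary between papers, so treat this as one common formulation rather than the paper's.

```python
import numpy as np
from scipy.signal import stft

def ideal_ratio_mask(clean, noise, fs=16000, nperseg=512):
    """Energy-ratio training target: the fraction of each time-frequency
    bin's energy that belongs to the clean speech, in [0, 1]."""
    _, _, S = stft(clean, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    ps, pn = np.abs(S) ** 2, np.abs(N) ** 2
    return ps / (ps + pn + 1e-10)

fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)                          # toy "speech"
noise = 0.3 * np.random.default_rng(0).standard_normal(fs)   # toy noise
mask = ideal_ratio_mask(clean, noise, fs)
print(mask.shape, mask.min(), mask.max())   # (freq_bins, frames), values in [0, 1]
```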
1801.08329
Siddique Latif
Muhammad Usman, Siddique Latif and Junaid Qadir
Using Deep Autoencoders for Facial Expression Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature descriptors involved in image processing are generally manually chosen and high-dimensional in nature. Selecting the most important features is a crucial task for systems like facial expression recognition. This paper investigates the performance of deep autoencoders for feature selection and dimension reduction for facial expression recognition across multiple depths of hidden layers. The features extracted from the stacked autoencoder outperformed those obtained with other state-of-the-art feature selection and dimension reduction techniques.
[ { "created": "Thu, 25 Jan 2018 10:06:01 GMT", "version": "v1" } ]
2018-01-26
[ [ "Usman", "Muhammad", "" ], [ "Latif", "Siddique", "" ], [ "Qadir", "Junaid", "" ] ]
Feature descriptors involved in image processing are generally manually chosen and high-dimensional in nature. Selecting the most important features is a crucial task for systems like facial expression recognition. This paper investigates the performance of deep autoencoders for feature selection and dimension reduction for facial expression recognition across multiple depths of hidden layers. The features extracted from the stacked autoencoder outperformed those obtained with other state-of-the-art feature selection and dimension reduction techniques.
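A minimal stacked autoencoder of the kind investigated here can be sketched in a few lines of PyTorch: train on reconstruction, then use the bottleneck codes as low-dimensional features. The layer sizes and training loop below are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class StackedAE(nn.Module):
    """Stacked autoencoder: the encoder compresses a high-dimensional
    descriptor; after reconstruction training, the bottleneck codes serve
    as reduced features for a downstream classifier."""
    def __init__(self, d_in=2048, dims=(512, 128, 32)):
        super().__init__()
        enc, dec, prev = [], [], d_in
        for d in dims:
            enc += [nn.Linear(prev, d), nn.ReLU()]
            prev = d
        for d in reversed((d_in,) + dims[:-1]):
            dec += [nn.Linear(prev, d), nn.ReLU()]
            prev = d
        dec[-1] = nn.Identity()            # linear output layer
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = StackedAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 2048)                 # stand-in for image descriptors
for _ in range(5):                         # a few reconstruction steps
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()
_, codes = model(x)
print(codes.shape)                         # torch.Size([256, 32])
```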
2002.00718
Fabio Cermelli
Fabio Cermelli, Massimiliano Mancini, Samuel Rota Bul\`o, Elisa Ricci, Barbara Caputo
Modeling the Background for Incremental Learning in Semantic Segmentation
Accepted at CVPR 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite their effectiveness in a wide range of tasks, deep architectures suffer from some important limitations. In particular, they are vulnerable to catastrophic forgetting, i.e. they perform poorly when they are required to update their model as new classes are available but the original training set is not retained. This paper addresses this problem in the context of semantic segmentation. Current strategies fail on this task because they do not consider a peculiar aspect of semantic segmentation: since each training step provides annotation only for a subset of all possible classes, pixels of the background class (i.e. pixels that do not belong to any other classes) exhibit a semantic distribution shift. In this work we revisit classical incremental learning methods, proposing a new distillation-based framework which explicitly accounts for this shift. Furthermore, we introduce a novel strategy to initialize the classifier's parameters, thus preventing biased predictions toward the background class. We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC 2012 and ADE20K datasets, significantly outperforming state-of-the-art incremental learning methods.
[ { "created": "Mon, 3 Feb 2020 13:30:38 GMT", "version": "v1" }, { "created": "Mon, 30 Mar 2020 14:01:26 GMT", "version": "v2" } ]
2020-03-31
[ [ "Cermelli", "Fabio", "" ], [ "Mancini", "Massimiliano", "" ], [ "Bulò", "Samuel Rota", "" ], [ "Ricci", "Elisa", "" ], [ "Caputo", "Barbara", "" ] ]
Despite their effectiveness in a wide range of tasks, deep architectures suffer from some important limitations. In particular, they are vulnerable to catastrophic forgetting, i.e. they perform poorly when they are required to update their model as new classes are available but the original training set is not retained. This paper addresses this problem in the context of semantic segmentation. Current strategies fail on this task because they do not consider a peculiar aspect of semantic segmentation: since each training step provides annotation only for a subset of all possible classes, pixels of the background class (i.e. pixels that do not belong to any other classes) exhibit a semantic distribution shift. In this work we revisit classical incremental learning methods, proposing a new distillation-based framework which explicitly accounts for this shift. Furthermore, we introduce a novel strategy to initialize the classifier's parameters, thus preventing biased predictions toward the background class. We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC 2012 and ADE20K datasets, significantly outperforming state-of-the-art incremental learning methods.
2101.07314
Takuma Yagi
Takuma Yagi, Takumi Nishiyasu, Kunimasa Kawasaki, Moe Matsuki, Yoichi Sato
GO-Finder: A Registration-Free Wearable System for Assisting Users in Finding Lost Objects via Hand-Held Object Discovery
13 pages, 13 figures, ACM IUI 2021
null
10.1145/3397481.3450664
null
cs.HC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
People spend an enormous amount of time and effort looking for lost objects. To help remind people of the location of lost objects, various computational systems that provide information on their locations have been developed. However, prior systems for assisting people in finding objects require users to register the target objects in advance. This requirement imposes a cumbersome burden on the users, and the system cannot help remind them of unexpectedly lost objects. We propose GO-Finder ("Generic Object Finder"), a registration-free, wearable-camera-based system for assisting people in finding an arbitrary number of objects, based on two key features: automatic discovery of hand-held objects and image-based candidate selection. Given a video taken from a wearable camera, GO-Finder automatically detects and groups hand-held objects to form a visual timeline of the objects. Users can retrieve the last appearance of an object by browsing the timeline through a smartphone app. We conducted a user study to investigate how users benefit from using GO-Finder and confirmed improved accuracy and reduced mental load on the object search task, owing to the clear visual cues on object locations.
[ { "created": "Mon, 18 Jan 2021 20:04:56 GMT", "version": "v1" }, { "created": "Fri, 12 Feb 2021 11:16:44 GMT", "version": "v2" } ]
2021-02-15
[ [ "Yagi", "Takuma", "" ], [ "Nishiyasu", "Takumi", "" ], [ "Kawasaki", "Kunimasa", "" ], [ "Matsuki", "Moe", "" ], [ "Sato", "Yoichi", "" ] ]
People spend an enormous amount of time and effort looking for lost objects. To help remind people of the location of lost objects, various computational systems that provide information on their locations have been developed. However, prior systems for assisting people in finding objects require users to register the target objects in advance. This requirement imposes a cumbersome burden on the users, and the system cannot help remind them of unexpectedly lost objects. We propose GO-Finder ("Generic Object Finder"), a registration-free, wearable-camera-based system for assisting people in finding an arbitrary number of objects, based on two key features: automatic discovery of hand-held objects and image-based candidate selection. Given a video taken from a wearable camera, GO-Finder automatically detects and groups hand-held objects to form a visual timeline of the objects. Users can retrieve the last appearance of an object by browsing the timeline through a smartphone app. We conducted a user study to investigate how users benefit from using GO-Finder and confirmed improved accuracy and reduced mental load on the object search task, owing to the clear visual cues on object locations.
2101.04035
Jesse Josua Benjamin
Jesse Josua Benjamin, Arne Berger, Nick Merrill, James Pierce
Machine Learning Uncertainty as a Design Material: A Post-Phenomenological Inquiry
Accepted to ACM 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021)
null
10.1145/3411764.3445481
null
cs.HC cs.CY cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Design research is important for understanding and interrogating how emerging technologies shape human experience. However, design research with Machine Learning (ML) is relatively underdeveloped. Crucially, designers have yet to come to grips with ML uncertainty as a design opportunity rather than an obstacle. The technical literature points to data and model uncertainties as two main properties of ML. Through post-phenomenology, we position uncertainty as one defining material attribute of ML processes which mediate human experience. To understand ML uncertainty as a design material, we investigate four design research case studies involving ML. We derive three provocative concepts: thingly uncertainty: ML-driven artefacts have uncertain, variable relations to their environments; pattern leakage: ML uncertainty can lead to patterns shaping the world they are meant to represent; and futures creep: ML technologies texture human relations to time with uncertainty. Finally, we outline design research trajectories and sketch a post-phenomenological approach to human-ML relations.
[ { "created": "Mon, 11 Jan 2021 17:11:19 GMT", "version": "v1" } ]
2021-10-19
[ [ "Benjamin", "Jesse Josua", "" ], [ "Berger", "Arne", "" ], [ "Merrill", "Nick", "" ], [ "Pierce", "James", "" ] ]
Design research is important for understanding and interrogating how emerging technologies shape human experience. However, design research with Machine Learning (ML) is relatively underdeveloped. Crucially, designers have yet to come to grips with ML uncertainty as a design opportunity rather than an obstacle. The technical literature points to data and model uncertainties as two main properties of ML. Through post-phenomenology, we position uncertainty as one defining material attribute of ML processes which mediate human experience. To understand ML uncertainty as a design material, we investigate four design research case studies involving ML. We derive three provocative concepts: thingly uncertainty: ML-driven artefacts have uncertain, variable relations to their environments; pattern leakage: ML uncertainty can lead to patterns shaping the world they are meant to represent; and futures creep: ML technologies texture human relations to time with uncertainty. Finally, we outline design research trajectories and sketch a post-phenomenological approach to human-ML relations.
1106.5489
Gilberto Pastorello
Gilberto Z. Pastorello, G. Arturo Sanchez-Azofeifa and Mario A. Nascimento
A Review of the Enviro-Net Project
v2: 29 pages, 5 figures, reflects changes addressing reviewers' comments v1: 38 pages, 8 figures
G. Z. Pastorello, G. A. Sanchez-Azofeifa, M. A. Nascimento. Enviro-Net: From Networks of Ground-Based Sensor Systems to a Web Platform for Sensor Data Management. Sensors. 2011; 11(6):6454-6479
10.3390/s110606454
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecosystems monitoring is essential to properly understand their development and the effects of events, both climatological and anthropological in nature. The amount of data used in these assessments is increasing at very high rates, due to the increasing availability of sensing systems and the development of new techniques to analyze sensor data. The Enviro-Net Project encompasses several such sensor system deployments across five countries in the Americas. These deployments use a few different ground-based sensor systems, installed at different heights, monitoring the conditions in tropical dry forests over long periods of time. This paper presents our experience in deploying and maintaining these systems, retrieving and pre-processing the data, and describes the Web portal developed to help with data management, visualization and analysis.
[ { "created": "Mon, 27 Jun 2011 18:55:39 GMT", "version": "v1" }, { "created": "Thu, 30 Jun 2011 18:44:43 GMT", "version": "v2" } ]
2011-07-01
[ [ "Pastorello", "Gilberto Z.", "" ], [ "Sanchez-Azofeifa", "G. Arturo", "" ], [ "Nascimento", "Mario A.", "" ] ]
Ecosystems monitoring is essential to properly understand their development and the effects of events, both climatological and anthropological in nature. The amount of data used in these assessments is increasing at very high rates, due to the increasing availability of sensing systems and the development of new techniques to analyze sensor data. The Enviro-Net Project encompasses several such sensor system deployments across five countries in the Americas. These deployments use a few different ground-based sensor systems, installed at different heights, monitoring the conditions in tropical dry forests over long periods of time. This paper presents our experience in deploying and maintaining these systems, retrieving and pre-processing the data, and describes the Web portal developed to help with data management, visualization and analysis.
2407.11253
Xinling Yu
Xinling Yu, Sean Hooten, Ziyue Liu, Yequan Zhao, Marco Fiorentino, Thomas Van Vaerenbergh, Zheng Zhang
Separable Operator Networks
SepONet version 2. This revised version polishes writing and open sources code. The initial version was submitted to arXiv on July 15, 2024
null
null
null
cs.LG cs.CE
http://creativecommons.org/licenses/by-sa/4.0/
Operator learning has become a powerful tool in machine learning for modeling complex physical systems governed by partial differential equations (PDEs). Although Deep Operator Networks (DeepONet) show promise, they require extensive data acquisition. Physics-informed DeepONets (PI-DeepONet) mitigate data scarcity but suffer from inefficient training processes. We introduce Separable Operator Networks (SepONet), a novel framework that significantly enhances the efficiency of physics-informed operator learning. SepONet uses independent trunk networks to learn basis functions separately for different coordinate axes, enabling faster and more memory-efficient training via forward-mode automatic differentiation. We provide a universal approximation theorem for SepONet proving that it generalizes to arbitrary operator learning problems, and then validate its performance through comprehensive benchmarking against PI-DeepONet. Our results demonstrate SepONet's superior performance across various nonlinear and inseparable PDEs, with SepONet's advantages increasing with problem complexity, dimension, and scale. For 1D time-dependent PDEs, SepONet achieves up to $112\times$ faster training and $82\times$ reduction in GPU memory usage compared to PI-DeepONet, while maintaining comparable accuracy. For the 2D time-dependent nonlinear diffusion equation, SepONet efficiently handles the complexity, achieving a 6.44\% mean relative $\ell_{2}$ test error, while PI-DeepONet fails due to memory constraints. This work paves the way for extreme-scale learning of continuous mappings between infinite-dimensional function spaces. Open source code is available at \url{https://github.com/HewlettPackard/separable-operator-networks}.
[ { "created": "Mon, 15 Jul 2024 21:43:41 GMT", "version": "v1" }, { "created": "Tue, 13 Aug 2024 07:08:49 GMT", "version": "v2" } ]
2024-08-14
[ [ "Yu", "Xinling", "" ], [ "Hooten", "Sean", "" ], [ "Liu", "Ziyue", "" ], [ "Zhao", "Yequan", "" ], [ "Fiorentino", "Marco", "" ], [ "Van Vaerenbergh", "Thomas", "" ], [ "Zhang", "Zheng", "" ] ]
Operator learning has become a powerful tool in machine learning for modeling complex physical systems governed by partial differential equations (PDEs). Although Deep Operator Networks (DeepONet) show promise, they require extensive data acquisition. Physics-informed DeepONets (PI-DeepONet) mitigate data scarcity but suffer from inefficient training processes. We introduce Separable Operator Networks (SepONet), a novel framework that significantly enhances the efficiency of physics-informed operator learning. SepONet uses independent trunk networks to learn basis functions separately for different coordinate axes, enabling faster and more memory-efficient training via forward-mode automatic differentiation. We provide a universal approximation theorem for SepONet proving that it generalizes to arbitrary operator learning problems, and then validate its performance through comprehensive benchmarking against PI-DeepONet. Our results demonstrate SepONet's superior performance across various nonlinear and inseparable PDEs, with SepONet's advantages increasing with problem complexity, dimension, and scale. For 1D time-dependent PDEs, SepONet achieves up to $112\times$ faster training and $82\times$ reduction in GPU memory usage compared to PI-DeepONet, while maintaining comparable accuracy. For the 2D time-dependent nonlinear diffusion equation, SepONet efficiently handles the complexity, achieving a 6.44\% mean relative $\ell_{2}$ test error, while PI-DeepONet fails due to memory constraints. This work paves the way for extreme-scale learning of continuous mappings between infinite-dimensional function spaces. Open source code is available at \url{https://github.com/HewlettPackard/separable-operator-networks}.
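The separability idea described in the abstract — independent trunk networks producing per-axis basis functions that are combined multiplicatively — can be sketched compactly. The rank, layer sizes, and the omission of the branch network below are assumptions for illustration, not the paper's architecture.

```python
import torch

# Sketch of the separable combination: per-axis trunk networks produce r
# basis functions each, and the output on the (x, t) grid is the rank-r sum
# u(x, t) = sum_k f_k(x) g_k(t). Sizes here are illustrative only.

r = 16
trunk_x = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, r))
trunk_t = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, r))

x = torch.linspace(0, 1, 128).unsqueeze(-1)    # 128 spatial points
t = torch.linspace(0, 1, 100).unsqueeze(-1)    # 100 time points
u = torch.einsum("xk,tk->xt", trunk_x(x), trunk_t(t))
print(u.shape)  # torch.Size([128, 100]) from evaluations at 128 + 100 points,
                # rather than at all 12,800 grid points
```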
1006.3159
Alexandre Chapoutot
Olivier Bouissou (LMeASI), Yassamine Seladji (LMeASI), Alexandre Chapoutot (LIP6)
Abstract Fixpoint Computations with Numerical Acceleration Methods
null
Electronic Notes in Theoretical Computer Science (2010) 29-42
10.1016/j.entcs.2010.09.004
null
cs.PL cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Static analysis by abstract interpretation aims at automatically proving properties of computer programs. To do this, an over-approximation of program semantics, defined as the least fixpoint of a system of semantic equations, must be computed. To enforce the convergence of this computation, a widening operator is used, but it may lead to coarse results. We propose a new method to accelerate the computation of this fixpoint by using standard techniques of numerical analysis. Our goal is to automatically and dynamically adapt the widening operator in order to maintain precision.
[ { "created": "Wed, 16 Jun 2010 08:39:12 GMT", "version": "v1" } ]
2013-05-02
[ [ "Bouissou", "Olivier", "", "LMeASI" ], [ "Seladji", "Yassamine", "", "LMeASI" ], [ "Chapoutot", "Alexandre", "", "LIP6" ] ]
Static analysis by abstract interpretation aims at automatically proving properties of computer programs. To do this, an over-approximation of program semantics, defined as the least fixpoint of a system of semantic equations, must be computed. To enforce the convergence of this computation, a widening operator is used, but it may lead to coarse results. We propose a new method to accelerate the computation of this fixpoint by using standard techniques of numerical analysis. Our goal is to automatically and dynamically adapt the widening operator in order to maintain precision.
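For readers unfamiliar with widening, the sketch below runs the classic interval-domain widening on the loop `x = 0; while x < 100: x = x + 1`: plain Kleene iteration would need 100 steps, while widening converges in two but yields the coarse invariant [0, +inf) — exactly the precision loss that motivates the paper. The paper's numerical acceleration technique itself is not shown.

```python
# Interval-domain fixpoint for `x = 0; while x < 100: x = x + 1`.
# One abstract loop iteration maps [l, u] to [0, 0] joined with
# (([l, u] meet [-inf, 99]) + 1).

INF = float("inf")

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def f(iv):                           # abstract semantics of one iteration
    l, u = iv
    body = (l + 1, min(u, 99) + 1)   # guard x < 100, then x += 1
    return join((0, 0), body)

def widen(old, new):                 # standard interval widening
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

x = (0, 0)
while True:
    nxt = widen(x, f(x))
    if nxt == x:
        break
    x = nxt
print(x)   # (0, inf): sound but coarse; the true invariant is [0, 100]
```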
2003.02227
Hsiu-Chin Lin
Hsiu-Chin Lin and Michael Mistry
Contact Surface Estimation via Haptic Perception
Accepted for publications in IEEE International Conference on Robotics and Automation, 2020
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Legged systems need to optimize contact forces in order to maintain contacts. For this, the controller needs knowledge of the surface geometry and of how slippery the terrain is. A vision system can be used to perceive the terrain, but the accuracy of a vision system degrades in harsh weather, and it cannot see the terrain if it is covered with water or grass. Also, the degree of friction cannot be directly visualized. In this paper, we propose an online method to estimate the surface information via haptic exploration. We also introduce a probabilistic criterion to measure the quality of the estimation. The method is validated both in simulation and on a real robot platform.
[ { "created": "Wed, 4 Mar 2020 18:15:05 GMT", "version": "v1" } ]
2020-03-05
[ [ "Lin", "Hsiu-Chin", "" ], [ "Mistry", "Michael", "" ] ]
Legged systems need to optimize contact forces in order to maintain contacts. For this, the controller needs knowledge of the surface geometry and of how slippery the terrain is. A vision system can be used to perceive the terrain, but the accuracy of a vision system degrades in harsh weather, and it cannot see the terrain if it is covered with water or grass. Also, the degree of friction cannot be directly visualized. In this paper, we propose an online method to estimate the surface information via haptic exploration. We also introduce a probabilistic criterion to measure the quality of the estimation. The method is validated both in simulation and on a real robot platform.
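As a simple stand-in for the kind of estimation described, the sketch below fits a plane to probed contact points by least squares and uses the residual variance as a crude quality measure; the paper's actual surface model and probabilistic criterion are not reproduced here.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through probed contact points,
    returning the coefficients and the residual variance (a crude stand-in
    for a probabilistic quality criterion)."""
    P = np.asarray(points, dtype=float)
    A = np.c_[P[:, 0], P[:, 1], np.ones(len(P))]
    coef, *_ = np.linalg.lstsq(A, P[:, 2], rcond=None)
    residuals = P[:, 2] - A @ coef
    return coef, residuals.var()

# Probed contacts on a gently sloped surface with sensor noise (toy data).
rng = np.random.default_rng(1)
xy = rng.uniform(-0.2, 0.2, size=(30, 2))
z = 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + 0.3 + 0.001 * rng.standard_normal(30)
coef, var = fit_plane(np.c_[xy, z])
print(coef, var)   # roughly [0.1, -0.05, 0.3] with small residual variance
```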
2311.14838
Nikolay Bogoychev Dr
Nikolay Bogoychev, Jelmer van der Linde, Graeme Nail, Barry Haddow, Jaume Zaragoza-Bernabeu, Gema Ram\'irez-S\'anchez, Lukas Weymann, Tudor Nicolae Mateiu, Jind\v{r}ich Helcl, Mikko Aulamo
OpusCleaner and OpusTrainer, open source toolkits for training Machine Translation and Large language models
Code on Github: https://github.com/hplt-project/OpusCleaner and https://github.com/hplt-project/OpusTrainer
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Developing high-quality machine translation systems is a labour-intensive, challenging and confusing process for newcomers to the field. We present a pair of tools, OpusCleaner and OpusTrainer, that aim to simplify the process, reduce the amount of work and lower the entry barrier for newcomers. OpusCleaner is a data downloading, cleaning, and preprocessing toolkit. It is designed to allow researchers to quickly download, visualise and preprocess bilingual (or monolingual) data that come from many different sources, each of them with different quality, issues, and unique filtering/preprocessing requirements. OpusTrainer is a data scheduling and data augmentation tool aimed at building large-scale, robust machine translation systems and large language models. It features deterministic data mixing from many different sources, on-the-fly data augmentation and more. Using these tools, we showcase how to create high-quality machine translation models that are robust to noisy user input, as well as multilingual models and terminology-aware models.
[ { "created": "Fri, 24 Nov 2023 20:24:00 GMT", "version": "v1" } ]
2023-11-28
[ [ "Bogoychev", "Nikolay", "" ], [ "van der Linde", "Jelmer", "" ], [ "Nail", "Graeme", "" ], [ "Haddow", "Barry", "" ], [ "Zaragoza-Bernabeu", "Jaume", "" ], [ "Ramírez-Sánchez", "Gema", "" ], [ "Weymann", "Lukas", "" ], [ "Mateiu", "Tudor Nicolae", "" ], [ "Helcl", "Jindřich", "" ], [ "Aulamo", "Mikko", "" ] ]
Developing high-quality machine translation systems is a labour-intensive, challenging and confusing process for newcomers to the field. We present a pair of tools, OpusCleaner and OpusTrainer, that aim to simplify the process, reduce the amount of work and lower the entry barrier for newcomers. OpusCleaner is a data downloading, cleaning, and preprocessing toolkit. It is designed to allow researchers to quickly download, visualise and preprocess bilingual (or monolingual) data that come from many different sources, each of them with different quality, issues, and unique filtering/preprocessing requirements. OpusTrainer is a data scheduling and data augmentation tool aimed at building large-scale, robust machine translation systems and large language models. It features deterministic data mixing from many different sources, on-the-fly data augmentation and more. Using these tools, we showcase how to create high-quality machine translation models that are robust to noisy user input, as well as multilingual models and terminology-aware models.
2406.06618
Shuo Yu
Shuo Yu, Feng Xia, Yueru Wang, Shihao Li, Falih Febrinanto, Madhu Chetty
PANDORA: Deep graph learning based COVID-19 infection risk level forecasting
null
null
null
null
cs.SI cs.AI cs.CY cs.LG physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COVID-19, as a global pandemic, has caused massive disruption to social stability, threatening human life and the economy. Policymakers and all elements of society must deliver measurable actions based on the pandemic's severity to minimize the detrimental impact of COVID-19. A proper forecasting system is arguably important to provide an early signal of the risk of COVID-19 infection so that the authorities are ready to protect the people from the worst. However, making a good forecasting model for infection risks in different cities or regions is not an easy task, because it involves many influential factors that are difficult to identify manually. To address the current limitations, we propose a deep graph learning model, called PANDORA, to predict the infection risks of COVID-19 by considering all essential factors and integrating them into a geographical network. The framework uses geographical position relations and transportation frequency as higher-order structural properties formulated by higher-order network structures (i.e., network motifs). Moreover, four significant node attributes (i.e., multiple features of a particular area, including climate, medical condition, economy, and human mobility) are also considered. We propose three different aggregators to better aggregate node attributes and structural features, namely Hadamard, Summation, and Connection. Experimental results on real data show that PANDORA outperforms the baseline method with higher accuracy and faster convergence speed, no matter which aggregator is chosen. We believe that PANDORA, using deep graph learning, provides a promising approach to achieving superior performance in infection risk level forecasting and helping humans battle the COVID-19 crisis.
[ { "created": "Fri, 7 Jun 2024 07:27:22 GMT", "version": "v1" } ]
2024-06-12
[ [ "Yu", "Shuo", "" ], [ "Xia", "Feng", "" ], [ "Wang", "Yueru", "" ], [ "Li", "Shihao", "" ], [ "Febrinanto", "Falih", "" ], [ "Chetty", "Madhu", "" ] ]
COVID-19, as a global pandemic, has caused massive disruption to social stability, threatening human life and the economy. Policymakers and all elements of society must deliver measurable actions based on the pandemic's severity to minimize the detrimental impact of COVID-19. A proper forecasting system is arguably important to provide an early signal of the risk of COVID-19 infection so that the authorities are ready to protect the people from the worst. However, making a good forecasting model for infection risks in different cities or regions is not an easy task, because it involves many influential factors that are difficult to identify manually. To address the current limitations, we propose a deep graph learning model, called PANDORA, to predict the infection risks of COVID-19 by considering all essential factors and integrating them into a geographical network. The framework uses geographical position relations and transportation frequency as higher-order structural properties formulated by higher-order network structures (i.e., network motifs). Moreover, four significant node attributes (i.e., multiple features of a particular area, including climate, medical condition, economy, and human mobility) are also considered. We propose three different aggregators to better aggregate node attributes and structural features, namely Hadamard, Summation, and Connection. Experimental results on real data show that PANDORA outperforms the baseline method with higher accuracy and faster convergence speed, no matter which aggregator is chosen. We believe that PANDORA, using deep graph learning, provides a promising approach to achieving superior performance in infection risk level forecasting and helping humans battle the COVID-19 crisis.
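The abstract names three aggregators for combining node attributes with structural features. The following is a hedged sketch of plausible realizations of the three operations (taking "Connection" to mean concatenation followed by a projection); the embedding sizes and the projection matrix are our own assumptions, not the paper's.

```python
import numpy as np

# Hedged sketch of the three aggregators named in the abstract, combining a
# node-attribute embedding with a structural (motif-based) embedding.
def hadamard(attr, struct):
    return attr * struct                  # element-wise product

def summation(attr, struct):
    return attr + struct                  # element-wise sum

def connection(attr, struct, w):
    return np.concatenate([attr, struct]) @ w  # concat, then project back

d = 8
rng = np.random.default_rng(0)
attr_emb = rng.normal(size=d)             # e.g. climate/economy/mobility features
struct_emb = rng.normal(size=d)           # e.g. motif-based structural features
w = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)

for name, vec in [("hadamard", hadamard(attr_emb, struct_emb)),
                  ("summation", summation(attr_emb, struct_emb)),
                  ("connection", connection(attr_emb, struct_emb, w))]:
    print(name, vec.shape)
```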
1603.03495
Pengfei Lu
Pengfei Lu, Zhenqiang Wu
Continuous Molecular Communication in one dimensional situation
4 pages, 6 figures; this paper is accepted by ISEEE 2016
null
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular communication is one of the most promising methods for enabling communication at the nano scale, since it is derived from nature, and it is becoming more and more prevalent. Although molecular communication usually takes place in three dimensions, there are also situations that are effectively one dimensional, especially when the transmitters and receivers are separated by an extremely short distance or placed in a long, slim pipe. In this paper, we introduce the one-dimensional situation and study how continuously emitted information molecules are transmitted in it, including how the information molecules are encoded and decoded. Based on the molecular communication model, we study several of its key parameters, such as the distance between transmitter and receiver and the emitting frequency of the transmitter. Through this research, we find that the distance and the emitting frequency are important factors for successful communication, and they can guide how nano transmitters and receivers should be placed in future nano network environments.
[ { "created": "Fri, 11 Mar 2016 00:40:01 GMT", "version": "v1" } ]
2016-03-14
[ [ "Lu", "Pengfei", "" ], [ "Wu", "Zhenqiang", "" ] ]
Molecular communication is one of the most promising methods for enabling communication at the nano scale, since it is derived from nature, and it is becoming more and more prevalent. Although molecular communication usually takes place in three dimensions, there are also situations that are effectively one dimensional, especially when the transmitters and receivers are separated by an extremely short distance or placed in a long, slim pipe. In this paper, we introduce the one-dimensional situation and study how continuously emitted information molecules are transmitted in it, including how the information molecules are encoded and decoded. Based on the molecular communication model, we study several of its key parameters, such as the distance between transmitter and receiver and the emitting frequency of the transmitter. Through this research, we find that the distance and the emitting frequency are important factors for successful communication, and they can guide how nano transmitters and receivers should be placed in future nano network environments.
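A worked example under a standard assumption: if the channel follows free one-dimensional diffusion from an impulsive point release (the usual textbook model; the paper's exact channel model may differ), the concentration at distance $x$ and time $t$ is $c(x,t) = \frac{Q}{\sqrt{4\pi D t}} \exp(-x^2/(4Dt))$, which peaks at $t = x^2/(2D)$. The sketch below shows how strongly the peak arrival time depends on distance, which is why distance and emitting frequency must be chosen together.

```python
import numpy as np

# Standard 1-D diffusion channel model (our assumption): concentration at
# distance x and time t after an impulsive release of Q molecules.
def concentration(Q, D, x, t):
    return Q / np.sqrt(4.0 * np.pi * D * t) * np.exp(-x**2 / (4.0 * D * t))

Q = 1e4      # released molecules
D = 1e-10    # diffusion coefficient [m^2/s], a typical small-molecule value
x = 1e-5     # transmitter-receiver distance [m]

t_peak = x**2 / (2.0 * D)   # time of peak concentration in 1-D
print(f"peak arrival time: {t_peak:.2f} s")
print(f"peak concentration: {concentration(Q, D, x, t_peak):.3e} molecules/m")

# The peak time scales with distance squared, so doubling the distance
# quadruples the delay and constrains the usable emission frequency.
for scale in (1.0, 2.0, 4.0):
    print(f"x*{scale:.0f}: t_peak = {(scale * x)**2 / (2 * D):.1f} s")
```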
2004.14979
Vered Shwartz
Yehudit Meged, Avi Caciularu, Vered Shwartz, Ido Dagan
Paraphrasing vs Coreferring: Two Sides of the Same Coin
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the potential synergy between two different NLP tasks, both confronting predicate lexical variability: identifying predicate paraphrases, and event coreference resolution. First, we used annotations from an event coreference dataset as distant supervision to re-score heuristically-extracted predicate paraphrases. The new scoring gained more than 18 points in average precision upon their ranking by the original scoring method. Then, we used the same re-ranking features as additional inputs to a state-of-the-art event coreference resolution model, which yielded modest but consistent improvements to the model's performance. The results suggest a promising direction to leverage data and models for each of the tasks to the benefit of the other.
[ { "created": "Thu, 30 Apr 2020 17:29:17 GMT", "version": "v1" }, { "created": "Fri, 9 Oct 2020 16:48:36 GMT", "version": "v2" } ]
2020-10-12
[ [ "Meged", "Yehudit", "" ], [ "Caciularu", "Avi", "" ], [ "Shwartz", "Vered", "" ], [ "Dagan", "Ido", "" ] ]
We study the potential synergy between two different NLP tasks, both confronting predicate lexical variability: identifying predicate paraphrases, and event coreference resolution. First, we used annotations from an event coreference dataset as distant supervision to re-score heuristically-extracted predicate paraphrases. The new scoring gained more than 18 points in average precision upon their ranking by the original scoring method. Then, we used the same re-ranking features as additional inputs to a state-of-the-art event coreference resolution model, which yielded modest but consistent improvements to the model's performance. The results suggest a promising direction to leverage data and models for each of the tasks to the benefit of the other.
2406.05141
Quentin Japhet
Quentin Japhet (DAVID), Dimitri Watel (IP Paris, SAMOVAR, SOP - SAMOVAR, ENSIIE), Dominique Barth (DAVID), Marc-Antoine Weisser (GALaC)
Maximal Line Digraphs
null
null
null
null
cs.DM cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A line digraph $L(G) = (A, E)$ is the digraph constructed from the digraph $G = (V, A)$ such that there is an arc $(a,b)$ in $L(G)$ if the terminal node of $a$ in $G$ is the initial node of $b$. The maximum number of arcs in a line digraph with $m$ nodes is $(m/2)^2 + (m/2)$ if $m$ is even, and $((m - 1)/2)^2 + m - 1$ otherwise. For $m \geq 7$, there is only one line digraph with as many arcs if $m$ is even, and if $m$ is odd, there are two line digraphs, each being the transpose of the other.
[ { "created": "Mon, 27 May 2024 08:45:34 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2024 07:00:29 GMT", "version": "v2" } ]
2024-06-13
[ [ "Japhet", "Quentin", "", "DAVID" ], [ "Watel", "Dimitri", "", "IP Paris, SAMOVAR, SOP -\n SAMOVAR, ENSIIE" ], [ "Barth", "Dominique", "", "DAVID" ], [ "Weisser", "Marc-Antoine", "", "GALaC" ] ]
A line digraph $L(G) = (A, E)$ is the digraph constructed from the digraph $G = (V, A)$ such that there is an arc $(a,b)$ in $L(G)$ if the terminal node of $a$ in $G$ is the initial node of $b$. The maximum number of arcs in a line digraph with $m$ nodes is $(m/2)^2 + (m/2)$ if $m$ is even, and $((m - 1)/2)^2 + m - 1$ otherwise. For $m \geq 7$, there is only one line digraph with as many arcs if $m$ is even, and if $m$ is odd, there are two line digraphs, each being the transpose of the other.
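A minimal sketch of the construction defined above, where the nodes of $L(G)$ are the arcs of $G$ and there is an arc $(a, b)$ whenever the head of $a$ equals the tail of $b$; representing the graph as an arc list is our own choice.

```python
# Build the line digraph L(G) of a digraph G given as a list of arcs.
def line_digraph(arcs):
    nodes = list(arcs)
    edges = [(a, b) for a in nodes for b in nodes
             if a[1] == b[0]]  # head of a == tail of b
    return nodes, edges

# Example: a directed triangle 0 -> 1 -> 2 -> 0.
G = [(0, 1), (1, 2), (2, 0)]
nodes, edges = line_digraph(G)
print("L(G) nodes:", nodes)
print("L(G) arcs :", edges)
```

For $m = 3$ nodes this small example already attains the odd-case bound $((m-1)/2)^2 + m - 1 = 3$ arcs.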
1605.00372
Ching-Chi Lin
Ching-Chi Lin and Cheng-Yu Hsieh
A Linear-Time Algorithm for the Weighted Paired-Domination Problem on Block Graphs
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a graph $G = (V,E)$, a vertex subset $S\subseteq V(G)$ is said to be a dominating set of $G$ if every vertex not in $S$ is adjacent to a vertex in $S$. A dominating set $S$ of $G$ is called a paired-dominating set of $G$ if the induced subgraph $G[S]$ contains a perfect matching. In this paper, we propose an $O(n+m)$-time algorithm for the weighted paired-domination problem on block graphs using dynamic programming, which strengthens the results in [Theoret. Comput. Sci., 410(47--49):5063--5071, 2009] and [J. Comb. Optim., 19(4):457--470, 2010]. Moreover, the algorithm can be completed in $O(n)$ time if the block-cut-vertex structure of $G$ is given.
[ { "created": "Mon, 2 May 2016 07:19:17 GMT", "version": "v1" } ]
2016-05-03
[ [ "Lin", "Ching-Chi", "" ], [ "Hsieh", "Cheng-Yu", "" ] ]
In a graph $G = (V,E)$, a vertex subset $S\subseteq V(G)$ is said to be a dominating set of $G$ if every vertex not in $S$ is adjacent to a vertex in $S$. A dominating set $S$ of $G$ is called a paired-dominating set of $G$ if the induced subgraph $G[S]$ contains a perfect matching. In this paper, we propose an $O(n+m)$-time algorithm for the weighted paired-domination problem on block graphs using dynamic programming, which strengthens the results in [Theoret. Comput. Sci., 410(47--49):5063--5071, 2009] and [J. Comb. Optim., 19(4):457--470, 2010]. Moreover, the algorithm can be completed in $O(n)$ time if the block-cut-vertex structure of $G$ is given.
1903.01180
Huanneng Qiu
Huanneng Qiu, Matthew Garratt, David Howard and Sreenatha Anavatti
Evolving Spiking Neural Networks for Nonlinear Control Problems
conference: ssci 2018
null
10.1109/SSCI.2018.8628848
null
cs.NE cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking Neural Networks are powerful computational modelling tools that have attracted much interest because of the bioinspired modelling of synaptic interactions between neurons. Most of the research employing spiking neurons has been non-behavioural and discontinuous. Comparatively, this paper presents a recurrent spiking controller that is capable of solving nonlinear control problems in continuous domains using a popular topology evolution algorithm as the learning mechanism. We propose two mechanisms necessary to the decoding of continuous signals from discrete spike transmission: (i) a background current component to maintain frequency sufficiency for spike rate decoding, and (ii) a general network structure that derives strength from topology evolution. We demonstrate that the proposed spiking controller can learn significantly faster to discover functional solutions than sigmoidal neural networks in solving a classic nonlinear control problem.
[ { "created": "Mon, 4 Mar 2019 11:30:53 GMT", "version": "v1" } ]
2019-03-05
[ [ "Qiu", "Huanneng", "" ], [ "Garratt", "Matthew", "" ], [ "Howard", "David", "" ], [ "Anavatti", "Sreenatha", "" ] ]
Spiking Neural Networks are powerful computational modelling tools that have attracted much interest because of the bioinspired modelling of synaptic interactions between neurons. Most of the research employing spiking neurons has been non-behavioural and discontinuous. Comparatively, this paper presents a recurrent spiking controller that is capable of solving nonlinear control problems in continuous domains using a popular topology evolution algorithm as the learning mechanism. We propose two mechanisms necessary to the decoding of continuous signals from discrete spike transmission: (i) a background current component to maintain frequency sufficiency for spike rate decoding, and (ii) a general network structure that derives strength from topology evolution. We demonstrate that the proposed spiking controller can learn significantly faster to discover functional solutions than sigmoidal neural networks in solving a classic nonlinear control problem.
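A hedged sketch of mechanism (i) from the abstract: with a constant background current, a leaky integrate-and-fire neuron keeps spiking even for weak inputs, so a continuous control signal can be decoded from its firing rate. The neuron model and all constants below are illustrative assumptions, not the paper's exact network.

```python
# Sketch: a background current maintains frequency sufficiency so that a
# continuous signal can be decoded from a LIF neuron's spike rate.
def lif_rate(input_current, i_background=1.5, tau=20e-3, v_th=1.0,
             dt=1e-3, t_sim=1.0):
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += (-v + input_current + i_background) * dt / tau
        if v >= v_th:   # spike and reset
            spikes += 1
            v = 0.0
    return spikes / t_sim  # firing rate in Hz

# Without the background current a weak input never reaches threshold, making
# rate decoding impossible; with it, the rate varies smoothly with the input.
for i_in in (0.0, 0.5, 1.0):
    print(f"input {i_in:.1f}: {lif_rate(i_in):.0f} Hz (with background), "
          f"{lif_rate(i_in, i_background=0.0):.0f} Hz (without)")
```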
1704.07699
Lucia Ballerini
Lucia Ballerini, Ruggiero Lovreglio, Maria del C. Valdes-Hernandez, Joel Ramirez, Bradley J. MacIntosh, Sandra E. Black and Joanna M. Wardlaw
Perivascular Spaces Segmentation in Brain MRI Using Optimal 3D Filtering
null
null
10.1038/s41598-018-19781-5
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Perivascular Spaces (PVS) are a recently recognised feature of Small Vessel Disease (SVD), also indicating neuroinflammation, and are an important part of the brain's circulation and glymphatic drainage system. Quantitative analysis of PVS on Magnetic Resonance Images (MRI) is important for understanding their relationship with neurological diseases. In this work, we propose a segmentation technique based on 3D Frangi filtering for the extraction of PVS from MRI. Based on prior knowledge from neuroradiological ratings of PVS, we used ordered logit models to optimise Frangi filter parameters in response to the variability in the scanner's parameters and study protocols. We optimised and validated our proposed models on two independent cohorts, a dementia sample (N=20) and patients who previously had mild to moderate stroke (N=48). Results demonstrate the robustness and generalisability of our segmentation method. Segmentation-based PVS burden estimates correlated with neuroradiological assessments (Spearman's $\rho$ = 0.74, p $<$ 0.001), suggesting the great potential of our proposed method.
[ { "created": "Tue, 25 Apr 2017 14:02:06 GMT", "version": "v1" } ]
2018-04-13
[ [ "Ballerini", "Lucia", "" ], [ "Lovreglio", "Ruggiero", "" ], [ "Valdes-Hernandez", "Maria del C.", "" ], [ "Ramirez", "Joel", "" ], [ "MacIntosh", "Bradley J.", "" ], [ "Black", "Sandra E.", "" ], [ "Wardlaw", "Joanna M.", "" ] ]
Perivascular Spaces (PVS) are a recently recognised feature of Small Vessel Disease (SVD), also indicating neuroinflammation, and are an important part of the brain's circulation and glymphatic drainage system. Quantitative analysis of PVS on Magnetic Resonance Images (MRI) is important for understanding their relationship with neurological diseases. In this work, we propose a segmentation technique based on 3D Frangi filtering for the extraction of PVS from MRI. Based on prior knowledge from neuroradiological ratings of PVS, we used ordered logit models to optimise Frangi filter parameters in response to the variability in the scanner's parameters and study protocols. We optimised and validated our proposed models on two independent cohorts, a dementia sample (N=20) and patients who previously had mild to moderate stroke (N=48). Results demonstrate the robustness and generalisability of our segmentation method. Segmentation-based PVS burden estimates correlated with neuroradiological assessments (Spearman's $\rho$ = 0.74, p $<$ 0.001), suggesting the great potential of our proposed method.
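A hedged sketch of the core filtering step using scikit-image's frangi implementation on a synthetic 3-D volume; the paper's ordered-logit optimisation of the filter parameters is not reproduced, and the parameter values and threshold below are illustrative assumptions.

```python
import numpy as np
from skimage.filters import frangi

# Synthetic volume containing one thin bright tubular structure plus noise,
# standing in for a PVS on a T2-weighted scan.
volume = np.zeros((32, 32, 32), dtype=float)
volume[16, 16, 4:28] = 1.0
volume += 0.05 * np.random.default_rng(0).normal(size=volume.shape)

vesselness = frangi(
    volume,
    sigmas=np.arange(0.5, 2.5, 0.5),  # scales matched to expected PVS width
    alpha=0.5, beta=0.5,              # plate/blob sensitivity terms
    black_ridges=False,               # PVS appear bright on T2-weighted MRI
)
mask = vesselness > 0.1 * vesselness.max()  # crude threshold for the sketch
print("voxels flagged as tubular:", int(mask.sum()))
```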
2209.00865
Lemeng Wu
Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, Qiang Liu
Diffusion-based Molecule Generation with Informative Prior Bridges
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
AI-based molecule generation provides a promising approach to a large area of biomedical sciences and engineering, such as antibody design, hydrolase engineering, or vaccine development. Because the molecules are governed by physical laws, a key challenge is to incorporate prior information into the training procedure to generate high-quality and realistic molecules. We propose a simple and novel approach to steer the training of diffusion-based generative models with physical and statistical prior information. This is achieved by constructing physically informed diffusion bridges, stochastic processes that are guaranteed to yield a given observation at the fixed terminal time. We develop a Lyapunov function based method to construct and determine bridges, and propose a number of informative prior bridges for both high-quality molecule generation and uniformity-promoted 3D point cloud generation. With comprehensive experiments, we show that our method provides a powerful approach to the 3D generation task, yielding molecule structures with better quality and stability scores and more uniformly distributed point clouds of high quality.
[ { "created": "Fri, 2 Sep 2022 07:52:06 GMT", "version": "v1" } ]
2022-09-05
[ [ "Wu", "Lemeng", "" ], [ "Gong", "Chengyue", "" ], [ "Liu", "Xingchao", "" ], [ "Ye", "Mao", "" ], [ "Liu", "Qiang", "" ] ]
AI-based molecule generation provides a promising approach to a large area of biomedical sciences and engineering, such as antibody design, hydrolase engineering, or vaccine development. Because the molecules are governed by physical laws, a key challenge is to incorporate prior information into the training procedure to generate high-quality and realistic molecules. We propose a simple and novel approach to steer the training of diffusion-based generative models with physical and statistical prior information. This is achieved by constructing physically informed diffusion bridges, stochastic processes that are guaranteed to yield a given observation at the fixed terminal time. We develop a Lyapunov function based method to construct and determine bridges, and propose a number of informative prior bridges for both high-quality molecule generation and uniformity-promoted 3D point cloud generation. With comprehensive experiments, we show that our method provides a powerful approach to the 3D generation task, yielding molecule structures with better quality and stability scores and more uniformly distributed point clouds of high quality.
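A minimal sketch of the pinning mechanism behind diffusion bridges, using the simplest instance, a Brownian bridge, whose drift $(y - X_t)/(T - t)$ guarantees the process hits the observation $y$ at terminal time $T$; the paper's physically informed, Lyapunov-based bridges are richer than this.

```python
import numpy as np

# Euler-Maruyama simulation of a Brownian bridge pinned to y at time T.
def brownian_bridge(x0, y, T=1.0, n_steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = k * dt
        drift = (y - x[k]) / (T - t)  # pulls the path toward y
        x[k + 1] = x[k] + drift * dt + np.sqrt(dt) * rng.normal()
    return x

path = brownian_bridge(x0=0.0, y=2.0)
print(f"start {path[0]:.3f}, end {path[-1]:.3f} (pinned near y = 2.0)")
```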
2304.05451
Eduardo Noboro Tominaga
Eduardo Noboro Tominaga, Onel Luiz Alcaraz L\'opez, Hirley Alves, Richard Demo Souza and Leonardo Ter\c{c}as
Performance Analysis of Centralized and Distributed Massive MIMO for MTC
6 pages, 8 figures. Paper accepted for presentation at the 2023 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit), Gothenburg, Sweden, 2023
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Massive Multiple-Input Multiple-Output (mMIMO) is one of the essential technologies introduced by the Fifth Generation (5G) of wireless communication systems. However, although mMIMO provides many benefits for wireless communications, it cannot ensure uniform wireless coverage and suffers from inter-cell interference inherent to the traditional cellular network paradigm. Therefore, industry and academia are working on the evolution from conventional Centralized mMIMO (CmMIMO) to Distributed mMIMO (DmMIMO) architectures for the Sixth Generation (6G) of wireless networks. Under this new paradigm, several Access Points (APs) are distributed in the coverage area, and all jointly cooperate to serve the active devices. Aiming at Machine-Type Communication (MTC) use cases, we compare the performance of CmMIMO and different DmMIMO deployments in an indoor industrial scenario considering regular and alarm traffic patterns for MTC. Our simulation results show that DmMIMO's performance is often superior to CmMIMO. However, the traditional CmMIMO can outperform DmMIMO when the devices' channels are highly correlated.
[ { "created": "Tue, 11 Apr 2023 18:51:16 GMT", "version": "v1" } ]
2023-04-13
[ [ "Tominaga", "Eduardo Noboro", "" ], [ "López", "Onel Luiz Alcaraz", "" ], [ "Alves", "Hirley", "" ], [ "Souza", "Richard Demo", "" ], [ "Terças", "Leonardo", "" ] ]
Massive Multiple-Input Multiple-Output (mMIMO) is one of the essential technologies introduced by the Fifth Generation (5G) of wireless communication systems. However, although mMIMO provides many benefits for wireless communications, it cannot ensure uniform wireless coverage and suffers from inter-cell interference inherent to the traditional cellular network paradigm. Therefore, industry and academia are working on the evolution from conventional Centralized mMIMO (CmMIMO) to Distributed mMIMO (DmMIMO) architectures for the Sixth Generation (6G) of wireless networks. Under this new paradigm, several Access Points (APs) are distributed in the coverage area, and all jointly cooperate to serve the active devices. Aiming at Machine-Type Communication (MTC) use cases, we compare the performance of CmMIMO and different DmMIMO deployments in an indoor industrial scenario considering regular and alarm traffic patterns for MTC. Our simulation results show that DmMIMO's performance is often superior to CmMIMO. However, the traditional CmMIMO can outperform DmMIMO when the devices' channels are highly correlated.
1703.09825
Areej Alhothali
Areej Alhothali and Jesse Hoey
Semi-Supervised Affective Meaning Lexicon Expansion Using Semantic and Distributed Word Representations
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose an extension to graph-based sentiment lexicon induction methods by incorporating distributed and semantic word representations in building the similarity graph to expand a three-dimensional sentiment lexicon. We also implemented and evaluated the label propagation using four different word representations and similarity metrics. Our comprehensive evaluation of the four approaches was performed on a single data set, demonstrating that all four methods can generate a significant number of new sentiment assignments with high accuracy. The highest correlations (tau=0.51) and the lowest error (mean absolute error < 1.1%), obtained by combining both the semantic and the distributional features, outperformed the distributional-based and semantic-based label-propagation models and approached a supervised algorithm.
[ { "created": "Tue, 28 Mar 2017 22:05:20 GMT", "version": "v1" } ]
2017-03-30
[ [ "Alhothali", "Areej", "" ], [ "Hoey", "Jesse", "" ] ]
In this paper, we propose an extension to graph-based sentiment lexicon induction methods by incorporating distributed and semantic word representations in building the similarity graph to expand a three-dimensional sentiment lexicon. We also implemented and evaluated the label propagation using four different word representations and similarity metrics. Our comprehensive evaluation of the four approaches was performed on a single data set, demonstrating that all four methods can generate a significant number of new sentiment assignments with high accuracy. The highest correlations (tau=0.51) and the lowest error (mean absolute error < 1.1%), obtained by combining both the semantic and the distributional features, outperformed the distributional-based and semantic-based label-propagation models and approached a supervised algorithm.
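A hedged sketch of the graph-based propagation step: seed words carry known affect scores, a similarity graph is built from word-vector cosines (one of the representations compared in the paper), and scores diffuse to unlabeled words while seeds stay clamped. The toy vectors, scores, and hyperparameters are assumptions.

```python
import numpy as np

# Label propagation over a similarity graph: seed scores are clamped each
# iteration while the rest of the graph absorbs diffused scores.
def propagate(similarity, scores, labeled_mask, alpha=0.9, n_iter=50):
    W = similarity / similarity.sum(axis=1, keepdims=True)  # row-normalize
    f = scores.copy()
    for _ in range(n_iter):
        f = alpha * W @ f
        f[labeled_mask] = scores[labeled_mask]  # clamp seed labels
    return f

rng = np.random.default_rng(0)
vecs = rng.normal(size=(6, 10))                 # toy word embeddings
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
sim = np.clip(vecs @ vecs.T, 0.0, None) + 1e-9  # nonnegative cosine similarity

scores = np.zeros(6)
labeled = np.zeros(6, dtype=bool)
scores[0], labeled[0] = +2.0, True              # seed: strongly positive word
scores[1], labeled[1] = -2.0, True              # seed: strongly negative word

print(np.round(propagate(sim, scores, labeled), 3))
```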
2404.14964
Friedemann Zenke
Julia Gygax and Friedemann Zenke
Elucidating the theoretical underpinnings of surrogate gradient learning in spiking neural networks
25 pages, 7 figures + 3 supplementary figures
null
null
null
cs.NE q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Training spiking neural networks to approximate complex functions is essential for studying information processing in the brain and neuromorphic computing. Yet, the binary nature of spikes constitutes a challenge for direct gradient-based training. To sidestep this problem, surrogate gradients have proven empirically successful, but their theoretical foundation remains elusive. Here, we investigate the relation of surrogate gradients to two theoretically well-founded approaches. On the one hand, we consider smoothed probabilistic models, which, due to lack of support for automatic differentiation, are impractical for training deep spiking neural networks, yet provide gradients equivalent to surrogate gradients in single neurons. On the other hand, we examine stochastic automatic differentiation, which is compatible with discrete randomness but has never been applied to spiking neural network training. We find that the latter provides the missing theoretical basis for surrogate gradients in stochastic spiking neural networks. We further show that surrogate gradients in deterministic networks correspond to a particular asymptotic case and numerically confirm the effectiveness of surrogate gradients in stochastic multi-layer spiking neural networks. Finally, we illustrate that surrogate gradients are not conservative fields and, thus, not gradients of a surrogate loss. Our work provides the missing theoretical foundation for surrogate gradients and an analytically well-founded solution for end-to-end training of stochastic spiking neural networks.
[ { "created": "Tue, 23 Apr 2024 12:20:09 GMT", "version": "v1" }, { "created": "Thu, 6 Jun 2024 12:48:18 GMT", "version": "v2" } ]
2024-06-07
[ [ "Gygax", "Julia", "" ], [ "Zenke", "Friedemann", "" ] ]
Training spiking neural networks to approximate complex functions is essential for studying information processing in the brain and neuromorphic computing. Yet, the binary nature of spikes constitutes a challenge for direct gradient-based training. To sidestep this problem, surrogate gradients have proven empirically successful, but their theoretical foundation remains elusive. Here, we investigate the relation of surrogate gradients to two theoretically well-founded approaches. On the one hand, we consider smoothed probabilistic models, which, due to lack of support for automatic differentiation, are impractical for training deep spiking neural networks, yet provide gradients equivalent to surrogate gradients in single neurons. On the other hand, we examine stochastic automatic differentiation, which is compatible with discrete randomness but has never been applied to spiking neural network training. We find that the latter provides the missing theoretical basis for surrogate gradients in stochastic spiking neural networks. We further show that surrogate gradients in deterministic networks correspond to a particular asymptotic case and numerically confirm the effectiveness of surrogate gradients in stochastic multi-layer spiking neural networks. Finally, we illustrate that surrogate gradients are not conservative fields and, thus, not gradients of a surrogate loss. Our work provides the missing theoretical foundation for surrogate gradients and an analytically well-founded solution for end-to-end training of stochastic spiking neural networks.
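For concreteness, the standard surrogate-gradient construction this literature builds on: a hard threshold in the forward pass, with a smooth pseudo-derivative substituted in the backward pass. The fast-sigmoid shape and the steepness beta below are common choices, stated here as assumptions rather than the paper's exact setup.

```python
import torch

# Heaviside spike forward, fast-sigmoid surrogate derivative backward.
class SurrGradSpike(torch.autograd.Function):
    beta = 10.0  # steepness of the surrogate (assumption)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # binary spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrGradSpike.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate  # smooth pseudo-derivative

spike_fn = SurrGradSpike.apply
v = torch.randn(5, requires_grad=True)
spikes = spike_fn(v)
spikes.sum().backward()
print("spikes:", spikes.tolist())
print("surrogate grads:", v.grad.tolist())
```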
2312.15636
Feng Zhou
Feng Zhou, Jianqin Yin, Peiyang Li
Lifting by Image -- Leveraging Image Cues for Accurate 3D Human Pose Estimation
Accepted by AAAI24
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The "lifting from 2D pose" method has been the dominant approach to 3D Human Pose Estimation (3DHPE) due to the powerful visual analysis ability of 2D pose estimators. As is widely known, there exists a depth ambiguity problem when estimating solely from 2D pose, where one 2D pose can be mapped to multiple 3D poses. Intuitively, the rich semantic and texture information in images can contribute to a more accurate "lifting" procedure. Yet, existing research encounters two primary challenges. Firstly, the distribution of image data in 3D motion capture datasets is too narrow because of the laboratory capture environment, which leads to the poor generalization ability of methods trained with image information. Secondly, effective strategies for leveraging image information are lacking. In this paper, we give new insight into the cause of the poor generalization problem and the effectiveness of image features. Based on that, we propose an advanced framework. Specifically, the framework consists of two stages. First, we enable the keypoints to query and select the beneficial features from all image patches. To reduce the keypoints' attention to inconsequential background features, we design a novel Pose-guided Transformer Layer, which adaptively limits the updates to unimportant image patches. Then, through a designed Adaptive Feature Selection Module, we prune less significant image patches from the feature map. In the second stage, we allow the keypoints to further emphasize the retained critical image features. This progressive learning approach prevents further training on insignificant image features. Experimental results show that our model achieves state-of-the-art performance on both the Human3.6M dataset and the MPI-INF-3DHP dataset.
[ { "created": "Mon, 25 Dec 2023 07:50:58 GMT", "version": "v1" } ]
2023-12-27
[ [ "Zhou", "Feng", "" ], [ "Yin", "Jianqin", "" ], [ "Li", "Peiyang", "" ] ]
The "lifting from 2D pose" method has been the dominant approach to 3D Human Pose Estimation (3DHPE) due to the powerful visual analysis ability of 2D pose estimators. As is widely known, there exists a depth ambiguity problem when estimating solely from 2D pose, where one 2D pose can be mapped to multiple 3D poses. Intuitively, the rich semantic and texture information in images can contribute to a more accurate "lifting" procedure. Yet, existing research encounters two primary challenges. Firstly, the distribution of image data in 3D motion capture datasets is too narrow because of the laboratory capture environment, which leads to the poor generalization ability of methods trained with image information. Secondly, effective strategies for leveraging image information are lacking. In this paper, we give new insight into the cause of the poor generalization problem and the effectiveness of image features. Based on that, we propose an advanced framework. Specifically, the framework consists of two stages. First, we enable the keypoints to query and select the beneficial features from all image patches. To reduce the keypoints' attention to inconsequential background features, we design a novel Pose-guided Transformer Layer, which adaptively limits the updates to unimportant image patches. Then, through a designed Adaptive Feature Selection Module, we prune less significant image patches from the feature map. In the second stage, we allow the keypoints to further emphasize the retained critical image features. This progressive learning approach prevents further training on insignificant image features. Experimental results show that our model achieves state-of-the-art performance on both the Human3.6M dataset and the MPI-INF-3DHP dataset.
2205.00308
Yelena Mejova
Yelena Mejova, Jisun An, Gianmarco De Francisci Morales, Haewoon Kwak
Modeling Political Activism around Gun Debate via Social Media
null
ACM Transactions on Social Computing. 2022
10.1145/3532102
null
cs.SI cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
The United States has some of the highest rates of gun violence among developed countries. Yet, there is disagreement about the extent to which firearms should be regulated. In this study, we employ social media signals to examine the predictors of offline political activism, at both the population and individual levels. We show that it is possible to classify the stance of users on the gun issue, especially accurately when network information is available. Alongside socioeconomic variables, network information such as the relative size of the two sides of the debate is also predictive of state-level gun policy. At the individual level, we build a statistical model using network, content, and psycho-linguistic features that predicts real-life political action, and explore the most predictive linguistic features. Thus, we argue that, alongside demographics and socioeconomic indicators, social media provides useful signals for the holistic modeling of political engagement around the gun debate.
[ { "created": "Sat, 30 Apr 2022 17:17:08 GMT", "version": "v1" } ]
2022-05-03
[ [ "Mejova", "Yelena", "" ], [ "An", "Jisun", "" ], [ "Morales", "Gianmarco De Francisci", "" ], [ "Kwak", "Haewoon", "" ] ]
The United States has some of the highest rates of gun violence among developed countries. Yet, there is disagreement about the extent to which firearms should be regulated. In this study, we employ social media signals to examine the predictors of offline political activism, at both the population and individual levels. We show that it is possible to classify the stance of users on the gun issue, especially accurately when network information is available. Alongside socioeconomic variables, network information such as the relative size of the two sides of the debate is also predictive of state-level gun policy. At the individual level, we build a statistical model using network, content, and psycho-linguistic features that predicts real-life political action, and explore the most predictive linguistic features. Thus, we argue that, alongside demographics and socioeconomic indicators, social media provides useful signals for the holistic modeling of political engagement around the gun debate.
2102.13340
Parvaneh Asghari
Parvaneh Asghari and Seyyed Hamid Haj Seyyed Javadi
Lightweight Key-Dependent Dynamic S-Boxes based on Hyperelliptic Curve for IoT Devices
in Persian language
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Security is one of the main issues in the Internet of Things (IoT). Encryption plays a crucial role in making these systems secure. The Substitution Box (S-Box) has a significant impact on block encryption methods. Due to the restricted resource capacities of IoT nodes, providing a lightweight S-Box is a challenging problem. This paper presents a key-dependent S-Box based on a hyperelliptic curve. The proposed S-Box is analytically evaluated using performance criteria including bijection, nonlinearity, the strict avalanche effect, and algebraic degree. The evaluation results confirm that the proposed S-Box generation algorithm is an effective way to generate cryptographically strong S-Boxes.
[ { "created": "Fri, 26 Feb 2021 07:38:43 GMT", "version": "v1" } ]
2021-03-01
[ [ "Asghari", "Parvaneh", "" ], [ "Javadi", "Seyyed Hamid Haj Seyyed", "" ] ]
Security is one of the main issues in the Internet of Things (IoT). Encryption plays a crucial role in making these systems secure. The Substitution Box (S-Box) has a significant impact on block encryption methods. Due to the restricted resource capacities of IoT nodes, providing a lightweight S-Box is a challenging problem. This paper presents a key-dependent S-Box based on a hyperelliptic curve. The proposed S-Box is analytically evaluated using performance criteria including bijection, nonlinearity, the strict avalanche effect, and algebraic degree. The evaluation results confirm that the proposed S-Box generation algorithm is an effective way to generate cryptographically strong S-Boxes.
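A hedged sketch of two of the criteria named in the abstract, bijectivity and nonlinearity (via the Walsh transform), applied to a toy 4-bit S-Box; the S-Box values below are illustrative and are not derived from a hyperelliptic curve.

```python
# Bijectivity and nonlinearity checks for a toy 4-bit S-Box.
SBOX = [0x6, 0xB, 0x5, 0x4, 0x2, 0xE, 0x7, 0xA,
        0x9, 0xD, 0xF, 0xC, 0x3, 0x1, 0x0, 0x8]
N = 4

def is_bijective(sbox):
    return sorted(sbox) == list(range(len(sbox)))

def nonlinearity(sbox, n):
    # NL(S) = 2^(n-1) - (1/2) * max |Walsh(a, b)| over nonzero output masks b.
    max_walsh = 0
    for b in range(1, 2 ** n):       # nonzero output masks
        for a in range(2 ** n):      # all input masks
            walsh = sum(
                (-1) ** (bin(b & sbox[x]).count("1")
                         ^ bin(a & x).count("1"))
                for x in range(2 ** n)
            )
            max_walsh = max(max_walsh, abs(walsh))
    return 2 ** (n - 1) - max_walsh // 2

print("bijective:", is_bijective(SBOX))
print("nonlinearity:", nonlinearity(SBOX, N))
```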
2407.18715
Peng Hao
Peng Hao, Xiaobing Wang, Yingying Jiang, Hanchao Jia, Xiaoshuai Hao
BCTR: Bidirectional Conditioning Transformer for Scene Graph Generation
9 pages, 3 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Scene Graph Generation (SGG) remains a challenging task due to its compositional property. Previous approaches improve prediction efficiency by learning in an end-to-end manner. However, these methods exhibit limited performance as they assume unidirectional conditioning between entities and predicates, leading to insufficient information interaction. To address this limitation, we propose a novel bidirectional conditioning factorization for SGG, introducing efficient interaction between entities and predicates. Specifically, we develop an end-to-end scene graph generation model, Bidirectional Conditioning Transformer (BCTR), to implement our factorization. BCTR consists of two key modules. First, the Bidirectional Conditioning Generator (BCG) facilitates multi-stage interactive feature augmentation between entities and predicates, enabling mutual benefits between the two predictions. Second, Random Feature Alignment (RFA) regularizes the feature space by distilling multi-modal knowledge from pre-trained models, enhancing BCTR's ability on tailed categories without relying on statistical priors. We conduct a series of experiments on Visual Genome and Open Image V6, demonstrating that BCTR achieves state-of-the-art performance on both benchmarks. The code will be available upon acceptance of the paper.
[ { "created": "Fri, 26 Jul 2024 13:02:48 GMT", "version": "v1" } ]
2024-07-29
[ [ "Hao", "Peng", "" ], [ "Wang", "Xiaobing", "" ], [ "Jiang", "Yingying", "" ], [ "Jia", "Hanchao", "" ], [ "Hao", "Xiaoshuai", "" ] ]
Scene Graph Generation (SGG) remains a challenging task due to its compositional property. Previous approaches improve prediction efficiency by learning in an end-to-end manner. However, these methods exhibit limited performance as they assume unidirectional conditioning between entities and predicates, leading to insufficient information interaction. To address this limitation, we propose a novel bidirectional conditioning factorization for SGG, introducing efficient interaction between entities and predicates. Specifically, we develop an end-to-end scene graph generation model, Bidirectional Conditioning Transformer (BCTR), to implement our factorization. BCTR consists of two key modules. First, the Bidirectional Conditioning Generator (BCG) facilitates multi-stage interactive feature augmentation between entities and predicates, enabling mutual benefits between the two predictions. Second, Random Feature Alignment (RFA) regularizes the feature space by distilling multi-modal knowledge from pre-trained models, enhancing BCTR's ability on tailed categories without relying on statistical priors. We conduct a series of experiments on Visual Genome and Open Image V6, demonstrating that BCTR achieves state-of-the-art performance on both benchmarks. The code will be available upon acceptance of the paper.
2403.17447
Yingtao Shen
Yingtao Shen, Minqing Sun, Jie Zhao, An Zou
Chain of Compression: A Systematic Approach to Combinationally Compress Convolutional Neural Networks
10 pages, 15 figures
null
null
null
cs.LG cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNNs) have achieved significant popularity, but their computational and memory intensity poses challenges for resource-constrained computing systems, particularly with the prerequisite of real-time performance. To relieve this burden, model compression has become an important research focus. Many approaches like quantization, pruning, early exit, and knowledge distillation have demonstrated the effect of reducing redundancy in neural networks. Upon closer examination, it becomes apparent that each approach capitalizes on its unique features to compress the neural network, and they can also exhibit complementary behavior when combined. To explore the interactions and reap the benefits from the complementary features, we propose the Chain of Compression, which works on the combinational sequence to apply these common techniques to compress the neural network. Validated on image-based regression and classification networks across different data sets, our proposed Chain of Compression can significantly compress the computation cost by 100-1000 times with negligible accuracy loss compared with the baseline model.
[ { "created": "Tue, 26 Mar 2024 07:26:00 GMT", "version": "v1" } ]
2024-03-27
[ [ "Shen", "Yingtao", "" ], [ "Sun", "Minqing", "" ], [ "Zhao", "Jie", "" ], [ "Zou", "An", "" ] ]
Convolutional neural networks (CNNs) have achieved significant popularity, but their computational and memory intensity poses challenges for resource-constrained computing systems, particularly with the prerequisite of real-time performance. To relieve this burden, model compression has become an important research focus. Many approaches like quantization, pruning, early exit, and knowledge distillation have demonstrated the effect of reducing redundancy in neural networks. Upon closer examination, it becomes apparent that each approach capitalizes on its unique features to compress the neural network, and they can also exhibit complementary behavior when combined. To explore the interactions and reap the benefits from the complementary features, we propose the Chain of Compression, which works on the combinational sequence to apply these common techniques to compress the neural network. Validated on image-based regression and classification networks across different data sets, our proposed Chain of Compression can significantly compress the computation cost by 100-1000 times with negligible accuracy loss compared with the baseline model.
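A hedged sketch of applying two of the listed techniques in sequence (magnitude pruning, then post-training dynamic quantization) to a toy PyTorch model; the paper's search over combinational sequences and its accuracy accounting are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for a real network.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

# Step 1: magnitude pruning of 50% of each linear layer's weights.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# Step 2: post-training dynamic quantization of the (now sparse) linear layers.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
print("output shape:", quantized(x).shape)
```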
2407.18420
Alessio Mansutti
Christoph Haase, Alessio Mansutti, Amaury Pouly
On Polynomial-Time Decidability of k-Negations Fragments of First-Order Theories
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a generic framework that provides sufficient conditions for guaranteeing polynomial-time decidability of fixed-negation fragments of first-order theories that adhere to certain fixed-parameter tractability requirements. It enables deciding sentences of such theories with arbitrary existential quantification, conjunction and a fixed number of negation symbols in polynomial time. It was recently shown by Nguyen and Pak [SIAM J. Comput. 51(2): 1--31 (2022)] that an even more restricted such fragment of Presburger arithmetic (the first-order theory of the integers with addition and order) is NP-hard. In contrast, by application of our framework, we show that the fixed negation fragment of weak Presburger arithmetic, which drops the order relation from Presburger arithmetic in favour of equality, is decidable in polynomial time.
[ { "created": "Thu, 25 Jul 2024 22:41:24 GMT", "version": "v1" } ]
2024-07-29
[ [ "Haase", "Christoph", "" ], [ "Mansutti", "Alessio", "" ], [ "Pouly", "Amaury", "" ] ]
This paper introduces a generic framework that provides sufficient conditions for guaranteeing polynomial-time decidability of fixed-negation fragments of first-order theories that adhere to certain fixed-parameter tractability requirements. It enables deciding sentences of such theories with arbitrary existential quantification, conjunction and a fixed number of negation symbols in polynomial time. It was recently shown by Nguyen and Pak [SIAM J. Comput. 51(2): 1--31 (2022)] that an even more restricted such fragment of Presburger arithmetic (the first-order theory of the integers with addition and order) is NP-hard. In contrast, by application of our framework, we show that the fixed negation fragment of weak Presburger arithmetic, which drops the order relation from Presburger arithmetic in favour of equality, is decidable in polynomial time.
2003.01538
M. G. Sarwar Murshed
Edward Verenich, Alvaro Velasquez, M.G. Sarwar Murshed, Faraz Hussain
FlexServe: Deployment of PyTorch Models as Flexible REST Endpoints
3 pages, 1 figure, conference
null
null
null
cs.DC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The integration of artificial intelligence capabilities into modern software systems is increasingly being simplified through the use of cloud-based machine learning services and representational state transfer architecture design. However, insufficient information regarding underlying model provenance and the lack of control over model evolution serve as an impediment to the more widespread adoption of these services in many operational environments which have strict security requirements. Furthermore, tools such as TensorFlow Serving allow models to be deployed as RESTful endpoints, but require error-prone transformations for PyTorch models, as these use dynamic computational graphs, in contrast to the static computational graphs of TensorFlow. To enable rapid deployments of PyTorch models without intermediate transformations we have developed FlexServe, a simple library to deploy multi-model ensembles with flexible batching.
[ { "created": "Sat, 29 Feb 2020 18:51:09 GMT", "version": "v1" } ]
2020-03-04
[ [ "Verenich", "Edward", "" ], [ "Velasquez", "Alvaro", "" ], [ "Murshed", "M. G. Sarwar", "" ], [ "Hussain", "Faraz", "" ] ]
The integration of artificial intelligence capabilities into modern software systems is increasingly being simplified through the use of cloud-based machine learning services and representational state transfer architecture design. However, insufficient information regarding underlying model provenance and the lack of control over model evolution serve as an impediment to the more widespread adoption of these services in many operational environments which have strict security requirements. Furthermore, tools such as TensorFlow Serving allow models to be deployed as RESTful endpoints, but require error-prone transformations for PyTorch models, as these use dynamic computational graphs, in contrast to the static computational graphs of TensorFlow. To enable rapid deployments of PyTorch models without intermediate transformations we have developed FlexServe, a simple library to deploy multi-model ensembles with flexible batching.
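A minimal Flask sketch of the underlying idea, serving a PyTorch multi-model ensemble directly as a REST endpoint with no graph conversion; this is not the actual FlexServe API, and the models, route, and averaging rule are placeholders.

```python
import torch
import torch.nn as nn
from flask import Flask, jsonify, request

# Two placeholder models forming a toy ensemble, served as-is.
models = [nn.Sequential(nn.Linear(4, 2)) for _ in range(2)]
for m in models:
    m.eval()

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    x = torch.tensor(request.get_json()["inputs"], dtype=torch.float32)
    with torch.no_grad():
        outputs = torch.stack([m(x) for m in models]).mean(dim=0)
    return jsonify({"outputs": outputs.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```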
2102.01540
Sebastian Lamm
Demian Hespe, Sebastian Lamm, Christian Schorr
Targeted Branching for the Maximum Independent Set Problem
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding a maximum independent set is a fundamental NP-hard problem that is used in many real-world applications. Given an unweighted graph, this problem asks for a maximum cardinality set of pairwise non-adjacent vertices. Some of the most successful algorithms for this problem are based on the branch-and-bound or branch-and-reduce paradigms. In particular, branch-and-reduce algorithms, which combine branch-and-bound with reduction rules, achieved substantial results, solving many previously infeasible instances. These results were to a large part achieved by developing new, more practical reduction rules. However, other components that have been shown to have an impact on the performance of these algorithms have not received as much attention. One of these is the branching strategy, which determines what vertex is included or excluded in a potential solution. The most commonly used strategy selects vertices based on their degree and does not take into account other factors that contribute to the performance. In this work, we develop and evaluate several novel branching strategies for both branch-and-bound and branch-and-reduce algorithms. Our strategies are based on one of two approaches. They either (1) aim to decompose the graph into two or more connected components which can then be solved independently, or (2) try to remove vertices that hinder the application of a reduction rule. Our experiments on a large set of real-world instances indicate that our strategies are able to improve the performance of the state-of-the-art branch-and-reduce algorithms. To be more specific, our reduction-based packing branching rule is able to outperform the default branching strategy of selecting a vertex of highest degree on 65% of all instances tested. Furthermore, our decomposition-based strategy based on edge cuts is able to achieve a speedup of 2.29 on sparse networks (1.22 on all instances).
[ { "created": "Tue, 2 Feb 2021 15:09:18 GMT", "version": "v1" }, { "created": "Mon, 29 Mar 2021 14:59:30 GMT", "version": "v2" } ]
2021-03-30
[ [ "Hespe", "Demian", "" ], [ "Lamm", "Sebastian", "" ], [ "Schorr", "Christian", "" ] ]
Finding a maximum independent set is a fundamental NP-hard problem that is used in many real-world applications. Given an unweighted graph, this problem asks for a maximum cardinality set of pairwise non-adjacent vertices. Some of the most successful algorithms for this problem are based on the branch-and-bound or branch-and-reduce paradigms. In particular, branch-and-reduce algorithms, which combine branch-and-bound with reduction rules, achieved substantial results, solving many previously infeasible instances. These results were to a large part achieved by developing new, more practical reduction rules. However, other components that have been shown to have an impact on the performance of these algorithms have not received as much attention. One of these is the branching strategy, which determines what vertex is included or excluded in a potential solution. The most commonly used strategy selects vertices based on their degree and does not take into account other factors that contribute to the performance. In this work, we develop and evaluate several novel branching strategies for both branch-and-bound and branch-and-reduce algorithms. Our strategies are based on one of two approaches. They either (1) aim to decompose the graph into two or more connected components which can then be solved independently, or (2) try to remove vertices that hinder the application of a reduction rule. Our experiments on a large set of real-world instances indicate that our strategies are able to improve the performance of the state-of-the-art branch-and-reduce algorithms. To be more specific, our reduction-based packing branching rule is able to outperform the default branching strategy of selecting a vertex of highest degree on 65% of all instances tested. Furthermore, our decomposition-based strategy based on edge cuts is able to achieve a speedup of 2.29 on sparse networks (1.22 on all instances).
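A minimal sketch of the baseline the paper improves on: branch-and-bound for maximum independent set with the common highest-degree branching rule (include the vertex, or exclude it), plus a trivial upper bound for pruning. Reduction rules and the paper's targeted strategies are omitted.

```python
# Branch-and-bound MIS with highest-degree branching on an adjacency-set graph.
def mis(adj, candidates=None, current=0, best=0):
    if candidates is None:
        candidates = set(adj)
    if not candidates:
        return max(best, current)
    if current + len(candidates) <= best:  # trivial upper bound: prune
        return best
    v = max(candidates, key=lambda u: len(adj[u] & candidates))
    # Branch 1: include v, removing v and its neighbours.
    best = mis(adj, candidates - {v} - adj[v], current + 1, best)
    # Branch 2: exclude v.
    best = mis(adj, candidates - {v}, current, best)
    return best

# 5-cycle: the maximum independent set has size 2.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print("MIS size:", mis(adj))
```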
2105.06209
Yingzhe He
Yingzhe He, Guozhu Meng, Kai Chen, Jinwen He, Xingbo Hu
DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks
16 pages, 10 figures, conference
null
null
null
cs.LG cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine unlearning has great significance in guaranteeing model security and protecting user privacy. Additionally, many legal provisions clearly stipulate that users have the right to demand model providers to delete their own data from training set, that is, the right to be forgotten. The naive way of unlearning data is to retrain the model without it from scratch, which becomes extremely time and resource consuming at the modern scale of deep neural networks. Other unlearning approaches by refactoring model or training data struggle to gain a balance between overhead and model usability. In this paper, we propose an approach, dubbed as DeepObliviate, to implement machine unlearning efficiently, without modifying the normal training mode. Our approach improves the original training process by storing intermediate models on the hard disk. Given a data point to unlearn, we first quantify its temporal residual memory left in stored models. The influenced models will be retrained and we decide when to terminate the retraining based on the trend of residual memory on-the-fly. Last, we stitch an unlearned model by combining the retrained models and uninfluenced models. We extensively evaluate our approach on five datasets and deep learning models. Compared to the method of retraining from scratch, our approach can achieve 99.0%, 95.0%, 91.9%, 96.7%, 74.1% accuracy rates and 66.7$\times$, 75.0$\times$, 33.3$\times$, 29.4$\times$, 13.7$\times$ speedups on the MNIST, SVHN, CIFAR-10, Purchase, and ImageNet datasets, respectively. Compared to the state-of-the-art unlearning approach, we improve 5.8% accuracy, 32.5$\times$ prediction speedup, and reach a comparable retrain speedup under identical settings on average on these datasets. Additionally, DeepObliviate can also pass the backdoor-based unlearning verification.
[ { "created": "Thu, 13 May 2021 12:02:04 GMT", "version": "v1" } ]
2021-05-14
[ [ "He", "Yingzhe", "" ], [ "Meng", "Guozhu", "" ], [ "Chen", "Kai", "" ], [ "He", "Jinwen", "" ], [ "Hu", "Xingbo", "" ] ]
Machine unlearning has great significance in guaranteeing model security and protecting user privacy. Additionally, many legal provisions clearly stipulate that users have the right to demand model providers to delete their own data from training set, that is, the right to be forgotten. The naive way of unlearning data is to retrain the model without it from scratch, which becomes extremely time and resource consuming at the modern scale of deep neural networks. Other unlearning approaches by refactoring model or training data struggle to gain a balance between overhead and model usability. In this paper, we propose an approach, dubbed as DeepObliviate, to implement machine unlearning efficiently, without modifying the normal training mode. Our approach improves the original training process by storing intermediate models on the hard disk. Given a data point to unlearn, we first quantify its temporal residual memory left in stored models. The influenced models will be retrained and we decide when to terminate the retraining based on the trend of residual memory on-the-fly. Last, we stitch an unlearned model by combining the retrained models and uninfluenced models. We extensively evaluate our approach on five datasets and deep learning models. Compared to the method of retraining from scratch, our approach can achieve 99.0%, 95.0%, 91.9%, 96.7%, 74.1% accuracy rates and 66.7$\times$, 75.0$\times$, 33.3$\times$, 29.4$\times$, 13.7$\times$ speedups on the MNIST, SVHN, CIFAR-10, Purchase, and ImageNet datasets, respectively. Compared to the state-of-the-art unlearning approach, we improve 5.8% accuracy, 32.5$\times$ prediction speedup, and reach a comparable retrain speedup under identical settings on average on these datasets. Additionally, DeepObliviate can also pass the backdoor-based unlearning verification.
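A conceptual sketch of the checkpointing idea in the abstract: store the model after each training block, and on an unlearning request retrain only from the first block that contained the deleted point. The training step is a placeholder, and the paper's residual-memory termination criterion and model stitching are omitted.

```python
import copy

# Blockwise training with stored checkpoints; unlearning rolls back to the last
# checkpoint trained without the deleted point and retrains from there.
class BlockwiseTrainer:
    def __init__(self, model, blocks):
        self.init_state = copy.deepcopy(model)
        self.blocks = [list(b) for b in blocks]
        self.checkpoints = []
        for block in self.blocks:
            self._train_one_block(model, block)
            self.checkpoints.append(copy.deepcopy(model))
        self.model = model

    def _train_one_block(self, model, block):
        pass  # placeholder: one pass of real training over `block`

    def unlearn(self, point):
        first = next(i for i, b in enumerate(self.blocks) if point in b)
        self.blocks[first].remove(point)
        model = copy.deepcopy(self.checkpoints[first - 1] if first > 0
                              else self.init_state)
        for i in range(first, len(self.blocks)):
            self._train_one_block(model, self.blocks[i])
            self.checkpoints[i] = copy.deepcopy(model)
        self.model = model

trainer = BlockwiseTrainer(model={"weights": None},
                           blocks=[[1, 2], [3, 4], [5, 6]])
trainer.unlearn(3)
print("blocks after unlearning:", trainer.blocks)
```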
1911.12529
Hwann-Tzong Chen
Ting-I Hsieh, Yi-Chen Lo, Hwann-Tzong Chen, Tyng-Luh Liu
One-Shot Object Detection with Co-Attention and Co-Excitation
NeurIPS 2019
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper aims to tackle the challenging problem of one-shot object detection. Given a query image patch whose class label is not included in the training data, the goal of the task is to detect all instances of the same class in a target image. To this end, we develop a novel {\em co-attention and co-excitation} (CoAE) framework that makes contributions in three key technical aspects. First, we propose to use the non-local operation to explore the co-attention embodied in each query-target pair and yield region proposals accounting for the one-shot situation. Second, we formulate a squeeze-and-co-excitation scheme that can adaptively emphasize correlated feature channels to help uncover relevant proposals and eventually the target objects. Third, we design a margin-based ranking loss for implicitly learning a metric to predict the similarity of a region proposal to the underlying query, regardless of whether its class label is seen or unseen in training. The resulting model is therefore a two-stage detector that yields a strong baseline on both VOC and MS-COCO under the one-shot setting of detecting objects from both seen and never-seen classes. Codes are available at https://github.com/timy90022/One-Shot-Object-Detection.
[ { "created": "Thu, 28 Nov 2019 05:14:23 GMT", "version": "v1" } ]
2019-12-02
[ [ "Hsieh", "Ting-I", "" ], [ "Lo", "Yi-Chen", "" ], [ "Chen", "Hwann-Tzong", "" ], [ "Liu", "Tyng-Luh", "" ] ]
This paper aims to tackle the challenging problem of one-shot object detection. Given a query image patch whose class label is not included in the training data, the goal of the task is to detect all instances of the same class in a target image. To this end, we develop a novel {\em co-attention and co-excitation} (CoAE) framework that makes contributions in three key technical aspects. First, we propose to use the non-local operation to explore the co-attention embodied in each query-target pair and yield region proposals accounting for the one-shot situation. Second, we formulate a squeeze-and-co-excitation scheme that can adaptively emphasize correlated feature channels to help uncover relevant proposals and eventually the target objects. Third, we design a margin-based ranking loss for implicitly learning a metric to predict the similarity of a region proposal to the underlying query, regardless of whether its class label is seen or unseen in training. The resulting model is therefore a two-stage detector that yields a strong baseline on both VOC and MS-COCO under the one-shot setting of detecting objects from both seen and never-seen classes. Code is available at https://github.com/timy90022/One-Shot-Object-Detection.
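As a rough illustration of the non-local co-attention step, the sketch below lets every location of the target feature map attend over all locations of the query patch. It is a single-head, projection-free simplification in numpy, not the CoAE module itself.

    import numpy as np

    def co_attention(target, query):
        # target: [Nt, C] flattened target features; query: [Nq, C] query-patch features
        att = target @ query.T                            # pairwise affinities
        att = np.exp(att - att.max(axis=1, keepdims=True))
        att = att / att.sum(axis=1, keepdims=True)        # softmax over query locations
        return target + att @ query                       # residual non-local update

    enriched = co_attention(np.random.randn(64, 32), np.random.randn(16, 32))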
2010.03661
Jakob Aungiers
Jakob Aungiers
Multivariate Temporal Autoencoder for Predictive Reconstruction of Deep Sequences
6 pages, 6 figures, 2 equations, 3 tables
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Time-series sequence prediction and modelling have proven to be a challenging endeavor on real-world datasets. Two key issues are the multi-dimensionality of the data, where independent dimensions interact to form a latent output signal, and the representation of multi-dimensional temporal data inside a predictive model. This paper proposes a multi-branch deep neural network approach that tackles these problems by modelling a latent state vector representation of data windows with a recurrent autoencoder branch and subsequently feeding the trained latent vector representation into a predictor branch of the model. This model is henceforth referred to as the Multivariate Temporal Autoencoder (MvTAe). The framework in this paper utilizes a synthetic multivariate temporal dataset which contains dimensions that combine to create a hidden output target.
[ { "created": "Wed, 7 Oct 2020 21:25:35 GMT", "version": "v1" } ]
2020-10-09
[ [ "Aungiers", "Jakob", "" ] ]
Time-series sequence prediction and modelling have proven to be a challenging endeavor on real-world datasets. Two key issues are the multi-dimensionality of the data, where independent dimensions interact to form a latent output signal, and the representation of multi-dimensional temporal data inside a predictive model. This paper proposes a multi-branch deep neural network approach that tackles these problems by modelling a latent state vector representation of data windows with a recurrent autoencoder branch and subsequently feeding the trained latent vector representation into a predictor branch of the model. This model is henceforth referred to as the Multivariate Temporal Autoencoder (MvTAe). The framework in this paper utilizes a synthetic multivariate temporal dataset which contains dimensions that combine to create a hidden output target.
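A compact PyTorch sketch of the two-branch layout described above, assuming a hypothetical class name and layer sizes; the real architecture and training details may differ.

    import torch, torch.nn as nn

    class MvTAeSketch(nn.Module):
        def __init__(self, n_dims, latent=32):
            super().__init__()
            self.enc = nn.LSTM(n_dims, latent, batch_first=True)   # autoencoder branch
            self.dec = nn.LSTM(latent, n_dims, batch_first=True)
            self.pred = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, 1))
        def forward(self, x):                      # x: [batch, window, n_dims]
            _, (h, _) = self.enc(x)
            z = h[-1]                              # latent window vector
            recon, _ = self.dec(z.unsqueeze(1).repeat(1, x.size(1), 1))
            return recon, self.pred(z)             # reconstruction + hidden-target estimate

Training would then combine a reconstruction loss on recon with a regression loss on the predictor output.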
2107.06550
Yunlong Lu
Yunlong Lu, Na Meng, Wenxin Li
FAPR: Fast and Accurate Program Repair for Introductory Programming Courses
12 pages, 5 figures for main text; 3 pages, 5 figures for appendix
null
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
In introductory programming courses, it is challenging for instructors to provide debugging feedback on students' incorrect programs. Some recent tools automatically offer program repair feedback by identifying any differences between incorrect and correct programs, but suffer from issues related to scalability, accuracy, and cross-language portability. This paper presents FAPR -- our novel approach that suggests repairs based on program differences in a fast and accurate manner. FAPR is different from current tools in three aspects. First, it encodes syntactic information into token sequences to enable high-speed comparison between incorrect and correct programs. Second, to accurately extract program differences, FAPR adopts a novel matching algorithm that maximizes token-level matches and minimizes statement-level differences. Third, FAPR relies on testing instead of static/dynamic analysis to validate and refine candidate repairs, so it eliminates the language dependency or high runtime overhead incurred by complex program analysis. We implemented FAPR to suggest repairs for both C and C++ programs; our experience shows the great cross-language portability of FAPR. More importantly, we empirically compared FAPR with a state-of-the-art tool Clara. FAPR suggested repairs for over 95.5% of incorrect solutions. We sampled 250 repairs among FAPR's suggestions, and found 89.6% of the samples to be minimal and correct. FAPR outperformed Clara by suggesting repairs for more cases, creating smaller repairs, producing higher-quality fixes, and causing lower runtime overheads. Our results imply that FAPR can potentially help instructors or TAs to effectively locate bugs in incorrect code, and to provide debugging hints/guidelines based on those generated repairs.
[ { "created": "Wed, 14 Jul 2021 08:34:39 GMT", "version": "v1" } ]
2021-07-15
[ [ "Lu", "Yunlong", "" ], [ "Meng", "Na", "" ], [ "Li", "Wenxin", "" ] ]
In introductory programming courses, it is challenging for instructors to provide debugging feedback on students' incorrect programs. Some recent tools automatically offer program repair feedback by identifying any differences between incorrect and correct programs, but suffer from issues related to scalability, accuracy, and cross-language portability. This paper presents FAPR -- our novel approach that suggests repairs based on program differences in a fast and accurate manner. FAPR is different from current tools in three aspects. First, it encodes syntactic information into token sequences to enable high-speed comparison between incorrect and correct programs. Second, to accurately extract program differences, FAPR adopts a novel matching algorithm that maximizes token-level matches and minimizes statement-level differences. Third, FAPR relies on testing instead of static/dynamic analysis to validate and refine candidate repairs, so it eliminates the language dependency or high runtime overhead incurred by complex program analysis. We implemented FAPR to suggest repairs for both C and C++ programs; our experience shows the great cross-language portability of FAPR. More importantly, we empirically compared FAPR with a state-of-the-art tool Clara. FAPR suggested repairs for over 95.5% of incorrect solutions. We sampled 250 repairs among FAPR's suggestions, and found 89.6% of the samples to be minimal and correct. FAPR outperformed Clara by suggesting repairs for more cases, creating smaller repairs, producing higher-quality fixes, and causing lower runtime overheads. Our results imply that FAPR can potentially help instructors or TAs to effectively locate bugs in incorrect code, and to provide debugging hints/guidelines based on those generated repairs.
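The token-level matching idea lends itself to a tiny stand-in built on Python's standard difflib; unlike FAPR, this sketch carries no syntactic encoding and no test-based validation of the candidate repairs.

    import difflib

    def suggest_repairs(incorrect, correct):
        # incorrect/correct: token lists of the two programs
        sm = difflib.SequenceMatcher(a=incorrect, b=correct, autojunk=False)
        return [(op, incorrect[i1:i2], correct[j1:j2])
                for op, i1, i2, j1, j2 in sm.get_opcodes() if op != "equal"]

    print(suggest_repairs("int x = 0 ;".split(), "int x = 1 ;".split()))
    # -> [('replace', ['0'], ['1'])]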
2403.08383
Can Liu
Can Liu and Jin Wang and Yipeng Zhou and Yachao Yuan and Quanzheng Sheng and Kejie Lu
AFGI: Towards Accurate and Fast-convergent Gradient Inversion Attack in Federated Learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) enables privacy preservation in model training by exposing only users' model gradients. Yet, FL users are susceptible to gradient inversion attacks (GIAs), which can reconstruct ground-truth training data such as images from model gradients. However, reconstructing high-resolution images with existing GIAs faces two challenges: inferior accuracy and slow convergence, especially when duplicated labels exist in the training batch. To address these challenges, we present an Accurate and Fast-convergent Gradient Inversion attack algorithm, called AFGI, with two components: a Label Recovery Block (LRB), which can accurately restore duplicated labels of private images from exposed gradients, and a VME regularization term, which combines the total variance of the reconstructed images with the discrepancies between three-channel means and edges and between values derived from the exposed gradients and the reconstructed images. AFGI can be regarded as a white-box attack strategy that reconstructs images by leveraging the labels recovered by the LRB. In particular, AFGI is efficient, accurately reconstructing ground-truth images even when the users' training batch size is up to 48. Our experimental results show that AFGI can reduce time costs by 85% while achieving superb inversion quality on the ImageNet dataset. Finally, our study reveals the shortcomings of FL in privacy preservation, prompting the development of more advanced countermeasures.
[ { "created": "Wed, 13 Mar 2024 09:48:04 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2024 09:27:43 GMT", "version": "v2" }, { "created": "Wed, 31 Jul 2024 08:57:57 GMT", "version": "v3" } ]
2024-08-01
[ [ "Liu", "Can", "" ], [ "Wang", "Jin", "" ], [ "Zhou", "and Yipeng", "" ], [ "Yuan", "Yachao", "" ], [ "Sheng", "Quanzheng", "" ], [ "Lu", "Kejie", "" ] ]
Federated learning (FL) enables privacy preservation in model training by exposing only users' model gradients. Yet, FL users are susceptible to gradient inversion attacks (GIAs), which can reconstruct ground-truth training data such as images from model gradients. However, reconstructing high-resolution images with existing GIAs faces two challenges: inferior accuracy and slow convergence, especially when duplicated labels exist in the training batch. To address these challenges, we present an Accurate and Fast-convergent Gradient Inversion attack algorithm, called AFGI, with two components: a Label Recovery Block (LRB), which can accurately restore duplicated labels of private images from exposed gradients, and a VME regularization term, which combines the total variance of the reconstructed images with the discrepancies between three-channel means and edges and between values derived from the exposed gradients and the reconstructed images. AFGI can be regarded as a white-box attack strategy that reconstructs images by leveraging the labels recovered by the LRB. In particular, AFGI is efficient, accurately reconstructing ground-truth images even when the users' training batch size is up to 48. Our experimental results show that AFGI can reduce time costs by 85% while achieving superb inversion quality on the ImageNet dataset. Finally, our study reveals the shortcomings of FL in privacy preservation, prompting the development of more advanced countermeasures.
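For intuition about label recovery from gradients, here is a toy numpy heuristic in the spirit of earlier iDLG-style analyses: with softmax cross-entropy, the last-layer weight-gradient rows of ground-truth classes have negative sums, and their magnitudes hint at label counts. The LRB itself is more involved; every name below is illustrative.

    import numpy as np

    def recover_label_counts(grad_fc_w, batch_size):
        # grad_fc_w: gradient of the final fully connected weight, [classes, features]
        row = grad_fc_w.sum(axis=1)                  # one signed value per class
        cand = np.flatnonzero(row < 0)               # candidate ground-truth classes
        if cand.size == 0:
            return {}
        mass = -row[cand]
        counts = np.maximum(1, np.round(batch_size * mass / mass.sum()))
        return dict(zip(cand.tolist(), counts.astype(int).tolist()))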
1903.07091
Naveen Arivazhagan
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, Wolfgang Macherey
The Missing Ingredient in Zero-Shot Neural Machine Translation
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multilingual Neural Machine Translation (NMT) models are capable of translating between multiple source and target languages. Despite various approaches to train such models, they have difficulty with zero-shot translation: translating between language pairs that were not seen together during training. In this paper we first diagnose why state-of-the-art multilingual NMT models that rely purely on parameter sharing fail to generalize to unseen language pairs. We then propose auxiliary losses on the NMT encoder that impose representational invariance across languages. Our simple approach vastly improves zero-shot translation quality without regressing on supervised directions. For the first time, on WMT14 English-French-German, we achieve zero-shot performance that is on par with pivoting. We also demonstrate the easy scalability of our approach to multiple languages on the IWSLT 2017 shared task.
[ { "created": "Sun, 17 Mar 2019 14:01:53 GMT", "version": "v1" } ]
2019-03-19
[ [ "Arivazhagan", "Naveen", "" ], [ "Bapna", "Ankur", "" ], [ "Firat", "Orhan", "" ], [ "Aharoni", "Roee", "" ], [ "Johnson", "Melvin", "" ], [ "Macherey", "Wolfgang", "" ] ]
Multilingual Neural Machine Translation (NMT) models are capable of translating between multiple source and target languages. Despite various approaches to train such models, they have difficulty with zero-shot translation: translating between language pairs that were not seen together during training. In this paper we first diagnose why state-of-the-art multilingual NMT models that rely purely on parameter sharing fail to generalize to unseen language pairs. We then propose auxiliary losses on the NMT encoder that impose representational invariance across languages. Our simple approach vastly improves zero-shot translation quality without regressing on supervised directions. For the first time, on WMT14 English-French-German, we achieve zero-shot performance that is on par with pivoting. We also demonstrate the easy scalability of our approach to multiple languages on the IWSLT 2017 shared task.
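One simple concrete choice for such an auxiliary encoder loss is sketched below in PyTorch: pull mean-pooled encoder states of a sentence and its translation together. This is one plausible form of a representational-invariance objective, not necessarily the paper's exact loss.

    import torch
    import torch.nn.functional as F

    def alignment_loss(h_src, h_tgt, m_src, m_tgt):
        # h_*: encoder states [B, T, D]; m_*: padding masks [B, T] in {0, 1}
        def pool(h, m):
            return (h * m.unsqueeze(-1)).sum(1) / m.sum(1, keepdim=True)
        s, t = pool(h_src, m_src), pool(h_tgt, m_tgt)
        return (1 - F.cosine_similarity(s, t, dim=-1)).mean()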
2106.10404
Shiwei Liu
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Published on the thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021). Code can be found https://github.com/Shiweiliuiiiiiii/GraNet
Conference on Neural Information Processing Systems (NeurIPS 2021)
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Work on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) has drawn much attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization). The former suffers from an extremely large computational cost, while the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys training/inference efficiency and comparable performance, has so far been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of the pruned networks to recover the original performance). Pruning plasticity can help explain several other empirical observations about neural network pruning in the literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., regenerating the same number of connections as pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (\textbf{GraNet}), that advances the state of the art. Perhaps most impressively, its sparse-to-sparse version for the first time boosts sparse-to-sparse training performance over various dense-to-sparse methods with ResNet-50 on ImageNet without extending the training time. We release all code at https://github.com/Shiweiliuiiiiiii/GraNet.
[ { "created": "Sat, 19 Jun 2021 02:09:25 GMT", "version": "v1" }, { "created": "Mon, 18 Oct 2021 09:28:31 GMT", "version": "v2" }, { "created": "Sat, 23 Oct 2021 07:18:42 GMT", "version": "v3" }, { "created": "Sun, 6 Feb 2022 15:09:51 GMT", "version": "v4" } ]
2022-02-08
[ [ "Liu", "Shiwei", "" ], [ "Chen", "Tianlong", "" ], [ "Chen", "Xiaohan", "" ], [ "Atashgahi", "Zahra", "" ], [ "Yin", "Lu", "" ], [ "Kou", "Huanyu", "" ], [ "Shen", "Li", "" ], [ "Pechenizkiy", "Mykola", "" ], [ "Wang", "Zhangyang", "" ], [ "Mocanu", "Decebal Constantin", "" ] ]
Work on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) has drawn much attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization). The former suffers from an extremely large computational cost, while the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys training/inference efficiency and comparable performance, has so far been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of the pruned networks to recover the original performance). Pruning plasticity can help explain several other empirical observations about neural network pruning in the literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., regenerating the same number of connections as pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (\textbf{GraNet}), that advances the state of the art. Perhaps most impressively, its sparse-to-sparse version for the first time boosts sparse-to-sparse training performance over various dense-to-sparse methods with ResNet-50 on ImageNet without extending the training time. We release all code at https://github.com/Shiweiliuiiiiiii/GraNet.
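A prune-and-regenerate step of this kind can be sketched in a few lines of numpy: drop the weakest surviving weights, then revive the same number of pruned connections where the gradient magnitude is largest. This follows the common gradient-based regrowth recipe and is only an approximation of GraNet's actual schedule.

    import numpy as np

    def prune_and_regenerate(w, grad, mask, frac=0.1):
        # w, grad, mask: flat arrays of weights, gradients, and 0/1 connectivity
        alive = np.flatnonzero(mask)
        k = int(frac * alive.size)
        drop = alive[np.argsort(np.abs(w[alive]))[:k]]      # weakest connections
        mask[drop] = 0
        dead = np.flatnonzero(mask == 0)
        grow = dead[np.argsort(-np.abs(grad[dead]))[:k]]    # most promising revivals
        mask[grow] = 1
        w[grow] = 0.0                                       # regrown weights start at zero
        return w * mask, mask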
cs/0006013
Ion Androutsopoulos
Ion Androutsopoulos, John Koutsias, Konstantinos V. Chandrinos, George Paliouras and Constantine D. Spyropoulos
An evaluation of Naive Bayesian anti-spam filtering
9 pages
Proceedings of the workshop on Machine Learning in the New Information Age, G. Potamias, V. Moustakis and M. van Someren (eds.), 11th European Conference on Machine Learning, Barcelona, Spain, pp. 9-17, 2000
null
null
cs.CL cs.AI
null
It has recently been argued that a Naive Bayesian classifier can be used to filter unsolicited bulk e-mail ("spam"). We conduct a thorough evaluation of this proposal on a corpus that we make publicly available, contributing towards standard benchmarks. At the same time we investigate the effect of attribute-set size, training-corpus size, lemmatization, and stop-lists on the filter's performance, issues that had not been previously explored. After introducing appropriate cost-sensitive evaluation measures, we reach the conclusion that additional safety nets are needed for the Naive Bayesian anti-spam filter to be viable in practice.
[ { "created": "Wed, 7 Jun 2000 11:10:50 GMT", "version": "v1" } ]
2009-09-25
[ [ "Androutsopoulos", "Ion", "" ], [ "Koutsias", "John", "" ], [ "Chandrinos", "Konstantinos V.", "" ], [ "Paliouras", "George", "" ], [ "Spyropoulos", "Constantine D.", "" ] ]
It has recently been argued that a Naive Bayesian classifier can be used to filter unsolicited bulk e-mail ("spam"). We conduct a thorough evaluation of this proposal on a corpus that we make publicly available, contributing towards standard benchmarks. At the same time we investigate the effect of attribute-set size, training-corpus size, lemmatization, and stop-lists on the filter's performance, issues that had not been previously explored. After introducing appropriate cost-sensitive evaluation measures, we reach the conclusion that additional safety nets are needed for the Naive Bayesian anti-spam filter to be viable in practice.
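The cost-sensitive decision rule this line of work evaluates can be made concrete: if misclassifying a legitimate message is lambda times costlier than letting spam through, a message is flagged only when the posterior odds exceed lambda, i.e. P(spam|x) > lambda/(1+lambda). A toy scikit-learn version, with corpus and lambda value purely illustrative:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    mails = ["cheap pills buy now", "meeting at noon", "win money now", "paper review due"]
    labels = [1, 0, 1, 0]                       # 1 = spam

    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(mails), labels)

    lam = 9                                     # blocking ham is lambda times costlier
    p_spam = clf.predict_proba(vec.transform(mails))[:, 1]
    print(p_spam > lam / (1 + lam))             # cost-sensitive spam decision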
2309.04861
Ayan Biswas
Ayan Biswas, Supriya Dhabal, Palaniandavar Venkateswaran
Exploring Music Genre Classification: Algorithm Analysis and Deployment Architecture
null
null
null
null
cs.SD cs.IR eess.AS
http://creativecommons.org/licenses/by/4.0/
Music genre classification has become increasingly critical with the advent of various streaming applications. Nowadays, it is hard to imagine searching for music in a sophisticated music app using only the artist's name and song title. Classifying music correctly is always difficult because the information linked to it, such as region, artist, album, or non-album status, is highly variable. This paper presents a study on music genre classification using a combination of Digital Signal Processing (DSP) and Deep Learning (DL) techniques. A novel algorithm is proposed that utilizes both DSP and DL methods to extract relevant features from audio signals and classify them into various genres. The algorithm was tested on the GTZAN dataset and achieved high accuracy. An end-to-end deployment architecture is also proposed for integration into music-related applications. The performance of the algorithm is analyzed and future directions for improvement are discussed. The proposed DSP- and DL-based algorithm and deployment architecture demonstrate a promising approach to music genre classification.
[ { "created": "Sat, 9 Sep 2023 19:01:12 GMT", "version": "v1" }, { "created": "Thu, 14 Sep 2023 06:05:04 GMT", "version": "v2" } ]
2023-09-15
[ [ "Biswas", "Ayan", "" ], [ "Dhabal", "Supriya", "" ], [ "Venkateswaran", "Palaniandavar", "" ] ]
Music genre classification has become increasingly critical with the advent of various streaming applications. Nowadays, it is hard to imagine searching for music in a sophisticated music app using only the artist's name and song title. Classifying music correctly is always difficult because the information linked to it, such as region, artist, album, or non-album status, is highly variable. This paper presents a study on music genre classification using a combination of Digital Signal Processing (DSP) and Deep Learning (DL) techniques. A novel algorithm is proposed that utilizes both DSP and DL methods to extract relevant features from audio signals and classify them into various genres. The algorithm was tested on the GTZAN dataset and achieved high accuracy. An end-to-end deployment architecture is also proposed for integration into music-related applications. The performance of the algorithm is analyzed and future directions for improvement are discussed. The proposed DSP- and DL-based algorithm and deployment architecture demonstrate a promising approach to music genre classification.
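A typical DSP front end for this task extracts per-track descriptors such as MFCC statistics before any classifier is applied. The snippet below is a generic baseline of that kind (librosa-based, with an SVM standing in for the paper's deep model); paths and labels are placeholders.

    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def mfcc_descriptor(path, n_mfcc=20):
        y, sr = librosa.load(path, duration=30.0)           # 30 s clips, GTZAN-style
        m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.concatenate([m.mean(axis=1), m.std(axis=1)])

    # X = np.stack([mfcc_descriptor(p) for p in track_paths])
    # clf = SVC().fit(X, genre_labels)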
2207.14422
Kamal Gupta
Simon Odense, Kamal Gupta and William G. Macready
Neural-Guided Runtime Prediction of Planners for Improved Motion and Task Planning with Graph Neural Networks
8 pages; To appear in proceedings of 2022 IROS, Kyoto Japan
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
The past decade has amply demonstrated the remarkable functionality that can be realized by learning complex input/output relationships. Algorithmically, one of the most important and opaque relationships is that between a problem's structure and an effective solution method. Here, we quantitatively connect the structure of a planning problem to the performance of a given sampling-based motion planning (SBMP) algorithm. We demonstrate that the geometric relationships of motion planning problems can be well captured by graph neural networks (GNNs) to predict SBMP runtime. Using an algorithm portfolio, we show that GNN predictions of runtime on particular problems can be leveraged to accelerate online motion planning in both navigation and manipulation tasks. Moreover, the problem-to-runtime map can be inverted to identify subproblems easier to solve by particular SBMPs. We provide a motivating example of how this knowledge may be used to improve integrated task and motion planning on simulated examples. These successes rely on the relational structure of GNNs to capture scalable generalization from low-dimensional navigation tasks to high degree-of-freedom manipulation tasks in 3D environments.
[ { "created": "Fri, 29 Jul 2022 01:00:25 GMT", "version": "v1" } ]
2022-08-01
[ [ "Odense", "Simon", "" ], [ "Gupta", "Kamal", "" ], [ "Macready", "William G.", "" ] ]
The past decade has amply demonstrated the remarkable functionality that can be realized by learning complex input/output relationships. Algorithmically, one of the most important and opaque relationships is that between a problem's structure and an effective solution method. Here, we quantitatively connect the structure of a planning problem to the performance of a given sampling-based motion planning (SBMP) algorithm. We demonstrate that the geometric relationships of motion planning problems can be well captured by graph neural networks (GNNs) to predict SBMP runtime. Using an algorithm portfolio, we show that GNN predictions of runtime on particular problems can be leveraged to accelerate online motion planning in both navigation and manipulation tasks. Moreover, the problem-to-runtime map can be inverted to identify subproblems easier to solve by particular SBMPs. We provide a motivating example of how this knowledge may be used to improve integrated task and motion planning on simulated examples. These successes rely on the relational structure of GNNs to capture scalable generalization from low-dimensional navigation tasks to high degree-of-freedom manipulation tasks in 3D environments.
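The portfolio dispatch itself is simple once per-planner runtime predictors exist; a minimal sketch with hypothetical predictor names:

    def pick_planner(problem_graph, predictors):
        # predictors: planner name -> callable returning predicted runtime in seconds
        estimates = {name: f(problem_graph) for name, f in predictors.items()}
        return min(estimates, key=estimates.get)   # dispatch to the fastest predicted

    # planner = pick_planner(g, {"rrt": rrt_gnn, "prm": prm_gnn, "est": est_gnn})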
2003.03303
Jiajia Guo
Jiajia Guo, Xi Yang, Chao-Kai Wen, Shi Jin, Geoffrey Ye Li
Deep Learning-based CSI Feedback for RIS-assisted Multi-user Systems
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the realm of reconfigurable intelligent surface (RIS)-assisted wireless communications, efficient channel state information (CSI) feedback is paramount. This paper introduces RIS-CoCsiNet, a novel deep learning-based framework designed to greatly enhance feedback efficiency. By leveraging the inherent correlation among proximate user equipments (UEs), our approach strategically categorizes RIS-UE CSI into shared and unique data sets. This nuanced understanding allows for significant reductions in feedback overhead, as the shared data is no longer redundantly relayed. Setting RIS-CoCsiNet apart from traditional autoencoder systems, we incorporate an additional decoder and a combination neural network at the base station. These enhancements are tasked with the precise retrieval and fusion of shared and individual data. Notably, all these innovations are achieved without modifying the UEs. For those UEs boasting multiple antennas, our design seamlessly integrates long short-term memory modules, capturing the intricate correlations between antennas. Recognizing the non-sparse nature of the RIS-UE CSI phase, we pioneer two magnitude-dependent phase feedback strategies. These strategies adeptly weave in both statistical and real-time CSI magnitude data. The potency of RIS-CoCsiNet is further solidified through compelling simulation results drawn from two diverse channel datasets.
[ { "created": "Fri, 6 Mar 2020 16:33:09 GMT", "version": "v1" }, { "created": "Mon, 14 Dec 2020 16:13:04 GMT", "version": "v2" }, { "created": "Sat, 17 Dec 2022 13:23:52 GMT", "version": "v3" }, { "created": "Sat, 9 Mar 2024 08:36:41 GMT", "version": "v4" } ]
2024-03-12
[ [ "Guo", "Jiajia", "" ], [ "Yang", "Xi", "" ], [ "Wen", "Chao-Kai", "" ], [ "Jin", "Shi", "" ], [ "Li", "Geoffrey Ye", "" ] ]
In the realm of reconfigurable intelligent surface (RIS)-assisted wireless communications, efficient channel state information (CSI) feedback is paramount. This paper introduces RIS-CoCsiNet, a novel deep learning-based framework designed to greatly enhance feedback efficiency. By leveraging the inherent correlation among proximate user equipments (UEs), our approach strategically categorizes RIS-UE CSI into shared and unique data sets. This nuanced understanding allows for significant reductions in feedback overhead, as the shared data is no longer redundantly relayed. Setting RIS-CoCsiNet apart from traditional autoencoder systems, we incorporate an additional decoder and a combination neural network at the base station. These enhancements are tasked with the precise retrieval and fusion of shared and individual data. Notably, all these innovations are achieved without modifying the UEs. For those UEs boasting multiple antennas, our design seamlessly integrates long short-term memory modules, capturing the intricate correlations between antennas. Recognizing the non-sparse nature of the RIS-UE CSI phase, we pioneer two magnitude-dependent phase feedback strategies. These strategies adeptly weave in both statistical and real-time CSI magnitude data. The potency of RIS-CoCsiNet is further solidified through compelling simulation results drawn from two diverse channel datasets.
2301.02092
Sadra Safadoust
Sadra Safadoust, Fatma G\"uney
DepthP+P: Metric Accurate Monocular Depth Estimation using Planar and Parallax
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Current self-supervised monocular depth estimation methods are mostly based on estimating a rigid-body motion representing camera motion. These methods suffer from the well-known scale ambiguity problem in their predictions. We propose DepthP+P, a method that learns to estimate outputs in metric scale by following the traditional planar parallax paradigm. We first align the two frames using a common ground plane which removes the effect of the rotation component in the camera motion. With two neural networks, we predict the depth and the camera translation, which is easier to predict alone compared to predicting it together with rotation. By assuming a known camera height, we can then calculate the induced 2D image motion of a 3D point and use it for reconstructing the target image in a self-supervised monocular approach. We perform experiments on the KITTI driving dataset and show that the planar parallax approach, which only needs to predict camera translation, can be a metrically accurate alternative to the current methods that rely on estimating 6DoF camera motion.
[ { "created": "Thu, 5 Jan 2023 14:53:21 GMT", "version": "v1" } ]
2023-01-06
[ [ "Safadoust", "Sadra", "" ], [ "Güney", "Fatma", "" ] ]
Current self-supervised monocular depth estimation methods are mostly based on estimating a rigid-body motion representing camera motion. These methods suffer from the well-known scale ambiguity problem in their predictions. We propose DepthP+P, a method that learns to estimate outputs in metric scale by following the traditional planar parallax paradigm. We first align the two frames using a common ground plane which removes the effect of the rotation component in the camera motion. With two neural networks, we predict the depth and the camera translation, which is easier to predict alone compared to predicting it together with rotation. By assuming a known camera height, we can then calculate the induced 2D image motion of a 3D point and use it for reconstructing the target image in a self-supervised monocular approach. We perform experiments on the KITTI driving dataset and show that the planar parallax approach, which only needs to predict camera translation, can be a metrically accurate alternative to the current methods that rely on estimating 6DoF camera motion.
2211.14703
Kaihong Wang
Kaihong Wang and Donghyun Kim and Rogerio Feris and Kate Saenko and Margrit Betke
Exploring Consistency in Cross-Domain Transformer for Domain Adaptive Semantic Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While transformers have greatly boosted performance in semantic segmentation, domain adaptive transformers are not yet well explored. We identify that the domain gap can cause discrepancies in self-attention. Due to this gap, the transformer attends to spurious regions or pixels, which deteriorates accuracy on the target domain. We propose to perform adaptation on attention maps with cross-domain attention layers that share features between the source and the target domains. Specifically, we impose consistency between predictions from cross-domain attention and self-attention modules to encourage similar distributions in the attention and output of the model across domains, i.e., attention-level and output-level alignment. We also enforce consistency in attention maps between different augmented views to further strengthen the attention-based alignment. Combining these two components, our method mitigates the discrepancy in attention maps across domains and further boosts the performance of the transformer under unsupervised domain adaptation settings. Our model outperforms the existing state-of-the-art baseline model on three widely used benchmarks, by 1.3 percentage points (pp) on GTAV-to-Cityscapes, 0.6 pp on Synthia-to-Cityscapes, and 1.1 pp on Cityscapes-to-ACDC, on average. Additionally, we verify the effectiveness and generalizability of our method through extensive experiments. Our code will be publicly available.
[ { "created": "Sun, 27 Nov 2022 02:40:33 GMT", "version": "v1" }, { "created": "Tue, 29 Nov 2022 01:55:17 GMT", "version": "v2" }, { "created": "Wed, 21 Dec 2022 04:47:09 GMT", "version": "v3" } ]
2022-12-22
[ [ "Wang", "Kaihong", "" ], [ "Kim", "Donghyun", "" ], [ "Feris", "Rogerio", "" ], [ "Saenko", "Kate", "" ], [ "Betke", "Margrit", "" ] ]
While transformers have greatly boosted performance in semantic segmentation, domain adaptive transformers are not yet well explored. We identify that the domain gap can cause discrepancies in self-attention. Due to this gap, the transformer attends to spurious regions or pixels, which deteriorates accuracy on the target domain. We propose to perform adaptation on attention maps with cross-domain attention layers that share features between the source and the target domains. Specifically, we impose consistency between predictions from cross-domain attention and self-attention modules to encourage similar distributions in the attention and output of the model across domains, i.e., attention-level and output-level alignment. We also enforce consistency in attention maps between different augmented views to further strengthen the attention-based alignment. Combining these two components, our method mitigates the discrepancy in attention maps across domains and further boosts the performance of the transformer under unsupervised domain adaptation settings. Our model outperforms the existing state-of-the-art baseline model on three widely used benchmarks, by 1.3 percentage points (pp) on GTAV-to-Cityscapes, 0.6 pp on Synthia-to-Cityscapes, and 1.1 pp on Cityscapes-to-ACDC, on average. Additionally, we verify the effectiveness and generalizability of our method through extensive experiments. Our code will be publicly available.
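The output-level consistency term can be written as a symmetric KL divergence between the per-pixel class distributions of the two attention branches; the PyTorch sketch below is one plausible form, not necessarily the paper's exact loss.

    import torch.nn.functional as F

    def consistency_loss(logits_self, logits_cross):
        # logits_*: [B, C, H, W] predictions from the two attention branches
        p = F.log_softmax(logits_self, dim=1)
        q = F.log_softmax(logits_cross, dim=1)
        return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                      + F.kl_div(q, p, log_target=True, reduction="batchmean"))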
1009.4977
S. M. Kamruzzaman
Md. Abul kalam Azad, Rezwana Sharmeen, and S. M. Kamruzzaman
Universal Numeric Segmented Display
6 Pages, International Conference
Proc. 7th International Conference on Computer and Information Technology (ICCIT-2004), Dhaka, Bangladesh, pp. 887-892, Dec. 2004
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Segmented displays play a vital role in displaying numerals. In today's world, matrix displays are also used for this purpose, because numerals have many curved edges that a matrix display renders better. However, since matrix displays are costly, complex to implement, and require more memory, segmented displays are generally used to display numerals. As no compact display architecture has yet been proposed that can display the numerals of multiple languages at a time, this paper proposes a uniform display architecture that can display the digits of multiple languages as well as general mathematical expressions with higher accuracy and simplicity, using an 18-segment display, which is an improvement over the 16-segment display.
[ { "created": "Sat, 25 Sep 2010 06:17:02 GMT", "version": "v1" } ]
2010-09-28
[ [ "Azad", "Md. Abul kalam", "" ], [ "Sharmeen", "Rezwana", "" ], [ "Kamruzzaman", "S. M.", "" ] ]
Segmented displays play a vital role in displaying numerals. In today's world, matrix displays are also used for this purpose, because numerals have many curved edges that a matrix display renders better. However, since matrix displays are costly, complex to implement, and require more memory, segmented displays are generally used to display numerals. As no compact display architecture has yet been proposed that can display the numerals of multiple languages at a time, this paper proposes a uniform display architecture that can display the digits of multiple languages as well as general mathematical expressions with higher accuracy and simplicity, using an 18-segment display, which is an improvement over the 16-segment display.
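Any segmented display ultimately reduces to a lookup table from glyph to segment mask. The sketch below shows the idea for an 18-segment unit; the bit-to-segment assignment here is made up for illustration and will not match the paper's layout.

    # Hypothetical encoding: bits 0-17 address the 18 segments.
    DIGIT_MASK = {
        "0": 0b000000000011111111,   # outer ring lit (illustrative pattern)
        "1": 0b000000000000000110,   # right-hand verticals only
    }

    def segments_to_energize(glyph):
        mask = DIGIT_MASK[glyph]
        return [s for s in range(18) if mask >> s & 1]

    print(segments_to_energize("0"))   # -> [0, 1, 2, 3, 4, 5, 6, 7]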
2205.07614
Chao Wang
Chao Wang, Chen Chen, Dong Li, Bin Wang
Rethinking Reinforcement Learning based Logic Synthesis
nine pages; one figure;
null
null
null
cs.LG cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, reinforcement learning has been used to address logic synthesis by formulating the operator sequence optimization problem as a Markov decision process. However, through extensive experiments, we find that the learned policy makes decisions independently of the circuit features (i.e., states) and yields an operator sequence that is, to some extent, permutation-invariant in terms of operators. Based on these findings, we develop a new RL-based method that can automatically recognize critical operators and generate common operator sequences generalizable to unseen circuits. Our algorithm is verified on the EPFL benchmark, a private dataset, and a circuit at industrial scale. Experimental results demonstrate that it achieves a good balance among delay, area, and runtime, and is practical for industrial usage.
[ { "created": "Mon, 16 May 2022 12:15:32 GMT", "version": "v1" }, { "created": "Sat, 28 May 2022 08:31:10 GMT", "version": "v2" }, { "created": "Mon, 27 Jun 2022 11:41:13 GMT", "version": "v3" } ]
2022-06-28
[ [ "Wang", "Chao", "" ], [ "Chen", "Chen", "" ], [ "Li", "Dong", "" ], [ "Wang", "Bin", "" ] ]
Recently, reinforcement learning has been used to address logic synthesis by formulating the operator sequence optimization problem as a Markov decision process. However, through extensive experiments, we find that the learned policy makes decisions independently of the circuit features (i.e., states) and yields an operator sequence that is, to some extent, permutation-invariant in terms of operators. Based on these findings, we develop a new RL-based method that can automatically recognize critical operators and generate common operator sequences generalizable to unseen circuits. Our algorithm is verified on the EPFL benchmark, a private dataset, and a circuit at industrial scale. Experimental results demonstrate that it achieves a good balance among delay, area, and runtime, and is practical for industrial usage.
1412.6684
Ismael Rafols
Jordi Molas-Gallart, Puay Tang, Ismael Rafols
On the relationship between interdisciplinarity and impact: different modalities of interdisciplinarity lead to different types of impact
null
Journal of Science Policy and Research Management, 29(2), 69-89
null
null
cs.DL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is increasing interest among funding agencies to understand how they can best contribute to enhancing the socio-economic impact of research. Interdisciplinarity is often presented as a research mode that can facilitate impact, but only a limited number of analytical studies have attempted to examine whether or how interdisciplinarity can affect the societal relevance of research. We investigate fifteen Social Sciences research investments in the UK to examine how they have achieved impact. We analyse research drivers, cognitive distances, degree of integration, collaborative practices, stakeholder engagement and the type of impact generated. The analysis suggests that interdisciplinarity cannot be associated with a single type of impact mechanism. Also, interdisciplinarity is neither a sufficient nor a necessary condition for achieving societal relevance and impact. However, we identify a specific modality -- "long-range" interdisciplinarity, which appears more likely to be associated with societal impact because of its focused problem-orientation and its strong interaction with stakeholders.
[ { "created": "Sat, 20 Dec 2014 19:36:36 GMT", "version": "v1" } ]
2014-12-23
[ [ "Molas-Gallart", "Jordi", "" ], [ "Tang", "Puay", "" ], [ "Rafols", "Ismael", "" ] ]
There is increasing interest among funding agencies to understand how they can best contribute to enhancing the socio-economic impact of research. Interdisciplinarity is often presented as a research mode that can facilitate impact, but only a limited number of analytical studies have attempted to examine whether or how interdisciplinarity can affect the societal relevance of research. We investigate fifteen Social Sciences research investments in the UK to examine how they have achieved impact. We analyse research drivers, cognitive distances, degree of integration, collaborative practices, stakeholder engagement and the type of impact generated. The analysis suggests that interdisciplinarity cannot be associated with a single type of impact mechanism. Also, interdisciplinarity is neither a sufficient nor a necessary condition for achieving societal relevance and impact. However, we identify a specific modality -- "long-range" interdisciplinarity, which appears more likely to be associated with societal impact because of its focused problem-orientation and its strong interaction with stakeholders.
1712.02662
Wei-Lin Hsiao
Wei-Lin Hsiao, Kristen Grauman
Creating Capsule Wardrobes from Fashion Images
Accepted to CVPR 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose to automatically create capsule wardrobes. Given an inventory of candidate garments and accessories, the algorithm must assemble a minimal set of items that provides maximal mix-and-match outfits. We pose the task as a subset selection problem. To permit efficient subset selection over the space of all outfit combinations, we develop submodular objective functions capturing the key ingredients of visual compatibility, versatility, and user-specific preference. Since adding garments to a capsule only expands its possible outfits, we devise an iterative approach to allow near-optimal submodular function maximization. Finally, we present an unsupervised approach to learn visual compatibility from "in the wild" full body outfit photos; the compatibility metric translates well to cleaner catalog photos and improves over existing methods. Our results on thousands of pieces from popular fashion websites show that automatic capsule creation has potential to mimic skilled fashionistas in assembling flexible wardrobes, while being significantly more scalable.
[ { "created": "Thu, 7 Dec 2017 15:06:26 GMT", "version": "v1" }, { "created": "Sat, 14 Apr 2018 17:02:40 GMT", "version": "v2" } ]
2018-04-17
[ [ "Hsiao", "Wei-Lin", "" ], [ "Grauman", "Kristen", "" ] ]
We propose to automatically create capsule wardrobes. Given an inventory of candidate garments and accessories, the algorithm must assemble a minimal set of items that provides maximal mix-and-match outfits. We pose the task as a subset selection problem. To permit efficient subset selection over the space of all outfit combinations, we develop submodular objective functions capturing the key ingredients of visual compatibility, versatility, and user-specific preference. Since adding garments to a capsule only expands its possible outfits, we devise an iterative approach to allow near-optimal submodular function maximization. Finally, we present an unsupervised approach to learn visual compatibility from "in the wild" full body outfit photos; the compatibility metric translates well to cleaner catalog photos and improves over existing methods. Our results on thousands of pieces from popular fashion websites show that automatic capsule creation has potential to mimic skilled fashionistas in assembling flexible wardrobes, while being significantly more scalable.
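Because the objective is monotone submodular, the standard greedy algorithm with its (1 - 1/e) approximation guarantee gives a reasonable mental model of the capsule-building step. The toy value function below is illustrative, not the paper's compatibility/versatility objective.

    def greedy_capsule(items, value, budget):
        # value(set) -> score of a candidate capsule (monotone submodular)
        capsule = set()
        while len(capsule) < budget:
            gain = lambda i: value(capsule | {i}) - value(capsule)
            capsule.add(max((i for i in items if i not in capsule), key=gain))
        return capsule

    toy_value = lambda s: len({name.split("-")[0] for name in s})  # category coverage
    print(greedy_capsule({"top-a", "top-b", "skirt-c"}, toy_value, 2))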
2404.12055
Ziang Ren
Ziang Ren, Samuel Lensgraf, Alberto Quattrini Li
Improving the perception of visual fiducial markers in the field using Adaptive Active Exposure Control
Paper accepted by ISER 2023
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Accurate localization is fundamental for autonomous underwater vehicles (AUVs) to carry out precise tasks, such as manipulation and construction. Vision-based solutions using fiducial markers are promising but extremely challenging underwater because of harsh underwater lighting conditions. This paper introduces a gradient-based active camera exposure control method to tackle sharp lighting variations during image acquisition, which establishes a better foundation for subsequent image enhancement procedures. Considering a typical underwater operation scenario in which visual tags are used, we conducted several experiments comparing our method with other state-of-the-art exposure control methods, including Active Exposure Control (AEC) and Gradient-based Exposure Control (GEC). Results show a significant improvement in the accuracy of robot localization. This method is an important component that can be used in a vision-based state estimation pipeline to improve the overall localization accuracy.
[ { "created": "Thu, 18 Apr 2024 10:10:56 GMT", "version": "v1" } ]
2024-04-19
[ [ "Ren", "Ziang", "" ], [ "Lensgraf", "Samuel", "" ], [ "Li", "Alberto Quattrini", "" ] ]
Accurate localization is fundamental for autonomous underwater vehicles (AUVs) to carry out precise tasks, such as manipulation and construction. Vision-based solutions using fiducial markers are promising but extremely challenging underwater because of harsh underwater lighting conditions. This paper introduces a gradient-based active camera exposure control method to tackle sharp lighting variations during image acquisition, which establishes a better foundation for subsequent image enhancement procedures. Considering a typical underwater operation scenario in which visual tags are used, we conducted several experiments comparing our method with other state-of-the-art exposure control methods, including Active Exposure Control (AEC) and Gradient-based Exposure Control (GEC). Results show a significant improvement in the accuracy of robot localization. This method is an important component that can be used in a vision-based state estimation pipeline to improve the overall localization accuracy.
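The core loop of a gradient-based exposure controller can be pictured as hill-climbing on an image-gradient metric. This numpy sketch uses a finite-difference update and illustrative constants, not the paper's exact rule.

    import numpy as np

    def gradient_metric(img_gray):
        gx, gy = np.gradient(img_gray.astype(float))
        return float(np.mean(np.hypot(gx, gy)))     # edge-strength proxy

    def next_exposure(exp_prev, exp_now, m_prev, m_now, eta=0.3):
        # follow the slope of metric vs. exposure from the last two frames
        slope = (m_now - m_prev) / (exp_now - exp_prev + 1e-9)
        return exp_now + eta * slope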
1808.06945
Jingjing Xu
Jingjing Xu, Xuancheng Ren, Yi Zhang, Qi Zeng, Xiaoyan Cai, Xu Sun
A Skeleton-Based Model for Promoting Coherence Among Sentences in Narrative Story Generation
Accepted by EMNLP 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Narrative story generation is a challenging problem because it demands that the generated sentences have tight semantic connections, which has not been well studied by most existing generative models. To address this problem, we propose a skeleton-based model to promote the coherence of generated stories. Different from traditional models that generate a complete sentence in one stroke, the proposed model first generates the most critical phrases, called the skeleton, and then expands the skeleton into a complete and fluent sentence. The skeleton is not manually defined, but learned by a reinforcement learning method. Compared to the state-of-the-art models, our skeleton-based model can generate significantly more coherent text according to both human and automatic evaluation. The G-score is improved by 20.1% in the human evaluation. The code is available at https://github.com/lancopku/Skeleton-Based-Generation-Model
[ { "created": "Tue, 21 Aug 2018 14:55:34 GMT", "version": "v1" }, { "created": "Mon, 27 Aug 2018 08:16:21 GMT", "version": "v2" } ]
2018-08-28
[ [ "Xu", "Jingjing", "" ], [ "Ren", "Xuancheng", "" ], [ "Zhang", "Yi", "" ], [ "Zeng", "Qi", "" ], [ "Cai", "Xiaoyan", "" ], [ "Sun", "Xu", "" ] ]
Narrative story generation is a challenging problem because it demands that the generated sentences have tight semantic connections, which has not been well studied by most existing generative models. To address this problem, we propose a skeleton-based model to promote the coherence of generated stories. Different from traditional models that generate a complete sentence in one stroke, the proposed model first generates the most critical phrases, called the skeleton, and then expands the skeleton into a complete and fluent sentence. The skeleton is not manually defined, but learned by a reinforcement learning method. Compared to the state-of-the-art models, our skeleton-based model can generate significantly more coherent text according to both human and automatic evaluation. The G-score is improved by 20.1% in the human evaluation. The code is available at https://github.com/lancopku/Skeleton-Based-Generation-Model
2211.02468
Anaelia Ovalle
Anaelia Ovalle, Evan Czyzycki, Cho-Jui Hsieh
Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning
v1
null
null
null
cs.LG cs.AI cs.CR
http://creativecommons.org/licenses/by/4.0/
Intentionally crafted adversarial samples have effectively exploited weaknesses in deep neural networks. A standard approach to adversarial robustness assumes a framework that defends against samples crafted by minimally perturbing an input such that the corresponding model output changes. These sensitivity attacks exploit the model's sensitivity toward task-irrelevant features. Another form of adversarial sample can be crafted via invariance attacks, which exploit the model underestimating the importance of relevant features. Previous literature has indicated a tradeoff in defending against both attack types within a strictly L_p bounded defense. To promote robustness toward both types of attacks beyond Euclidean distance metrics, we use metric learning to frame adversarial regularization as an optimal transport problem. Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
[ { "created": "Fri, 4 Nov 2022 13:54:02 GMT", "version": "v1" } ]
2022-11-07
[ [ "Ovalle", "Anaelia", "" ], [ "Czyzycki", "Evan", "" ], [ "Hsieh", "Cho-Jui", "" ] ]
Intentionally crafted adversarial samples have effectively exploited weaknesses in deep neural networks. A standard approach to adversarial robustness assumes a framework that defends against samples crafted by minimally perturbing an input such that the corresponding model output changes. These sensitivity attacks exploit the model's sensitivity toward task-irrelevant features. Another form of adversarial sample can be crafted via invariance attacks, which exploit the model underestimating the importance of relevant features. Previous literature has indicated a tradeoff in defending against both attack types within a strictly L_p bounded defense. To promote robustness toward both types of attacks beyond Euclidean distance metrics, we use metric learning to frame adversarial regularization as an optimal transport problem. Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
2311.16140
Fei He
Fei He, Zhiyuan Yang, Mingyue Gao, Biplab Poudel, Newgin Sam Ebin Sam Dhas, Rajan Gyawali, Ashwin Dhakal, Jianlin Cheng, Dong Xu
Adapting Segment Anything Model (SAM) through Prompt-based Learning for Enhanced Protein Identification in Cryo-EM Micrographs
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryo-electron microscopy (cryo-EM) remains pivotal in structural biology, yet the task of protein particle picking, integral for 3D protein structure construction, is laden with manual inefficiencies. While recent AI tools such as Topaz and crYOLO are advancing the field, they do not fully address the challenges of cryo-EM images, including low contrast, complex shapes, and heterogeneous conformations. This study explored prompt-based learning to adapt the state-of-the-art image segmentation foundation model Segment Anything Model (SAM) for cryo-EM. This focus was driven by the desire to optimize model performance with a small amount of labeled data without altering pre-trained parameters, aiming for a balance between adaptability and foundational knowledge retention. Through trials with three prompt-based learning strategies, namely head prompt, prefix prompt, and encoder prompt, we observed enhanced performance and reduced computational requirements compared to the fine-tuning approach. This work not only highlights the potential of prompting SAM in protein identification from cryo-EM micrographs but also suggests its broader promise in biomedical image segmentation and object detection.
[ { "created": "Sat, 4 Nov 2023 14:20:08 GMT", "version": "v1" } ]
2023-11-29
[ [ "He", "Fei", "" ], [ "Yang", "Zhiyuan", "" ], [ "Gao", "Mingyue", "" ], [ "Poudel", "Biplab", "" ], [ "Dhas", "Newgin Sam Ebin Sam", "" ], [ "Gyawali", "Rajan", "" ], [ "Dhakal", "Ashwin", "" ], [ "Cheng", "Jianlin", "" ], [ "Xu", "Dong", "" ] ]
Cryo-electron microscopy (cryo-EM) remains pivotal in structural biology, yet the task of protein particle picking, integral for 3D protein structure construction, is laden with manual inefficiencies. While recent AI tools such as Topaz and crYOLO are advancing the field, they do not fully address the challenges of cryo-EM images, including low contrast, complex shapes, and heterogeneous conformations. This study explored prompt-based learning to adapt the state-of-the-art image segmentation foundation model Segment Anything Model (SAM) for cryo-EM. This focus was driven by the desire to optimize model performance with a small amount of labeled data without altering pre-trained parameters, aiming for a balance between adaptability and foundational knowledge retention. Through trials with three prompt-based learning strategies, namely head prompt, prefix prompt, and encoder prompt, we observed enhanced performance and reduced computational requirements compared to the fine-tuning approach. This work not only highlights the potential of prompting SAM in protein identification from cryo-EM micrographs but also suggests its broader promise in biomedical image segmentation and object detection.
2202.03845
Klaudia Krawiecka
Klaudia Krawiecka, Simon Birnbach, Simon Eberz, Ivan Martinovic
BeeHIVE: Behavioral Biometric System based on Object Interactions in Smart Environments
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The lack of standard input interfaces in the Internet of Things (IoT) ecosystems presents a challenge in securing such infrastructures. To tackle this challenge, we introduce a novel behavioral biometric system based on naturally occurring interactions with objects in smart environments. This biometric leverages existing sensors to authenticate users without requiring any hardware modifications of existing smart home devices. The system is designed to reduce the need for phone-based authentication mechanisms, on which smart home systems currently rely. It requires the user to approve transactions on their phone only when the user cannot be authenticated with high confidence through their interactions with the smart environment. We conduct a real-world experiment that involves 13 participants in a company environment, using this experiment to also study mimicry attacks on our proposed system. We show that this system can provide seamless and unobtrusive authentication while still staying highly resistant to zero-effort, video, and in-person observation-based mimicry attacks. Even when at most 1% of the strongest type of mimicry attacks are successful, our system does not require the user to take out their phone to approve legitimate transactions in more than 80% of cases for a single interaction. This increases to 92% of transactions when interactions with more objects are considered.
[ { "created": "Tue, 8 Feb 2022 13:11:42 GMT", "version": "v1" }, { "created": "Fri, 18 Feb 2022 21:38:50 GMT", "version": "v2" } ]
2022-02-22
[ [ "Krawiecka", "Klaudia", "" ], [ "Birnbach", "Simon", "" ], [ "Eberz", "Simon", "" ], [ "Martinovic", "Ivan", "" ] ]
The lack of standard input interfaces in the Internet of Things (IoT) ecosystems presents a challenge in securing such infrastructures. To tackle this challenge, we introduce a novel behavioral biometric system based on naturally occurring interactions with objects in smart environments. This biometric leverages existing sensors to authenticate users without requiring any hardware modifications of existing smart home devices. The system is designed to reduce the need for phone-based authentication mechanisms, on which smart home systems currently rely. It requires the user to approve transactions on their phone only when the user cannot be authenticated with high confidence through their interactions with the smart environment. We conduct a real-world experiment that involves 13 participants in a company environment, using this experiment to also study mimicry attacks on our proposed system. We show that this system can provide seamless and unobtrusive authentication while still staying highly resistant to zero-effort, video, and in-person observation-based mimicry attacks. Even when at most 1% of the strongest type of mimicry attacks are successful, our system does not require the user to take out their phone to approve legitimate transactions in more than 80% of cases for a single interaction. This increases to 92% of transactions when interactions with more objects are considered.
2302.01281
Jean Marie Tshimula
Jean Marie Tshimula, D'Jeff K. Nkashama, Kalonji Kalala, Maximilien V. Dialufuma, Mbuyi Mukendi Didier, Hugues Kanda, Jean Tshibangu Muabila, Christian N. Mayemba
Redesigning Electronic Health Record Systems to Support Developing Countries
2023 7th International Conference on Medical and Health Informatics (ICMHI 2023)
null
null
null
cs.CY cs.AI cs.DB cs.HC cs.SE
http://creativecommons.org/licenses/by/4.0/
Electronic Health Record (EHR) has become an essential tool in the healthcare ecosystem, providing authorized clinicians with patients' health-related information for better treatment. While most developed countries are taking advantage of EHRs to improve their healthcare system, it remains challenging in developing countries to support clinical decision-making and public health using a computerized patient healthcare information system. This paper proposes a novel EHR architecture suitable for developing countries--an architecture that fosters inclusion and provides solutions tailored to all social classes and socioeconomic statuses. Our architecture foresees an internet-free (offline) solution to allow medical transactions between healthcare organizations, and the storage of EHRs in geographically underserved and rural areas. Moreover, we discuss how artificial intelligence can leverage anonymous health-related information to enable better public health policy and surveillance.
[ { "created": "Tue, 31 Jan 2023 19:16:38 GMT", "version": "v1" } ]
2023-02-03
[ [ "Tshimula", "Jean Marie", "" ], [ "Nkashama", "D'Jeff K.", "" ], [ "Kalala", "Kalonji", "" ], [ "Dialufuma", "Maximilien V.", "" ], [ "Didier", "Mbuyi Mukendi", "" ], [ "Kanda", "Hugues", "" ], [ "Muabila", "Jean Tshibangu", "" ], [ "Mayemba", "Christian N.", "" ] ]
Electronic Health Record (EHR) has become an essential tool in the healthcare ecosystem, providing authorized clinicians with patients' health-related information for better treatment. While most developed countries are taking advantage of EHRs to improve their healthcare system, it remains challenging in developing countries to support clinical decision-making and public health using a computerized patient healthcare information system. This paper proposes a novel EHR architecture suitable for developing countries--an architecture that fosters inclusion and provides solutions tailored to all social classes and socioeconomic statuses. Our architecture foresees an internet-free (offline) solution to allow medical transactions between healthcare organizations, and the storage of EHRs in geographically underserved and rural areas. Moreover, we discuss how artificial intelligence can leverage anonymous health-related information to enable better public health policy and surveillance.
2007.07924
Abubakar Siddique
Abubakar Siddique and Henry Medeiros
Tracking Passengers and Baggage Items Using Multiple Overhead Cameras at Security Checkpoints
16 pages, 16 figures
IEEE Transactions on Systems, Man, and Cybernetics: Systems ( Volume: 53, Issue: 6, June 2023)
10.1109/TSMC.2022.3225252
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a novel framework to track multiple objects in overhead camera videos for airport checkpoint security scenarios where targets correspond to passengers and their baggage items. We propose a self-supervised learning (SSL) technique to provide the model information about instance segmentation uncertainty from overhead images. Our SSL approach improves object detection by employing a test-time data augmentation and a regression-based, rotation-invariant pseudo-label refinement technique. Our pseudo-label generation method provides multiple geometrically transformed images as inputs to a convolutional neural network (CNN), regresses the augmented detections generated by the network to reduce localization errors, and then clusters them using the mean-shift algorithm. The self-supervised detector model is used in a single-camera tracking algorithm to generate temporal identifiers for the targets. Our method also incorporates a multiview trajectory association mechanism to maintain consistent temporal identifiers as passengers travel across camera views. An evaluation of detection, tracking, and association performances on videos obtained from multiple overhead cameras in a realistic airport checkpoint environment demonstrates the effectiveness of the proposed approach. Our results show that self-supervision improves object detection accuracy by up to 42% without increasing the inference time of the model. Our multicamera association method achieves up to 89% multiobject tracking accuracy with an average computation time of less than 15 ms.
[ { "created": "Wed, 15 Jul 2020 18:09:31 GMT", "version": "v1" }, { "created": "Tue, 22 Feb 2022 17:48:37 GMT", "version": "v2" }, { "created": "Tue, 27 Feb 2024 08:20:57 GMT", "version": "v3" } ]
2024-03-13
[ [ "Siddique", "Abubakar", "" ], [ "Medeiros", "Henry", "" ] ]
We introduce a novel framework to track multiple objects in overhead camera videos for airport checkpoint security scenarios where targets correspond to passengers and their baggage items. We propose a self-supervised learning (SSL) technique to provide the model information about instance segmentation uncertainty from overhead images. Our SSL approach improves object detection by employing a test-time data augmentation and a regression-based, rotation-invariant pseudo-label refinement technique. Our pseudo-label generation method provides multiple geometrically transformed images as inputs to a convolutional neural network (CNN), regresses the augmented detections generated by the network to reduce localization errors, and then clusters them using the mean-shift algorithm. The self-supervised detector model is used in a single-camera tracking algorithm to generate temporal identifiers for the targets. Our method also incorporates a multiview trajectory association mechanism to maintain consistent temporal identifiers as passengers travel across camera views. An evaluation of detection, tracking, and association performances on videos obtained from multiple overhead cameras in a realistic airport checkpoint environment demonstrates the effectiveness of the proposed approach. Our results show that self-supervision improves object detection accuracy by up to 42% without increasing the inference time of the model. Our multicamera association method achieves up to 89% multiobject tracking accuracy with an average computation time of less than 15 ms.
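The pseudo-label refinement described above clusters detections collected from geometrically transformed copies of an image. A minimal sketch using scikit-learn's mean-shift implementation follows; the bandwidth value and the box-center representation are illustrative assumptions.

import numpy as np
from sklearn.cluster import MeanShift

def refine_pseudo_labels(centers, bandwidth=20.0):
    # centers: (n, 2) array of detection box centers gathered from the
    # augmented copies and mapped back to the original image frame.
    # Mean-shift merges repeated detections of one object into a single
    # refined pseudo-label per cluster.
    ms = MeanShift(bandwidth=bandwidth).fit(np.asarray(centers))
    return ms.cluster_centers_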
1805.06775
Yenming Huang
Yenming Huang and Borching Su
Circularly Pulse-Shaped Precoding for OFDM: A New Waveform and Its Optimization Design for 5G New Radio
15 pages, 21 figures. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
IEEE Access 6 (2018) 44129-44146
10.1109/ACCESS.2018.2864336
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new circularly pulse-shaped (CPS) precoding orthogonal frequency division multiplexing (OFDM) waveform, or CPS-OFDM for short, is proposed in this paper. CPS-OFDM, characterized by user-specific precoder flexibility, possesses the advantages of both low out-of-subband emission (OSBE) and low peak-to-average power ratio (PAPR), which are two major desired physical layer signal properties for various scenarios in 5G New Radio (NR), including fragmented spectrum access, new types of user equipments (UEs), and communications at high carrier frequencies. As opposed to most existing waveform candidates using windowing or filtering techniques, CPS-OFDM prevents block extension, which causes extra inter-block interference (IBI) and envelope fluctuation that are unfriendly to signal reception and power amplifier (PA) efficiency, respectively. An optimization problem of the prototype shaping vector built into the CPS precoder is formulated to minimize the variance of instantaneous power (VIP) with controllable OSBE power (OSBEP) and noise enhancement penalty (NEP). In order to solve the optimization problem involving a quartic objective function, the majorization-minimization (MM) algorithmic framework is exploited. By proving the convexity of the proposed problem, the globally optimal solution, which is invariant to the incoming data, is guaranteed to be attained after a number of iterations. Simulation results demonstrate the advantages of the proposed scheme in terms of detection reliability and spectral efficiency for practical 5G cases such as asynchronous transmissions and mixed numerologies.
[ { "created": "Thu, 17 May 2018 13:51:42 GMT", "version": "v1" } ]
2018-09-28
[ [ "Huang", "Yenming", "" ], [ "Su", "Borching", "" ] ]
A new circularly pulse-shaped (CPS) precoding orthogonal frequency division multiplexing (OFDM) waveform, or CPS-OFDM for short, is proposed in this paper. CPS-OFDM, characterized by user-specific precoder flexibility, possesses the advantages of both low out-of-subband emission (OSBE) and low peak-to-average power ratio (PAPR), which are two major desired physical layer signal properties for various scenarios in 5G New Radio (NR), including fragmented spectrum access, new types of user equipments (UEs), and communications at high carrier frequencies. As opposed to most existing waveform candidates using windowing or filtering techniques, CPS-OFDM prevents block extension, which causes extra inter-block interference (IBI) and envelope fluctuation that are unfriendly to signal reception and power amplifier (PA) efficiency, respectively. An optimization problem of the prototype shaping vector built into the CPS precoder is formulated to minimize the variance of instantaneous power (VIP) with controllable OSBE power (OSBEP) and noise enhancement penalty (NEP). In order to solve the optimization problem involving a quartic objective function, the majorization-minimization (MM) algorithmic framework is exploited. By proving the convexity of the proposed problem, the globally optimal solution, which is invariant to the incoming data, is guaranteed to be attained after a number of iterations. Simulation results demonstrate the advantages of the proposed scheme in terms of detection reliability and spectral efficiency for practical 5G cases such as asynchronous transmissions and mixed numerologies.
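As a rough illustration of the precoding idea (one plausible reading of the abstract, not the paper's exact construction), a circulant matrix whose columns are circular shifts of a prototype shaping vector can be applied to the data symbols before the usual OFDM modulation. The prototype vector and normalization below are assumptions.

import numpy as np
from scipy.linalg import circulant

def cps_precode(symbols, prototype):
    # Columns of P are circular shifts of the prototype shaping vector.
    P = circulant(prototype).astype(complex)
    P = P / np.linalg.norm(P, axis=0)       # unit-energy columns
    return np.fft.ifft(P @ symbols)         # precode, then OFDM-modulate

K = 64
symbols = (2 * np.random.randint(0, 2, K) - 1).astype(complex)  # BPSK data
prototype = np.exp(-np.arange(K) / 8.0)     # illustrative shaping vector
tx = cps_precode(symbols, prototype)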
2405.04428
Alexis Baudin
Alexis Baudin, Cl\'emence Magnien, Lionel Tabourier
BBK: a simpler, faster algorithm for enumerating maximal bicliques in large sparse bipartite graphs
21 pages, 4 figures, 3 tables
null
null
null
cs.DS cs.CC cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bipartite graphs are a prevalent modeling tool for real-world networks, capturing interactions between vertices of two different types. Within this framework, bicliques emerge as crucial structures when studying dense subgraphs: they are sets of vertices such that all vertices of the first type interact with all vertices of the second type. Therefore, they allow identifying groups of closely related vertices of the network, such as individuals with similar interests or webpages with similar contents. This article introduces a new algorithm designed for the exhaustive enumeration of maximal bicliques within a bipartite graph. This algorithm, called BBK for Bipartite Bron-Kerbosch, is a new extension to the bipartite case of the Bron-Kerbosch algorithm, which enumerates the maximal cliques in standard (non-bipartite) graphs. It is faster than the state-of-the-art algorithms and allows the enumeration on massive bipartite graphs that are not manageable with existing implementations. We analyze it theoretically to establish two complexity formulas: one as a function of the input and one as a function of the output characteristics of the algorithm. We also provide an open-access implementation of BBK in C++, which we use to experiment and validate its efficiency on massive real-world datasets and show that its execution time is shorter in practice than that of state-of-the-art algorithms. These experiments also show that the order in which the vertices are processed, as well as the choice of which of the two types of vertices to initiate the enumeration on, has an impact on the computation time.
[ { "created": "Tue, 7 May 2024 15:49:34 GMT", "version": "v1" }, { "created": "Fri, 24 May 2024 09:00:21 GMT", "version": "v2" } ]
2024-05-27
[ [ "Baudin", "Alexis", "" ], [ "Magnien", "Clémence", "" ], [ "Tabourier", "Lionel", "" ] ]
Bipartite graphs are a prevalent modeling tool for real-world networks, capturing interactions between vertices of two different types. Within this framework, bicliques emerge as crucial structures when studying dense subgraphs: they are sets of vertices such that all vertices of the first type interact with all vertices of the second type. Therefore, they allow identifying groups of closely related vertices of the network, such as individuals with similar interests or webpages with similar contents. This article introduces a new algorithm designed for the exhaustive enumeration of maximal bicliques within a bipartite graph. This algorithm, called BBK for Bipartite Bron-Kerbosch, is a new extension to the bipartite case of the Bron-Kerbosch algorithm, which enumerates the maximal cliques in standard (non-bipartite) graphs. It is faster than the state-of-the-art algorithms and allows the enumeration on massive bipartite graphs that are not manageable with existing implementations. We analyze it theoretically to establish two complexity formulas: one as a function of the input and one as a function of the output characteristics of the algorithm. We also provide an open-access implementation of BBK in C++, which we use to experiment and validate its efficiency on massive real-world datasets and show that its execution time is shorter in practice than that of state-of-the-art algorithms. These experiments also show that the order in which the vertices are processed, as well as the choice of which of the two types of vertices to initiate the enumeration on, has an impact on the computation time.
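Since the paper's optimized BBK is not reproduced in the abstract, a simplified Bron-Kerbosch-style enumeration of maximal bicliques conveys the idea; the closure step and recursion below are a compact sketch, far slower than the authors' C++ implementation.

def maximal_bicliques(adj):
    # adj maps each left-side vertex to the frozenset of its right-side
    # neighbours; returns all maximal bicliques (L, R) with R non-empty.
    results = set()

    def closure(left):
        right = frozenset.intersection(*(adj[u] for u in left))
        return frozenset(u for u in adj if right <= adj[u]), right

    def expand(left, candidates):
        full_left, right = closure(left)
        if right:
            results.add((full_left, right))
        for u in list(candidates):
            candidates = candidates - {u}
            if u not in full_left:
                expand(full_left | {u}, candidates)

    vertices = frozenset(adj)
    for v in adj:
        expand(frozenset([v]), vertices - {v})
    return results

adj = {"a": frozenset({1, 2}), "b": frozenset({1, 2, 3}), "c": frozenset({3})}
print(maximal_bicliques(adj))   # ({a,b},{1,2}), ({b},{1,2,3}), ({b,c},{3})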
1511.02196
Haohan Wang
Haohan Wang, Madhavi K. Ganapathiraju
Evaluating Protein-protein Interaction Predictors with a Novel 3-Dimensional Metric
This article is an extended version of a poster presented in AMIA TBI 2015
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order for the predicted interactions to be directly adopted by biologists, the machine learning predictions have to be of high precision, regardless of recall. This aspect cannot be evaluated or numerically represented well by traditional metrics like accuracy, ROC, or the precision-recall curve. In this work, we start from the alignment between the sensitivity of ROC and the recall of the precision-recall curve, and propose an evaluation metric focusing on the ability of a model to be adopted by biologists. This metric evaluates the ability of a machine learning algorithm to predict only new interactions, and meanwhile it eliminates the influence of the test dataset. In experiments evaluating different classifiers with the same dataset and evaluating the same predictor with different datasets, our new metric fulfills the evaluation task of interest, while two widely recognized metrics, ROC and the precision-recall curve, fail the task for different reasons.
[ { "created": "Fri, 6 Nov 2015 19:14:09 GMT", "version": "v1" } ]
2015-11-09
[ [ "Wang", "Haohan", "" ], [ "Ganapathiraju", "Madhavi K.", "" ] ]
In order for the predicted interactions to be directly adopted by biologists, the machine learning predictions have to be of high precision, regardless of recall. This aspect cannot be evaluated or numerically represented well by traditional metrics like accuracy, ROC, or the precision-recall curve. In this work, we start from the alignment between the sensitivity of ROC and the recall of the precision-recall curve, and propose an evaluation metric focusing on the ability of a model to be adopted by biologists. This metric evaluates the ability of a machine learning algorithm to predict only new interactions, and meanwhile it eliminates the influence of the test dataset. In experiments evaluating different classifiers with the same dataset and evaluating the same predictor with different datasets, our new metric fulfills the evaluation task of interest, while two widely recognized metrics, ROC and the precision-recall curve, fail the task for different reasons.
2405.10213
Nedjma Ousidhoum
Dimosthenis Antypas, Christian Arnold, Jose Camacho-Collados, Nedjma Ousidhoum, Carla Perez Almendros
Words as Trigger Points in Social Media Discussions
null
null
null
null
cs.SI cs.CL cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trigger points are a concept introduced by Mau, Lux, and Westheuser (2023) to study qualitative focus group interviews and understand polarisation in Germany. When people communicate, trigger points represent moments when individuals feel that their understanding of what is fair, normal, or appropriate in society is questioned. In the original studies, individuals react affectively to such triggers and show strong and negative emotional responses. In this paper, we introduce the first systematic study of the large-scale effect of individual words as trigger points by analysing a large amount of social media posts. We examine online deliberations on Reddit between 2020 and 2022 and collect >100 million posts from subreddits related to a set of words identified as trigger points in UK politics. We find that such trigger words affect user engagement and have noticeable consequences on animosity in online discussions. We share empirical evidence of trigger words causing animosity, and how they provide incentives for hate speech, adversarial debates, and disagreements. Our work is the first to introduce trigger points to computational studies of online communication. Our findings are relevant to researchers interested in online harms and who examine how citizens debate politics and society in light of affective polarisation.
[ { "created": "Thu, 16 May 2024 16:02:42 GMT", "version": "v1" } ]
2024-05-17
[ [ "Antypas", "Dimosthenis", "" ], [ "Arnold", "Christian", "" ], [ "Camacho-Collados", "Jose", "" ], [ "Ousidhoum", "Nedjma", "" ], [ "Almendros", "Carla Perez", "" ] ]
Trigger points are a concept introduced by Mau, Lux, and Westheuser (2023) to study qualitative focus group interviews and understand polarisation in Germany. When people communicate, trigger points represent moments when individuals feel that their understanding of what is fair, normal, or appropriate in society is questioned. In the original studies, individuals react affectively to such triggers and show strong and negative emotional responses. In this paper, we introduce the first systematic study of the large-scale effect of individual words as trigger points by analysing a large amount of social media posts. We examine online deliberations on Reddit between 2020 and 2022 and collect >100 million posts from subreddits related to a set of words identified as trigger points in UK politics. We find that such trigger words affect user engagement and have noticeable consequences on animosity in online discussions. We share empirical evidence of trigger words causing animosity, and how they provide incentives for hate speech, adversarial debates, and disagreements. Our work is the first to introduce trigger points to computational studies of online communication. Our findings are relevant to researchers interested in online harms and who examine how citizens debate politics and society in light of affective polarisation.
1602.03956
Daniel Filipe Farinha
Daniel Filipe G. Farinha
Grokya: a Privacy-Friendly Framework for Ubiquitous Computing
Master thesis
null
10.13140/RG.2.1.2452.3281
null
cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
In a world where for-profit enterprises are increasingly looking to maximize profits by engaging in privacy-invading consumer-profiling techniques, the rise of ubiquitous computing and the Internet of Things (IoT) poses a major problem. If not acted upon quickly, the combination of Big Data with IoT will explode into a dystopian world that even George Orwell could not have predicted. The proposed project aims to fill a gap that no other solution is addressing, which is to reach a win-win scenario that works for both the enterprises and the consumers. It aims to do this by creating the building blocks for a consumer-owned infrastructure that can provide privacy for the user while still enabling the enterprises to achieve their high-level goals.
[ { "created": "Fri, 12 Feb 2016 03:18:47 GMT", "version": "v1" }, { "created": "Wed, 29 Jun 2016 07:19:01 GMT", "version": "v2" } ]
2016-06-30
[ [ "Farinha", "Daniel Filipe G.", "" ] ]
In a world where for-profit enterprises are increasingly looking to maximize profits by engaging in privacy-invading consumer-profiling techniques, the rise of ubiquitous computing and the Internet of Things (IoT) poses a major problem. If not acted upon quickly, the combination of Big Data with IoT will explode into a dystopian world that even George Orwell could not have predicted. The proposed project aims to fill a gap that no other solution is addressing, which is to reach a win-win scenario that works for both the enterprises and the consumers. It aims to do this by creating the building blocks for a consumer-owned infrastructure that can provide privacy for the user while still enabling the enterprises to achieve their high-level goals.
2009.03062
Harshal Tupsamudre
Harshal Tupsamudre and Sachin Lodha
Passwords: Divided they Stand, United they Fall
We propose an offline password guessing model in which attacker uses information from previous breaches and surveys to divide the search space into partitions. We prove that the success rate of attacker is maximum if the resulting partitions are explored in decreasing order of density. We demonstrate that the partition attack model is generic. This is an extended version of Passwords2014 paper
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Today, offline attacks are one of the most severe threats to password security. These attacks have claimed millions of passwords from prominent websites including Yahoo, LinkedIn, Twitter, Sony, Adobe and many more. Therefore, as a preventive measure, it is necessary to gauge the offline guessing resistance of a password database and to help users choose secure passwords. The rule-based mechanisms that rely on minimum password length and different character classes are too naive to capture intricate human behavior, whereas those based on probabilistic models require knowledge of the entire password distribution, which is not always easy to learn. In this paper, we propose a space partition attack model, which uses information from previous leaks, surveys, attacks, and other sources to divide the password search space into non-overlapping partitions and learn partition densities. We prove that the expected success of a partition attacker is maximized if the resulting partitions are explored in decreasing order of density. We show that the proposed attack model is more general: various popular attack techniques, including probabilistic-based, dictionary-based, grammar-based, and brute-force attacks, are just different instances of a partition attacker. Later, we introduce the bin attacker, another instance of a partition attacker, and measure the guessing resistance of real-world password databases. We demonstrate that the utilized search space is very small and as a result even a weak attacker can cause sufficient damage to the system. We prove that partition attacks can be countered only if partition densities are uniform. We use this result and propose a system that thwarts the partition attacker by distributing users across different partitions. Finally, we demonstrate how some of the well-known password schemes can be adapted to help users in choosing passwords from the system-assigned partitions.
[ { "created": "Mon, 7 Sep 2020 12:39:02 GMT", "version": "v1" }, { "created": "Tue, 8 Sep 2020 02:36:07 GMT", "version": "v2" }, { "created": "Sat, 12 Sep 2020 03:13:25 GMT", "version": "v3" } ]
2020-09-15
[ [ "Tupsamudre", "Harshal", "" ], [ "Lodha", "Sachin", "" ] ]
Today, offline attacks are one of the most severe threats to password security. These attacks have claimed millions of passwords from prominent websites including Yahoo, LinkedIn, Twitter, Sony, Adobe and many more. Therefore, as a preventive measure, it is necessary to gauge the offline guessing resistance of a password database and to help users choose secure passwords. The rule-based mechanisms that rely on minimum password length and different character classes are too naive to capture intricate human behavior, whereas those based on probabilistic models require knowledge of the entire password distribution, which is not always easy to learn. In this paper, we propose a space partition attack model, which uses information from previous leaks, surveys, attacks, and other sources to divide the password search space into non-overlapping partitions and learn partition densities. We prove that the expected success of a partition attacker is maximized if the resulting partitions are explored in decreasing order of density. We show that the proposed attack model is more general: various popular attack techniques, including probabilistic-based, dictionary-based, grammar-based, and brute-force attacks, are just different instances of a partition attacker. Later, we introduce the bin attacker, another instance of a partition attacker, and measure the guessing resistance of real-world password databases. We demonstrate that the utilized search space is very small and as a result even a weak attacker can cause sufficient damage to the system. We prove that partition attacks can be countered only if partition densities are uniform. We use this result and propose a system that thwarts the partition attacker by distributing users across different partitions. Finally, we demonstrate how some of the well-known password schemes can be adapted to help users in choosing passwords from the system-assigned partitions.
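The density-ordering result stated above is easy to operationalize: given partition sizes and densities, an attacker with a fixed guess budget recovers the most probability mass by draining partitions in decreasing density order. A minimal sketch follows; the (size, density) data layout is an illustrative assumption.

def expected_success(partitions, guess_budget):
    # partitions: list of (size, density) pairs, where density is the
    # probability mass per password in that partition.
    success, spent = 0.0, 0
    for size, density in sorted(partitions, key=lambda p: p[1], reverse=True):
        guesses = min(size, guess_budget - spent)
        success += guesses * density        # mass recovered here
        spent += guesses
        if spent == guess_budget:
            break
    return success

print(expected_success([(100, 0.002), (1000, 0.0005)], guess_budget=500))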
1801.04102
Donghoon Lee
Donghoon Lee, Ming-Hsuan Yang, and Songhwai Oh
Generative Single Image Reflection Separation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single image reflection separation is an ill-posed problem since two scenes, a transmitted scene and a reflected scene, need to be inferred from a single observation. To make the problem tractable, in this work we assume that the categories of the two scenes are known. This allows us to address the problem by generating both scenes so that they belong to those categories while their contents are constrained to match the observed image. A novel network architecture is proposed to render realistic images of both scenes based on adversarial learning. The network can be trained in a weakly supervised manner, i.e., it learns to separate an observed image without corresponding ground truth images of the transmission and reflection scenes, which are difficult to collect in practice. Experimental results on real and synthetic datasets demonstrate that the proposed algorithm performs favorably against existing methods.
[ { "created": "Fri, 12 Jan 2018 09:36:17 GMT", "version": "v1" } ]
2018-01-15
[ [ "Lee", "Donghoon", "" ], [ "Yang", "Ming-Hsuan", "" ], [ "Oh", "Songhwai", "" ] ]
Single image reflection separation is an ill-posed problem since two scenes, a transmitted scene and a reflected scene, need to be inferred from a single observation. To make the problem tractable, in this work we assume that the categories of the two scenes are known. This allows us to address the problem by generating both scenes so that they belong to those categories while their contents are constrained to match the observed image. A novel network architecture is proposed to render realistic images of both scenes based on adversarial learning. The network can be trained in a weakly supervised manner, i.e., it learns to separate an observed image without corresponding ground truth images of the transmission and reflection scenes, which are difficult to collect in practice. Experimental results on real and synthetic datasets demonstrate that the proposed algorithm performs favorably against existing methods.
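The weak supervision described above constrains the two generated scenes to recompose the observation without per-scene ground truth. A minimal sketch of such an objective follows, assuming an additive mixing model and pre-computed adversarial realism terms; both are illustrative assumptions, not the paper's exact formulation.

import torch

def weakly_supervised_loss(obs, trans, refl, adv_trans, adv_refl, lam=1.0):
    # Generated transmission and reflection layers must sum back to the
    # observed image; adversarial terms keep each generated scene
    # realistic for its known category.
    recon = torch.mean((trans + refl - obs) ** 2)
    return adv_trans + adv_refl + lam * recon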
1812.11042
Aditya Dendukuri
Aditya Dendukuri and Khoa Luu
Image Processing in Quantum Computers
6 pages, 7 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantum Image Processing (QIP) is an exciting new field showing a lot of promise as a powerful addition to the arsenal of image processing techniques. Representing an image pixel by pixel using classical information requires an enormous amount of computational resources. Hence, exploring methods to represent images in a different paradigm of information is important. In this work, we study the representation of images in quantum information. The main motivation for this pursuit is the ability to store N bits of classical information in only log2(N) quantum bits (qubits). A promising first step was the exponentially efficient implementation of the Fourier transform on quantum computers as compared to the Fast Fourier Transform on classical computers. In addition, images encoded in quantum information can exhibit unique quantum properties like superposition or entanglement.
[ { "created": "Fri, 28 Dec 2018 15:29:06 GMT", "version": "v1" }, { "created": "Wed, 6 Feb 2019 23:04:47 GMT", "version": "v2" }, { "created": "Mon, 11 Feb 2019 19:03:12 GMT", "version": "v3" } ]
2019-02-13
[ [ "Dendukuri", "Aditya", "" ], [ "Luu", "Khoa", "" ] ]
Quantum Image Processing (QIP) is an exciting new field showing a lot of promise as a powerful addition to the arsenal of image processing techniques. Representing an image pixel by pixel using classical information requires an enormous amount of computational resources. Hence, exploring methods to represent images in a different paradigm of information is important. In this work, we study the representation of images in quantum information. The main motivation for this pursuit is the ability to store N bits of classical information in only log2(N) quantum bits (qubits). A promising first step was the exponentially efficient implementation of the Fourier transform on quantum computers as compared to the Fast Fourier Transform on classical computers. In addition, images encoded in quantum information can exhibit unique quantum properties like superposition or entanglement.
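The log2(N) storage claim corresponds to amplitude encoding, a standard construction in which N pixel values become the amplitudes of a quantum state over ceil(log2(N)) qubits; it is shown here for illustration, not as the paper's specific scheme.

import numpy as np

def amplitude_encode(image):
    # Flatten the image, pad to a power-of-two length, and normalize so
    # the pixel values form a valid quantum state vector.
    v = np.asarray(image, dtype=float).ravel()
    n_qubits = int(np.ceil(np.log2(v.size)))
    state = np.zeros(2 ** n_qubits)
    state[: v.size] = v
    return state / np.linalg.norm(state), n_qubits

state, n = amplitude_encode(np.arange(1, 17).reshape(4, 4))
print(n)   # 16 pixels fit into a 4-qubit state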
1604.05572
Malte Paskuda
Malte Paskuda and Myriam Lewkowicz
Does Anonymity Increase the Chance to Get Feedback?
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To generate a hypothesis about the effects of anonymity on user participation in online communities, comments on Youtube were analysed for effects of the change from allowing pseudonyms to Google+ with its real-name policy. Small differences were detected, leading to the hypothesis that the option to remain anonymous leads to a less active environment for getting feedback, with fewer polite and fewer rude comments at the expense of neutral ones.
[ { "created": "Tue, 19 Apr 2016 14:01:57 GMT", "version": "v1" } ]
2016-04-20
[ [ "Paskuda", "Malte", "" ], [ "Lewkowicz", "Myriam", "" ] ]
To generate a hypothesis about the effects of anonymity on user participation in online communities, comments on Youtube were analysed for effects of the change from allowing pseudonyms to Google+ with its real-name policy. Small differences were detected, leading to the hypothesis that the option to remain anonymous leads to a less active environment for getting feedback, with fewer polite and fewer rude comments at the expense of neutral ones.
2303.10285
Gleb Pogudin
Andrey Bychkov and Opal Issan and Gleb Pogudin and Boris Kramer
Exact and optimal quadratization of nonlinear finite-dimensional non-autonomous dynamical systems
null
null
null
null
cs.SC cs.NA math.DS math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quadratization of polynomial and nonpolynomial systems of ordinary differential equations is advantageous in a variety of disciplines, such as systems theory, fluid mechanics, chemical reaction modeling and mathematical analysis. A quadratization reveals new variables and structures of a model, which may be easier to analyze, simulate, control, and provides a convenient parametrization for learning. This paper presents novel theory, algorithms and software capabilities for quadratization of non-autonomous ODEs. We provide existence results, depending on the regularity of the input function, for cases when a quadratic-bilinear system can be obtained through quadratization. We further develop existence results and an algorithm that generalizes the process of quadratization for systems with arbitrary dimension that retain the nonlinear structure when the dimension grows. For such systems, we provide dimension-agnostic quadratization. An example is semi-discretized PDEs, where the nonlinear terms remain symbolically identical when the discretization size increases. As an important aspect for practical adoption of this research, we extended the capabilities of the QBee software towards both non-autonomous systems of ODEs and ODEs with arbitrary dimension. We present several examples of ODEs that were previously reported in the literature, and where our new algorithms find quadratized ODE systems with lower dimension than the previously reported lifting transformations. We further highlight an important area of quadratization: reduced-order model learning. This area can benefit significantly from working in the optimal lifting variables, where quadratic models provide a direct parametrization of the model that also avoids additional hyperreduction for the nonlinear terms. A solar wind example highlights these advantages.
[ { "created": "Fri, 17 Mar 2023 23:52:35 GMT", "version": "v1" }, { "created": "Fri, 24 Mar 2023 20:30:06 GMT", "version": "v2" }, { "created": "Fri, 8 Sep 2023 19:06:08 GMT", "version": "v3" }, { "created": "Tue, 5 Dec 2023 21:18:27 GMT", "version": "v4" } ]
2023-12-07
[ [ "Bychkov", "Andrey", "" ], [ "Issan", "Opal", "" ], [ "Pogudin", "Gleb", "" ], [ "Kramer", "Boris", "" ] ]
Quadratization of polynomial and nonpolynomial systems of ordinary differential equations is advantageous in a variety of disciplines, such as systems theory, fluid mechanics, chemical reaction modeling and mathematical analysis. A quadratization reveals new variables and structures of a model, which may be easier to analyze, simulate, control, and provides a convenient parametrization for learning. This paper presents novel theory, algorithms and software capabilities for quadratization of non-autonomous ODEs. We provide existence results, depending on the regularity of the input function, for cases when a quadratic-bilinear system can be obtained through quadratization. We further develop existence results and an algorithm that generalizes the process of quadratization for systems with arbitrary dimension that retain the nonlinear structure when the dimension grows. For such systems, we provide dimension-agnostic quadratization. An example is semi-discretized PDEs, where the nonlinear terms remain symbolically identical when the discretization size increases. As an important aspect for practical adoption of this research, we extended the capabilities of the QBee software towards both non-autonomous systems of ODEs and ODEs with arbitrary dimension. We present several examples of ODEs that were previously reported in the literature, and where our new algorithms find quadratized ODE systems with lower dimension than the previously reported lifting transformations. We further highlight an important area of quadratization: reduced-order model learning. This area can benefit significantly from working in the optimal lifting variables, where quadratic models provide a direct parametrization of the model that also avoids additional hyperreduction for the nonlinear terms. A solar wind example highlights these advantages.
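A textbook one-dimensional example (not taken from the paper) makes the lifting concrete: the cubic ODE x' = x^3 becomes quadratic after introducing y = x^2, since x' = x*y and y' = 2*x*x' = 2*x^4 = 2*y^2. A short sympy check of this example:

import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")(t)
xdot = x**3                                   # original ODE: x' = x^3
y = x**2                                      # lifting variable
ydot = sp.diff(y, t).subs(sp.Derivative(x, t), xdot)
assert sp.simplify(ydot - 2 * y**2) == 0      # y' = 2*y^2, quadratic
assert sp.simplify(xdot - x * y) == 0         # x' = x*y, quadratic
print("quadratization verified")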
2007.00792
Haoyi Wang
Haoyi Wang, Victor Sanchez, Chang-Tsun Li
Age-Oriented Face Synthesis with Conditional Discriminator Pool and Adversarial Triplet Loss
null
null
10.1109/TIP.2021.3084106
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Vanilla Generative Adversarial Networks (GANs) are commonly used to generate realistic images depicting aged and rejuvenated faces. However, the performance of such vanilla GANs in the age-oriented face synthesis task is often compromised by the mode collapse issue, which may result in the generation of faces with minimal variations and poor synthesis accuracy. In addition, recent age-oriented face synthesis methods use the L1 or L2 constraint to preserve the identity information on synthesized faces, which implicitly limits the identity permanence capabilities when these constraints are associated with a trivial weighting factor. In this paper, we propose a method for the age-oriented face synthesis task that achieves high synthesis accuracy with strong identity permanence capabilities. Specifically, to achieve high synthesis accuracy, our method tackles the mode collapse issue with a novel Conditional Discriminator Pool (CDP), which consists of multiple discriminators, each targeting one particular age category. To achieve strong identity permanence capabilities, our method uses a novel Adversarial Triplet loss. This loss, which is based on the Triplet loss, adds a ranking operation to further pull the positive embedding towards the anchor embedding, resulting in significantly reduced intra-class variances in the feature space. Through extensive experiments, we show that our proposed method outperforms state-of-the-art methods in terms of synthesis accuracy and identity permanence capabilities, both qualitatively and quantitatively.
[ { "created": "Wed, 1 Jul 2020 22:18:21 GMT", "version": "v1" }, { "created": "Fri, 3 Jul 2020 23:31:58 GMT", "version": "v2" } ]
2021-07-07
[ [ "Wang", "Haoyi", "" ], [ "Sanchez", "Victor", "" ], [ "Li", "Chang-Tsun", "" ] ]
Vanilla Generative Adversarial Networks (GANs) are commonly used to generate realistic images depicting aged and rejuvenated faces. However, the performance of such vanilla GANs in the age-oriented face synthesis task is often compromised by the mode collapse issue, which may result in the generation of faces with minimal variations and poor synthesis accuracy. In addition, recent age-oriented face synthesis methods use the L1 or L2 constraint to preserve the identity information on synthesized faces, which implicitly limits the identity permanence capabilities when these constraints are associated with a trivial weighting factor. In this paper, we propose a method for the age-oriented face synthesis task that achieves high synthesis accuracy with strong identity permanence capabilities. Specifically, to achieve high synthesis accuracy, our method tackles the mode collapse issue with a novel Conditional Discriminator Pool (CDP), which consists of multiple discriminators, each targeting one particular age category. To achieve strong identity permanence capabilities, our method uses a novel Adversarial Triplet loss. This loss, which is based on the Triplet loss, adds a ranking operation to further pull the positive embedding towards the anchor embedding, resulting in significantly reduced intra-class variances in the feature space. Through extensive experiments, we show that our proposed method outperforms state-of-the-art methods in terms of synthesis accuracy and identity permanence capabilities, both qualitatively and quantitatively.
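The ranking idea described above, pulling the positive embedding further toward the anchor than the vanilla Triplet loss does, can be sketched as follows; the extra term and its weighting are illustrative assumptions, not the paper's exact loss.

import torch
import torch.nn.functional as F

def adversarial_triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    triplet = F.relu(d_ap - d_an + margin)      # standard triplet term
    ranking = F.relu(d_ap - d_an.detach())      # extra pull of the positive
    return (triplet + ranking).mean()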
2007.06073
Daniel Halpern
Daniel Halpern, Ariel D. Procaccia, Alexandros Psomas, Nisarg Shah
Fair Division with Binary Valuations: One Rule to Rule Them All
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study fair allocation of indivisible goods among agents. Prior research focuses on additive agent preferences, which leads to an impossibility when seeking truthfulness, fairness, and efficiency. We show that when agents have binary additive preferences, a compelling rule -- maximum Nash welfare (MNW) -- provides all three guarantees. Specifically, we show that deterministic MNW with lexicographic tie-breaking is group strategyproof in addition to being envy-free up to one good and Pareto optimal. We also prove that fractional MNW -- known to be group strategyproof, envy-free, and Pareto optimal -- can be implemented as a distribution over deterministic MNW allocations, which are envy-free up to one good. Our work establishes maximum Nash welfare as the ultimate allocation rule in the realm of binary additive preferences.
[ { "created": "Sun, 12 Jul 2020 19:37:59 GMT", "version": "v1" }, { "created": "Wed, 30 Sep 2020 15:14:14 GMT", "version": "v2" } ]
2020-10-01
[ [ "Halpern", "Daniel", "" ], [ "Procaccia", "Ariel D.", "" ], [ "Psomas", "Alexandros", "" ], [ "Shah", "Nisarg", "" ] ]
We study fair allocation of indivisible goods among agents. Prior research focuses on additive agent preferences, which leads to an impossibility when seeking truthfulness, fairness, and efficiency. We show that when agents have binary additive preferences, a compelling rule -- maximum Nash welfare (MNW) -- provides all three guarantees. Specifically, we show that deterministic MNW with lexicographic tie-breaking is group strategyproof in addition to being envy-free up to one good and Pareto optimal. We also prove that fractional MNW -- known to be group strategyproof, envy-free, and Pareto optimal -- can be implemented as a distribution over deterministic MNW allocations, which are envy-free up to one good. Our work establishes maximum Nash welfare as the ultimate allocation rule in the realm of binary additive preferences.
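For intuition, the MNW rule under binary additive valuations can be brute-forced on tiny instances: enumerate every allocation, first maximize the number of agents with positive utility, then the Nash product among them. The final tie-breaking term below is an illustrative stand-in for the paper's exact lexicographic rule.

from itertools import product
from math import prod

def mnw_allocation(approvals, num_goods):
    # approvals[i]: set of goods that agent i values at 1.
    n = len(approvals)
    best_key, best_assign = None, None
    for assign in product(range(n), repeat=num_goods):
        utils = tuple(sum(1 for g, a in enumerate(assign)
                          if a == i and g in approvals[i]) for i in range(n))
        positive = [u for u in utils if u > 0]
        key = (len(positive), prod(positive) if positive else 0, utils)
        if best_key is None or key > best_key:
            best_key, best_assign = key, assign
    return best_assign

print(mnw_allocation([{0, 1}, {1, 2}], num_goods=3))   # e.g. (0, 0, 1)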
2206.12485
Neguine Rezaii
Neguine Rezaii
The syntax-lexicon tradeoff in writing
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
As speakers turn their thoughts into sentences, they maintain a balance between the complexity of words and syntax. However, it is unclear whether this syntax-lexicon tradeoff is unique to spoken language production, which is under the pressure of rapid online processing. Alternatively, it is possible that the tradeoff is a basic property of language irrespective of the modality of production. This work evaluates the relationship between the complexity of words and syntactic rules in the written language of neurotypical individuals on three different topics. We found that, similar to speaking, constructing sentences in writing involves a tradeoff between the complexity of the lexical and syntactic items. We also show that the reduced online processing demands during writing allow for retrieving more complex words at the cost of incorporating simpler syntax. This work further highlights the role of the accessibility of the elements of a sentence as the driving force in the emergence of the syntax-lexicon tradeoff.
[ { "created": "Fri, 24 Jun 2022 19:57:12 GMT", "version": "v1" } ]
2022-06-28
[ [ "Rezaii", "Neguine", "" ] ]
As speakers turn their thoughts into sentences, they maintain a balance between the complexity of words and syntax. However, it is unclear whether this syntax-lexicon tradeoff is unique to spoken language production, which is under the pressure of rapid online processing. Alternatively, it is possible that the tradeoff is a basic property of language irrespective of the modality of production. This work evaluates the relationship between the complexity of words and syntactic rules in the written language of neurotypical individuals on three different topics. We found that, similar to speaking, constructing sentences in writing involves a tradeoff between the complexity of the lexical and syntactic items. We also show that the reduced online processing demands during writing allow for retrieving more complex words at the cost of incorporating simpler syntax. This work further highlights the role of the accessibility of the elements of a sentence as the driving force in the emergence of the syntax-lexicon tradeoff.
1710.07992
Veeresh Devireddy
Veeresh D, Thimmaraju S. N, Ravish G. K
Twin Sort Technique
core computer algorithm, 3 pages, conference
International Journal of Combined Research & Development October 2014
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The objective behind the Twin Sort technique is to sort a list of unordered data elements efficiently and to allow simple arrangement of elements within the data structure while optimizing the number of comparisons and iterations. The technique terminates early: iterations stop as soon as no further comparisons are needed because the elements have become sorted. Unlike the Quick sort and Merge sort techniques, this new sorting technique is based on an iterative method of sorting elements within the data structure, which is advantageous because iterations are skipped when no sorting is needed. Finally, the Twin Sort technique is an efficient and simple method of arranging elements within a data structure, and it is easy to implement compared to other sorting techniques. Thanks to the optimization of comparisons and iterations, it never performs sorting work on elements that are already ordered.
[ { "created": "Sun, 22 Oct 2017 18:16:41 GMT", "version": "v1" } ]
2017-10-24
[ [ "D", "Veeresh", "" ], [ "N", "Thimmaraju S.", "" ], [ "K", "Ravish G.", "" ] ]
The objective behind the Twin Sort technique is to sort a list of unordered data elements efficiently and to allow simple arrangement of elements within the data structure while optimizing the number of comparisons and iterations. The technique terminates early: iterations stop as soon as no further comparisons are needed because the elements have become sorted. Unlike the Quick sort and Merge sort techniques, this new sorting technique is based on an iterative method of sorting elements within the data structure, which is advantageous because iterations are skipped when no sorting is needed. Finally, the Twin Sort technique is an efficient and simple method of arranging elements within a data structure, and it is easy to implement compared to other sorting techniques. Thanks to the optimization of comparisons and iterations, it never performs sorting work on elements that are already ordered.
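The abstract gives no pseudocode, so the sketch below is a minimal pass-based sort exhibiting the early-termination property it emphasizes: iterations stop as soon as a full pass performs no swap, so already-ordered input costs a single pass. This is an illustration, not necessarily the authors' exact procedure.

def sort_with_early_exit(a):
    # Adjacent-comparison passes over a list, shrinking the unsorted
    # suffix each time; a swap-free pass proves the list is sorted.
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            break                            # nothing left to compare
    return a

print(sort_with_early_exit([3, 1, 2]))       # [1, 2, 3]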
2101.12242
Philipp Adis
Philipp Adis, Nicolas Horst, Mathias Wien
D3DLO: Deep 3D LiDAR Odometry
5 pages, 4 figures, accepted at IEEE ICIP 2021
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
LiDAR odometry (LO) describes the task of finding an alignment of subsequent LiDAR point clouds. This alignment can be used to estimate the motion of the platform on which the LiDAR sensor is mounted. Currently, on the well-known KITTI Vision Benchmark Suite, state-of-the-art algorithms are non-learning approaches. We propose a network architecture that learns LO by directly processing 3D point clouds. It is trained on the KITTI dataset in an end-to-end manner without the necessity of pre-defining corresponding pairs of points. An evaluation on the KITTI Vision Benchmark Suite shows similar performance to a previously published work, DeepCLR [1], even though our model uses only around 3.56% of the number of network parameters thereof. Furthermore, a plane point extraction is applied, which leads to a marginal performance decrease while simultaneously reducing the input size by up to 50%.
[ { "created": "Thu, 28 Jan 2021 19:23:06 GMT", "version": "v1" }, { "created": "Sat, 12 Jun 2021 13:33:04 GMT", "version": "v2" } ]
2021-06-15
[ [ "Adis", "Philipp", "" ], [ "Horst", "Nicolas", "" ], [ "Wien", "Mathias", "" ] ]
LiDAR odometry (LO) describes the task of finding an alignment of subsequent LiDAR point clouds. This alignment can be used to estimate the motion of the platform on which the LiDAR sensor is mounted. Currently, on the well-known KITTI Vision Benchmark Suite, state-of-the-art algorithms are non-learning approaches. We propose a network architecture that learns LO by directly processing 3D point clouds. It is trained on the KITTI dataset in an end-to-end manner without the necessity of pre-defining corresponding pairs of points. An evaluation on the KITTI Vision Benchmark Suite shows similar performance to a previously published work, DeepCLR [1], even though our model uses only around 3.56% of the number of network parameters thereof. Furthermore, a plane point extraction is applied, which leads to a marginal performance decrease while simultaneously reducing the input size by up to 50%.
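The plane point extraction mentioned above can be sketched with Open3D's RANSAC plane segmentation; the distance threshold and the choice to drop the single dominant plane (e.g., the ground) are illustrative assumptions, not the paper's exact preprocessing.

import open3d as o3d

def remove_dominant_plane(pcd, dist=0.2):
    # Fit the dominant plane with RANSAC and discard its inliers,
    # shrinking the point cloud fed to the odometry network.
    _, inliers = pcd.segment_plane(distance_threshold=dist,
                                   ransac_n=3,
                                   num_iterations=1000)
    return pcd.select_by_index(inliers, invert=True)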