Dataset schema (field: type, observed range):
id: string, length 9-10
submitter: string, length 1-64
authors: string, length 4-20.7k
title: string, length 4-246
comments: string, length 1-523
journal-ref: string, length 4-404
doi: string, length 11-153
report-no: string, length 2-254
categories: string, length 5-98
license: string class, 9 distinct values
orig_abstract: string, length 14-3.35k
versions: list, 1-60 items
update_date: string, length 10
authors_parsed: list, 1-1.35k items
abstract: string, length 11-3.34k
2401.06503
Chandler Timm Doloriel
Chandler Timm C. Doloriel and Rhandley D. Cajote
Improving the Detection of Small Oriented Objects in Aerial Images
C. T. C. Doloriel and R. D. Cajote, "Improving the Detection of Small Oriented Objects in Aerial Images," 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), Waikoloa, HI, USA, 2023, pp. 176-185, doi: 10.1109/WACVW58289.2023.00023
null
10.1109/WACVW58289.2023.00023
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Small oriented objects that occupy only a tiny pixel area in large-scale aerial images are difficult to detect due to their size and orientation. Existing oriented aerial detectors have shown promising results but are mainly focused on orientation modeling with less regard to the size of the objects. In this work, we propose a method to accurately detect small oriented objects in aerial images by enhancing the classification and regression tasks of the oriented object detection model. We designed the Attention-Points Network consisting of two losses: Guided-Attention Loss (GALoss) and Box-Points Loss (BPLoss). GALoss uses an instance segmentation mask as ground-truth to learn the attention features needed to improve the detection of small objects. These attention features are then used to predict box points for BPLoss, which determines the points' position relative to the target oriented bounding box. Experimental results show the effectiveness of our Attention-Points Network on a standard oriented aerial dataset with small object instances (DOTA-v1.5) and on a maritime-related dataset (HRSC2016). The code is publicly available.
[ { "created": "Fri, 12 Jan 2024 11:00:07 GMT", "version": "v1" } ]
2024-01-15
[ [ "Doloriel", "Chandler Timm C.", "" ], [ "Cajote", "Rhandley D.", "" ] ]
Small oriented objects that occupy only a tiny pixel area in large-scale aerial images are difficult to detect due to their size and orientation. Existing oriented aerial detectors have shown promising results but are mainly focused on orientation modeling with less regard to the size of the objects. In this work, we propose a method to accurately detect small oriented objects in aerial images by enhancing the classification and regression tasks of the oriented object detection model. We designed the Attention-Points Network consisting of two losses: Guided-Attention Loss (GALoss) and Box-Points Loss (BPLoss). GALoss uses an instance segmentation mask as ground-truth to learn the attention features needed to improve the detection of small objects. These attention features are then used to predict box points for BPLoss, which determines the points' position relative to the target oriented bounding box. Experimental results show the effectiveness of our Attention-Points Network on a standard oriented aerial dataset with small object instances (DOTA-v1.5) and on a maritime-related dataset (HRSC2016). The code is publicly available.
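The abstract above describes GALoss as supervising attention features with an instance segmentation mask. Below is a minimal PyTorch sketch of that idea, assuming a binary mask target and a BCE objective; the paper's exact loss form, and BPLoss, are not reproduced here.

```python
# Hedged sketch of a Guided-Attention-style loss: the predicted attention map is
# pulled toward pixels covered by object instances, which is one natural reading
# of "uses an instance segmentation mask as ground-truth". Illustrative only.
import torch
import torch.nn.functional as F

def ga_loss(attn_logits: torch.Tensor, seg_mask: torch.Tensor) -> torch.Tensor:
    """attn_logits: (B, 1, H, W) raw attention scores; seg_mask: (B, 1, H, W) in {0, 1}."""
    return F.binary_cross_entropy_with_logits(attn_logits, seg_mask.float())

attn = torch.randn(2, 1, 128, 128)                    # predicted attention logits
mask = (torch.rand(2, 1, 128, 128) > 0.95).float()    # sparse small-object mask
print(ga_loss(attn, mask))
```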
2002.12852
Sushant Veer
Sushant Veer and Anirudha Majumdar
Probably Approximately Correct Vision-Based Planning using Motion Primitives
null
null
null
null
cs.RO cs.LG cs.SY eess.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an approach for learning vision-based planners that provably generalize to novel environments (i.e., environments unseen during training). We leverage the Probably Approximately Correct (PAC)-Bayes framework to obtain an upper bound on the expected cost of policies across all environments. Minimizing the PAC-Bayes upper bound thus trains policies that are accompanied by a certificate of performance on novel environments. The training pipeline we propose provides strong generalization guarantees for deep neural network policies by (a) obtaining a good prior distribution on the space of policies using Evolutionary Strategies (ES) followed by (b) formulating the PAC-Bayes optimization as an efficiently-solvable parametric convex optimization problem. We demonstrate the efficacy of our approach for producing strong generalization guarantees for learned vision-based motion planners through two simulated examples: (1) an Unmanned Aerial Vehicle (UAV) navigating obstacle fields with an onboard vision sensor, and (2) a dynamic quadrupedal robot traversing rough terrains with proprioceptive and exteroceptive sensors.
[ { "created": "Fri, 28 Feb 2020 16:29:59 GMT", "version": "v1" }, { "created": "Tue, 10 Nov 2020 02:38:21 GMT", "version": "v2" } ]
2020-11-11
[ [ "Veer", "Sushant", "" ], [ "Majumdar", "Anirudha", "" ] ]
This paper presents an approach for learning vision-based planners that provably generalize to novel environments (i.e., environments unseen during training). We leverage the Probably Approximately Correct (PAC)-Bayes framework to obtain an upper bound on the expected cost of policies across all environments. Minimizing the PAC-Bayes upper bound thus trains policies that are accompanied by a certificate of performance on novel environments. The training pipeline we propose provides strong generalization guarantees for deep neural network policies by (a) obtaining a good prior distribution on the space of policies using Evolutionary Strategies (ES) followed by (b) formulating the PAC-Bayes optimization as an efficiently-solvable parametric convex optimization problem. We demonstrate the efficacy of our approach for producing strong generalization guarantees for learned vision-based motion planners through two simulated examples: (1) an Unmanned Aerial Vehicle (UAV) navigating obstacle fields with an onboard vision sensor, and (2) a dynamic quadrupedal robot traversing rough terrains with proprioceptive and exteroceptive sensors.
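The paper minimizes a PAC-Bayes upper bound via a parametric convex program. As a hedged numeric illustration only, here is the looser McAllester-style form of such a bound, where kl_qp is the KL divergence between the posterior Q and prior P over policies and n_envs is the number of training environments.

```python
# McAllester-style PAC-Bayes bound (illustrative; the paper uses a tighter,
# convex-program formulation). With probability >= 1 - delta over the draw of
# the N training environments, the expected cost of the posterior Q on novel
# environments is at most the returned value.
import math

def pac_bayes_upper_bound(emp_cost: float, kl_qp: float, n_envs: int, delta: float = 0.01) -> float:
    return emp_cost + math.sqrt((kl_qp + math.log(2 * math.sqrt(n_envs) / delta)) / (2 * n_envs))

print(pac_bayes_upper_bound(emp_cost=0.12, kl_qp=5.0, n_envs=1000))
```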
1706.02171
Vahid Jamali
Vahid Jamali and Arman Ahmadzadeh and Nariman Farsad and Robert Schober
SCW Codes for Maximum Likelihood Detection in Diffusive Molecular Communications without Channel State Information
This paper has been submitted to the IEEE Transactions on Communications. arXiv admin note: text overlap with arXiv:1701.06338
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Instantaneous or statistical channel state information (CSI) is needed for most detection schemes developed for molecular communication (MC) systems. Since the MC channel changes over time, e.g., due to variations in the velocity of flow, the temperature, or the distance between transmitter and receiver, CSI acquisition has to be conducted repeatedly to keep track of CSI variations. Frequent CSI acquisition may entail a large overhead whereas infrequent CSI acquisition may result in a low CSI estimation accuracy. To overcome these challenges, we design codes which enable maximum likelihood sequence detection at the receiver without instantaneous or statistical CSI. In particular, assuming concentration shift keying modulation, we show that a class of codes, referred to as strongly constant-weight (SCW) codes, enables optimal CSI-free sequence detection at the expense of a decrease in data rate. For the proposed SCW codes, we analyze the code rate, the error rate, and the average number of released molecules. In addition, we study the properties of binary SCW codes and balanced SCW codes in further detail. Simulation results verify our analytical derivations and reveal that SCW codes with CSI-free detection outperform uncoded transmission with optimal coherent and non-coherent detection.
[ { "created": "Tue, 6 Jun 2017 10:32:25 GMT", "version": "v1" } ]
2017-06-08
[ [ "Jamali", "Vahid", "" ], [ "Ahmadzadeh", "Arman", "" ], [ "Farsad", "Nariman", "" ], [ "Schober", "Robert", "" ] ]
Instantaneous or statistical channel state information (CSI) is needed for most detection schemes developed for molecular communication (MC) systems. Since the MC channel changes over time, e.g., due to variations in the velocity of flow, the temperature, or the distance between transmitter and receiver, CSI acquisition has to be conducted repeatedly to keep track of CSI variations. Frequent CSI acquisition may entail a large overhead whereas infrequent CSI acquisition may result in a low CSI estimation accuracy. To overcome these challenges, we design codes which enable maximum likelihood sequence detection at the receiver without instantaneous or statistical CSI. In particular, assuming concentration shift keying modulation, we show that a class of codes, referred to as strongly constant-weight (SCW) codes, enables optimal CSI-free sequence detection at the expense of a decrease in data rate. For the proposed SCW codes, we analyze the code rate, the error rate, and the average number of released molecules. In addition, we study the properties of binary SCW codes and balanced SCW codes in further detail. Simulation results verify our analytical derivations and reveal that SCW codes with CSI-free detection outperform uncoded transmission with optimal coherent and non-coherent detection.
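As a hedged toy of why constant-weight structure helps CSI-free detection (the paper's ML detector additionally handles channel memory and the molecular counting statistics): when all codewords have equal weight, a correlation or rank-order rule is invariant to an unknown positive channel gain.

```python
# Toy rank-order detection for a constant-weight code. Scaling the received
# counts by any unknown positive gain does not change the argmax, which is the
# intuition behind CSI-free detection for such codes. Illustrative only.
import itertools
import numpy as np

def cw_codebook(n: int, w: int) -> np.ndarray:
    """All binary words of length n and Hamming weight w."""
    book = []
    for ones in itertools.combinations(range(n), w):
        c = np.zeros(n, dtype=int)
        c[list(ones)] = 1
        book.append(c)
    return np.array(book)

def detect(received: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    scores = codebook @ received          # correlate counts with each codeword
    return codebook[int(np.argmax(scores))]

book = cw_codebook(n=6, w=2)
rx = np.array([3.0, 21.0, 2.0, 1.0, 18.0, 4.0])   # received molecule counts
print(detect(rx, book))                            # ones at indices 1 and 4
```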
2005.11527
Daoyuan Wu
Daoyuan Wu and Debin Gao and Robert H. Deng and Rocky K. C. Chang
When Program Analysis Meets Bytecode Search: Targeted and Efficient Inter-procedural Analysis of Modern Android Apps in BackDroid
null
null
null
null
cs.CR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Widely-used Android static program analysis tools, e.g., Amandroid and FlowDroid, perform whole-app inter-procedural analysis that is comprehensive but fundamentally unable to keep up with modern (large) apps, whose average size has increased three to four times over five years. In this paper, we explore a new paradigm of targeted inter-procedural analysis that can skip irrelevant code and focus only on the flows of security-sensitive sink APIs. To this end, we propose a technique called on-the-fly bytecode search, which searches the disassembled app bytecode text just in time when a caller needs to be located. In this way, it guides targeted (and backward) inter-procedural analysis step by step until reaching entry points, without relying on a whole-app graph. Such search-based inter-procedural analysis, however, is challenging due to Java polymorphism, callbacks, asynchronous flows, static initializers, and inter-component communication in Android apps. We overcome these unique obstacles in our context by proposing a set of bytecode search mechanisms that utilize flexible searches and forward object taint analysis. On top of this new inter-procedural analysis, we further adjust the traditional backward slicing and forward constant propagation to provide complete dataflow tracking of sink API calls. We have implemented a prototype called BackDroid and compared it with Amandroid in analyzing 3,178 modern popular apps for crypto and SSL misconfigurations. The evaluation shows that for such sink-based problems, BackDroid is 37 times faster (2.13 vs. 78.15 minutes) and has no timed-out failures (vs. 35% in Amandroid), while maintaining close or even better detection effectiveness.
[ { "created": "Sat, 23 May 2020 12:50:28 GMT", "version": "v1" } ]
2020-05-26
[ [ "Wu", "Daoyuan", "" ], [ "Gao", "Debin", "" ], [ "Deng", "Robert H.", "" ], [ "Chang", "Rocky K. C.", "" ] ]
Widely-used Android static program analysis tools, e.g., Amandroid and FlowDroid, perform whole-app inter-procedural analysis that is comprehensive but fundamentally unable to keep up with modern (large) apps, whose average size has increased three to four times over five years. In this paper, we explore a new paradigm of targeted inter-procedural analysis that can skip irrelevant code and focus only on the flows of security-sensitive sink APIs. To this end, we propose a technique called on-the-fly bytecode search, which searches the disassembled app bytecode text just in time when a caller needs to be located. In this way, it guides targeted (and backward) inter-procedural analysis step by step until reaching entry points, without relying on a whole-app graph. Such search-based inter-procedural analysis, however, is challenging due to Java polymorphism, callbacks, asynchronous flows, static initializers, and inter-component communication in Android apps. We overcome these unique obstacles in our context by proposing a set of bytecode search mechanisms that utilize flexible searches and forward object taint analysis. On top of this new inter-procedural analysis, we further adjust the traditional backward slicing and forward constant propagation to provide complete dataflow tracking of sink API calls. We have implemented a prototype called BackDroid and compared it with Amandroid in analyzing 3,178 modern popular apps for crypto and SSL misconfigurations. The evaluation shows that for such sink-based problems, BackDroid is 37 times faster (2.13 vs. 78.15 minutes) and has no timed-out failures (vs. 35% in Amandroid), while maintaining close or even better detection effectiveness.
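A minimal sketch of the "on-the-fly bytecode search" idea: instead of building a whole-app call graph, scan the disassembled smali text for invocations of a sink just in time. The directory layout and the Cipher.getInstance sink below are illustrative assumptions, not BackDroid's actual implementation.

```python
# Locate callers of a sink method by grepping disassembled smali text on demand.
# The sink signature is a hypothetical example of a crypto-misuse sink.
import re
from pathlib import Path

SINK = r"Ljavax/crypto/Cipher;->getInstance\(Ljava/lang/String;\)Ljavax/crypto/Cipher;"

def find_callers(smali_dir: str):
    callers = []
    pattern = re.compile(r"invoke-\w+(?:/range)? \{.*\}, " + SINK)
    for path in Path(smali_dir).rglob("*.smali"):
        text = path.read_text(errors="ignore")
        for m in pattern.finditer(text):
            # Record the enclosing .method header for each matching invoke.
            header = text.rfind(".method", 0, m.start())
            sig = text[header:text.find("\n", header)] if header != -1 else "<unknown method>"
            callers.append((str(path), sig))
    return callers
```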
1711.11513
Gijs Wijnholds
Michael Moortgat and Gijs Wijnholds
Lexical and Derivational Meaning in Vector-Based Models of Relativisation
10-page version to appear in Proceedings of the Amsterdam Colloquium; updated with an appendix
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sadrzadeh et al. (2013) present a compositional distributional analysis of relative clauses in English in terms of the Frobenius algebraic structure of finite-dimensional vector spaces. The analysis relies on distinct type assignments and lexical recipes for subject vs. object relativisation. The situation for Dutch is different: because of the verb-final nature of Dutch, relative clauses are ambiguous between a subject vs. object relativisation reading. Using an extended version of the Lambek calculus, we present a compositional distributional framework that accounts for this derivational ambiguity, and that allows us to give a single meaning recipe for the relative pronoun, reconciling the Frobenius semantics with the demands of Dutch derivational syntax.
[ { "created": "Thu, 30 Nov 2017 17:02:52 GMT", "version": "v1" }, { "created": "Fri, 1 Dec 2017 17:47:18 GMT", "version": "v2" } ]
2017-12-04
[ [ "Moortgat", "Michael", "" ], [ "Wijnholds", "Gijs", "" ] ]
Sadrzadeh et al. (2013) present a compositional distributional analysis of relative clauses in English in terms of the Frobenius algebraic structure of finite-dimensional vector spaces. The analysis relies on distinct type assignments and lexical recipes for subject vs. object relativisation. The situation for Dutch is different: because of the verb-final nature of Dutch, relative clauses are ambiguous between a subject vs. object relativisation reading. Using an extended version of the Lambek calculus, we present a compositional distributional framework that accounts for this derivational ambiguity, and that allows us to give a single meaning recipe for the relative pronoun, reconciling the Frobenius semantics with the demands of Dutch derivational syntax.
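As a hedged toy of the Frobenius recipes from Sadrzadeh et al. (2013) that the paper builds on, with random vectors standing in for distributional meanings: a subject relative clause such as "men who like Mary" combines the head noun pointwise with the verb matrix applied to the object. The paper's single Dutch recipe is not reproduced here.

```python
# Frobenius "copying" in the English analysis: elementwise product of the head
# noun with the verb acting on the other argument. Vectors are random stand-ins.
import numpy as np

d = 4
men, mary = np.random.rand(d), np.random.rand(d)
like = np.random.rand(d, d)             # verb as a matrix (relational meaning)

subj_rel = men * (like @ mary)          # "men who like Mary" (subject relativisation)
obj_rel = mary * (like.T @ men)         # "Mary whom men like" (object relativisation)
print(subj_rel, obj_rel)
```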
1902.11131
Md. Abu Bakr Siddique
Shadman Sakib, Md. Abu Bakr Siddique
Unsupervised Segmentation Algorithms' Implementation in ITK for Tissue Classification via Human Head MRI Scans
4 pages, 2 tables
null
null
null
cs.CV cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Tissue classification is one of the significant tasks in the field of biomedical image analysis. Magnetic Resonance Imaging (MRI) is of great importance for tissue classification, especially brain tissue classification, which underpins applications such as surgical planning, therapy monitoring, clinical drug trials, image registration, stereotactic neurosurgery, and radiotherapy. The task of this paper is to implement different unsupervised classification algorithms in ITK and perform tissue classification (white matter, gray matter, cerebrospinal fluid (CSF), and background of the human brain). For this purpose, 5 grayscale head MRI scans are provided. To classify the brain tissues, three algorithms are used: Otsu thresholding, Bayesian classification, and Bayesian classification with Gaussian smoothing. The obtained classification results are analyzed in the results and discussion section.
[ { "created": "Tue, 26 Feb 2019 12:48:43 GMT", "version": "v1" }, { "created": "Tue, 23 Apr 2019 11:12:16 GMT", "version": "v2" }, { "created": "Thu, 16 Jan 2020 02:40:10 GMT", "version": "v3" }, { "created": "Sat, 25 Jan 2020 12:11:55 GMT", "version": "v4" } ]
2020-01-28
[ [ "Sakib", "Shadman", "" ], [ "Siddique", "Md. Abu Bakr", "" ] ]
Tissue classification is one of the significant tasks in the field of biomedical image analysis. Magnetic Resonance Imaging (MRI) is of great importance for tissue classification, especially brain tissue classification, which underpins applications such as surgical planning, therapy monitoring, clinical drug trials, image registration, stereotactic neurosurgery, and radiotherapy. The task of this paper is to implement different unsupervised classification algorithms in ITK and perform tissue classification (white matter, gray matter, cerebrospinal fluid (CSF), and background of the human brain). For this purpose, 5 grayscale head MRI scans are provided. To classify the brain tissues, three algorithms are used: Otsu thresholding, Bayesian classification, and Bayesian classification with Gaussian smoothing. The obtained classification results are analyzed in the results and discussion section.
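A hedged stand-in for the described pipeline using scikit-image rather than ITK, and multi-Otsu in place of the paper's separate Otsu and Bayesian classifiers: threshold a head MRI slice into 4 classes, optionally after Gaussian smoothing.

```python
# Unsupervised 4-class tissue labeling (background, CSF, gray matter, white
# matter) via multi-Otsu thresholding; smoothing mirrors the paper's Gaussian
# variant. This is an analogous sketch, not the ITK implementation.
import numpy as np
from skimage.filters import gaussian, threshold_multiotsu

def classify_tissues(mri_slice: np.ndarray, smooth: bool = True) -> np.ndarray:
    img = gaussian(mri_slice, sigma=1.0) if smooth else mri_slice
    thresholds = threshold_multiotsu(img, classes=4)   # 3 thresholds -> 4 classes
    return np.digitize(img, bins=thresholds)           # label map in {0, 1, 2, 3}

labels = classify_tissues(np.random.rand(128, 128))    # toy input slice
print(np.unique(labels))
```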
2206.00103
Han Wang
Xinyan Li, Han Wang, Chunyang Chen, John Grundy
An Empirical Study on How Well Do COVID-19 Information Dashboards Service Users' Information Needs
null
IEEE Transactions on Services Computing (2021)
10.1109/TSC.2021.3114673
null
cs.HC cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ongoing COVID-19 pandemic highlights the importance of dashboards for providing critical real-time information. In order to enable people to obtain information in time and to understand complex statistical data, many developers have designed and implemented public-oriented COVID-19 "information dashboards" during the pandemic. However, development often takes a long time, and developers are not clear about many people's information needs, resulting in gaps between information needs and supplies. According to our empirical study and observations of popular COVID-19 dashboards, this seriously impedes information acquisition. Our study compares people's needs expressed on Twitter with what existing dashboards supply. We find that, beyond the COVID-19 information currently on existing dashboards, people are also interested in the relationship between COVID-19 and other viruses, the origin of COVID-19, vaccine development, fake news about COVID-19, impact on women, impact on school/university, and impact on business. Most of these have not yet been well addressed. We also summarise the visualization and interaction patterns commonly applied in dashboards, finding key patterns between data and visualization as well as visualization and interaction. Our findings can help developers better optimize their dashboards to meet people's needs and improve future crisis management dashboard development.
[ { "created": "Mon, 30 May 2022 02:14:23 GMT", "version": "v1" } ]
2022-06-02
[ [ "Li", "Xinyan", "" ], [ "Wang", "Han", "" ], [ "Chen", "Chunyang", "" ], [ "Grundy", "John", "" ] ]
The ongoing COVID-19 pandemic highlights the importance of dashboards for providing critical real-time information. In order to enable people to obtain information in time and to understand complex statistical data, many developers have designed and implemented public-oriented COVID-19 "information dashboards" during the pandemic. However, development often takes a long time, and developers are not clear about many people's information needs, resulting in gaps between information needs and supplies. According to our empirical study and observations of popular COVID-19 dashboards, this seriously impedes information acquisition. Our study compares people's needs expressed on Twitter with what existing dashboards supply. We find that, beyond the COVID-19 information currently on existing dashboards, people are also interested in the relationship between COVID-19 and other viruses, the origin of COVID-19, vaccine development, fake news about COVID-19, impact on women, impact on school/university, and impact on business. Most of these have not yet been well addressed. We also summarise the visualization and interaction patterns commonly applied in dashboards, finding key patterns between data and visualization as well as visualization and interaction. Our findings can help developers better optimize their dashboards to meet people's needs and improve future crisis management dashboard development.
1406.1833
Kenneth Stanley
Paul A. Szerlip, Gregory Morse, Justin K. Pugh, and Kenneth O. Stanley
Unsupervised Feature Learning through Divergent Discriminative Feature Accumulation
Corrected citation formatting
null
null
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unlike unsupervised approaches such as autoencoders that learn to reconstruct their inputs, this paper introduces an alternative approach to unsupervised feature learning called divergent discriminative feature accumulation (DDFA) that instead continually accumulates features that make novel discriminations among the training set. Thus DDFA features are inherently discriminative from the start even though they are trained without knowledge of the ultimate classification problem. Interestingly, DDFA also continues to add new features indefinitely (so it does not depend on a hidden layer size), is not based on minimizing error, and is inherently divergent instead of convergent, thereby providing a unique direction of research for unsupervised feature learning. In this paper the quality of its learned features is demonstrated on the MNIST dataset, where its performance confirms that indeed DDFA is a viable technique for learning useful features.
[ { "created": "Fri, 6 Jun 2014 23:45:03 GMT", "version": "v1" }, { "created": "Tue, 10 Jun 2014 03:37:45 GMT", "version": "v2" } ]
2014-06-11
[ [ "Szerlip", "Paul A.", "" ], [ "Morse", "Gregory", "" ], [ "Pugh", "Justin K.", "" ], [ "Stanley", "Kenneth O.", "" ] ]
Unlike unsupervised approaches such as autoencoders that learn to reconstruct their inputs, this paper introduces an alternative approach to unsupervised feature learning called divergent discriminative feature accumulation (DDFA) that instead continually accumulates features that make novel discriminations among the training set. Thus DDFA features are inherently discriminative from the start even though they are trained without knowledge of the ultimate classification problem. Interestingly, DDFA also continues to add new features indefinitely (so it does not depend on a hidden layer size), is not based on minimizing error, and is inherently divergent instead of convergent, thereby providing a unique direction of research for unsupervised feature learning. In this paper the quality of its learned features is demonstrated on the MNIST dataset, where its performance confirms that indeed DDFA is a viable technique for learning useful features.
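DDFA in the paper accumulates NEAT-evolved features by behavioral novelty. As a simplified sketch of the accumulation loop only, the code below keeps a random linear feature whenever its binary discrimination pattern over the training set is sufficiently novel relative to the archive; the evolutionary feature generator is not reproduced.

```python
# Hedged sketch of divergent discriminative feature accumulation: no
# reconstruction objective, no fixed layer size, features kept for novelty.
import numpy as np

def ddfa(X: np.ndarray, n_features: int = 64, novelty_thresh: float = 0.2,
         max_tries: int = 100_000) -> np.ndarray:
    rng = np.random.default_rng(0)
    weights, patterns = [], []
    for _ in range(max_tries):
        if len(weights) == n_features:
            break
        w = rng.standard_normal(X.shape[1])
        pat = (X @ w > 0).astype(np.int8)   # how this feature splits the data
        # Novelty = smallest normalized Hamming distance to any archived pattern.
        if not patterns or min(np.mean(pat != p) for p in patterns) > novelty_thresh:
            weights.append(w)
            patterns.append(pat)
    return np.array(weights)

W = ddfa(np.random.randn(200, 10))          # 64 accumulated feature detectors
print(W.shape)
```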
2210.14560
Zhengjie Yang
Zhengjie Yang, Sen Fu, Wei Bao, Dong Yuan, and Albert Y. Zomaya
Hierarchical Federated Learning with Momentum Acceleration in Multi-Tier Networks
18 pages, 5 figures
null
null
null
cs.LG cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose Hierarchical Federated Learning with Momentum Acceleration (HierMo), a three-tier worker-edge-cloud federated learning algorithm that applies momentum for training acceleration. Momentum is calculated and aggregated in all three tiers. We provide a convergence analysis for HierMo, showing a convergence rate of O(1/T). In the analysis, we develop a new approach to characterize model aggregation, momentum aggregation, and their interactions. Based on this result, we prove that HierMo achieves a tighter convergence upper bound than the momentum-free HierFAVG. We also propose HierOPT, which optimizes the aggregation periods (worker-edge and edge-cloud aggregation periods) to minimize the loss given a limited training time.
[ { "created": "Wed, 26 Oct 2022 08:35:37 GMT", "version": "v1" } ]
2022-10-27
[ [ "Yang", "Zhengjie", "" ], [ "Fu", "Sen", "" ], [ "Bao", "Wei", "" ], [ "Yuan", "Dong", "" ], [ "Zomaya", "Albert Y.", "" ] ]
In this paper, we propose Hierarchical Federated Learning with Momentum Acceleration (HierMo), a three-tier worker-edge-cloud federated learning algorithm that applies momentum for training acceleration. Momentum is calculated and aggregated in all three tiers. We provide a convergence analysis for HierMo, showing a convergence rate of O(1/T). In the analysis, we develop a new approach to characterize model aggregation, momentum aggregation, and their interactions. Based on this result, we prove that HierMo achieves a tighter convergence upper bound than the momentum-free HierFAVG. We also propose HierOPT, which optimizes the aggregation periods (worker-edge and edge-cloud aggregation periods) to minimize the loss given a limited training time.
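A minimal sketch of the two lower tiers of such a scheme, assuming plain heavy-ball momentum and parameter-plus-momentum averaging at the edge; the cloud tier would average edge models the same way every tau_e edge rounds. HierMo's exact update rules are not reproduced here.

```python
# Worker-edge momentum aggregation sketch: workers run heavy-ball SGD on a toy
# quadratic; every tau_w steps the edge averages BOTH model and momentum buffer.
import numpy as np

def local_step(w, m, grad, lr=0.1, beta=0.9):
    m = beta * m + grad                       # momentum buffer update
    return w - lr * m, m

target = np.array([1.0, -2.0])                # toy objective: 0.5 * ||w - target||^2
workers = [(np.zeros(2), np.zeros(2)) for _ in range(4)]
for t in range(20):
    workers = [local_step(w, m, grad=(w - target)) for w, m in workers]
    if (t + 1) % 5 == 0:                      # worker-edge aggregation period tau_w = 5
        w_bar = np.mean([w for w, _ in workers], axis=0)
        m_bar = np.mean([m for _, m in workers], axis=0)
        workers = [(w_bar, m_bar) for _ in range(4)]
print(workers[0][0])                           # approaches the target
```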
1909.06653
Gramoz Goranci
Gramoz Goranci, Monika Henzinger, Dariusz Leniowski
A Tree Structure For Dynamic Facility Location
An extended abstract appeared at the 26th Annual European Symposium on Algorithms (ESA) 2018
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the metric facility location problem with client insertions and deletions. This setting differs from the classic dynamic facility location problem, where the set of clients remains the same, but the metric space can change over time. We show a deterministic algorithm that maintains a constant factor approximation to the optimal solution in worst-case time $\tilde O(2^{O(\kappa^2)})$ per client insertion or deletion in metric spaces while answering queries about the cost in $O(1)$ time, where $\kappa$ denotes the doubling dimension of the metric. For metric spaces with bounded doubling dimension, the update time is polylogarithmic in the parameters of the problem.
[ { "created": "Sat, 14 Sep 2019 18:48:51 GMT", "version": "v1" } ]
2019-09-17
[ [ "Goranci", "Gramoz", "" ], [ "Henzinger", "Monika", "" ], [ "Leniowski", "Dariusz", "" ] ]
We study the metric facility location problem with client insertions and deletions. This setting differs from the classic dynamic facility location problem, where the set of clients remains the same, but the metric space can change over time. We show a deterministic algorithm that maintains a constant factor approximation to the optimal solution in worst-case time $\tilde O(2^{O(\kappa^2)})$ per client insertion or deletion in metric spaces while answering queries about the cost in $O(1)$ time, where $\kappa$ denotes the doubling dimension of the metric. For metric spaces with bounded doubling dimension, the update time is polylogarithmic in the parameters of the problem.
2401.13613
Naresh Lahajal Kumar
Naresh Kumar Lahajal and Harini S
Enhancing Image Retrieval: A Comprehensive Study on Photo Search using the CLIP Model
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Photo search, the task of retrieving images based on textual queries, has witnessed significant advancements with the introduction of the CLIP (Contrastive Language-Image Pretraining) model. CLIP leverages a vision-language pre-training approach, wherein it learns a shared representation space for images and text, enabling cross-modal understanding. This model demonstrates the capability to understand the semantic relationships between diverse image and text pairs, allowing for efficient and accurate retrieval of images based on natural language queries. By training on a large-scale dataset containing images and their associated textual descriptions, CLIP achieves remarkable generalization, providing a powerful tool for tasks such as zero-shot learning and few-shot classification. This abstract summarizes the foundational principles of CLIP and highlights its potential impact on advancing the field of photo search, fostering a seamless integration of natural language understanding and computer vision for improved information retrieval in multimedia applications.
[ { "created": "Wed, 24 Jan 2024 17:35:38 GMT", "version": "v1" } ]
2024-01-25
[ [ "Lahajal", "Naresh Kumar", "" ], [ "S", "Harini", "" ] ]
Photo search, the task of retrieving images based on textual queries, has witnessed significant advancements with the introduction of the CLIP (Contrastive Language-Image Pretraining) model. CLIP leverages a vision-language pre-training approach, wherein it learns a shared representation space for images and text, enabling cross-modal understanding. This model demonstrates the capability to understand the semantic relationships between diverse image and text pairs, allowing for efficient and accurate retrieval of images based on natural language queries. By training on a large-scale dataset containing images and their associated textual descriptions, CLIP achieves remarkable generalization, providing a powerful tool for tasks such as zero-shot learning and few-shot classification. This abstract summarizes the foundational principles of CLIP and highlights its potential impact on advancing the field of photo search, fostering a seamless integration of natural language understanding and computer vision for improved information retrieval in multimedia applications.
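A minimal CLIP photo-search sketch with the Hugging Face transformers API; the checkpoint name and the in-memory ranking are illustrative choices, not necessarily the study's setup.

```python
# Rank a list of images against a text query in CLIP's shared embedding space.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def search(query: str, images: list[Image.Image], top_k: int = 3) -> list[int]:
    inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    scores = out.logits_per_text[0]            # similarity of the query to each image
    return scores.topk(min(top_k, len(images))).indices.tolist()
```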
2307.16342
Boyang Li
Boyang Li, Bingyu Shen, Qing Lu, Taeho Jung, Yiyu Shi
Proof-of-Federated-Learning-Subchain: Free Partner Selection Subchain Based on Federated Learning
7 pages, 7 figures
null
null
null
cs.LG cs.AI cs.CR
http://creativecommons.org/licenses/by/4.0/
The continued growth of the blockchain community motivates research into novel designs of schemes supporting cryptocurrencies. Previously, multiple Proof-of-Deep-Learning (PoDL) consensus schemes have been proposed to replace hashing with useful work such as deep learning model training tasks, so that energy is used more efficiently while the ledger is maintained. However, deep learning models are problem-specific and can be extremely complex, and current PoDL consensus schemes still require much work to be realized in the real world. In this paper, we propose a novel consensus named Proof-of-Federated-Learning-Subchain (PoFLSC) to fill the gap. We apply a subchain to record the training, challenging, and auditing activities and emphasize the importance of valuable datasets in partner selection. We simulated 20 miners in the subchain to demonstrate the effectiveness of PoFLSC. When we reduce the pool size according to the reservation priority order, the performance drop across different scenarios further shows that a miner with a higher Shapley Value (SV) gains a better opportunity to be selected when the size of the subchain pool is limited. In the conducted experiments, the PoFLSC consensus enabled the subchain manager to be aware of the reservation priority and the core partition of contributors, so as to establish and maintain a competitive subchain.
[ { "created": "Sun, 30 Jul 2023 23:39:58 GMT", "version": "v1" } ]
2023-08-01
[ [ "Li", "Boyang", "" ], [ "Shen", "Bingyu", "" ], [ "Lu", "Qing", "" ], [ "Jung", "Taeho", "" ], [ "Shi", "Yiyu", "" ] ]
The continued growth of the blockchain community motivates research into novel designs of schemes supporting cryptocurrencies. Previously, multiple Proof-of-Deep-Learning (PoDL) consensus schemes have been proposed to replace hashing with useful work such as deep learning model training tasks, so that energy is used more efficiently while the ledger is maintained. However, deep learning models are problem-specific and can be extremely complex, and current PoDL consensus schemes still require much work to be realized in the real world. In this paper, we propose a novel consensus named Proof-of-Federated-Learning-Subchain (PoFLSC) to fill the gap. We apply a subchain to record the training, challenging, and auditing activities and emphasize the importance of valuable datasets in partner selection. We simulated 20 miners in the subchain to demonstrate the effectiveness of PoFLSC. When we reduce the pool size according to the reservation priority order, the performance drop across different scenarios further shows that a miner with a higher Shapley Value (SV) gains a better opportunity to be selected when the size of the subchain pool is limited. In the conducted experiments, the PoFLSC consensus enabled the subchain manager to be aware of the reservation priority and the core partition of contributors, so as to establish and maintain a competitive subchain.
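Partner selection in PoFLSC ranks miners by Shapley Value. As a hedged sketch, SVs can be estimated by Monte Carlo permutation sampling given some coalition-value function; the additive value below is a toy stand-in for a real contribution measure such as validation accuracy of a model trained on the coalition's data.

```python
# Permutation-sampling Shapley estimate over a set of miners.
import random

def shapley_mc(players, value, n_samples=2000, seed=0):
    rng = random.Random(seed)
    sv = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = players[:]
        rng.shuffle(order)
        coalition, prev = set(), value(frozenset())
        for p in order:
            coalition.add(p)
            v = value(frozenset(coalition))
            sv[p] += v - prev                  # marginal contribution of p
            prev = v
    return {p: s / n_samples for p, s in sv.items()}

quality = {"m1": 5.0, "m2": 2.0, "m3": 1.0}    # toy per-miner data quality
print(shapley_mc(list(quality), lambda S: sum(quality[p] for p in S)))
# For an additive value the SV recovers each miner's own quality (sanity check).
```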
2102.01297
Pedro J. Rivera Torres
Pedro J. Rivera Torres, Carlos Gershenson Garc\'ia, Samir Kanaan Izquierdo
Reinforcement Learning with Probabilistic Boolean Network Models of Smart Grid Devices
null
null
null
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
The area of Smart Power Grids needs to constantly improve its efficiency and resilience, to provide high-quality electrical power in a resistant grid, managing faults and avoiding failures. Achieving this requires high component reliability, adequate maintenance, and a studied failure occurrence. Correct system operation involves those activities, and novel methodologies to detect, classify, and isolate faults and failures, and to model and simulate processes with predictive algorithms and analytics (using data analysis and asset condition to plan and perform activities). We showcase the application of a complex-adaptive, self-organizing modeling method, Probabilistic Boolean Networks (PBN), as a way towards understanding the dynamics of smart grid devices, and to model and characterize their behavior. This work demonstrates that PBNs are equivalent to the standard Reinforcement Learning cycle, in which the agent/model interacts with its environment and receives feedback from it in the form of a reward signal. Different reward structures were created in order to characterize preferred behavior. This information can be used to guide the PBN to avoid fault conditions and failures.
[ { "created": "Tue, 2 Feb 2021 04:13:30 GMT", "version": "v1" } ]
2021-02-03
[ [ "Torres", "Pedro J. Rivera", "" ], [ "García", "Carlos Gershenson", "" ], [ "Izquierdo", "Samir Kanaan", "" ] ]
The area of Smart Power Grids needs to constantly improve its efficiency and resilience, to provide high-quality electrical power in a resistant grid, managing faults and avoiding failures. Achieving this requires high component reliability, adequate maintenance, and a studied failure occurrence. Correct system operation involves those activities, and novel methodologies to detect, classify, and isolate faults and failures, and to model and simulate processes with predictive algorithms and analytics (using data analysis and asset condition to plan and perform activities). We showcase the application of a complex-adaptive, self-organizing modeling method, Probabilistic Boolean Networks (PBN), as a way towards understanding the dynamics of smart grid devices, and to model and characterize their behavior. This work demonstrates that PBNs are equivalent to the standard Reinforcement Learning cycle, in which the agent/model interacts with its environment and receives feedback from it in the form of a reward signal. Different reward structures were created in order to characterize preferred behavior. This information can be used to guide the PBN to avoid fault conditions and failures.
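A toy probabilistic Boolean network step cast in the reinforcement-learning cycle the abstract describes: each node samples one of its candidate Boolean predictors, and a reward flags fault-free states. The three-node rules and the fault condition are invented for illustration.

```python
# One PBN transition plus a reward signal. PREDICTORS maps each node to
# (probability, update rule) pairs; sampling a rule per node is the PBN's
# probabilistic context switching.
import random

PREDICTORS = {
    "A": [(0.7, lambda s: s["B"] and not s["C"]), (0.3, lambda s: s["B"])],
    "B": [(1.0, lambda s: not s["C"])],
    "C": [(0.6, lambda s: s["A"] or s["B"]), (0.4, lambda s: s["A"])],
}

def step(state, rng=random):
    new = {}
    for node, rules in PREDICTORS.items():
        r, acc = rng.random(), 0.0
        for p, f in rules:
            acc += p
            if r <= acc:
                new[node] = f(state)
                break
    reward = 1.0 if not new["C"] else -1.0   # C=True marks a (toy) fault condition
    return new, reward

s = {"A": True, "B": False, "C": False}
for _ in range(5):
    s, r = step(s)
    print(s, r)
```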
2112.13562
Rui Wang
Tao Wang and Rui Wang and Di Jin and Dongxiao He and Yuxiao Huang
Powerful Graph Convolutional Networks with Adaptive Propagation Mechanism for Homophily and Heterophily
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), while ignoring the heterophily which exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining the immediate representations, which leads to noise and irrelevant information in the result. However, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism which can automatically change the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, which are learned from topological and attribute information, respectively. Then we incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end scheme, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and gains competitive performance under homophily.
[ { "created": "Mon, 27 Dec 2021 08:19:23 GMT", "version": "v1" } ]
2021-12-28
[ [ "Wang", "Tao", "" ], [ "Wang", "Rui", "" ], [ "Jin", "Di", "" ], [ "He", "Dongxiao", "" ], [ "Huang", "Yuxiao", "" ] ]
Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), while ignoring the heterophily which exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining the immediate representations, which leads to noise and irrelevant information in the result. However, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism which can automatically change the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, which are learned from topological and attribute information, respectively. Then we incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end scheme, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and gains competitive performance under homophily.
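A hedged sketch of the adaptive-propagation idea, not the paper's exact model: a per-edge homophily score, learned here from node attributes only, gates and can sign-flip neighbor aggregation, so propagation is no longer hard-wired to the homophily assumption.

```python
# Homophily-adaptive message passing: tanh-gated edge scores in [-1, 1] let the
# layer aggregate homophilous neighbors (~+1) or repel heterophilous ones (~-1).
import torch
import torch.nn as nn

class AdaptiveProp(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.edge_score = nn.Linear(2 * dim, 1)   # attribute-based homophily estimate

    def forward(self, x, edge_index):             # edge_index: (2, E) src/dst indices
        src, dst = edge_index
        h = self.lin(x)
        g = torch.tanh(self.edge_score(torch.cat([x[src], x[dst]], dim=-1)))
        out = torch.zeros_like(h)
        out.index_add_(0, dst, g * h[src])        # signed, weighted aggregation
        return h + out

x = torch.randn(5, 8)
edges = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
print(AdaptiveProp(8)(x, edges).shape)            # torch.Size([5, 8])
```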
1207.0261
Yutaka Hori
Yutaka Hori and Shinji Hara
Biochemical Oscillations in Delayed Negative Cyclic Feedback: Harmonic Balance Analysis with Applications
Appendix A and some references have been added
null
null
null
cs.SY math.OC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Oscillatory chemical reactions often serve as a timing clock of cellular processes in living cells. The temporal dynamics of protein concentration levels is thus of great interest in biology. Here we propose a theoretical framework to analyze the frequency, phase and amplitude of oscillatory protein concentrations in gene regulatory networks with negative cyclic feedback. We first formulate the analysis framework of oscillation profiles based on multivariable harmonic balance. With this framework, the frequency, phase and amplitude are obtained analytically in terms of kinetic constants of the reactions despite the nonlinearity of the dynamics. These results are demonstrated with the Pentilator and Hes7 self-repression network, and it is shown that the developed analysis method indeed predicts the profiles of the oscillations. A distinctive feature of the presented result is that the waveform of oscillations is analytically obtained for a broad class of biochemical systems. Thus, it is easy to see how the waveform is determined from the system's parameters and structures. We present general biological insights that are applicable for any gene regulatory networks with negative cyclic feedback.
[ { "created": "Mon, 2 Jul 2012 01:01:42 GMT", "version": "v1" }, { "created": "Thu, 27 Dec 2012 08:08:57 GMT", "version": "v2" } ]
2015-03-20
[ [ "Hori", "Yutaka", "" ], [ "Hara", "Shinji", "" ] ]
Oscillatory chemical reactions often serve as a timing clock of cellular processes in living cells. The temporal dynamics of protein concentration levels is thus of great interest in biology. Here we propose a theoretical framework to analyze the frequency, phase and amplitude of oscillatory protein concentrations in gene regulatory networks with negative cyclic feedback. We first formulate the analysis framework of oscillation profiles based on multivariable harmonic balance. With this framework, the frequency, phase and amplitude are obtained analytically in terms of kinetic constants of the reactions despite the nonlinearity of the dynamics. These results are demonstrated with the Pentilator and Hes7 self-repression network, and it is shown that the developed analysis method indeed predicts the profiles of the oscillations. A distinctive feature of the presented result is that the waveform of oscillations is analytically obtained for a broad class of biochemical systems. Thus, it is easy to see how the waveform is determined from the system's parameters and structures. We present general biological insights that are applicable for any gene regulatory networks with negative cyclic feedback.
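As a hedged illustration of the harmonic-balance conditions such an analysis produces (not the paper's exact derivation): for a negative cyclic feedback loop of first-order stages $k_i/(s+a_i)$ with total delay $\tau$, a sustained oscillation at frequency $\omega$ must satisfy the phase and gain balance

```latex
\sum_{i=1}^{n} \arctan\frac{\omega}{a_i} + \omega\tau = \pi,
\qquad
\prod_{i=1}^{n} \frac{\kappa_i}{\sqrt{\omega^{2}+a_i^{2}}} = 1,
```

where $\kappa_i$ is the effective (describing-function) gain of the $i$-th stage; the phase condition fixes the oscillation frequency and the gain condition the amplitude.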
2208.12020
Zihuai Lin
Likun Sui, Zihuai Lin, Pei Xiao, Branka Vucetic
Performance Analysis for Reconfigurable Intelligent Surface Assisted MIMO Systems
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the maximal achievable rate for a given average error probability and blocklength for the reconfigurable intelligent surface (RIS) assisted multiple-input multiple-output (MIMO) system. The result consists of a finite-blocklength channel coding achievability bound and a converse bound based on the Berry-Esseen theorem, the Mellin transform, and the mutual information. Numerical evaluation shows fast convergence to the maximal achievable rate as the blocklength increases and also shows that the channel variance is a sound measure of the backoff from the maximal achievable rate due to the finite blocklength.
[ { "created": "Thu, 25 Aug 2022 11:54:43 GMT", "version": "v1" } ]
2022-08-26
[ [ "Sui", "Likun", "" ], [ "Lin", "Zihuai", "" ], [ "Xiao", "Pei", "" ], [ "Vucetic", "Branka", "" ] ]
This paper investigates the maximal achievable rate for a given average error probability and blocklength for the reconfigurable intelligent surface (RIS) assisted multiple-input multiple-output (MIMO) system. The result consists of a finite-blocklength channel coding achievability bound and a converse bound based on the Berry-Esseen theorem, the Mellin transform, and the mutual information. Numerical evaluation shows fast convergence to the maximal achievable rate as the blocklength increases and also shows that the channel variance is a sound measure of the backoff from the maximal achievable rate due to the finite blocklength.
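For context (a standard finite-blocklength result, not a formula quoted from this paper), Berry-Esseen-type achievability and converse bounds of this kind typically sandwich the maximal rate around a normal approximation:

```latex
R^{*}(n,\epsilon) \approx C - \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon) + O\!\left(\frac{\log n}{n}\right),
```

where $C$ is the capacity, $V$ the channel dispersion (the "channel variance" the abstract refers to), $n$ the blocklength, $\epsilon$ the error probability, and $Q^{-1}$ the inverse Gaussian tail function.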
1301.3551
Luis Sanchez Giraldo
Luis G. Sanchez Giraldo and Jose C. Principe
Information Theoretic Learning with Infinitely Divisible Kernels
Modified submission for International Conference on Learning Representations 2013
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's axiomatic definition of entropy and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug-in estimation of densities and brings along the representation power of reproducing kernel Hilbert spaces. As an application example, we derive a supervised metric learning algorithm using a matrix-based analogue to conditional entropy, achieving results comparable with the state of the art.
[ { "created": "Wed, 16 Jan 2013 01:49:52 GMT", "version": "v1" }, { "created": "Wed, 20 Mar 2013 06:40:01 GMT", "version": "v2" }, { "created": "Fri, 22 Mar 2013 14:53:42 GMT", "version": "v3" }, { "created": "Tue, 16 Apr 2013 00:12:21 GMT", "version": "v4" }, { "created": "Wed, 1 May 2013 06:18:31 GMT", "version": "v5" }, { "created": "Tue, 4 Jun 2013 04:42:39 GMT", "version": "v6" } ]
2013-06-05
[ [ "Giraldo", "Luis G. Sanchez", "" ], [ "Principe", "Jose C.", "" ] ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's axiomatic definition of entropy and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug-in estimation of densities and brings along the representation power of reproducing kernel Hilbert spaces. As an application example, we derive a supervised metric learning algorithm using a matrix-based analogue to conditional entropy, achieving results comparable with the state of the art.
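A sketch of a matrix-based entropy functional in the spirit of the paper: normalize a Gram matrix built from an infinitely divisible kernel to unit trace and apply Renyi's form to its eigenvalue spectrum. The exact normalization used in the paper may differ.

```python
# Entropy-like functional on a positive definite Gram matrix: no density
# estimation, only the spectrum of the (unit-trace) normalized kernel matrix.
import numpy as np

def matrix_renyi_entropy(K: np.ndarray, alpha: float = 2.0) -> float:
    d = np.sqrt(np.diag(K))
    A = K / np.outer(d, d) / K.shape[0]          # unit-trace normalized Gram matrix
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)
    return float(np.log2(np.sum(lam ** alpha)) / (1.0 - alpha))

X = np.random.randn(100, 5)
K = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # Gaussian kernel (infinitely divisible)
print(matrix_renyi_entropy(K, alpha=1.01))       # alpha near 1 approaches a von Neumann-like entropy
```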
2304.06708
Liliane Momeni
Liliane Momeni, Mathilde Caron, Arsha Nagrani, Andrew Zisserman, Cordelia Schmid
Verbs in Action: Improving verb understanding in video-language models
null
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding verbs is crucial to modelling how people and objects interact with each other and the environment through space and time. Recently, state-of-the-art video-language models based on CLIP have been shown to have limited verb understanding and to rely extensively on nouns, restricting their performance in real-world video applications that require action and temporal understanding. In this work, we improve verb understanding for CLIP-based video-language models by proposing a new Verb-Focused Contrastive (VFC) framework. This consists of two main components: (1) leveraging pretrained large language models (LLMs) to create hard negatives for cross-modal contrastive learning, together with a calibration strategy to balance the occurrence of concepts in positive and negative pairs; and (2) enforcing a fine-grained, verb phrase alignment loss. Our method achieves state-of-the-art results for zero-shot performance on three downstream tasks that focus on verb understanding: video-text matching, video question-answering and video classification. To the best of our knowledge, this is the first work which proposes a method to alleviate the verb understanding problem, and does not simply highlight it.
[ { "created": "Thu, 13 Apr 2023 17:57:01 GMT", "version": "v1" } ]
2023-04-14
[ [ "Momeni", "Liliane", "" ], [ "Caron", "Mathilde", "" ], [ "Nagrani", "Arsha", "" ], [ "Zisserman", "Andrew", "" ], [ "Schmid", "Cordelia", "" ] ]
Understanding verbs is crucial to modelling how people and objects interact with each other and the environment through space and time. Recently, state-of-the-art video-language models based on CLIP have been shown to have limited verb understanding and to rely extensively on nouns, restricting their performance in real-world video applications that require action and temporal understanding. In this work, we improve verb understanding for CLIP-based video-language models by proposing a new Verb-Focused Contrastive (VFC) framework. This consists of two main components: (1) leveraging pretrained large language models (LLMs) to create hard negatives for cross-modal contrastive learning, together with a calibration strategy to balance the occurrence of concepts in positive and negative pairs; and (2) enforcing a fine-grained, verb phrase alignment loss. Our method achieves state-of-the-art results for zero-shot performance on three downstream tasks that focus on verb understanding: video-text matching, video question-answering and video classification. To the best of our knowledge, this is the first work which proposes a method to alleviate the verb understanding problem, and does not simply highlight it.
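A minimal sketch of a verb-focused hard-negative contrastive loss, assuming an LLM has already produced a verb-swapped caption for each video; the paper's calibration strategy and fine-grained verb-phrase alignment loss are not reproduced.

```python
# Two-way contrastive loss per video: the true caption embedding is the positive
# (index 0), the verb-swapped caption embedding is the hard negative.
import torch
import torch.nn.functional as F

def vfc_loss(video_emb, pos_text_emb, hard_neg_text_emb, tau: float = 0.07):
    v = F.normalize(video_emb, dim=-1)
    p = F.normalize(pos_text_emb, dim=-1)
    n = F.normalize(hard_neg_text_emb, dim=-1)
    logits = torch.stack([(v * p).sum(-1), (v * n).sum(-1)], dim=-1) / tau
    targets = torch.zeros(v.size(0), dtype=torch.long)   # positive is index 0
    return F.cross_entropy(logits, targets)

print(vfc_loss(torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 512)))
```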
2406.06611
Raffaele Romagnoli
Raffaele Romagnoli, Jasmine Ratchford, Mark H. Klein
Building Hybrid B-Spline And Neural Network Operators
null
null
null
null
cs.LG cs.AI cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Control systems are indispensable for ensuring the safety of cyber-physical systems (CPS), spanning various domains such as automobiles, airplanes, and missiles. Safeguarding CPS necessitates runtime methodologies that continuously monitor safety-critical conditions and respond in a verifiably safe manner. A fundamental aspect of many safety approaches involves predicting the future behavior of systems. However, achieving this requires accurate models that can operate in real time. Motivated by DeepONets, we propose a novel strategy that combines the inductive bias of B-splines with data-driven neural networks to facilitate real-time predictions of CPS behavior. We introduce our hybrid B-spline neural operator, establishing its capability as a universal approximator and providing rigorous bounds on the approximation error. These findings are applicable to a broad class of nonlinear autonomous systems and are validated through experimentation on a controlled 6-degree-of-freedom (DOF) quadrotor with a 12-dimensional state space. Furthermore, we conduct a comparative analysis of different network architectures, specifically fully connected networks (FCNN) and recurrent neural networks (RNN), to elucidate the practical utility and trade-offs associated with each architecture in real-world scenarios.
[ { "created": "Thu, 6 Jun 2024 21:54:59 GMT", "version": "v1" } ]
2024-06-12
[ [ "Romagnoli", "Raffaele", "" ], [ "Ratchford", "Jasmine", "" ], [ "Klein", "Mark H.", "" ] ]
Control systems are indispensable for ensuring the safety of cyber-physical systems (CPS), spanning various domains such as automobiles, airplanes, and missiles. Safeguarding CPS necessitates runtime methodologies that continuously monitor safety-critical conditions and respond in a verifiably safe manner. A fundamental aspect of many safety approaches involves predicting the future behavior of systems. However, achieving this requires accurate models that can operate in real time. Motivated by DeepONets, we propose a novel strategy that combines the inductive bias of B-splines with data-driven neural networks to facilitate real-time predictions of CPS behavior. We introduce our hybrid B-spline neural operator, establishing its capability as a universal approximator and providing rigorous bounds on the approximation error. These findings are applicable to a broad class of nonlinear autonomous systems and are validated through experimentation on a controlled 6-degree-of-freedom (DOF) quadrotor with a 12-dimensional state space. Furthermore, we conduct a comparative analysis of different network architectures, specifically fully connected networks (FCNN) and recurrent neural networks (RNN), to elucidate the practical utility and trade-offs associated with each architecture in real-world scenarios.
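A hedged sketch of the hybrid idea under illustrative architectural assumptions: a small MLP maps the 12-dimensional quadrotor state to B-spline coefficients, and a clamped spline basis turns them into a smooth, cheaply evaluable trajectory prediction. A real system would use one coefficient set per predicted state dimension.

```python
# Hybrid B-spline + network operator sketch: the network supplies coefficients,
# the spline supplies the smooth inductive bias over the prediction horizon.
import numpy as np
import torch
import torch.nn as nn
from scipy.interpolate import BSpline

degree, n_coef, horizon = 3, 8, 1.0
# Clamped knot vector: len(knots) = n_coef + degree + 1.
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0.0, horizon, n_coef - degree + 1),
                        np.full(degree, horizon)])

mlp = nn.Sequential(nn.Linear(12, 64), nn.Tanh(), nn.Linear(64, n_coef))  # 12-D state in

def predict_trajectory(state_12d: torch.Tensor, t: np.ndarray) -> np.ndarray:
    coef = mlp(state_12d).detach().numpy()       # spline coefficients from the network
    return BSpline(knots, coef, degree)(t)       # one predicted output channel

t = np.linspace(0.0, horizon, 50)
print(predict_trajectory(torch.randn(12), t).shape)   # (50,)
```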
2004.08745
Jonah Philion
Jonah Philion, Amlan Kar, Sanja Fidler
Learning to Evaluate Perception Models Using Planner-Centric Metrics
CVPR 2020 poster
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variants of accuracy and precision are the gold standard by which the computer vision community measures progress of perception algorithms. One reason for the ubiquity of these metrics is that they are largely task-agnostic; we in general seek to detect zero false negatives or positives. The downside of these metrics is that, at worst, they penalize all incorrect detections equally without conditioning on the task or scene, and at best, heuristics need to be chosen to ensure that different mistakes count differently. In this paper, we propose a principled metric for 3D object detection specifically for the task of self-driving. The core idea behind our metric is to isolate the task of object detection and measure the impact the produced detections would induce on the downstream task of driving. Without hand-designing it to, we find that our metric penalizes many of the mistakes that other metrics penalize by design. In addition, our metric downweights detections based on additional factors such as the distance from a detection to the ego car and the speed of the detection, in intuitive ways that other detection metrics do not. For human evaluation, we generate scenes in which standard metrics and our metric disagree and find that humans side with our metric 79% of the time. Our project page including an evaluation server can be found at https://nv-tlabs.github.io/detection-relevance.
[ { "created": "Sun, 19 Apr 2020 02:14:00 GMT", "version": "v1" } ]
2020-04-21
[ [ "Philion", "Jonah", "" ], [ "Kar", "Amlan", "" ], [ "Fidler", "Sanja", "" ] ]
Variants of accuracy and precision are the gold standard by which the computer vision community measures progress of perception algorithms. One reason for the ubiquity of these metrics is that they are largely task-agnostic; we in general seek to detect zero false negatives or positives. The downside of these metrics is that, at worst, they penalize all incorrect detections equally without conditioning on the task or scene, and at best, heuristics need to be chosen to ensure that different mistakes count differently. In this paper, we propose a principled metric for 3D object detection specifically for the task of self-driving. The core idea behind our metric is to isolate the task of object detection and measure the impact the produced detections would induce on the downstream task of driving. Without hand-designing it to, we find that our metric penalizes many of the mistakes that other metrics penalize by design. In addition, our metric downweights detections based on additional factors such as the distance from a detection to the ego car and the speed of the detection, in intuitive ways that other detection metrics do not. For human evaluation, we generate scenes in which standard metrics and our metric disagree and find that humans side with our metric 79% of the time. Our project page including an evaluation server can be found at https://nv-tlabs.github.io/detection-relevance.
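A hedged sketch of the planner-centric idea, assuming a learned planner that outputs a distribution over a discrete set of future ego trajectories: the detection score is the divergence between the plans induced by ground-truth and predicted objects. The planner stub and tensor shapes here are illustrative, not the paper's architecture.

```python
# Score detections by how much they change the planner's distribution over
# future ego motion relative to ground truth (a KL between the two plans).
import torch
import torch.nn.functional as F

def planner_centric_score(planner, scene_gt, scene_pred):
    log_p_gt = F.log_softmax(planner(scene_gt), dim=-1)    # plan under GT objects
    log_p_pr = F.log_softmax(planner(scene_pred), dim=-1)  # plan under detections
    # KL(plan_gt || plan_pred), averaged over the batch.
    return F.kl_div(log_p_pr, log_p_gt, log_target=True, reduction="batchmean")

planner = torch.nn.Linear(16, 32)      # stub: scene features -> trajectory logits
scene_gt, scene_pred = torch.randn(4, 16), torch.randn(4, 16)
print(planner_centric_score(planner, scene_gt, scene_pred))
```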
1804.10146
Stefan Gerdjikov
Stefan Gerdjikov
Note on the Lower Bounds of Bimachines
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is a brief note on the lower bounds of bimachines. In particular, we report that there is a class of functional transducers with $O(n)$ states that do not admit a bimachine with fewer than $\Theta(2^n)$ states.
[ { "created": "Sat, 14 Apr 2018 11:41:32 GMT", "version": "v1" } ]
2018-04-27
[ [ "Gerdjikov", "Stefan", "" ] ]
This is a brief note on the lower bounds of bimachines. In particular, we report that there is a class of functional transducers with $O(n)$ states that do not admit a bimachine with fewer than $\Theta(2^n)$ states.
1912.03049
Timoth\'ee Lesort
Timoth\'ee Lesort, Andrei Stoian, David Filliat
Regularization Shortcomings for Continual Learning
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
In most machine learning algorithms, training data is assumed to be independent and identically distributed (iid). When this assumption does not hold, the algorithm's performance suffers, leading to the well-known phenomenon of catastrophic forgetting. Algorithms that address it are gathered in the Continual Learning research field. In this paper, we study regularization-based approaches to continual learning and show that these approaches cannot learn to discriminate classes from different tasks in an elementary continual benchmark: the class-incremental scenario. We give a theoretical argument for this shortcoming and illustrate it with examples and experiments. Moreover, we show that it can have important consequences for continual multi-task reinforcement learning and for pre-trained models used for continual learning. We believe that highlighting and understanding the shortcomings of regularization strategies will help us use them more efficiently.
[ { "created": "Fri, 6 Dec 2019 10:11:18 GMT", "version": "v1" }, { "created": "Fri, 7 Feb 2020 12:10:55 GMT", "version": "v2" }, { "created": "Tue, 8 Dec 2020 17:25:56 GMT", "version": "v3" }, { "created": "Sun, 4 Apr 2021 00:21:23 GMT", "version": "v4" } ]
2021-04-06
[ [ "Lesort", "Timothée", "" ], [ "Stoian", "Andrei", "" ], [ "Filliat", "David", "" ] ]
In most machine learning algorithms, training data is assumed to be independent and identically distributed (iid). When this assumption does not hold, the algorithm's performance suffers, leading to the well-known phenomenon of catastrophic forgetting. Algorithms that address it are gathered in the Continual Learning research field. In this paper, we study regularization-based approaches to continual learning and show that these approaches cannot learn to discriminate classes from different tasks in an elementary continual benchmark: the class-incremental scenario. We give a theoretical argument for this shortcoming and illustrate it with examples and experiments. Moreover, we show that it can have important consequences for continual multi-task reinforcement learning and for pre-trained models used for continual learning. We believe that highlighting and understanding the shortcomings of regularization strategies will help us use them more efficiently.
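For context, a minimal sketch of the kind of regularization penalty the abstract refers to, in the style of Elastic Weight Consolidation (a representative regularization approach, not necessarily the paper's exact target). The penalty anchors parameters to their post-task values, weighted by per-parameter importance:

```python
# EWC-style regularization sketch for continual learning (assumed setup:
# `old_params` and `fisher` are dicts of tensors saved after the previous
# task; `lam` is an invented illustrative strength).
import torch

def ewc_penalty(model, old_params, fisher, lam=100.0):
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# During task t:  total_loss = task_loss + ewc_penalty(model, params_prev, fisher_prev)
# The shortcoming the paper studies is visible here: the penalty only pulls
# weights back toward old values; it supplies no gradient that separates an
# old class from a new one when the two never co-occur in a training batch.
```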
2307.08152
Jingqing Zhang
Jingqing Zhang, Kai Sun, Akshay Jagadeesh, Mahta Ghahfarokhi, Deepa Gupta, Ashok Gupta, Vibhor Gupta, Yike Guo
The Potential and Pitfalls of using a Large Language Model such as ChatGPT or GPT-4 as a Clinical Assistant
This manuscript is a preprint and under peer review. Supplementary materials will be published later
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent studies have demonstrated promising performance of ChatGPT and GPT-4 on several medical domain tasks. However, none has assessed their performance using a large-scale real-world electronic health record database, nor evaluated their utility in providing clinical diagnostic assistance for patients across a full range of disease presentations. We performed two analyses using ChatGPT and GPT-4: one to identify patients with specific medical diagnoses using a real-world large electronic health record database, and the other to provide diagnostic assistance to healthcare workers in the prospective evaluation of hypothetical patients. Our results show that GPT-4, with chain-of-thought and few-shot prompting, can achieve F1 scores as high as 96% across disease classification tasks. For patient assessment, GPT-4 can make an accurate diagnosis three out of four times. However, its responses included factually incorrect statements, overlooked crucial medical findings, and recommended unnecessary investigations and overtreatment. These issues, coupled with privacy concerns, make these models currently inadequate for real-world clinical use. However, the limited data and time needed for prompt engineering, in comparison to the configuration of conventional machine learning workflows, highlight their potential for scalability across healthcare applications.
[ { "created": "Sun, 16 Jul 2023 21:19:47 GMT", "version": "v1" } ]
2023-07-18
[ [ "Zhang", "Jingqing", "" ], [ "Sun", "Kai", "" ], [ "Jagadeesh", "Akshay", "" ], [ "Ghahfarokhi", "Mahta", "" ], [ "Gupta", "Deepa", "" ], [ "Gupta", "Ashok", "" ], [ "Gupta", "Vibhor", "" ], [ "Guo", "Yike", "" ] ]
Recent studies have demonstrated promising performance of ChatGPT and GPT-4 on several medical domain tasks. However, none has assessed their performance using a large-scale real-world electronic health record database, nor evaluated their utility in providing clinical diagnostic assistance for patients across a full range of disease presentations. We performed two analyses using ChatGPT and GPT-4: one to identify patients with specific medical diagnoses using a real-world large electronic health record database, and the other to provide diagnostic assistance to healthcare workers in the prospective evaluation of hypothetical patients. Our results show that GPT-4, with chain-of-thought and few-shot prompting, can achieve F1 scores as high as 96% across disease classification tasks. For patient assessment, GPT-4 can make an accurate diagnosis three out of four times. However, its responses included factually incorrect statements, overlooked crucial medical findings, and recommended unnecessary investigations and overtreatment. These issues, coupled with privacy concerns, make these models currently inadequate for real-world clinical use. However, the limited data and time needed for prompt engineering, in comparison to the configuration of conventional machine learning workflows, highlight their potential for scalability across healthcare applications.
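To make the prompting setup concrete, here is a hypothetical assembly of a few-shot chain-of-thought prompt for a disease classification task, in the spirit of what the abstract describes. The exemplars, findings, and labels below are invented for illustration; none come from the paper or from any real patient record.

```python
# Hypothetical few-shot chain-of-thought prompt construction (illustrative
# exemplars only; no real clinical data or model API is assumed here).
FEW_SHOT = [
    ("Fever, productive cough, focal crackles on auscultation.",
     "Fever with productive cough and focal crackles suggests a lower "
     "respiratory infection. Diagnosis: pneumonia."),
    ("Polyuria, polydipsia, fasting glucose 180 mg/dL.",
     "Polyuria and polydipsia with elevated fasting glucose point to "
     "impaired glucose regulation. Diagnosis: diabetes mellitus."),
]

def build_prompt(case_text: str) -> str:
    parts = ["Classify the diagnosis. Think step by step before answering.\n"]
    for findings, reasoning in FEW_SHOT:
        parts.append(f"Findings: {findings}\nReasoning: {reasoning}\n")
    parts.append(f"Findings: {case_text}\nReasoning:")
    return "\n".join(parts)

print(build_prompt("Chest pain radiating to the left arm, ST elevation."))
```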
2101.09710
Gerrit Ecke
Gerrit A. Ecke, Harald M. Papp, Hanspeter A. Mallot
Exploitation of Image Statistics with Sparse Coding in the Case of Stereo Vision
Author's accepted manuscript
Neural Networks, Volume 135, 2021, Pages 158-176
10.1016/j.neunet.2020.12.016
null
cs.CV q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
The sparse coding algorithm has served as a model for early processing in mammalian vision. It has been assumed that the brain uses sparse coding to exploit statistical properties of the sensory stream. We hypothesize that sparse coding discovers patterns from the data set, which can be used to estimate a set of stimulus parameters by simple readout. In this study, we chose a model of stereo vision to test our hypothesis. We used the Locally Competitive Algorithm (LCA), followed by a na\"ive Bayes classifier, to infer stereo disparity. From the results we report three observations. First, disparity inference was successful with this naturalistic processing pipeline. Second, an expanded, highly redundant representation is required to robustly identify the input patterns. Third, the inference error can be predicted from the number of active coefficients in the LCA representation. We conclude that sparse coding can generate a suitable general representation for subsequent inference tasks. Keywords: Sparse coding; Locally Competitive Algorithm (LCA); Efficient coding; Compact code; Probabilistic inference; Stereo vision
[ { "created": "Sun, 24 Jan 2021 12:45:25 GMT", "version": "v1" }, { "created": "Tue, 26 Jan 2021 22:24:16 GMT", "version": "v2" } ]
2021-01-28
[ [ "Ecke", "Gerrit A.", "" ], [ "Papp", "Harald M.", "" ], [ "Mallot", "Hanspeter A.", "" ] ]
The sparse coding algorithm has served as a model for early processing in mammalian vision. It has been assumed that the brain uses sparse coding to exploit statistical properties of the sensory stream. We hypothesize that sparse coding discovers patterns from the data set, which can be used to estimate a set of stimulus parameters by simple readout. In this study, we chose a model of stereo vision to test our hypothesis. We used the Locally Competitive Algorithm (LCA), followed by a na\"ive Bayes classifier, to infer stereo disparity. From the results we report three observations. First, disparity inference was successful with this naturalistic processing pipeline. Second, an expanded, highly redundant representation is required to robustly identify the input patterns. Third, the inference error can be predicted from the number of active coefficients in the LCA representation. We conclude that sparse coding can generate a suitable general representation for subsequent inference tasks. Keywords: Sparse coding; Locally Competitive Algorithm (LCA); Efficient coding; Compact code; Probabilistic inference; Stereo vision
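A runnable sketch of the Locally Competitive Algorithm that forms the front end of this pipeline (illustrative dictionary and parameters; not the authors' implementation). Internal states are driven by the feed-forward input and inhibited through dictionary overlaps; soft-thresholding the states yields the sparse coefficients whose count, per the abstract, predicts inference error:

```python
# Minimal LCA sparse-coding sketch (assumed soft-threshold / L1 variant;
# dictionary, lambda, tau and step count are invented for illustration).
import numpy as np

def lca(x, Phi, lam=0.1, tau=10.0, steps=200):
    b = Phi.T @ x                               # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])      # lateral inhibition, no self term
    u = np.zeros(Phi.shape[1])                  # internal (membrane) states
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (b - u - G @ a) / tau
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # sparse code

rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 256))
Phi /= np.linalg.norm(Phi, axis=0)              # unit-norm dictionary elements
x = Phi[:, 3] + 0.05 * rng.normal(size=64)      # signal near one dictionary atom
a = lca(x, Phi)
print("active coefficients:", np.count_nonzero(a))  # few, ideally near 1
```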
2310.14025
Maria Lymperaiou
Anastasia Kritharoula, Maria Lymperaiou and Giorgos Stamou
Large Language Models and Multimodal Retrieval for Visual Word Sense Disambiguation
Conference on Empirical Methods in Natural Language Processing (EMNLP) 2023
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
10.18653/v1/2023.emnlp-main.807
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Visual Word Sense Disambiguation (VWSD) is a novel, challenging task whose goal is to retrieve, among a set of candidate images, the one that best represents the meaning of an ambiguous word within a given context. In this paper, we make a substantial step towards unveiling this interesting task by applying a varied set of approaches. Since VWSD is primarily a text-image retrieval task, we explore the latest transformer-based methods for multimodal retrieval. Additionally, we utilize Large Language Models (LLMs) as knowledge bases to enhance the given phrases and resolve ambiguity related to the target word. We also study VWSD as a unimodal problem by converting it to text-to-text and image-to-image retrieval, as well as question-answering (QA), to fully explore the capabilities of relevant models. To tap into the implicit knowledge of LLMs, we experiment with Chain-of-Thought (CoT) prompting to guide explainable answer generation. On top of all this, we train a learning-to-rank (LTR) model to combine our different modules, achieving competitive ranking results. Extensive experiments on VWSD provide valuable insights to effectively drive future directions.
[ { "created": "Sat, 21 Oct 2023 14:35:42 GMT", "version": "v1" } ]
2024-04-23
[ [ "Kritharoula", "Anastasia", "" ], [ "Lymperaiou", "Maria", "" ], [ "Stamou", "Giorgos", "" ] ]
Visual Word Sense Disambiguation (VWSD) is a novel, challenging task whose goal is to retrieve, among a set of candidate images, the one that best represents the meaning of an ambiguous word within a given context. In this paper, we make a substantial step towards unveiling this interesting task by applying a varied set of approaches. Since VWSD is primarily a text-image retrieval task, we explore the latest transformer-based methods for multimodal retrieval. Additionally, we utilize Large Language Models (LLMs) as knowledge bases to enhance the given phrases and resolve ambiguity related to the target word. We also study VWSD as a unimodal problem by converting it to text-to-text and image-to-image retrieval, as well as question-answering (QA), to fully explore the capabilities of relevant models. To tap into the implicit knowledge of LLMs, we experiment with Chain-of-Thought (CoT) prompting to guide explainable answer generation. On top of all this, we train a learning-to-rank (LTR) model to combine our different modules, achieving competitive ranking results. Extensive experiments on VWSD provide valuable insights to effectively drive future directions.
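The text-image retrieval core of VWSD reduces to ranking candidate images by similarity to the (optionally LLM-enriched) phrase in a shared embedding space. A hedged sketch with placeholder embeddings; in practice they would come from a multimodal encoder such as CLIP:

```python
# Cosine-similarity ranking of candidate images against a text embedding
# (random placeholder vectors; a real system would use a multimodal encoder).
import numpy as np

def rank_candidates(text_emb, image_embs):
    t = text_emb / np.linalg.norm(text_emb)
    im = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = im @ t                      # cosine similarities
    return np.argsort(-scores), scores   # best candidate first

rng = np.random.default_rng(1)
text_emb = rng.normal(size=512)            # e.g., the enriched phrase embedding
image_embs = rng.normal(size=(10, 512))    # 10 candidate images
order, scores = rank_candidates(text_emb, image_embs)
print("predicted image index:", order[0])
```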
1711.11217
Takuma Yagi
Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato
Future Person Localization in First-Person Videos
Accepted to CVPR 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new task that predicts future locations of people observed in first-person videos. Consider a first-person video stream continuously recorded by a wearable camera. Given a short clip of a person that is extracted from the complete stream, we aim to predict that person's location in future frames. To facilitate this future person localization ability, we make the following three key observations: a) First-person videos typically involve significant ego-motion which greatly affects the location of the target person in future frames; b) Scales of the target person act as a salient cue to estimate a perspective effect in first-person videos; c) First-person videos often capture people up-close, making it easier to leverage target poses (e.g., where they look) for predicting their future locations. We incorporate these three observations into a prediction framework with a multi-stream convolution-deconvolution architecture. Experimental results reveal our method to be effective on our new dataset as well as on a public social interaction dataset.
[ { "created": "Thu, 30 Nov 2017 04:16:03 GMT", "version": "v1" }, { "created": "Wed, 28 Mar 2018 01:29:15 GMT", "version": "v2" } ]
2018-03-29
[ [ "Yagi", "Takuma", "" ], [ "Mangalam", "Karttikeya", "" ], [ "Yonetani", "Ryo", "" ], [ "Sato", "Yoichi", "" ] ]
We present a new task that predicts future locations of people observed in first-person videos. Consider a first-person video stream continuously recorded by a wearable camera. Given a short clip of a person that is extracted from the complete stream, we aim to predict that person's location in future frames. To facilitate this future person localization ability, we make the following three key observations: a) First-person videos typically involve significant ego-motion which greatly affects the location of the target person in future frames; b) Scales of the target person act as a salient cue to estimate a perspective effect in first-person videos; c) First-person videos often capture people up-close, making it easier to leverage target poses (e.g., where they look) for predicting their future locations. We incorporate these three observations into a prediction framework with a multi-stream convolution-deconvolution architecture. Experimental results reveal our method to be effective on our new dataset as well as on a public social interaction dataset.
1801.05916
Shuhua Liu
Shuhua Monica Liu (1), Liting Pan (1), Xiaowei Chen (1) ((1) Department of Public Administration, Fudan University, Shanghai, China)
Citation Analysis of Innovative ICT and Advances of Governance (2008-2017)
Corrected first author's name spelling and added authors' affiliation in the metadata
null
null
null
cs.SI cs.CY cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper opens by introducing Internet Plus Government (IPG), a new government initiative that emerged in the last decade. To understand the benefits and challenges associated with this initiative worldwide, we analyzed research articles published in the e-governance area between 2008 and 2017. Content analysis and citation analysis were performed on 2105 articles to address three questions: (1) What types of new ICT have been adopted in the IPG initiative in the past decade? (2) How did scholars investigate interactions between the new ICTs and governance, which is core to IPG? (3) How did the new ICTs interact with, shape, and become shaped by the evolution of governance in the past decade? Our analysis suggests that the IPG initiative has enriched the government information infrastructure. It has presented opportunities to accumulate and use huge volumes of data for better decision making and proactive government-citizen interaction. At the same time, the advance of open data, the widespread use of social media, and the potential of data analytics have also generated great pressure to address challenging questions and issues in the domain of e-democracy.
[ { "created": "Thu, 18 Jan 2018 02:57:25 GMT", "version": "v1" }, { "created": "Wed, 24 Jan 2018 12:32:25 GMT", "version": "v2" } ]
2018-01-25
[ [ "Liu", "Shuhua Monica", "" ], [ "Pan", "Liting", "" ], [ "Chen", "Xiaowei", "" ] ]
This paper opens by introducing Internet Plus Government (IPG), a new government initiative that emerged in the last decade. To understand the benefits and challenges associated with this initiative worldwide, we analyzed research articles published in the e-governance area between 2008 and 2017. Content analysis and citation analysis were performed on 2105 articles to address three questions: (1) What types of new ICT have been adopted in the IPG initiative in the past decade? (2) How did scholars investigate interactions between the new ICTs and governance, which is core to IPG? (3) How did the new ICTs interact with, shape, and become shaped by the evolution of governance in the past decade? Our analysis suggests that the IPG initiative has enriched the government information infrastructure. It has presented opportunities to accumulate and use huge volumes of data for better decision making and proactive government-citizen interaction. At the same time, the advance of open data, the widespread use of social media, and the potential of data analytics have also generated great pressure to address challenging questions and issues in the domain of e-democracy.
2112.03517
Kyungmin Jo
Kyungmin Jo, Gyumin Shim, Sanghun Jung, Soyoung Yang, Jaegul Choo
CG-NeRF: Conditional Generative Neural Radiance Fields
null
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While recent NeRF-based generative models achieve the generation of diverse 3D-aware images, these approaches have limitations when generating images that contain user-specified characteristics. In this paper, we propose a novel model, referred to as the conditional generative neural radiance fields (CG-NeRF), which can generate multi-view images reflecting extra input conditions such as images or texts. While preserving the common characteristics of a given input condition, the proposed model generates diverse images in fine detail. We propose: 1) a novel unified architecture which disentangles the shape and appearance from a condition given in various forms and 2) the pose-consistent diversity loss for generating multimodal outputs while maintaining consistency of the view. Experimental results show that the proposed method maintains consistent image quality on various condition types and achieves superior fidelity and diversity compared to existing NeRF-based generative models.
[ { "created": "Tue, 7 Dec 2021 05:57:58 GMT", "version": "v1" } ]
2021-12-08
[ [ "Jo", "Kyungmin", "" ], [ "Shim", "Gyumin", "" ], [ "Jung", "Sanghun", "" ], [ "Yang", "Soyoung", "" ], [ "Choo", "Jaegul", "" ] ]
While recent NeRF-based generative models achieve the generation of diverse 3D-aware images, these approaches have limitations when generating images that contain user-specified characteristics. In this paper, we propose a novel model, referred to as the conditional generative neural radiance fields (CG-NeRF), which can generate multi-view images reflecting extra input conditions such as images or texts. While preserving the common characteristics of a given input condition, the proposed model generates diverse images in fine detail. We propose: 1) a novel unified architecture which disentangles the shape and appearance from a condition given in various forms and 2) the pose-consistent diversity loss for generating multimodal outputs while maintaining consistency of the view. Experimental results show that the proposed method maintains consistent image quality on various condition types and achieves superior fidelity and diversity compared to existing NeRF-based generative models.
2407.11522
Zhi Gao
Pengxiang Li, Zhi Gao, Bofei Zhang, Tao Yuan, Yuwei Wu, Mehrtash Harandi, Yunde Jia, Song-Chun Zhu, Qing Li
FIRE: A Dataset for Feedback Integration and Refinement Evaluation of Multimodal Models
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision language models (VLMs) have achieved impressive progress in diverse applications, becoming a prevalent research direction. In this paper, we build FIRE, a feedback-refinement dataset consisting of 1.1M multi-turn conversations derived from 27 source datasets, empowering VLMs to spontaneously refine their responses based on user feedback across diverse tasks. To scale up the data collection, FIRE is collected in two components: FIRE-100K and FIRE-1M, where FIRE-100K is generated by GPT-4V, and FIRE-1M is freely generated via models trained on FIRE-100K. Then, we build FIRE-Bench, a benchmark to comprehensively evaluate the feedback-refining capability of VLMs, which contains 11K feedback-refinement conversations as the test data, two evaluation settings, and a model to provide feedback for VLMs. We develop the FIRE-LLaVA model by fine-tuning LLaVA on FIRE-100K and FIRE-1M, which shows remarkable feedback-refining capability on FIRE-Bench and outperforms untrained VLMs by 50%, enabling more efficient user-agent interactions and underscoring the significance of the FIRE dataset.
[ { "created": "Tue, 16 Jul 2024 09:00:45 GMT", "version": "v1" } ]
2024-07-17
[ [ "Li", "Pengxiang", "" ], [ "Gao", "Zhi", "" ], [ "Zhang", "Bofei", "" ], [ "Yuan", "Tao", "" ], [ "Wu", "Yuwei", "" ], [ "Harandi", "Mehrtash", "" ], [ "Jia", "Yunde", "" ], [ "Zhu", "Song-Chun", "" ], [ "Li", "Qing", "" ] ]
Vision language models (VLMs) have achieved impressive progress in diverse applications, becoming a prevalent research direction. In this paper, we build FIRE, a feedback-refinement dataset consisting of 1.1M multi-turn conversations derived from 27 source datasets, empowering VLMs to spontaneously refine their responses based on user feedback across diverse tasks. To scale up the data collection, FIRE is collected in two components: FIRE-100K and FIRE-1M, where FIRE-100K is generated by GPT-4V, and FIRE-1M is freely generated via models trained on FIRE-100K. Then, we build FIRE-Bench, a benchmark to comprehensively evaluate the feedback-refining capability of VLMs, which contains 11K feedback-refinement conversations as the test data, two evaluation settings, and a model to provide feedback for VLMs. We develop the FIRE-LLaVA model by fine-tuning LLaVA on FIRE-100K and FIRE-1M, which shows remarkable feedback-refining capability on FIRE-Bench and outperforms untrained VLMs by 50%, enabling more efficient user-agent interactions and underscoring the significance of the FIRE dataset.
1804.09661
Aaron Jaech
Aaron Jaech and Mari Ostendorf
Personalized Language Model for Query Auto-Completion
ACL 2018
null
null
null
cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Query auto-completion is a search engine feature whereby the system suggests completed queries as the user types. Recently, the use of a recurrent neural network language model was suggested as a method of generating query completions. We show how an adaptable language model can be used to generate personalized completions and how the model can use online updating to make predictions for users not seen during training. The personalized predictions are significantly better than a baseline that uses no user information.
[ { "created": "Wed, 25 Apr 2018 16:26:39 GMT", "version": "v1" } ]
2018-04-26
[ [ "Jaech", "Aaron", "" ], [ "Ostendorf", "Mari", "" ] ]
Query auto-completion is a search engine feature whereby the system suggests completed queries as the user types. Recently, the use of a recurrent neural network language model was suggested as a method of generating query completions. We show how an adaptable language model can be used to generate personalized completions and how the model can use online updating to make predictions for users not seen during training. The personalized predictions are significantly better than a baseline that uses no user information.
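A minimal sketch of the generation loop behind RNN-based query auto-completion: feed the typed prefix through a recurrent LM, then extend it one character at a time. The model below is untrained and the vocabulary is invented, so it is purely illustrative; the paper's personalization would additionally adapt the model's parameters online from the user's history.

```python
# Character-level RNN LM completion sketch (untrained, illustrative only).
import torch
import torch.nn as nn

VOCAB = list("abcdefghijklmnopqrstuvwxyz ")   # end-of-query handling omitted
stoi = {c: i for i, c in enumerate(VOCAB)}

class CharLM(nn.Module):
    def __init__(self, vocab=len(VOCAB), hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, idx, state=None):
        h, state = self.rnn(self.emb(idx), state)
        return self.out(h), state

def complete(model, prefix, max_len=20):
    # Feed the typed prefix, then greedily extend one character at a time.
    idx = torch.tensor([[stoi[c] for c in prefix]])
    logits, state = model(idx)
    out = prefix
    for _ in range(max_len):
        nxt = int(logits[0, -1].argmax())
        out += VOCAB[nxt]
        logits, state = model(torch.tensor([[nxt]]), state)
    return out

print(complete(CharLM(), "how to "))  # untrained weights, so output is gibberish
```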
2208.01588
Vivek Kumar Singh Ph.D.
Anurag Kanaujia, Prashasti Singh, Abhirup Nandy, Vivek Kumar Singh
Research Contribution of major Centrally Funded Institution Systems of India
null
null
null
null
cs.DL
http://creativecommons.org/licenses/by/4.0/
India is now among the major knowledge producers of the world, ranking among the top 5 countries in total research output, according to some recent reports. The institutional setup for Research & Development (R&D) in India comprises a diverse set of institutions, including universities, government departments, research laboratories, and private-sector institutions. It may be noted that more than 45% of India's Gross Expenditure on Research and Development (GERD) comes from the central government. In this context, this article explores the quantum of research contribution of centrally funded institutions and institution systems of India. The volume, proportionate share, and growth patterns of research publications from the major centrally funded institutions, organised into 16 groups, are analysed. These institutions taken together account for 67.54% of Indian research output during 2001 to 2020. The research output of the centrally funded institutions in India has increased steadily since 2001, with a healthy compound annual growth rate (CAGR). The paper presents noteworthy insights into India's scientific research production that may be useful to policymakers, researchers and science practitioners. It presents a case for increased activity by the state governments and the private sector to further the cause of sustainable and inclusive research and development in the country.
[ { "created": "Tue, 2 Aug 2022 16:52:56 GMT", "version": "v1" } ]
2022-08-03
[ [ "Kanaujia", "Anurag", "" ], [ "Singh", "Prashasti", "" ], [ "Nandy", "Abhirup", "" ], [ "Singh", "Vivek Kumar", "" ] ]
India is now among the major knowledge producers of the world, ranking among the top 5 countries in total research output, according to some recent reports. The institutional setup for Research & Development (R&D) in India comprises a diverse set of institutions, including universities, government departments, research laboratories, and private-sector institutions. It may be noted that more than 45% of India's Gross Expenditure on Research and Development (GERD) comes from the central government. In this context, this article explores the quantum of research contribution of centrally funded institutions and institution systems of India. The volume, proportionate share, and growth patterns of research publications from the major centrally funded institutions, organised into 16 groups, are analysed. These institutions taken together account for 67.54% of Indian research output during 2001 to 2020. The research output of the centrally funded institutions in India has increased steadily since 2001, with a healthy compound annual growth rate (CAGR). The paper presents noteworthy insights into India's scientific research production that may be useful to policymakers, researchers and science practitioners. It presents a case for increased activity by the state governments and the private sector to further the cause of sustainable and inclusive research and development in the country.
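Since the abstract reports growth via CAGR, here is the standard compound-annual-growth-rate formula as a one-liner; the publication counts below are invented for illustration, not taken from the paper.

```python
# CAGR over n yearly intervals: (last/first)^(1/n) - 1.
def cagr(first, last, years):
    return (last / first) ** (1.0 / years) - 1.0

# Hypothetical counts: 20,000 papers in 2001 -> 80,000 in 2020 (19 intervals).
print(f"{cagr(20_000, 80_000, 19):.2%}")  # ~7.57% per year
```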
2309.09881
Johannes Busch
Johannes V. S. Busch, Robert Voelckner, Peter Sossalla, Christian L. Vielhaus, Roberto Calandra, Frank H. P. Fitzek
Deep Reinforcement Learning for the Joint Control of Traffic Light Signaling and Vehicle Speed Advice
6 pages, 2 figures, accepted for publication at IEEE ICMLA 2023
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic congestion in dense urban centers presents an economic and environmental burden. In recent years, the availability of vehicle-to-everything communication has allowed the transmission of detailed vehicle states to the infrastructure, where they can be used for intelligent traffic light control. Conversely, the infrastructure can provide vehicles with advice on driving behavior, such as appropriate velocities, which can improve the efficacy of the traffic system. Several research works have applied deep reinforcement learning to either traffic light control or vehicle speed advice. In this work, we propose a first attempt to jointly learn the control of both, and we show that this improves the efficacy of traffic systems. In our experiments, the joint control approach reduces average vehicle trip delays, w.r.t. controlling only traffic lights, in eight out of eleven benchmark scenarios. Analyzing the qualitative behavior of the vehicle speed advice policy, we observe that this is achieved by smoothing out the velocity profile of vehicles near a traffic light. Learning joint control of traffic signaling and speed advice in the real world could help reduce congestion and mitigate the economic and environmental repercussions of today's traffic systems.
[ { "created": "Mon, 18 Sep 2023 15:45:22 GMT", "version": "v1" } ]
2023-09-19
[ [ "Busch", "Johannes V. S.", "" ], [ "Voelckner", "Robert", "" ], [ "Sossalla", "Peter", "" ], [ "Vielhaus", "Christian L.", "" ], [ "Calandra", "Roberto", "" ], [ "Fitzek", "Frank H. P.", "" ] ]
Traffic congestion in dense urban centers presents an economic and environmental burden. In recent years, the availability of vehicle-to-everything communication has allowed the transmission of detailed vehicle states to the infrastructure, where they can be used for intelligent traffic light control. Conversely, the infrastructure can provide vehicles with advice on driving behavior, such as appropriate velocities, which can improve the efficacy of the traffic system. Several research works have applied deep reinforcement learning to either traffic light control or vehicle speed advice. In this work, we propose a first attempt to jointly learn the control of both, and we show that this improves the efficacy of traffic systems. In our experiments, the joint control approach reduces average vehicle trip delays, w.r.t. controlling only traffic lights, in eight out of eleven benchmark scenarios. Analyzing the qualitative behavior of the vehicle speed advice policy, we observe that this is achieved by smoothing out the velocity profile of vehicles near a traffic light. Learning joint control of traffic signaling and speed advice in the real world could help reduce congestion and mitigate the economic and environmental repercussions of today's traffic systems.
2403.14885
Konrad Kulakowski
Jacek Szybowski, Konrad Ku{\l}akowski, Jiri Mazurek, Sebastian Ernst
Establishing a leader in a pairwise comparisons method
9 figures, 19 pages
null
null
null
cs.AI cs.CR cs.CY cs.DM
http://creativecommons.org/licenses/by/4.0/
Like electoral systems, decision-making methods are also vulnerable to manipulation by decision-makers. The ability to effectively defend against such threats can only come from thoroughly understanding the manipulation mechanisms. In this article, we present two algorithms that can be used to launch a manipulation attack. They allow for equating the weights of two selected alternatives in the pairwise comparison method and, consequently, choosing a leader. The theoretical considerations are accompanied by a Monte Carlo simulation showing the relationship between the size of the PC matrix, the degree of inconsistency, and the ease of manipulation. This work is a continuation of our previous research published in (Szybowski et al., 2023).
[ { "created": "Thu, 21 Mar 2024 23:42:00 GMT", "version": "v1" } ]
2024-03-25
[ [ "Szybowski", "Jacek", "" ], [ "Kułakowski", "Konrad", "" ], [ "Mazurek", "Jiri", "" ], [ "Ernst", "Sebastian", "" ] ]
Like electoral systems, decision-making methods are also vulnerable to manipulation by decision-makers. The ability to effectively defend against such threats can only come from thoroughly understanding the manipulation mechanisms. In this article, we present two algorithms that can be used to launch a manipulation attack. They allow for equating the weights of two selected alternatives in the pairwise comparison method and, consequently, choosing a leader. The theoretical considerations are accompanied by a Monte Carlo simulation showing the relationship between the size of the PC matrix, the degree of inconsistency, and the ease of manipulation. This work is a continuation of our previous research published in (Szybowski et al., 2023).
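To illustrate the machinery being attacked, here is a hedged sketch of pairwise-comparison priorities derived with the standard geometric-mean method, plus a crude single-entry manipulation that flips the leader. The paper's two algorithms are more principled; this only illustrates the attack surface.

```python
# Geometric-mean priorities from a reciprocal pairwise-comparison matrix,
# and a naive manipulation example (values invented for illustration).
import numpy as np

def gm_weights(A):
    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return w / w.sum()

# Reciprocal PC matrix over 3 alternatives; alternative 0 leads.
A = np.array([[1.0,  2.0, 3.0],
              [1/2., 1.0, 2.0],
              [1/3., 1/2., 1.0]])
print("honest ranking:", np.argsort(-gm_weights(A)))       # [0 1 2]

A[1, 0], A[0, 1] = 5.0, 1/5.0   # decision-maker inflates one comparison
print("manipulated ranking:", np.argsort(-gm_weights(A)))  # leader flips to 1
```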
2312.00279
Xingqiu He
Xingqiu He, Chaoqun You, Tony Q. S. Quek
Age-Based Scheduling for Mobile Edge Computing: A Deep Reinforcement Learning Approach
null
null
null
null
cs.LG cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid development of Mobile Edge Computing (MEC), various real-time applications have been deployed to benefit people's daily lives. The performance of these applications relies heavily on the freshness of collected environmental information, which can be quantified by its Age of Information (AoI). In the traditional definition of AoI, it is assumed that the status information can be actively sampled and directly used. However, for many MEC-enabled applications, the desired status information is updated in an event-driven manner and necessitates data processing. To better serve these applications, we propose a new definition of AoI and, based on the redefined AoI, we formulate an online AoI minimization problem for MEC systems. Notably, the problem can be interpreted as a Markov Decision Process (MDP), thus enabling its solution through Reinforcement Learning (RL) algorithms. Nevertheless, traditional RL algorithms are designed for MDPs with completely unknown system dynamics and hence usually suffer from long convergence times. To accelerate the learning process, we introduce Post-Decision States (PDSs) to exploit the partial knowledge of the system's dynamics. We also combine PDSs with deep RL to further improve the algorithm's applicability, scalability, and robustness. Numerical results demonstrate that our algorithm outperforms the benchmarks under various scenarios.
[ { "created": "Fri, 1 Dec 2023 01:30:49 GMT", "version": "v1" }, { "created": "Fri, 23 Feb 2024 01:55:34 GMT", "version": "v2" } ]
2024-02-26
[ [ "He", "Xingqiu", "" ], [ "You", "Chaoqun", "" ], [ "Quek", "Tony Q. S.", "" ] ]
With the rapid development of Mobile Edge Computing (MEC), various real-time applications have been deployed to benefit people's daily lives. The performance of these applications relies heavily on the freshness of collected environmental information, which can be quantified by its Age of Information (AoI). In the traditional definition of AoI, it is assumed that the status information can be actively sampled and directly used. However, for many MEC-enabled applications, the desired status information is updated in an event-driven manner and necessitates data processing. To better serve these applications, we propose a new definition of AoI and, based on the redefined AoI, we formulate an online AoI minimization problem for MEC systems. Notably, the problem can be interpreted as a Markov Decision Process (MDP), thus enabling its solution through Reinforcement Learning (RL) algorithms. Nevertheless, traditional RL algorithms are designed for MDPs with completely unknown system dynamics and hence usually suffer from long convergence times. To accelerate the learning process, we introduce Post-Decision States (PDSs) to exploit the partial knowledge of the system's dynamics. We also combine PDSs with deep RL to further improve the algorithm's applicability, scalability, and robustness. Numerical results demonstrate that our algorithm outperforms the benchmarks under various scenarios.
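A toy discrete-time illustration of the flavor of AoI dynamics described above, with invented constants: age grows linearly between updates, and when a processed update is delivered, age resets to the time that update spent in processing rather than to zero. This is only a sketch of the general idea, not the paper's exact redefinition.

```python
# Toy AoI trace under event-driven, processing-delayed updates
# (illustrative constants; not the paper's formal model).
def aoi_trace(deliveries, proc_delay=2.0, horizon=20):
    age, trace = 0.0, []
    for t in range(horizon):
        age += 1.0
        if t in deliveries:        # a processed update arrives at slot t
            age = proc_delay       # freshness restarts at the processing time
        trace.append(age)
    return trace

print(aoi_trace(deliveries={5, 9, 16}))
```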
2102.05918
Chao Jia
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig
Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
ICML 2021
International Conference on Machine Learning 2021
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without the expensive filtering or post-processing steps used in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enable zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
[ { "created": "Thu, 11 Feb 2021 10:08:12 GMT", "version": "v1" }, { "created": "Fri, 11 Jun 2021 07:51:39 GMT", "version": "v2" } ]
2021-06-14
[ [ "Jia", "Chao", "" ], [ "Yang", "Yinfei", "" ], [ "Xia", "Ye", "" ], [ "Chen", "Yi-Ting", "" ], [ "Parekh", "Zarana", "" ], [ "Pham", "Hieu", "" ], [ "Le", "Quoc V.", "" ], [ "Sung", "Yunhsuan", "" ], [ "Li", "Zhen", "" ], [ "Duerig", "Tom", "" ] ]
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without the expensive filtering or post-processing steps used in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enable zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
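A minimal sketch of the dual-encoder contrastive objective the abstract describes: the symmetric in-batch InfoNCE loss, where matched image-text pairs sit on the diagonal of the similarity matrix. The encoder towers themselves are replaced here by random embeddings, and the temperature value is illustrative.

```python
# Symmetric in-batch contrastive (InfoNCE) loss for image-text alignment.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    logits = img @ txt.t() / temperature          # pairwise similarities
    targets = torch.arange(img.size(0))           # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

img_emb = torch.randn(8, 256)   # a batch of image-tower outputs
txt_emb = torch.randn(8, 256)   # the paired alt-text-tower outputs
print(contrastive_loss(img_emb, txt_emb))
```

The same loss also drops out of this record's point about noise tolerance: with a billion pairs, in-batch negatives are abundant enough that occasional mismatched alt-text acts like label noise rather than a systematic bias.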
2306.03316
Jiaqing Yuan
Jiaqing Yuan and Michele Merler and Mihir Choudhury and Raju Pavuluri and Munindar P. Singh and Maja Vukovic
CoSiNES: Contrastive Siamese Network for Entity Standardization
Accepted by Matching Workshop at ACL2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Entity standardization maps noisy mentions from free-form text to standard entities in a knowledge base. The unique challenge of this task relative to other entity-related tasks is the lack of surrounding context and numerous variations in the surface form of the mentions, especially when it comes to generalization across domains where labeled data is scarce. Previous research mostly focuses on developing models either heavily relying on context, or dedicated solely to a specific domain. In contrast, we propose CoSiNES, a generic and adaptable framework with Contrastive Siamese Network for Entity Standardization that effectively adapts a pretrained language model to capture the syntax and semantics of the entities in a new domain. We construct a new dataset in the technology domain, which contains 640 technical stack entities and 6,412 mentions collected from industrial content management systems. We demonstrate that CoSiNES yields higher accuracy and faster runtime than baselines derived from leading methods in this domain. CoSiNES also achieves competitive performance in four standard datasets from the chemistry, medicine, and biomedical domains, demonstrating its cross-domain applicability.
[ { "created": "Mon, 5 Jun 2023 23:58:40 GMT", "version": "v1" } ]
2023-06-07
[ [ "Yuan", "Jiaqing", "" ], [ "Merler", "Michele", "" ], [ "Choudhury", "Mihir", "" ], [ "Pavuluri", "Raju", "" ], [ "Singh", "Munindar P.", "" ], [ "Vukovic", "Maja", "" ] ]
Entity standardization maps noisy mentions from free-form text to standard entities in a knowledge base. The unique challenge of this task relative to other entity-related tasks is the lack of surrounding context and numerous variations in the surface form of the mentions, especially when it comes to generalization across domains where labeled data is scarce. Previous research mostly focuses on developing models either heavily relying on context, or dedicated solely to a specific domain. In contrast, we propose CoSiNES, a generic and adaptable framework with Contrastive Siamese Network for Entity Standardization that effectively adapts a pretrained language model to capture the syntax and semantics of the entities in a new domain. We construct a new dataset in the technology domain, which contains 640 technical stack entities and 6,412 mentions collected from industrial content management systems. We demonstrate that CoSiNES yields higher accuracy and faster runtime than baselines derived from leading methods in this domain. CoSiNES also achieves competitive performance in four standard datasets from the chemistry, medicine, and biomedical domains, demonstrating its cross-domain applicability.
2001.09084
Dogan Altan
Dogan Altan, Sanem Sariel
What went wrong?: Identification of Everyday Object Manipulation Anomalies
null
null
null
null
cs.RO eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extending the abilities of service robots is important for expanding what they can achieve in everyday manipulation tasks. On the other hand, it is also essential to ensure that they can determine what they cannot achieve in certain cases, due to either anomalies or permanent failures during task execution. Robots need to identify these situations and reveal the reasons behind them in order to overcome and recover from them. In this paper, we propose and analyze a Long Short-Term Memory-based (LSTM-based) awareness approach to reveal the reasons behind an anomaly that occurs during a manipulation episode in an unstructured environment. The proposed method fuses the robot's real-time visual, auditory and proprioceptive sensory observations to achieve this task. We also provide a comparative analysis of our method with Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs). The symptoms of anomalies are first learned from a given training set; they can then be classified in real time based on the learned models. The approaches are evaluated on a Baxter robot executing object manipulation scenarios. The results indicate that the LSTM-based method outperforms the other methods, with a 0.94 classification rate in revealing the causes of anomalies in the case of an unexpected deviation.
[ { "created": "Fri, 24 Jan 2020 16:51:41 GMT", "version": "v1" } ]
2020-01-27
[ [ "Altan", "Dogan", "" ], [ "Sariel", "Sanem", "" ] ]
Extending the abilities of service robots is important for expanding what they can achieve in everyday manipulation tasks. On the other hand, it is also essential to ensure that they can determine what they cannot achieve in certain cases, due to either anomalies or permanent failures during task execution. Robots need to identify these situations and reveal the reasons behind them in order to overcome and recover from them. In this paper, we propose and analyze a Long Short-Term Memory-based (LSTM-based) awareness approach to reveal the reasons behind an anomaly that occurs during a manipulation episode in an unstructured environment. The proposed method fuses the robot's real-time visual, auditory and proprioceptive sensory observations to achieve this task. We also provide a comparative analysis of our method with Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs). The symptoms of anomalies are first learned from a given training set; they can then be classified in real time based on the learned models. The approaches are evaluated on a Baxter robot executing object manipulation scenarios. The results indicate that the LSTM-based method outperforms the other methods, with a 0.94 classification rate in revealing the causes of anomalies in the case of an unexpected deviation.
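A sketch of the kind of classifier this record describes: per-frame visual, auditory, and proprioceptive features run through an LSTM whose final hidden state is mapped to anomaly causes. Simple per-frame concatenation (early fusion) is assumed here, and the feature sizes and class count are invented; the paper's exact fusion scheme may differ.

```python
# LSTM over fused multimodal features -> anomaly-cause logits
# (early fusion and all dimensions are illustrative assumptions).
import torch
import torch.nn as nn

class AnomalyLSTM(nn.Module):
    def __init__(self, d_vis=64, d_aud=32, d_prop=16, hidden=128, n_causes=5):
        super().__init__()
        self.lstm = nn.LSTM(d_vis + d_aud + d_prop, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_causes)

    def forward(self, vis, aud, prop):
        x = torch.cat([vis, aud, prop], dim=-1)   # fuse modalities per frame
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                   # logits over anomaly causes

model = AnomalyLSTM()
vis, aud, prop = torch.randn(2, 30, 64), torch.randn(2, 30, 32), torch.randn(2, 30, 16)
print(model(vis, aud, prop).shape)  # torch.Size([2, 5])
```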
2203.05360
Yu Tang Liu
Yu Tang Liu, Eric Price, Michael J. Black, Aamir Ahmad
Deep Residual Reinforcement Learning based Autonomous Blimp Control
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blimps are well suited to performing long-duration aerial tasks, as they are energy efficient, relatively silent and safe. To address the blimp navigation and control task, in previous work we developed a hardware- and software-in-the-loop framework and a PID-based controller for large blimps in the presence of wind disturbance. However, blimps have a deformable structure, and their dynamics are inherently non-linear and time-delayed, making PID controllers difficult to tune and thus often resulting in large tracking errors. Moreover, the buoyancy of a blimp is constantly changing due to variations in ambient temperature and pressure. To address these issues, in this paper we present a learning-based framework based on deep residual reinforcement learning (DRRL) for the blimp control task. Within this framework, we first employ a PID controller to provide baseline performance. Subsequently, the DRRL agent learns to modify the PID decisions by interacting with the environment. We demonstrate in simulation that the DRRL agent consistently improves the PID performance. Through rigorous simulation experiments, we show that the agent is robust to changes in wind speed and buoyancy. In real-world experiments, we demonstrate that the agent, trained only in simulation, is sufficiently robust to control an actual blimp in windy conditions. We openly provide the source code of our approach at https://github.com/robot-perception-group/AutonomousBlimpDRL.
[ { "created": "Thu, 10 Mar 2022 13:23:33 GMT", "version": "v1" } ]
2022-03-11
[ [ "Liu", "Yu Tang", "" ], [ "Price", "Eric", "" ], [ "Black", "Michael J.", "" ], [ "Ahmad", "Aamir", "" ] ]
Blimps are well suited to performing long-duration aerial tasks, as they are energy efficient, relatively silent and safe. To address the blimp navigation and control task, in previous work we developed a hardware- and software-in-the-loop framework and a PID-based controller for large blimps in the presence of wind disturbance. However, blimps have a deformable structure, and their dynamics are inherently non-linear and time-delayed, making PID controllers difficult to tune and thus often resulting in large tracking errors. Moreover, the buoyancy of a blimp is constantly changing due to variations in ambient temperature and pressure. To address these issues, in this paper we present a learning-based framework based on deep residual reinforcement learning (DRRL) for the blimp control task. Within this framework, we first employ a PID controller to provide baseline performance. Subsequently, the DRRL agent learns to modify the PID decisions by interacting with the environment. We demonstrate in simulation that the DRRL agent consistently improves the PID performance. Through rigorous simulation experiments, we show that the agent is robust to changes in wind speed and buoyancy. In real-world experiments, we demonstrate that the agent, trained only in simulation, is sufficiently robust to control an actual blimp in windy conditions. We openly provide the source code of our approach at https://github.com/robot-perception-group/AutonomousBlimpDRL.
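The residual scheme described above reduces to a simple composition: the executed command is the PID output plus a learned correction. A hedged sketch, with the trained agent replaced by a zero-output placeholder and illustrative gains:

```python
# Residual control sketch: action = PID(error) + learned residual(obs).
# `residual_policy` stands in for the trained DRRL agent; gains are invented.
class PID:
    def __init__(self, kp=1.0, ki=0.0, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i, self.prev = 0.0, 0.0

    def act(self, err, dt=0.1):
        self.i += err * dt
        d = (err - self.prev) / dt
        self.prev = err
        return self.kp * err + self.ki * self.i + self.kd * d

def residual_policy(obs):
    return 0.0  # placeholder for the DRRL agent's learned correction

def control(err, obs, pid):
    return pid.act(err) + residual_policy(obs)  # baseline + learned residual

print(control(err=1.5, obs=None, pid=PID()))
```

The appeal of this design, which the abstract's results support, is graceful degradation: with an untrained (zero) residual the system is exactly the hand-tuned PID baseline, and learning can only refine it.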
2308.12059
Niklas Deckers
Niklas Deckers, Julia Peters, Martin Potthast
Manipulating Embeddings of Stable Diffusion Prompts
IJCAI 2024 camera ready version
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Prompt engineering is still the primary way for users of generative text-to-image models to manipulate generated images in a targeted way. Based on treating the model as a continuous function and by passing gradients between the image space and the prompt embedding space, we propose and analyze a new method to directly manipulate the embedding of a prompt instead of the prompt text. We then derive three practical interaction tools to support users with image generation: (1) Optimization of a metric defined in the image space that measures, for example, the image style. (2) Supporting a user in creative tasks by allowing them to navigate in the image space along a selection of directions of "near" prompt embeddings. (3) Changing the embedding of the prompt to include information that a user has seen in a particular seed but has difficulty describing in the prompt. Compared to prompt engineering, user-driven prompt embedding manipulation enables a more fine-grained, targeted control that integrates a user's intentions. Our user study shows that our methods are considered less tedious and that the resulting images are often preferred.
[ { "created": "Wed, 23 Aug 2023 10:59:41 GMT", "version": "v1" }, { "created": "Sat, 22 Jun 2024 16:58:19 GMT", "version": "v2" } ]
2024-06-25
[ [ "Deckers", "Niklas", "" ], [ "Peters", "Julia", "" ], [ "Potthast", "Martin", "" ] ]
Prompt engineering is still the primary way for users of generative text-to-image models to manipulate generated images in a targeted way. Based on treating the model as a continuous function and by passing gradients between the image space and the prompt embedding space, we propose and analyze a new method to directly manipulate the embedding of a prompt instead of the prompt text. We then derive three practical interaction tools to support users with image generation: (1) Optimization of a metric defined in the image space that measures, for example, the image style. (2) Supporting a user in creative tasks by allowing them to navigate in the image space along a selection of directions of "near" prompt embeddings. (3) Changing the embedding of the prompt to include information that a user has seen in a particular seed but has difficulty describing in the prompt. Compared to prompt engineering, user-driven prompt embedding manipulation enables a more fine-grained, targeted control that integrates a user's intentions. Our user study shows that our methods are considered less tedious and that the resulting images are often preferred.
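A conceptual sketch of interaction tool (1): optimize the prompt embedding by ascending an image-space metric through the generator. Here `generate` and `style_score` are hypothetical stand-ins for a differentiable sampler and metric; the real Stable Diffusion plumbing (tokenizer, text encoder, scheduler) is deliberately omitted.

```python
# Gradient-based manipulation of a prompt embedding (all functions are
# hypothetical placeholders; only the optimization pattern is the point).
import torch

def generate(prompt_emb):
    # Hypothetical differentiable generator: embedding -> "image" tensor.
    return torch.tanh(prompt_emb @ torch.ones(prompt_emb.size(-1), 64))

def style_score(image):
    return image.mean()  # hypothetical metric defined in image space

emb = torch.randn(1, 768, requires_grad=True)  # embedding of the user's prompt
opt = torch.optim.Adam([emb], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = -style_score(generate(emb))  # ascend the metric via the embedding
    loss.backward()
    opt.step()
```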
2007.14938
Tiankui Zhang
Congshan Fan, Tiankui Zhang, Yuanwei Liu and Zhiming Zeng
Cache-enabled HetNets with Limited Backhaul: A Stochastic Geometry Model
null
null
10.1109/TCOMM.2020.3013633
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid explosion of data volume in mobile networks, edge caching has received significant attention as an efficient approach to boost content delivery efficiency by bringing content closer to users. In this article, cache-enabled heterogeneous networks (HetNets) with limited backhaul are analyzed with the aid of the stochastic geometry approach. A hybrid caching policy is proposed, in which the most popular contents are cached in the macro-BS tier with a deterministic caching strategy and the less popular contents are cached in the helper tier with a probabilistic caching strategy. Correspondingly, a content-centric association strategy is designed based on the comprehensive state of the access link, the cache, and the backhaul link. Under the hybrid caching policy, new analytical results for the successful content delivery probability, the average successful delivery rate, and the energy efficiency are derived in the general scenario, the interference-limited scenario, and the mean-load scenario. The simulation results show that the proposed caching policy outperforms the most-popular caching policy in HetNets with limited backhaul. The performance gain is dramatically improved when the content popularity is less skewed, the cache capacity is sufficient, and the helper density is relatively large. Furthermore, it is confirmed that there exists an optimal helper density that maximizes the energy efficiency of cache-enabled HetNets.
[ { "created": "Wed, 29 Jul 2020 16:24:37 GMT", "version": "v1" } ]
2020-08-13
[ [ "Fan", "Congshan", "" ], [ "Zhang", "Tiankui", "" ], [ "Liu", "Yuanwei", "" ], [ "Zeng", "Zhiming", "" ] ]
With the rapid explosion of data volume in mobile networks, edge caching has received significant attention as an efficient approach to boost content delivery efficiency by bringing content closer to users. In this article, cache-enabled heterogeneous networks (HetNets) with limited backhaul are analyzed with the aid of the stochastic geometry approach. A hybrid caching policy is proposed, in which the most popular contents are cached in the macro-BS tier with a deterministic caching strategy and the less popular contents are cached in the helper tier with a probabilistic caching strategy. Correspondingly, a content-centric association strategy is designed based on the comprehensive state of the access link, the cache, and the backhaul link. Under the hybrid caching policy, new analytical results for the successful content delivery probability, the average successful delivery rate, and the energy efficiency are derived in the general scenario, the interference-limited scenario, and the mean-load scenario. The simulation results show that the proposed caching policy outperforms the most-popular caching policy in HetNets with limited backhaul. The performance gain is dramatically improved when the content popularity is less skewed, the cache capacity is sufficient, and the helper density is relatively large. Furthermore, it is confirmed that there exists an optimal helper density that maximizes the energy efficiency of cache-enabled HetNets.
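One standard ingredient of such analyses, sketched here with invented parameters: content popularity modeled as a Zipf law, with cache hit probability equal to the popularity mass covered by the caching probabilities. The paper's contribution couples this with stochastic geometry over BS/helper locations, which the sketch does not attempt.

```python
# Zipf popularity and cache hit probability (illustrative parameters only).
import numpy as np

def zipf_popularity(n_contents, gamma):
    p = 1.0 / np.arange(1, n_contents + 1) ** gamma
    return p / p.sum()

def hit_probability(popularity, cache_probs):
    return float(popularity @ cache_probs)

pop = zipf_popularity(1000, gamma=0.8)
cache = np.zeros(1000)
cache[:50] = 1.0                            # cache the 50 most popular items
print(hit_probability(pop, cache))          # higher when popularity is more skewed
```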
0809.2553
Paul Vitanyi
Paul M.B. Vitanyi (CWI and Univ. Amsterdam), Frank J. Balbach (Univ. Waterloo), Rudi L. Cilibrasi (CWI), and Ming Li (Univ. Waterloo)
Normalized Information Distance
33 pages, 12 figures, pdf, in: Normalized information distance, in: Information Theory and Statistical Learning, Eds. M. Dehmer, F. Emmert-Streib, Springer-Verlag, New-York, To appear
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string representation. Second, for names and abstract concepts, page count statistics from the World Wide Web can be used. These practical realizations of the normalized information distance can then be applied to machine learning tasks, especially clustering, to perform feature-free and parameter-free data mining. This chapter discusses the theoretical foundations of the normalized information distance and both practical realizations. It presents numerous examples of successful real-world applications based on these distance measures, ranging from bioinformatics to music clustering to machine translation.
[ { "created": "Mon, 15 Sep 2008 15:33:11 GMT", "version": "v1" } ]
2008-09-16
[ [ "Vitanyi", "Paul M. B.", "", "CWI and Univ. Amsterdam" ], [ "Balbach", "Frank J.", "", "Univ.\n Waterloo" ], [ "Cilibrasi", "Rudi L.", "", "CWI" ], [ "Li", "Ming", "", "Univ. Waterloo" ] ]
The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string representation. Second, for names and abstract concepts, page count statistics from the World Wide Web can be used. These practical realizations of the normalized information distance can then be applied to machine learning tasks, especially clustering, to perform feature-free and parameter-free data mining. This chapter discusses the theoretical foundations of the normalized information distance and both practical realizations. It presents numerous examples of successful real-world applications based on these distance measures, ranging from bioinformatics to music clustering to machine translation.
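The compression-based realization mentioned above is the normalized compression distance, NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed length under a real compressor standing in for Kolmogorov complexity. A runnable sketch using zlib (the choice of compressor and the test strings are illustrative):

```python
# Normalized compression distance with zlib as the compressor.
import zlib

def C(s: bytes) -> int:
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox leaps over the lazy cat " * 20
c = b"completely unrelated bytes: 01101001 xyzzy plugh " * 20
print(ncd(a, b))  # small: the strings share most of their structure
print(ncd(a, c))  # larger: little shared structure
```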
2212.09975
Yuyao Huang
Yuyao Huang, Tingzhao Fu, Honghao Huang, Sigang Yang, Hongwei Chen
Sophisticated deep learning with on-chip optical diffractive tensor processing
null
null
null
null
cs.ET cs.LG physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ever-growing deep learning technologies are making revolutionary changes for modern life. However, conventional computing architectures are designed to process sequential and digital programs and are extremely burdened by massively parallel and adaptive deep learning applications. Photonic integrated circuits provide an efficient approach to mitigate the bandwidth limitations and the power wall of their electronic counterparts, showing great potential in ultrafast and energy-free high-performance computing. Here, we propose an optical computing architecture enabled by on-chip diffraction to implement convolutional acceleration, termed the optical convolution unit (OCU). We demonstrate that any real-valued convolution kernel can be exploited by the OCU, with a prominent boost in computational throughput, via the concept of structural re-parameterization. With the OCU as the fundamental unit, we build an optical convolutional neural network (oCNN) to implement two popular deep learning tasks: classification and regression. For classification, the Fashion-MNIST and CIFAR-4 datasets are tested with accuracies of 91.63% and 86.25%, respectively. For regression, we build an optical denoising convolutional neural network (oDnCNN) to handle Gaussian noise in grayscale images with noise levels {\sigma} = 10, 15, 20, resulting in clean images with average PSNR of 31.70dB, 29.39dB and 27.72dB, respectively. The proposed OCU presents remarkable performance of low energy consumption and high information density due to its fully passive nature and compact footprint, providing a highly parallel yet lightweight solution for future computing architectures to handle high-dimensional tensors in deep learning.
[ { "created": "Tue, 20 Dec 2022 03:33:26 GMT", "version": "v1" } ]
2022-12-21
[ [ "Huang", "Yuyao", "" ], [ "Fu", "Tingzhao", "" ], [ "Huang", "Honghao", "" ], [ "Yang", "Sigang", "" ], [ "Chen", "Hongwei", "" ] ]
The ever-growing deep learning technologies are making revolutionary changes for modern life. However, conventional computing architectures are designed to process sequential and digital programs and are extremely burdened by massively parallel and adaptive deep learning applications. Photonic integrated circuits provide an efficient approach to mitigate the bandwidth limitations and the power wall of their electronic counterparts, showing great potential in ultrafast and energy-free high-performance computing. Here, we propose an optical computing architecture enabled by on-chip diffraction to implement convolutional acceleration, termed the optical convolution unit (OCU). We demonstrate that any real-valued convolution kernel can be exploited by the OCU, with a prominent boost in computational throughput, via the concept of structural re-parameterization. With the OCU as the fundamental unit, we build an optical convolutional neural network (oCNN) to implement two popular deep learning tasks: classification and regression. For classification, the Fashion-MNIST and CIFAR-4 datasets are tested with accuracies of 91.63% and 86.25%, respectively. For regression, we build an optical denoising convolutional neural network (oDnCNN) to handle Gaussian noise in grayscale images with noise levels {\sigma} = 10, 15, 20, resulting in clean images with average PSNR of 31.70dB, 29.39dB and 27.72dB, respectively. The proposed OCU presents remarkable performance of low energy consumption and high information density due to its fully passive nature and compact footprint, providing a highly parallel yet lightweight solution for future computing architectures to handle high-dimensional tensors in deep learning.
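Structural re-parameterization rests on the linearity of convolution: parallel branches trained separately can be collapsed into one equivalent kernel for deployment. A small numerical sketch of that principle (the general electronic-domain idea only; the paper's optical implementation is not modeled here):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))   # input feature map

k3 = rng.standard_normal((3, 3))    # a 3x3 convolution branch
k1 = np.zeros((3, 3))
k1[1, 1] = 0.7                      # a 1x1 branch, zero-padded to 3x3

# Evaluating the two parallel branches separately...
y_branches = convolve2d(x, k3, mode="same") + convolve2d(x, k1, mode="same")
# ...equals a single convolution with the merged kernel (linearity).
y_merged = convolve2d(x, k3 + k1, mode="same")

assert np.allclose(y_branches, y_merged)
```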
2111.11384
Ramviyas Parasuraman
Aiman Munir and Ramviyas Parasuraman
Analysis of Exploration vs. Exploitation in Adaptive Information Sampling
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptive information sampling approaches enable efficient selection of a mobile robot's waypoints through which accurate sensing and mapping of a physical process, such as the radiation or field intensity, can be obtained. This paper analyzes the role of exploration and exploitation in such information-theoretic spatial sampling of environmental processes. We use Gaussian processes to make predictions with confidence bounds, thereby determining each point's informativeness in terms of exploration and exploitation. Specifically, we use a Gaussian process regression model to sample the Wi-Fi signal strength of the environment. For different variants of the informative function, we extensively analyze and evaluate the effectiveness and efficiency of information mapping through two different initial trajectories in both single-robot and multi-robot settings. The results provide meaningful insights into choosing an appropriate information function based on sampling objectives.
[ { "created": "Mon, 22 Nov 2021 17:47:44 GMT", "version": "v1" } ]
2021-11-23
[ [ "Munir", "Aiman", "" ], [ "Parasuraman", "Ramviyas", "" ] ]
Adaptive information sampling approaches enable efficient selection of a mobile robot's waypoints through which accurate sensing and mapping of a physical process, such as the radiation or field intensity, can be obtained. This paper analyzes the role of exploration and exploitation in such information-theoretic spatial sampling of environmental processes. We use Gaussian processes to make predictions with confidence bounds, thereby determining each point's informativeness in terms of exploration and exploitation. Specifically, we use a Gaussian process regression model to sample the Wi-Fi signal strength of the environment. For different variants of the informative function, we extensively analyze and evaluate the effectiveness and efficiency of information mapping through two different initial trajectories in both single-robot and multi-robot settings. The results provide meaningful insights into choosing an appropriate information function based on sampling objectives.
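A minimal sketch of one such informative function, assuming a UCB-style exploration/exploitation trade-off (the paper evaluates several variants; the kernel, the weight beta, and the 1-D setup here are illustrative assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(15, 1))                   # visited waypoints
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(15)  # e.g. Wi-Fi RSSI

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01))
gp.fit(X, y)

Xc = np.linspace(0, 10, 200).reshape(-1, 1)            # candidate waypoints
mu, std = gp.predict(Xc, return_std=True)              # mean and confidence

beta = 2.0                          # larger beta favours exploration
informativeness = mu + beta * std   # UCB-style score per candidate
next_waypoint = Xc[np.argmax(informativeness)]
print(next_waypoint)
```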
1710.00110
Sayed Hadi Hashemi
Sayed Hadi Hashemi, Faraz Faghri, Roy H Campbell
Decentralized User-Centric Access Control using PubSub over Blockchain
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a mechanism that puts users at the center of control and empowers them to dictate the access to their collections of data. Revisiting the fundamental mechanisms in security for providing protection, our solution uses capabilities, access lists, and access rights, following well-understood formal notions for reasoning about access. This contribution presents a practical, correct, auditable, transparent, distributed, and decentralized mechanism that is well-matched to current emerging environments, including the Internet of Things, smart cities, precision medicine, and autonomous cars. It is based on well-tested principles and practices used in distributed authorization, cryptocurrencies, and scalable computing.
[ { "created": "Fri, 29 Sep 2017 22:22:28 GMT", "version": "v1" } ]
2017-10-03
[ [ "Hashemi", "Sayed Hadi", "" ], [ "Faghri", "Faraz", "" ], [ "Campbell", "Roy H", "" ] ]
We present a mechanism that puts users at the center of control and empowers them to dictate the access to their collections of data. Revisiting the fundamental mechanisms in security for providing protection, our solution uses capabilities, access lists, and access rights, following well-understood formal notions for reasoning about access. This contribution presents a practical, correct, auditable, transparent, distributed, and decentralized mechanism that is well-matched to current emerging environments, including the Internet of Things, smart cities, precision medicine, and autonomous cars. It is based on well-tested principles and practices used in distributed authorization, cryptocurrencies, and scalable computing.
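A deliberately simplified, hypothetical sketch of the capability/access-list idea in the spirit described above (all names are illustrative assumptions; the actual mechanism runs over PubSub and a blockchain, which are omitted here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    principal: str     # who may act
    resource: str      # which data collection
    rights: frozenset  # e.g. {"read"} or {"read", "write"}

# The user's access list: capabilities the data owner has issued.
access_list = {
    Capability("analytics-svc", "health-records", frozenset({"read"})),
}

def authorize(principal: str, resource: str, right: str) -> bool:
    """Grant only if a matching capability exists in the owner's list."""
    return any(c.principal == principal and c.resource == resource
               and right in c.rights for c in access_list)

assert authorize("analytics-svc", "health-records", "read")
assert not authorize("analytics-svc", "health-records", "write")
```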
2101.06189
Samuel Yen-Chi Chen
Samuel Yen-Chi Chen, Tzu-Chieh Wei, Chao Zhang, Haiwang Yu, Shinjae Yoo
Hybrid Quantum-Classical Graph Convolutional Network
null
null
null
null
cs.LG cs.CV hep-ex physics.data-an quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The high energy physics (HEP) community has a long history of dealing with large-scale datasets. To manage such voluminous data, classical machine learning and deep learning techniques have been employed to accelerate physics discovery. Recent advances in quantum machine learning (QML) have indicated the potential of applying these techniques in HEP. However, only limited results on QML applications are currently available. In particular, the challenge of processing sparse data, common in HEP datasets, has not been extensively studied in QML models. This research provides a hybrid quantum-classical graph convolutional network (QGCNN) for learning HEP data. The proposed framework demonstrates an advantage over classical multilayer perceptron and convolutional neural networks in terms of the number of parameters. Moreover, in terms of testing accuracy, the QGCNN shows comparable performance to a quantum convolutional neural network on the same HEP dataset while requiring less than $50\%$ of the parameters. Based on numerical simulation results, studying the application of graph convolutional operations and other QML models may prove promising in advancing HEP research and other scientific fields.
[ { "created": "Fri, 15 Jan 2021 16:02:52 GMT", "version": "v1" } ]
2021-01-18
[ [ "Chen", "Samuel Yen-Chi", "" ], [ "Wei", "Tzu-Chieh", "" ], [ "Zhang", "Chao", "" ], [ "Yu", "Haiwang", "" ], [ "Yoo", "Shinjae", "" ] ]
The high energy physics (HEP) community has a long history of dealing with large-scale datasets. To manage such voluminous data, classical machine learning and deep learning techniques have been employed to accelerate physics discovery. Recent advances in quantum machine learning (QML) have indicated the potential of applying these techniques in HEP. However, only limited results on QML applications are currently available. In particular, the challenge of processing sparse data, common in HEP datasets, has not been extensively studied in QML models. This research provides a hybrid quantum-classical graph convolutional network (QGCNN) for learning HEP data. The proposed framework demonstrates an advantage over classical multilayer perceptron and convolutional neural networks in terms of the number of parameters. Moreover, in terms of testing accuracy, the QGCNN shows comparable performance to a quantum convolutional neural network on the same HEP dataset while requiring less than $50\%$ of the parameters. Based on numerical simulation results, studying the application of graph convolutional operations and other QML models may prove promising in advancing HEP research and other scientific fields.
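For background, the classical graph convolution that such hybrid models combine with quantum circuits can be sketched as the standard propagation rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); this is textbook material, not the paper's exact architecture, and the quantum component is omitted:

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One classical graph convolution with symmetric normalization."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-node path graph
H = np.random.default_rng(0).standard_normal((3, 4))    # node features
W = np.random.default_rng(1).standard_normal((4, 2))    # layer weights
print(gcn_layer(A, H, W).shape)  # (3, 2)
```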
2311.02648
Daksh Dave
Daksh Dave, Dhruv Khut, Sahil Nawale, Pushkar Aggrawal, Disha Rastogi and Kailas Devadkar
Drone-Enabled Load Management for Solar Small Cell Networks in Next-Gen Communications Optimization for Solar Small Cells
5 pages, 3 figures, 1 table, 1 algorithm
null
10.1109/COMNETSAT59769.2023.10420594
null
cs.NI cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
In recent years, the cellular industry has witnessed a major evolution in communication technologies. It is evident that the Next Generation of cellular networks (NGN) will play a pivotal role in the acceptance of emerging IoT applications supporting high data rates, better Quality of Service (QoS), and reduced latency. However, the deployment of NGN will introduce a power overhead on the communication infrastructure. Addressing the critical energy constraints in 5G and beyond, this study introduces an innovative load transfer method using drone-carried airborne base stations (BSs) for stable and secure power reallocation within a green micro-grid network. This method effectively manages energy deficits by transferring aerial BSs from high- to low-energy cells, depending on user density and the availability of aerial BSs, optimizing power distribution in advanced cellular networks. The complexity of the proposed system is significantly lower compared to the existing power cable transmission systems currently employed in powering the BSs. Furthermore, our proposed algorithm has been shown to reduce BS power outages while requiring a minimum number of drone exchanges. We have conducted a thorough evaluation on a real-world dataset to demonstrate the efficacy of our proposed approach in supporting BSs during times of high load demand.
[ { "created": "Sun, 5 Nov 2023 13:21:38 GMT", "version": "v1" } ]
2024-02-09
[ [ "Dave", "Daksh", "" ], [ "Khut", "Dhruv", "" ], [ "Nawale", "Sahil", "" ], [ "Aggrawal", "Pushkar", "" ], [ "Rastogi", "Disha", "" ], [ "Devadkar", "Kailas", "" ] ]
In recent years, the cellular industry has witnessed a major evolution in communication technologies. It is evident that the Next Generation of cellular networks (NGN) will play a pivotal role in the acceptance of emerging IoT applications supporting high data rates, better Quality of Service (QoS), and reduced latency. However, the deployment of NGN will introduce a power overhead on the communication infrastructure. Addressing the critical energy constraints in 5G and beyond, this study introduces an innovative load transfer method using drone-carried airborne base stations (BSs) for stable and secure power reallocation within a green micro-grid network. This method effectively manages energy deficits by transferring aerial BSs from high- to low-energy cells, depending on user density and the availability of aerial BSs, optimizing power distribution in advanced cellular networks. The complexity of the proposed system is significantly lower compared to the existing power cable transmission systems currently employed in powering the BSs. Furthermore, our proposed algorithm has been shown to reduce BS power outages while requiring a minimum number of drone exchanges. We have conducted a thorough evaluation on a real-world dataset to demonstrate the efficacy of our proposed approach in supporting BSs during times of high load demand.
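A hypothetical greedy sketch of the load-transfer idea, moving spare drones from energy-surplus cells to energy-deficit cells; all names, units, and the selection rule below are illustrative assumptions, not the paper's algorithm:

```python
def reallocate(energy, demand, drones):
    """Return a list of (src, dst) drone moves (illustrative sketch)."""
    moves = []
    deficit = sorted((c for c in energy if energy[c] < demand[c]),
                     key=lambda c: energy[c] - demand[c])  # worst-off first
    for dst in deficit:
        surplus = [c for c in energy
                   if energy[c] > demand[c] and drones.get(c, 0) > 0]
        if not surplus:
            break
        src = max(surplus, key=lambda c: energy[c] - demand[c])
        drones[src] -= 1
        drones[dst] = drones.get(dst, 0) + 1
        moves.append((src, dst))
    return moves

print(reallocate(energy={"A": 10, "B": 2, "C": 9},
                 demand={"A": 5, "B": 6, "C": 4},
                 drones={"A": 2, "B": 0, "C": 1}))  # [('A', 'B')]
```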
1904.10644
Yu Chen
Yu Chen and Tom Diethe and Neil Lawrence
Facilitating Bayesian Continual Learning by Natural Gradients and Stein Gradients
null
Continual Learning Workshop of 32nd Conference on Neural Information Processing Systems (NeurIPS 2018)
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continual learning aims to enable machine learning models to learn a general solution space for past and future tasks in a sequential manner. Conventional models tend to forget the knowledge of previous tasks while learning a new task, a phenomenon known as catastrophic forgetting. When using Bayesian models in continual learning, knowledge from previous tasks can be retained in two ways: (1) posterior distributions over the parameters, containing the knowledge gained from inference in previous tasks, which then serve as the priors for the following task; (2) coresets, containing knowledge of the data distributions of previous tasks. Here, we show that Bayesian continual learning can be facilitated in terms of these two means through the use of natural gradients and Stein gradients, respectively.
[ { "created": "Wed, 24 Apr 2019 05:18:32 GMT", "version": "v1" } ]
2019-04-25
[ [ "Chen", "Yu", "" ], [ "Diethe", "Tom", "" ], [ "Lawrence", "Neil", "" ] ]
Continual learning aims to enable machine learning models to learn a general solution space for past and future tasks in a sequential manner. Conventional models tend to forget the knowledge of previous tasks while learning a new task, a phenomenon known as catastrophic forgetting. When using Bayesian models in continual learning, knowledge from previous tasks can be retained in two ways: (1) posterior distributions over the parameters, containing the knowledge gained from inference in previous tasks, which then serve as the priors for the following task; (2) coresets, containing knowledge of the data distributions of previous tasks. Here, we show that Bayesian continual learning can be facilitated in terms of these two means through the use of natural gradients and Stein gradients, respectively.
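For reference, the two gradient tools named above, in their standard textbook forms (notation ours, not the paper's exact derivation):

```latex
% Natural-gradient step, preconditioned by the Fisher information F:
\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1}\, \nabla_\theta \mathcal{L}(\theta_t)

% Stein variational gradient on particles \{\theta_i\}_{i=1}^{n} targeting p(\theta):
\phi^*(\theta_i) = \frac{1}{n} \sum_{j=1}^{n}
  \left[ k(\theta_j, \theta_i)\, \nabla_{\theta_j} \log p(\theta_j)
       + \nabla_{\theta_j} k(\theta_j, \theta_i) \right]
```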
2105.09837
Junxiang Wang
Junxiang Wang, Hongyi Li, Zheng Chai, Yongchao Wang, Yue Cheng and Liang Zhao
Towards Quantized Model Parallelism for Graph-Augmented MLPs Based on Gradient-Free ADMM Framework
Accepted by the IEEE Transactions on Neural Networks and Learning Systems (TNNLS). arXiv admin note: substantial text overlap with arXiv:2009.02868
null
null
null
cs.LG cs.DC math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While Graph Neural Networks (GNNs) are popular in the deep learning community, they suffer from several challenges including over-smoothing, over-squashing, and gradient vanishing. Recently, a series of models have attempted to relieve these issues by first augmenting the node features and then imposing node-wise functions based on the Multi-Layer Perceptron (MLP), which are widely referred to as GA-MLP models. However, while GA-MLP models enjoy deeper architectures for better accuracy, their efficiency largely deteriorates. Moreover, popular acceleration techniques such as stochastic versions or data parallelism cannot be effectively applied due to the dependency among samples (i.e., nodes) in graphs. To address these issues, in this paper, instead of data parallelism, we propose a parallel graph deep learning Alternating Direction Method of Multipliers (pdADMM-G) framework to achieve model parallelism: parameters in each layer of GA-MLP models can be updated in parallel. The extended pdADMM-G-Q algorithm reduces communication costs by introducing the quantization technique. Theoretical convergence to a (quantized) stationary point of the pdADMM-G algorithm and the pdADMM-G-Q algorithm is provided with a sublinear convergence rate $o(1/k)$, where $k$ is the number of iterations. Extensive experiments demonstrate the convergence of the two proposed algorithms. Moreover, they lead to greater speedups and better performance than all state-of-the-art comparison methods on nine benchmark datasets. Last but not least, the proposed pdADMM-G-Q algorithm reduces communication overheads by up to $45\%$ without loss of performance. Our code is available at \url{https://github.com/xianggebenben/pdADMM-G}.
[ { "created": "Thu, 20 May 2021 15:37:42 GMT", "version": "v1" }, { "created": "Thu, 17 Nov 2022 02:23:25 GMT", "version": "v2" } ]
2022-11-18
[ [ "Wang", "Junxiang", "" ], [ "Li", "Hongyi", "" ], [ "Chai", "Zheng", "" ], [ "Wang", "Yongchao", "" ], [ "Cheng", "Yue", "" ], [ "Zhao", "Liang", "" ] ]
While Graph Neural Networks (GNNs) are popular in the deep learning community, they suffer from several challenges including over-smoothing, over-squashing, and gradient vanishing. Recently, a series of models have attempted to relieve these issues by first augmenting the node features and then imposing node-wise functions based on the Multi-Layer Perceptron (MLP), which are widely referred to as GA-MLP models. However, while GA-MLP models enjoy deeper architectures for better accuracy, their efficiency largely deteriorates. Moreover, popular acceleration techniques such as stochastic versions or data parallelism cannot be effectively applied due to the dependency among samples (i.e., nodes) in graphs. To address these issues, in this paper, instead of data parallelism, we propose a parallel graph deep learning Alternating Direction Method of Multipliers (pdADMM-G) framework to achieve model parallelism: parameters in each layer of GA-MLP models can be updated in parallel. The extended pdADMM-G-Q algorithm reduces communication costs by introducing the quantization technique. Theoretical convergence to a (quantized) stationary point of the pdADMM-G algorithm and the pdADMM-G-Q algorithm is provided with a sublinear convergence rate $o(1/k)$, where $k$ is the number of iterations. Extensive experiments demonstrate the convergence of the two proposed algorithms. Moreover, they lead to greater speedups and better performance than all state-of-the-art comparison methods on nine benchmark datasets. Last but not least, the proposed pdADMM-G-Q algorithm reduces communication overheads by up to $45\%$ without loss of performance. Our code is available at \url{https://github.com/xianggebenben/pdADMM-G}.
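For reference, the generic scaled-form ADMM template that such frameworks specialize per layer (the paper's exact splitting over GA-MLP layers differs; this is the standard two-block form):

```latex
% Scaled-form ADMM for  min_x f(x) + g(z)  subject to  Ax + Bz = c:
x^{k+1} = \arg\min_x \; f(x) + \tfrac{\rho}{2}\, \| A x + B z^{k} - c + u^{k} \|_2^2
z^{k+1} = \arg\min_z \; g(z) + \tfrac{\rho}{2}\, \| A x^{k+1} + B z - c + u^{k} \|_2^2
u^{k+1} = u^{k} + A x^{k+1} + B z^{k+1} - c
```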
1001.4181
Milan Derpich
Milan S. Derpich and Jan {\O}stergaard
Improved Upper Bounds to the Causal Quadratic Rate-Distortion Function for Gaussian Stationary Sources
47 pages, revised version submitted to IEEE Trans. Information Theory
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/3.0/
We improve the existing achievable rate regions for causal and for zero-delay source coding of stationary Gaussian sources under an average mean squared error (MSE) distortion measure. To begin with, we find a closed-form expression for the information-theoretic causal rate-distortion function (RDF) under such distortion measure, denoted by $R_c^{it}(D)$, for first-order Gauss-Markov processes. $R_c^{it}(D)$ is a lower bound to the optimal performance theoretically attainable (OPTA) by any causal source code, namely $R_c^{op}(D)$. We show that, for Gaussian sources, the latter can also be upper bounded as $R_c^{op}(D) \leq R_c^{it}(D) + 0.5 \log_2(2\pi e)$ bits/sample. In order to analyze $R_c^{it}(D)$ for arbitrary zero-mean Gaussian stationary sources, we introduce $\bar{R}_c^{it}(D)$, the information-theoretic causal RDF when the reconstruction error is jointly stationary with the source. Based upon $\bar{R}_c^{it}(D)$, we derive three closed-form upper bounds to the additive rate loss defined as $\bar{R}_c^{it}(D) - R(D)$, where $R(D)$ denotes Shannon's RDF. Two of these bounds are strictly smaller than 0.5 bits/sample at all rates. These bounds differ from one another in their tightness and ease of evaluation; the tighter the bound, the more involved its evaluation. We then show that, for any source spectral density and any positive distortion $D \leq \sigma_x^2$, $\bar{R}_c^{it}(D)$ can be realized by an AWGN channel surrounded by a unique set of causal pre-, post-, and feedback filters. We show that finding such filters constitutes a convex optimization problem. In order to solve the latter, we propose an iterative optimization procedure that yields the optimal filters and is guaranteed to converge to $\bar{R}_c^{it}(D)$. Finally, by establishing a connection to feedback quantization we design a causal and a zero-delay coding scheme which, for Gaussian sources, achieves...
[ { "created": "Sat, 23 Jan 2010 18:02:46 GMT", "version": "v1" }, { "created": "Mon, 2 May 2011 00:46:11 GMT", "version": "v2" } ]
2011-05-03
[ [ "Derpich", "Milan S.", "" ], [ "Østergaard", "Jan", "" ] ]
We improve the existing achievable rate regions for causal and for zero-delay source coding of stationary Gaussian sources under an average mean squared error (MSE) distortion measure. To begin with, we find a closed-form expression for the information-theoretic causal rate-distortion function (RDF) under such distortion measure, denoted by $R_c^{it}(D)$, for first-order Gauss-Markov processes. $R_c^{it}(D)$ is a lower bound to the optimal performance theoretically attainable (OPTA) by any causal source code, namely $R_c^{op}(D)$. We show that, for Gaussian sources, the latter can also be upper bounded as $R_c^{op}(D) \leq R_c^{it}(D) + 0.5 \log_2(2\pi e)$ bits/sample. In order to analyze $R_c^{it}(D)$ for arbitrary zero-mean Gaussian stationary sources, we introduce $\bar{R}_c^{it}(D)$, the information-theoretic causal RDF when the reconstruction error is jointly stationary with the source. Based upon $\bar{R}_c^{it}(D)$, we derive three closed-form upper bounds to the additive rate loss defined as $\bar{R}_c^{it}(D) - R(D)$, where $R(D)$ denotes Shannon's RDF. Two of these bounds are strictly smaller than 0.5 bits/sample at all rates. These bounds differ from one another in their tightness and ease of evaluation; the tighter the bound, the more involved its evaluation. We then show that, for any source spectral density and any positive distortion $D \leq \sigma_x^2$, $\bar{R}_c^{it}(D)$ can be realized by an AWGN channel surrounded by a unique set of causal pre-, post-, and feedback filters. We show that finding such filters constitutes a convex optimization problem. In order to solve the latter, we propose an iterative optimization procedure that yields the optimal filters and is guaranteed to converge to $\bar{R}_c^{it}(D)$. Finally, by establishing a connection to feedback quantization we design a causal and a zero-delay coding scheme which, for Gaussian sources, achieves...
1110.1075
Pantelis Bouboulis
Pantelis Bouboulis, Sergios Theodoridis, Michael Mavroforakis
The Augmented Complex Kernel LMS
manuscript submitted to IEEE Transactions on Signal Processing
null
10.1109/TSP.2012.2200479
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, a unified framework for adaptive kernel-based signal processing of complex data was presented by the authors, which, besides offering techniques to map the input data to complex Reproducing Kernel Hilbert Spaces, developed a suitable Wirtinger-like calculus for general Hilbert spaces. In this short paper, the extended Wirtinger's calculus is adopted to derive complex kernel-based widely linear estimation filters. Furthermore, we illuminate several important characteristics of the widely linear filters. We show that, although in many cases the gains from adopting widely linear estimation filters, as alternatives to ordinary linear ones, are rudimentary, for the case of kernel-based widely linear filters significant performance improvements can be obtained.
[ { "created": "Wed, 5 Oct 2011 19:03:35 GMT", "version": "v1" } ]
2015-05-30
[ [ "Bouboulis", "Pantelis", "" ], [ "Theodoridis", "Sergios", "" ], [ "Mavroforakis", "Michael", "" ] ]
Recently, a unified framework for adaptive kernel-based signal processing of complex data was presented by the authors, which, besides offering techniques to map the input data to complex Reproducing Kernel Hilbert Spaces, developed a suitable Wirtinger-like calculus for general Hilbert spaces. In this short paper, the extended Wirtinger's calculus is adopted to derive complex kernel-based widely linear estimation filters. Furthermore, we illuminate several important characteristics of the widely linear filters. We show that, although in many cases the gains from adopting widely linear estimation filters, as alternatives to ordinary linear ones, are rudimentary, for the case of kernel-based widely linear filters significant performance improvements can be obtained.
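For reference, the widely linear model referred to above augments the strictly linear estimator with a conjugate branch (standard notation, not specific to this paper):

```latex
% Strictly linear vs. widely linear complex-valued estimators:
\hat{y}_{\mathrm{L}}  = \mathbf{h}^{H} \mathbf{x}
\hat{y}_{\mathrm{WL}} = \mathbf{h}^{H} \mathbf{x} + \mathbf{g}^{H} \mathbf{x}^{*}
% The conjugate branch g^H x* captures improper (non-circular) statistics
% that a strictly linear filter cannot exploit.
```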
2101.05357
Mehrshad Zandigohar
Mehrshad Zandigohar, Mo Han, Deniz Erdogmus, and Gunar Schirner
Towards Creating a Deployable Grasp Type Probability Estimator for a Prosthetic Hand
null
CyPhy 2019, WESE 2019. Lecture Notes in Computer Science, vol 11971. Springer, Cham
10.1007/978-3-030-41131-2_3
null
cs.LG cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For lower-arm amputees, prosthetic hands promise to restore most physical interaction capabilities. This requires accurately predicting hand gestures capable of grabbing varying objects and executing them in a timely manner as intended by the user. Current approaches often rely on physiological signal inputs such as the Electromyography (EMG) signal from residual limb muscles to infer the intended motion. However, limited signal quality, user diversity and high variability adversely affect the system's robustness. Instead of solely relying on EMG signals, our work augments EMG intent inference with physical state probability through machine learning and computer vision methods. To this end, we: (1) study state-of-the-art deep neural network architectures to select a performant source of knowledge transfer for the prosthetic hand, (2) use a dataset containing object images and probability distributions of grasp types as a new form of labeling, where instead of using absolute values of zero and one as the conventional classification labels, our labels are a set of probabilities whose sum is 1. The proposed method generates probabilistic predictions which could be fused with EMG-based predictions of probabilities over grasps by using the visual information from the palm camera of a prosthetic hand. Our results demonstrate that InceptionV3 achieves the highest accuracy with 0.95 angular similarity, followed by MobileNetV2 (1.4) with 0.93 at roughly 20% of the operations.
[ { "created": "Wed, 13 Jan 2021 21:39:41 GMT", "version": "v1" } ]
2021-01-15
[ [ "Zandigohar", "Mehrshad", "" ], [ "Han", "Mo", "" ], [ "Erdogmus", "Deniz", "" ], [ "Schirner", "Gunar", "" ] ]
For lower-arm amputees, prosthetic hands promise to restore most physical interaction capabilities. This requires accurately predicting hand gestures capable of grabbing varying objects and executing them in a timely manner as intended by the user. Current approaches often rely on physiological signal inputs such as the Electromyography (EMG) signal from residual limb muscles to infer the intended motion. However, limited signal quality, user diversity and high variability adversely affect the system's robustness. Instead of solely relying on EMG signals, our work augments EMG intent inference with physical state probability through machine learning and computer vision methods. To this end, we: (1) study state-of-the-art deep neural network architectures to select a performant source of knowledge transfer for the prosthetic hand, (2) use a dataset containing object images and probability distributions of grasp types as a new form of labeling, where instead of using absolute values of zero and one as the conventional classification labels, our labels are a set of probabilities whose sum is 1. The proposed method generates probabilistic predictions which could be fused with EMG-based predictions of probabilities over grasps by using the visual information from the palm camera of a prosthetic hand. Our results demonstrate that InceptionV3 achieves the highest accuracy with 0.95 angular similarity, followed by MobileNetV2 (1.4) with 0.93 at roughly 20% of the operations.
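A minimal sketch of training against such probabilistic labels, assuming a KL-divergence objective between predicted and target grasp distributions (the loss choice, batch shapes, and the 6-class setup are illustrative assumptions, not the paper's exact recipe):

```python
import torch
import torch.nn.functional as F

# Hypothetical batch: logits over 6 grasp types from some backbone, and
# soft labels, i.e. probability distributions over grasps summing to 1.
logits = torch.randn(4, 6, requires_grad=True)
soft_labels = torch.softmax(torch.randn(4, 6), dim=1)

# With probabilistic labels, KL divergence between the predicted and
# target grasp distributions replaces one-hot cross-entropy.
loss = F.kl_div(F.log_softmax(logits, dim=1), soft_labels,
                reduction="batchmean")
loss.backward()
print(float(loss))
```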
2311.15855
Hsuan-I Ho
Hsuan-I Ho, Jie Song, Otmar Hilliges
SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion
23 pages, 23 figures, CVPR 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A long-standing goal of 3D human reconstruction is to create lifelike and fully detailed 3D humans from single-view images. The main challenge lies in inferring unknown body shapes, appearances, and clothing details in areas not visible in the images. To address this, we propose SiTH, a novel pipeline that uniquely integrates an image-conditioned diffusion model into a 3D mesh reconstruction workflow. At the core of our method lies the decomposition of the challenging single-view reconstruction problem into generative hallucination and reconstruction subproblems. For the former, we employ a powerful generative diffusion model to hallucinate unseen back-view appearance based on the input images. For the latter, we leverage skinned body meshes as guidance to recover full-body texture meshes from the input and back-view images. SiTH requires as few as 500 3D human scans for training while maintaining its generality and robustness to diverse images. Extensive evaluations on two 3D human benchmarks, including our newly created one, highlighted our method's superior accuracy and perceptual quality in 3D textured human reconstruction. Our code and evaluation benchmark are available at https://ait.ethz.ch/sith
[ { "created": "Mon, 27 Nov 2023 14:22:07 GMT", "version": "v1" }, { "created": "Sat, 30 Mar 2024 14:21:40 GMT", "version": "v2" } ]
2024-04-02
[ [ "Ho", "Hsuan-I", "" ], [ "Song", "Jie", "" ], [ "Hilliges", "Otmar", "" ] ]
A long-standing goal of 3D human reconstruction is to create lifelike and fully detailed 3D humans from single-view images. The main challenge lies in inferring unknown body shapes, appearances, and clothing details in areas not visible in the images. To address this, we propose SiTH, a novel pipeline that uniquely integrates an image-conditioned diffusion model into a 3D mesh reconstruction workflow. At the core of our method lies the decomposition of the challenging single-view reconstruction problem into generative hallucination and reconstruction subproblems. For the former, we employ a powerful generative diffusion model to hallucinate unseen back-view appearance based on the input images. For the latter, we leverage skinned body meshes as guidance to recover full-body texture meshes from the input and back-view images. SiTH requires as few as 500 3D human scans for training while maintaining its generality and robustness to diverse images. Extensive evaluations on two 3D human benchmarks, including our newly created one, highlighted our method's superior accuracy and perceptual quality in 3D textured human reconstruction. Our code and evaluation benchmark are available at https://ait.ethz.ch/sith
1901.02935
Nitish Kumar
Nitish Kumar, Stelian Coros
An optimization framework for simulation and kinematic control of Constrained Collaborative Mobile Agents (CCMA) system
8 pages, Accepted version, IROS 2019
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a concept of a constrained collaborative mobile agents (CCMA) system, which consists of multiple wheeled mobile agents constrained by a passive kinematic chain. This mobile robotic system is modular in nature: the passive kinematic chain can be easily replaced with different designs and morphologies for different functions and task adaptability. Depending solely on the actuation of the mobile agents, this mobile robotic system can manipulate or position an end-effector. However, the complexity of the system, due to the presence of several mobile agents, the passivity of the kinematic chain and the nature of the constrained collaborative manipulation, requires the development of an optimization framework. We therefore present an optimization framework for forward simulation and kinematic control of this system. With this optimization framework, the number of deployed mobile agents, the actuation schemes, and the design and morphology of the passive kinematic chain can be easily changed, which reinforces the modularity and collaborative aspects of the mobile robotic system. We present results, in simulation, for spatial 4-DOF to 6-DOF CCMA system examples. Finally, we present experimental quantitative results for two different fabricated 4-DOF prototypes, which demonstrate different actuation schemes, control and collaborative manipulation of an end-effector.
[ { "created": "Wed, 9 Jan 2019 21:18:46 GMT", "version": "v1" }, { "created": "Fri, 1 Mar 2019 08:35:42 GMT", "version": "v2" }, { "created": "Mon, 9 Sep 2019 19:41:07 GMT", "version": "v3" } ]
2019-09-11
[ [ "Kumar", "Nitish", "" ], [ "Coros", "Stelian", "" ] ]
We present a concept of a constrained collaborative mobile agents (CCMA) system, which consists of multiple wheeled mobile agents constrained by a passive kinematic chain. This mobile robotic system is modular in nature: the passive kinematic chain can be easily replaced with different designs and morphologies for different functions and task adaptability. Depending solely on the actuation of the mobile agents, this mobile robotic system can manipulate or position an end-effector. However, the complexity of the system, due to the presence of several mobile agents, the passivity of the kinematic chain and the nature of the constrained collaborative manipulation, requires the development of an optimization framework. We therefore present an optimization framework for forward simulation and kinematic control of this system. With this optimization framework, the number of deployed mobile agents, the actuation schemes, and the design and morphology of the passive kinematic chain can be easily changed, which reinforces the modularity and collaborative aspects of the mobile robotic system. We present results, in simulation, for spatial 4-DOF to 6-DOF CCMA system examples. Finally, we present experimental quantitative results for two different fabricated 4-DOF prototypes, which demonstrate different actuation schemes, control and collaborative manipulation of an end-effector.
2207.11025
Guillermo Gomez-Trenado
Guillermo Gomez-Trenado (1), St\'ephane Lathuili\`ere (2), Pablo Mesejo (1), \'Oscar Cord\'on (1) ((1) DaSCI research institute, DECSAI, University of Granada, Granada, Spain, (2) LTCI, T\'el\'ecom-Paris, Institut Polytechnique de Paris, Palaiseau, France)
Custom Structure Preservation in Face Aging
36 pages, 21 figures
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we propose a novel architecture for face age editing that can produce structural modifications while maintaining relevant details present in the original image. We disentangle the style and content of the input image and propose a new decoder network that adopts a style-based strategy to combine the style and content representations of the input image while conditioning the output on the target age. We go beyond existing aging methods by allowing users to adjust the degree of structure preservation in the input image during inference. To this end, we introduce a masking mechanism, the CUstom Structure Preservation (CUSP) module, that distinguishes relevant regions in the input image from those that should be discarded. CUSP requires no additional supervision. Finally, our quantitative and qualitative analyses, which include a user study, show that our method outperforms prior art and demonstrate the effectiveness of our strategy regarding image editing and adjustable structure preservation. Code and pretrained models are available at https://github.com/guillermogotre/CUSP.
[ { "created": "Fri, 22 Jul 2022 11:58:33 GMT", "version": "v1" } ]
2022-07-25
[ [ "Gomez-Trenado", "Guillermo", "" ], [ "Lathuilière", "Stéphane", "" ], [ "Mesejo", "Pablo", "" ], [ "Cordón", "Óscar", "" ] ]
In this work, we propose a novel architecture for face age editing that can produce structural modifications while maintaining relevant details present in the original image. We disentangle the style and content of the input image and propose a new decoder network that adopts a style-based strategy to combine the style and content representations of the input image while conditioning the output on the target age. We go beyond existing aging methods by allowing users to adjust the degree of structure preservation in the input image during inference. To this end, we introduce a masking mechanism, the CUstom Structure Preservation (CUSP) module, that distinguishes relevant regions in the input image from those that should be discarded. CUSP requires no additional supervision. Finally, our quantitative and qualitative analyses, which include a user study, show that our method outperforms prior art and demonstrate the effectiveness of our strategy regarding image editing and adjustable structure preservation. Code and pretrained models are available at https://github.com/guillermogotre/CUSP.
2003.06594
Yue Hu
Yue Hu, Siheng Chen, Ya Zhang, and Xiao Gu
Collaborative Motion Prediction via Neural Motion Message Passing
Accepted by CVPR 2020 Oral
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion prediction is essential and challenging for autonomous vehicles and social robots. One challenge of motion prediction is to model the interaction among traffic actors, which may cooperate with each other to avoid collisions or form groups. To address this challenge, we propose neural motion message passing (NMMP) to explicitly model the interaction and learn representations for directed interactions between actors. Based on the proposed NMMP, we design motion prediction systems for two settings: the pedestrian setting and the joint pedestrian and vehicle setting. Both systems share a common pattern: we use an individual branch to model the behavior of a single actor and an interactive branch to model the interaction between actors, but with different wrappers to handle the varied input formats and characteristics. The experimental results show that both systems outperform the previous state-of-the-art methods on several existing benchmarks. Besides, we provide interpretability for interaction learning.
[ { "created": "Sat, 14 Mar 2020 10:12:54 GMT", "version": "v1" } ]
2020-03-17
[ [ "Hu", "Yue", "" ], [ "Chen", "Siheng", "" ], [ "Zhang", "Ya", "" ], [ "Gu", "Xiao", "" ] ]
Motion prediction is essential and challenging for autonomous vehicles and social robots. One challenge of motion prediction is to model the interaction among traffic actors, which may cooperate with each other to avoid collisions or form groups. To address this challenge, we propose neural motion message passing (NMMP) to explicitly model the interaction and learn representations for directed interactions between actors. Based on the proposed NMMP, we design motion prediction systems for two settings: the pedestrian setting and the joint pedestrian and vehicle setting. Both systems share a common pattern: we use an individual branch to model the behavior of a single actor and an interactive branch to model the interaction between actors, but with different wrappers to handle the varied input formats and characteristics. The experimental results show that both systems outperform the previous state-of-the-art methods on several existing benchmarks. Besides, we provide interpretability for interaction learning.
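For reference, the generic message-passing pattern that NMMP instantiates, with learned functions over directed actor interactions (generic notation, not the paper's exact equations):

```latex
% Generic neural message passing over directed interactions (j -> i):
e_{j \to i}^{\,t+1} = \phi_e\!\left( h_j^{t},\, h_i^{t},\, e_{j \to i}^{\,t} \right)
h_i^{t+1} = \phi_h\!\left( h_i^{t},\, \sum_{j \in \mathcal{N}(i)} e_{j \to i}^{\,t+1} \right)
% h_i: actor embedding; e_{j->i}: directed interaction embedding;
% phi_e, phi_h: learned update functions.
```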
1503.07405
Arkaitz Zubiaga
Bo Wang, Arkaitz Zubiaga, Maria Liakata, Rob Procter
Making the Most of Tweet-Inherent Features for Social Spam Detection on Twitter
null
null
null
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social spam produces a great amount of noise on social media services such as Twitter, which reduces the signal-to-noise ratio that both end users and data mining applications observe. Existing techniques on social spam detection have focused primarily on the identification of spam accounts by using extensive historical and network-based data. In this paper we focus on the detection of spam tweets, which optimises the amount of data that needs to be gathered by relying only on tweet-inherent features. This enables the application of the spam detection system to a large set of tweets in a timely fashion, potentially applicable in a real-time or near real-time setting. Using two large hand-labelled datasets of tweets containing spam, we study the suitability of five classification algorithms and four different feature sets for the social spam detection task. Our results show that, by using the limited set of features readily available in a tweet, we can achieve encouraging results which are competitive when compared against existing spammer detection systems that make use of additional, costly user features. Our study is the first that attempts to generalise conclusions on the optimal classifiers and sets of features for social spam detection over different datasets.
[ { "created": "Wed, 25 Mar 2015 14:58:59 GMT", "version": "v1" } ]
2015-03-26
[ [ "Wang", "Bo", "" ], [ "Zubiaga", "Arkaitz", "" ], [ "Liakata", "Maria", "" ], [ "Procter", "Rob", "" ] ]
Social spam produces a great amount of noise on social media services such as Twitter, which reduces the signal-to-noise ratio that both end users and data mining applications observe. Existing techniques on social spam detection have focused primarily on the identification of spam accounts by using extensive historical and network-based data. In this paper we focus on the detection of spam tweets, which optimises the amount of data that needs to be gathered by relying only on tweet-inherent features. This enables the application of the spam detection system to a large set of tweets in a timely fashion, potentially applicable in a real-time or near real-time setting. Using two large hand-labelled datasets of tweets containing spam, we study the suitability of five classification algorithms and four different feature sets for the social spam detection task. Our results show that, by using the limited set of features readily available in a tweet, we can achieve encouraging results which are competitive when compared against existing spammer detection systems that make use of additional, costly user features. Our study is the first that attempts to generalise conclusions on the optimal classifiers and sets of features for social spam detection over different datasets.
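A minimal sketch of one classifier/feature-set combination of the kind evaluated, assuming TF-IDF word n-grams from the tweet text alone (a tweet-inherent feature set) with logistic regression; the paper's exact five classifiers and four feature sets may differ, and the toy data here is illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative sample; the paper uses two large hand-labelled corpora.
tweets = ["win a FREE iphone now click http://spam.example",
          "great meeting you at the conference today",
          "CHEAP followers, buy now!!!",
          "the match starts at 8pm, see you there"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

# Text-only features: no user history or network data is required.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["free followers click here"]))
```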
1905.09027
Ji Feng
Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou
Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we consider one challenging training-time attack that modifies training data with bounded perturbations, hoping to manipulate the behavior (both targeted and non-targeted) of any corresponding trained classifier at test time when facing clean samples. To achieve this, we propose to use an auto-encoder-like network to generate the perturbation on the training data, paired with one differentiable system acting as the imaginary victim classifier. The perturbation generator learns to update its weights by watching the training procedure of the imaginary classifier, in order to produce the most harmful and imperceivable noise, which in turn leads to the lowest generalization power for the victim classifier. This can be formulated as a non-linear equality-constrained optimization problem. Unlike GANs, solving such a problem is computationally challenging, so we propose a simple yet effective procedure to decouple the alternating updates for the two networks for stability. The method proposed in this paper can be easily extended to the label-specific setting, where the attacker can manipulate the predictions of the victim classifiers according to some predefined rules rather than only making wrong predictions. Experiments on various datasets including CIFAR-10 and a reduced version of ImageNet confirmed the effectiveness of the proposed method, and empirical results showed that such bounded perturbations have good transferability regardless of which classifier the victim is actually using on image data.
[ { "created": "Wed, 22 May 2019 09:06:40 GMT", "version": "v1" } ]
2019-05-23
[ [ "Feng", "Ji", "" ], [ "Cai", "Qi-Zhi", "" ], [ "Zhou", "Zhi-Hua", "" ] ]
In this work, we consider one challenging training-time attack that modifies training data with bounded perturbations, hoping to manipulate the behavior (both targeted and non-targeted) of any corresponding trained classifier at test time when facing clean samples. To achieve this, we propose to use an auto-encoder-like network to generate the perturbation on the training data, paired with one differentiable system acting as the imaginary victim classifier. The perturbation generator learns to update its weights by watching the training procedure of the imaginary classifier, in order to produce the most harmful and imperceivable noise, which in turn leads to the lowest generalization power for the victim classifier. This can be formulated as a non-linear equality-constrained optimization problem. Unlike GANs, solving such a problem is computationally challenging, so we propose a simple yet effective procedure to decouple the alternating updates for the two networks for stability. The method proposed in this paper can be easily extended to the label-specific setting, where the attacker can manipulate the predictions of the victim classifiers according to some predefined rules rather than only making wrong predictions. Experiments on various datasets including CIFAR-10 and a reduced version of ImageNet confirmed the effectiveness of the proposed method, and empirical results showed that such bounded perturbations have good transferability regardless of which classifier the victim is actually using on image data.
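Such attacks admit a standard bilevel formulation (notation ours, not the paper's exact constrained formulation), which makes the optimization problem mentioned above explicit:

```latex
% Bounded training-set perturbations \xi_i that maximize the victim's
% loss on clean test data:
\max_{\|\xi_i\|_\infty \le \epsilon} \;
  \mathbb{E}_{(x,y) \sim \mathcal{D}_{\mathrm{test}}}
  \left[ \mathcal{L}\!\left( f_{\theta^*(\xi)}(x),\, y \right) \right]
\quad \text{s.t.} \quad
\theta^*(\xi) = \arg\min_{\theta} \sum_i \mathcal{L}\!\left( f_\theta(x_i + \xi_i),\, y_i \right)
```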
1503.04426
Carlo Comin
Carlo Comin, Romeo Rizzi
An Improved Pseudo-Polynomial Upper Bound for the Value Problem and Optimal Strategy Synthesis in Mean Payoff Games
null
null
null
null
cs.DS cs.CC cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we offer an $O(|V|^2 |E|\, W)$ pseudo-polynomial time deterministic algorithm for solving the Value Problem and Optimal Strategy Synthesis in Mean Payoff Games. This improves by a factor $\log(|V|\, W)$ the best previously known pseudo-polynomial time upper bound due to Brim et al. The improvement hinges on a suitable characterization of values, and a description of optimal positional strategies, in terms of reweighted Energy Games and Small Energy-Progress Measures.
[ { "created": "Sun, 15 Mar 2015 13:48:06 GMT", "version": "v1" }, { "created": "Mon, 13 Apr 2015 09:09:21 GMT", "version": "v2" }, { "created": "Wed, 15 Apr 2015 10:08:35 GMT", "version": "v3" }, { "created": "Mon, 20 Apr 2015 20:57:29 GMT", "version": "v4" }, { "created": "Sun, 20 Dec 2015 16:18:38 GMT", "version": "v5" }, { "created": "Wed, 23 Dec 2015 20:43:43 GMT", "version": "v6" }, { "created": "Sun, 24 Apr 2016 16:35:44 GMT", "version": "v7" } ]
2016-04-26
[ [ "Comin", "Carlo", "" ], [ "Rizzi", "Romeo", "" ] ]
In this work we offer an $O(|V|^2 |E|\, W)$ pseudo-polynomial time deterministic algorithm for solving the Value Problem and Optimal Strategy Synthesis in Mean Payoff Games. This improves by a factor $\log(|V|\, W)$ the best previously known pseudo-polynomial time upper bound due to Brim et al. The improvement hinges on a suitable characterization of values, and a description of optimal positional strategies, in terms of reweighted Energy Games and Small Energy-Progress Measures.
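For reference, the quantity computed by the Value Problem is the standard mean payoff objective (Max maximizes and Min minimizes the long-run average edge weight; standard notation, not the paper's):

```latex
% Value of vertex v in a Mean Payoff Game with integer weights |w(e)| <= W:
\mathrm{val}(v) = \sup_{\sigma} \inf_{\tau} \;
  \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} w(e_i)
% where e_0 e_1 \dots is the play from v under strategies sigma (Max)
% and tau (Min).
```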
2302.14500
Chong Fu
Chong Fu, Xuhong Zhang, Shouling Ji, Ting Wang, Peng Lin, Yanghe Feng, Jianwei Yin
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases
Accepted by USENIX Security 2023
null
null
null
cs.CR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A trojan attack on deep neural networks, also known as a backdoor attack, is a typical threat to artificial intelligence. A trojaned neural network behaves normally with clean inputs. However, if the input contains a particular trigger, the trojaned model exhibits attacker-chosen abnormal behavior. Although many backdoor detection methods exist, most of them assume that the defender has access to a set of clean validation samples or samples with the trigger, which may not hold in some crucial real-world cases, e.g., the case where the defender is the maintainer of a model-sharing platform. Thus, in this paper, we propose FreeEagle, the first data-free backdoor detection method that can effectively detect complex backdoor attacks on deep neural networks without relying on access to any clean samples or samples with the trigger. The evaluation results on diverse datasets and model architectures show that FreeEagle is effective against various complex backdoor attacks, even outperforming some state-of-the-art non-data-free backdoor detection methods.
[ { "created": "Tue, 28 Feb 2023 11:31:29 GMT", "version": "v1" } ]
2023-03-01
[ [ "Fu", "Chong", "" ], [ "Zhang", "Xuhong", "" ], [ "Ji", "Shouling", "" ], [ "Wang", "Ting", "" ], [ "Lin", "Peng", "" ], [ "Feng", "Yanghe", "" ], [ "Yin", "Jianwei", "" ] ]
A trojan attack on deep neural networks, also known as a backdoor attack, is a typical threat to artificial intelligence. A trojaned neural network behaves normally with clean inputs. However, if the input contains a particular trigger, the trojaned model exhibits attacker-chosen abnormal behavior. Although many backdoor detection methods exist, most of them assume that the defender has access to a set of clean validation samples or samples with the trigger, which may not hold in some crucial real-world cases, e.g., the case where the defender is the maintainer of a model-sharing platform. Thus, in this paper, we propose FreeEagle, the first data-free backdoor detection method that can effectively detect complex backdoor attacks on deep neural networks without relying on access to any clean samples or samples with the trigger. The evaluation results on diverse datasets and model architectures show that FreeEagle is effective against various complex backdoor attacks, even outperforming some state-of-the-art non-data-free backdoor detection methods.
2010.06283
Hendrik Schuff
Hendrik Schuff, Heike Adel, Ngoc Thang Vu
F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering
EMNLP 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explainable question answering systems predict an answer together with an explanation showing why the answer has been selected. The goal is to enable users to assess the correctness of the system and understand its reasoning process. However, we show that current models and evaluation settings have shortcomings regarding the coupling of answer and explanation which might cause serious issues in user experience. As a remedy, we propose a hierarchical model and a new regularization term to strengthen the answer-explanation coupling as well as two evaluation scores to quantify the coupling. We conduct experiments on the HotpotQA benchmark dataset and perform a user study. The user study shows that our models increase the ability of the users to judge the correctness of the system and that scores like F1 are not enough to estimate the usefulness of a model in a practical setting with human users. Our scores are better aligned with user experience, making them promising candidates for model selection.
[ { "created": "Tue, 13 Oct 2020 10:53:20 GMT", "version": "v1" } ]
2020-10-14
[ [ "Schuff", "Hendrik", "" ], [ "Adel", "Heike", "" ], [ "Vu", "Ngoc Thang", "" ] ]
Explainable question answering systems predict an answer together with an explanation showing why the answer has been selected. The goal is to enable users to assess the correctness of the system and understand its reasoning process. However, we show that current models and evaluation settings have shortcomings regarding the coupling of answer and explanation which might cause serious issues in user experience. As a remedy, we propose a hierarchical model and a new regularization term to strengthen the answer-explanation coupling as well as two evaluation scores to quantify the coupling. We conduct experiments on the HotpotQA benchmark dataset and perform a user study. The user study shows that our models increase the ability of the users to judge the correctness of the system and that scores like F1 are not enough to estimate the usefulness of a model in a practical setting with human users. Our scores are better aligned with user experience, making them promising candidates for model selection.
2404.04884
Huan Zhong
Huan Zhong and Chen Wu and Ziqi Xiao
LRNet: Change detection of high-resolution remote sensing imagery via strategy of localization-then-refinement
18 pages, 11 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Change detection, as a research hotspot in the field of remote sensing, has witnessed continuous development and progress. However, the discrimination of boundary details remains a significant bottleneck due to the complexity of the elements surrounding change areas and backgrounds. The boundaries of large change areas tend to be misaligned, while the boundaries of small change targets tend to be incorrectly connected. To address the above issues, a novel network based on the localization-then-refinement strategy is proposed in this paper, namely LRNet. LRNet consists of two stages: localization and refinement. In the localization stage, a three-branch encoder simultaneously extracts original image features and their differential features for interactive localization of the position of each change area. To minimize information loss during feature extraction, learnable optimal pooling (LOP) is proposed to replace the widely used max-pooling. Additionally, this process is trainable and contributes to the overall optimization of the network. To enable effective interaction among features from different branches and accurately locate change areas of various sizes, change alignment attention (C2A) and a hierarchical change alignment module (HCA) are proposed. In the refinement stage, the localization results from the localization stage are corrected by constraining the change areas and change edges through the edge-area alignment module (E2A). Subsequently, the decoder, combined with the difference features strengthened by C2A in the localization phase, refines change areas of different sizes, ultimately achieving accurate boundary discrimination. The proposed LRNet outperforms 13 other state-of-the-art methods in terms of comprehensive evaluation metrics and provides the most precise boundary discrimination results on the LEVIR-CD and WHU-CD datasets.
[ { "created": "Sun, 7 Apr 2024 09:05:04 GMT", "version": "v1" } ]
2024-04-09
[ [ "Zhong", "Huan", "" ], [ "Wu", "Chen", "" ], [ "Xiao", "Ziqi", "" ] ]
Change detection, as a research hotspot in the field of remote sensing, has witnessed continuous development and progress. However, the discrimination of boundary details remains a significant bottleneck due to the complexity of the surrounding elements between change areas and backgrounds. Discriminating the boundaries of large change areas results in misalignment, while the boundaries of small change targets tend to be erroneously connected. To address the above issues, a novel network based on the localization-then-refinement strategy is proposed in this paper, namely LRNet. LRNet consists of two stages: localization and refinement. In the localization stage, a three-branch encoder simultaneously extracts original image features and their differential features for interactive localization of the position of each change area. To minimize information loss during feature extraction, learnable optimal pooling (LOP) is proposed to replace the widely used max-pooling. Additionally, this process is trainable and contributes to the overall optimization of the network. To enable effective interaction between features from different branches and to accurately locate change areas of various sizes, the change alignment attention (C2A) and the hierarchical change alignment module (HCA) are proposed. In the refinement stage, the localization results from the localization stage are corrected by constraining the change areas and change edges through the edge-area alignment module (E2A). Subsequently, the decoder, combined with the difference features strengthened by C2A in the localization stage, refines change areas of different sizes, ultimately achieving accurate boundary discrimination of change areas. The proposed LRNet outperforms 13 other state-of-the-art methods in terms of comprehensive evaluation metrics and provides the most precise boundary discrimination results on the LEVIR-CD and WHU-CD datasets.
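The abstract above does not spell out the form of learnable optimal pooling (LOP), so the following PyTorch sketch is only a guess at what a trainable replacement for max-pooling can look like: a softmax-weighted 2x2 pooling whose learnable temperature interpolates between average-pooling (temperature near zero) and max-pooling (large temperature). The module name and all hyperparameters are hypothetical, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableSoftPool2d(nn.Module):
    """Softmax-weighted pooling with a learnable temperature.

    As the temperature grows the operator approaches max-pooling; near
    zero it approaches average-pooling. Purely an illustrative stand-in
    for a trainable pooling layer such as LRNet's LOP.
    """

    def __init__(self, kernel_size: int = 2):
        super().__init__()
        self.k = kernel_size
        self.temperature = nn.Parameter(torch.ones(1))  # learnable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Unfold k x k windows: (B, C*k*k, L) -> (B, C, k*k, L)
        patches = F.unfold(x, self.k, stride=self.k)
        patches = patches.view(b, c, self.k * self.k, -1)
        weights = torch.softmax(self.temperature * patches, dim=2)
        pooled = (weights * patches).sum(dim=2)          # (B, C, L)
        return pooled.view(b, c, h // self.k, w // self.k)

x = torch.randn(1, 8, 32, 32)
print(LearnableSoftPool2d()(x).shape)  # torch.Size([1, 8, 16, 16])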
2011.09393
Nurislam Tursynbek
Nurislam Tursynbek, Ilya Vilkoviskiy, Maria Sindeeva, Ivan Oseledets
Adversarial Turing Patterns from Cellular Automata
Published as a conference paper at AAAI 2021 (camera-ready version)
null
null
null
cs.NE cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art deep classifiers are intriguingly vulnerable to universal adversarial perturbations: single disturbances of small magnitude that lead to misclassification of most inputs. This phenomenon may potentially result in a serious security problem. Despite the extensive research in this area, there is a lack of theoretical understanding of the structure of these perturbations. In the image domain, there is a certain visual similarity between the patterns that represent these perturbations and classical Turing patterns, which appear as solutions of non-linear partial differential equations and are the underlying concept of many processes in nature. In this paper, we provide a theoretical bridge between these two different theories by mapping a simplified algorithm for crafting universal perturbations to (inhomogeneous) cellular automata; the latter are known to generate Turing patterns. Furthermore, we propose to use Turing patterns, generated by cellular automata, as universal perturbations, and experimentally show that they significantly degrade the performance of deep learning models. We found this method to be a fast and efficient way to create a data-agnostic quasi-imperceptible perturbation in the black-box scenario. The source code is available at https://github.com/NurislamT/advTuring.
[ { "created": "Wed, 18 Nov 2020 16:50:54 GMT", "version": "v1" }, { "created": "Mon, 8 Feb 2021 07:51:43 GMT", "version": "v2" }, { "created": "Tue, 6 Apr 2021 08:59:06 GMT", "version": "v3" } ]
2021-04-07
[ [ "Tursynbek", "Nurislam", "" ], [ "Vilkoviskiy", "Ilya", "" ], [ "Sindeeva", "Maria", "" ], [ "Oseledets", "Ivan", "" ] ]
State-of-the-art deep classifiers are intriguingly vulnerable to universal adversarial perturbations: single disturbances of small magnitude that lead to misclassification of most inputs. This phenomenon may potentially result in a serious security problem. Despite the extensive research in this area, there is a lack of theoretical understanding of the structure of these perturbations. In the image domain, there is a certain visual similarity between the patterns that represent these perturbations and classical Turing patterns, which appear as solutions of non-linear partial differential equations and are the underlying concept of many processes in nature. In this paper, we provide a theoretical bridge between these two different theories by mapping a simplified algorithm for crafting universal perturbations to (inhomogeneous) cellular automata; the latter are known to generate Turing patterns. Furthermore, we propose to use Turing patterns, generated by cellular automata, as universal perturbations, and experimentally show that they significantly degrade the performance of deep learning models. We found this method to be a fast and efficient way to create a data-agnostic quasi-imperceptible perturbation in the black-box scenario. The source code is available at https://github.com/NurislamT/advTuring.
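As background for the claimed link, the NumPy sketch below generates classical Turing-like patterns by iterating the Gray-Scott reaction-diffusion model, whose local grid update can be read as a real-valued cellular automaton. The parameter values are common textbook choices and are not taken from the paper, which crafts its perturbations differently.

import numpy as np

def laplacian(Z):
    # 5-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
            + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """Generate a Turing-like pattern with the Gray-Scott model."""
    U = np.ones((n, n))
    V = np.zeros((n, n))
    r = n // 10                                   # seed a perturbed square
    U[n//2-r:n//2+r, n//2-r:n//2+r] = 0.50
    V[n//2-r:n//2+r, n//2-r:n//2+r] = 0.25
    V += 0.01 * np.random.rand(n, n)
    for _ in range(steps):
        UVV = U * V * V
        U += Du * laplacian(U) - UVV + F * (1 - U)
        V += Dv * laplacian(V) + UVV - (F + k) * V
    return V                                      # stripes/spots depending on (F, k)

pattern = gray_scott()
print(pattern.shape, pattern.min(), pattern.max())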
1712.01770
Ricardo Borsoi
Ricardo Augusto Borsoi, Tales Imbiriba, Jos\'e Carlos Moreira Bermudez, C\'edric Richard
Tech Report: A Fast Multiscale Spatial Regularization for Sparse Hyperspectral Unmixing
null
null
10.1109/LGRS.2018.2878394
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sparse hyperspectral unmixing from large spectral libraries has been considered to circumvent limitations of endmember extraction algorithms in many applications. This strategy often leads to ill-posed inverse problems, which can benefit from spatial regularization strategies. While existing spatial regularization methods improve the problem conditioning and promote piecewise smooth solutions, they lead to large nonsmooth optimization problems. Thus, efficiently introducing spatial context in the unmixing problem remains a challenge, and a necessity for many real-world applications. In this paper, a novel multiscale spatial regularization approach for sparse unmixing is proposed. The method uses a signal-adaptive spatial multiscale decomposition based on superpixels to decompose the unmixing problem into two simpler problems, one in the approximation domain and another in the original domain. Simulation results using both synthetic and real data indicate that the proposed method can outperform state-of-the-art Total Variation-based algorithms with a computation time comparable to that of their unregularized counterparts.
[ { "created": "Tue, 5 Dec 2017 17:24:54 GMT", "version": "v1" }, { "created": "Wed, 3 Oct 2018 03:28:50 GMT", "version": "v2" }, { "created": "Thu, 25 Oct 2018 22:21:08 GMT", "version": "v3" } ]
2018-10-29
[ [ "Borsoi", "Ricardo Augusto", "" ], [ "Imbiriba", "Tales", "" ], [ "Bermudez", "José Carlos Moreira", "" ], [ "Richard", "Cédric", "" ] ]
Sparse hyperspectral unmixing from large spectral libraries has been considered to circumvent limitations of endmember extraction algorithms in many applications. This strategy often leads to ill-posed inverse problems, which can benefit from spatial regularization strategies. While existing spatial regularization methods improve the problem conditioning and promote piecewise smooth solutions, they lead to large nonsmooth optimization problems. Thus, efficiently introducing spatial context in the unmixing problem remains a challenge, and a necessity for many real-world applications. In this paper, a novel multiscale spatial regularization approach for sparse unmixing is proposed. The method uses a signal-adaptive spatial multiscale decomposition based on superpixels to decompose the unmixing problem into two simpler problems, one in the approximation domain and another in the original domain. Simulation results using both synthetic and real data indicate that the proposed method can outperform state-of-the-art Total Variation-based algorithms with a computation time comparable to that of their unregularized counterparts.
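For context, the core computation underneath sparse unmixing is an l1-regularized regression per pixel. The sketch below solves it with plain ISTA in NumPy; it deliberately omits the paper's superpixel-based multiscale decomposition, and the toy library, step size, and regularization weight are illustrative only.

import numpy as np

def ista_unmix(A, y, lam=0.05, step=None, iters=500):
    """Sparse abundances: min_x 0.5*||Ax - y||^2 + lam*||x||_1 via ISTA.

    A: (bands, library_size) spectral library; y: one pixel's spectrum.
    This is only the per-pixel sparse regression at the heart of sparse
    unmixing, not the paper's multiscale spatial regularization.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = sigma_max(A)^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 60))               # toy spectral library
x_true = np.zeros(60); x_true[[3, 17]] = [0.6, 0.4]
y = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = ista_unmix(A, y)
print(np.argsort(-np.abs(x_hat))[:2])            # indices of the two largest abundances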
2005.00181
Yi Luan
Yi Luan, Jacob Eisenstein, Kristina Toutanova, Michael Collins
Sparse, Dense, and Attentional Representations for Text Retrieval
To appear in TACL 2020. The arXiv version is a pre-MIT Press publication version
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dual encoders perform retrieval by encoding documents and queries into dense low-dimensional vectors, scoring each document by its inner product with the query. We investigate the capacity of this architecture relative to sparse bag-of-words models and attentional neural networks. Using both theoretical and empirical analysis, we establish connections between the encoding dimension, the margin between gold and lower-ranked documents, and the document length, suggesting limitations in the capacity of fixed-length encodings to support precise retrieval of long documents. Building on these insights, we propose a simple neural model that combines the efficiency of dual encoders with some of the expressiveness of more costly attentional architectures, and explore sparse-dense hybrids to capitalize on the precision of sparse retrieval. These models outperform strong alternatives in large-scale retrieval.
[ { "created": "Fri, 1 May 2020 02:21:17 GMT", "version": "v1" }, { "created": "Wed, 14 Oct 2020 19:12:42 GMT", "version": "v2" }, { "created": "Tue, 16 Feb 2021 23:18:25 GMT", "version": "v3" } ]
2021-02-18
[ [ "Luan", "Yi", "" ], [ "Eisenstein", "Jacob", "" ], [ "Toutanova", "Kristina", "" ], [ "Collins", "Michael", "" ] ]
Dual encoders perform retrieval by encoding documents and queries into dense low-dimensional vectors, scoring each document by its inner product with the query. We investigate the capacity of this architecture relative to sparse bag-of-words models and attentional neural networks. Using both theoretical and empirical analysis, we establish connections between the encoding dimension, the margin between gold and lower-ranked documents, and the document length, suggesting limitations in the capacity of fixed-length encodings to support precise retrieval of long documents. Building on these insights, we propose a simple neural model that combines the efficiency of dual encoders with some of the expressiveness of more costly attentional architectures, and explore sparse-dense hybrids to capitalize on the precision of sparse retrieval. These models outperform strong alternatives in large-scale retrieval.
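A minimal sketch of the dual-encoder scoring described in the first sentence, with toy bag-of-embeddings encoders standing in for the BERT-style encoders such systems typically use; the fixed size dim is exactly the encoding dimension whose capacity the paper analyzes relative to document length.

import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Minimal dual-encoder retriever: queries and documents are mapped
    to fixed-length vectors and scored by inner product. The mean-pooled
    embedding encoders are illustrative, not the paper's architecture.
    """

    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.q_emb = nn.EmbeddingBag(vocab_size, dim)   # default mode='mean'
        self.d_emb = nn.EmbeddingBag(vocab_size, dim)

    def score(self, q_tokens, d_tokens):
        q = self.q_emb(q_tokens)             # (num_queries, dim)
        d = self.d_emb(d_tokens)             # (num_docs, dim)
        return q @ d.t()                     # inner-product score matrix

model = DualEncoder()
queries = torch.randint(0, 30522, (2, 8))    # 2 queries, 8 tokens each
docs = torch.randint(0, 30522, (5, 64))      # 5 documents, 64 tokens each
print(model.score(queries, docs).shape)      # torch.Size([2, 5])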
1804.09601
Conor O Malley
Conor O'Malley, Drosos Kourounis, Gabriela Hug and Olaf Schenk
Optimizing gas networks using adjoint gradients
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An increasing number of gas-fired power plants are currently being installed in modern power grids worldwide. This is due to their low cost and the inherent flexibility they offer to the electrical network, particularly in the face of increasing renewable generation. However, the integration and operation of gas generators poses additional challenges to gas network operators, mainly because they can induce rapid changes in the demand. This paper presents an efficient minimization scheme of gas compression costs under dynamic conditions where deliveries to customers are described by time-dependent mass flows. The optimization scheme comprises a set of transient nonlinear partial differential equations that model the isothermal gas flow in pipes, an adjoint problem for the efficient calculation of the objective gradients and constraint Jacobians, and state-of-the-art optimal control methods for solving nonlinear programs. As the evaluation of constraint Jacobians can become computationally costly as the number of constraints increases, efficient constraint lumping schemes are proposed and investigated with respect to accuracy and performance. The resulting optimal control problems are solved using both interior-point and sequential quadratic programming methods. The proposed optimization framework is validated through several benchmark cases of increasing complexity.
[ { "created": "Wed, 25 Apr 2018 14:40:54 GMT", "version": "v1" } ]
2018-04-26
[ [ "O'Malley", "Conor", "" ], [ "Kourounis", "Drosos", "" ], [ "Hug", "Gabriela", "" ], [ "Schenk", "Olaf", "" ] ]
An increasing number of gas-fired power plants are currently being installed in modern power grids worldwide. This is due to their low cost and the inherent flexibility they offer to the electrical network, particularly in the face of increasing renewable generation. However, the integration and operation of gas generators poses additional challenges to gas network operators, mainly because they can induce rapid changes in the demand. This paper presents an efficient minimization scheme of gas compression costs under dynamic conditions where deliveries to customers are described by time-dependent mass flows. The optimization scheme comprises a set of transient nonlinear partial differential equations that model the isothermal gas flow in pipes, an adjoint problem for the efficient calculation of the objective gradients and constraint Jacobians, and state-of-the-art optimal control methods for solving nonlinear programs. As the evaluation of constraint Jacobians can become computationally costly as the number of constraints increases, efficient constraint lumping schemes are proposed and investigated with respect to accuracy and performance. The resulting optimal control problems are solved using both interior-point and sequential quadratic programming methods. The proposed optimization framework is validated through several benchmark cases of increasing complexity.
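To make the adjoint idea concrete, the toy NumPy example below computes the gradient of a quadratic cost under linear discrete dynamics with one forward and one backward (adjoint) sweep. The actual problem in the paper is a nonlinear transient PDE, so this is a structural analogue only; the dynamics, cost, and dimensions are made up.

import numpy as np

def adjoint_gradient(A, B, u, x0):
    """Gradient of J = 0.5 * sum_t ||x_t||^2 for x_{t+1} = A x_t + B u_t,
    via one forward simulation and one backward adjoint sweep.
    """
    T = len(u)
    x = [x0]
    for t in range(T):                       # forward simulation
        x.append(A @ x[t] + B @ u[t])
    lam = np.zeros_like(x0)                  # adjoint state, lambda after x_T is 0
    grad = [None] * T
    for t in reversed(range(T)):             # backward adjoint sweep
        lam = x[t + 1] + A.T @ lam           # total derivative dJ/dx_{t+1}
        grad[t] = B.T @ lam                  # dJ/du_t = B^T dJ/dx_{t+1}
    return np.array(grad)

rng = np.random.default_rng(1)
A = 0.9 * np.eye(3); B = rng.standard_normal((3, 2))
u = rng.standard_normal((5, 2)); x0 = np.ones(3)
print(adjoint_gradient(A, B, u, x0).shape)   # (5, 2): one gradient per control

The key property this illustrates is that the cost of the gradient is one extra backward pass, independent of the number of control variables, which is what makes adjoint methods attractive for PDE-constrained optimization.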
2208.06341
Ryo Suzuki
Hiroki Kaimoto, Kyzyl Monteiro, Mehrad Faridan, Jiatong Li, Samin Farajian, Yasuaki Kakehi, Ken Nakagaki, Ryo Suzuki
Sketched Reality: Sketching Bi-Directional Interactions Between Virtual and Physical Worlds with AR and Actuated Tangible UI
UIST 2022
null
10.1145/3526113.3545626
null
cs.HC cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces Sketched Reality, an approach that combines AR sketching and actuated tangible user interfaces (TUI) for bidirectional sketching interaction. Bi-directional sketching enables virtual sketches and physical objects to "affect" each other through physical actuation and digital computation. In existing AR sketching, the relationship between the virtual and physical worlds is only one-directional -- while physical interaction can affect virtual sketches, virtual sketches have no return effect on the physical objects or environment. In contrast, bi-directional sketching interaction allows the seamless coupling between sketches and actuated TUIs. In this paper, we employ tabletop-size small robots (Sony Toio) and an iPad-based AR sketching tool to demonstrate the concept. In our system, virtual sketches drawn and simulated on an iPad (e.g., lines, walls, pendulums, and springs) can move, actuate, collide, and constrain physical Toio robots, as if virtual sketches and the physical objects exist in the same space through seamless coupling between AR and robot motion. This paper contributes a set of novel interactions and a design space of bi-directional AR sketching. We demonstrate a series of potential applications, such as tangible physics education, explorable mechanism, tangible gaming for children, and in-situ robot programming via sketching.
[ { "created": "Fri, 12 Aug 2022 16:01:31 GMT", "version": "v1" }, { "created": "Tue, 4 Oct 2022 17:14:57 GMT", "version": "v2" } ]
2022-10-05
[ [ "Kaimoto", "Hiroki", "" ], [ "Monteiro", "Kyzyl", "" ], [ "Faridan", "Mehrad", "" ], [ "Li", "Jiatong", "" ], [ "Farajian", "Samin", "" ], [ "Kakehi", "Yasuaki", "" ], [ "Nakagaki", "Ken", "" ], [ "Suzuki", "Ryo", "" ] ]
This paper introduces Sketched Reality, an approach that combines AR sketching and actuated tangible user interfaces (TUI) for bidirectional sketching interaction. Bi-directional sketching enables virtual sketches and physical objects to "affect" each other through physical actuation and digital computation. In existing AR sketching, the relationship between the virtual and physical worlds is only one-directional -- while physical interaction can affect virtual sketches, virtual sketches have no return effect on the physical objects or environment. In contrast, bi-directional sketching interaction allows the seamless coupling between sketches and actuated TUIs. In this paper, we employ tabletop-size small robots (Sony Toio) and an iPad-based AR sketching tool to demonstrate the concept. In our system, virtual sketches drawn and simulated on an iPad (e.g., lines, walls, pendulums, and springs) can move, actuate, collide, and constrain physical Toio robots, as if virtual sketches and the physical objects exist in the same space through seamless coupling between AR and robot motion. This paper contributes a set of novel interactions and a design space of bi-directional AR sketching. We demonstrate a series of potential applications, such as tangible physics education, explorable mechanism, tangible gaming for children, and in-situ robot programming via sketching.
1811.00078
Nikolaos Dionelis
Nikolaos Dionelis
On Single-Channel Speech Enhancement and On Non-Linear Modulation-Domain Kalman Filtering
13 pages
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report focuses on algorithms that perform single-channel speech enhancement. The author of this report uses modulation-domain Kalman filtering algorithms for speech enhancement, i.e. noise suppression and dereverberation, in [1], [2], [3], [4] and [5]. Modulation-domain Kalman filtering can be applied for both noise and late reverberation suppression and in [2], [1], [3] and [4], various model-based speech enhancement algorithms that perform modulation-domain Kalman filtering are designed, implemented and tested. The model-based enhancement algorithm in [2] estimates and tracks the speech phase. The short-time-Fourier-transform-based enhancement algorithm in [5] uses the active speech level estimator presented in [6]. This report describes how different algorithms perform speech enhancement and the algorithms discussed in this report are addressed to researchers interested in monaural speech enhancement. The algorithms are composed of different processing blocks and techniques [7]; understanding the implementation choices made during the system design is important because this provides insights that can assist the development of new algorithms. Index Terms - Speech enhancement, dereverberation, denoising, Kalman filter, minimum mean squared error estimation.
[ { "created": "Wed, 31 Oct 2018 19:30:39 GMT", "version": "v1" } ]
2018-11-02
[ [ "Dionelis", "Nikolaos", "" ] ]
This report focuses on algorithms that perform single-channel speech enhancement. The author of this report uses modulation-domain Kalman filtering algorithms for speech enhancement, i.e. noise suppression and dereverberation, in [1], [2], [3], [4] and [5]. Modulation-domain Kalman filtering can be applied for both noise and late reverberation suppression and in [2], [1], [3] and [4], various model-based speech enhancement algorithms that perform modulation-domain Kalman filtering are designed, implemented and tested. The model-based enhancement algorithm in [2] estimates and tracks the speech phase. The short-time-Fourier-transform-based enhancement algorithm in [5] uses the active speech level estimator presented in [6]. This report describes how different algorithms perform speech enhancement and the algorithms discussed in this report are addressed to researchers interested in monaural speech enhancement. The algorithms are composed of different processing blocks and techniques [7]; understanding the implementation choices made during the system design is important because this provides insights that can assist the development of new algorithms. Index Terms - Speech enhancement, dereverberation, denoising, Kalman filter, minimum mean squared error estimation.
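For readers unfamiliar with the recursion underlying modulation-domain Kalman filtering, here is a scalar Kalman filter in NumPy for a first-order Gauss-Markov state observed in noise; modulation-domain enhancers apply this kind of predict/update loop to spectral amplitude trajectories in each frequency bin. The AR(1) coefficient and the noise variances below are illustrative placeholders, not values from the report.

import numpy as np

def kalman_1d(observations, a=0.95, q=0.01, r=0.1):
    """Scalar Kalman filter for s_t = a*s_{t-1} + w_t, y_t = s_t + v_t,
    with process noise variance q and observation noise variance r.
    """
    s_est, p = 0.0, 1.0                      # state mean and variance
    out = []
    for y in observations:
        s_pred = a * s_est                   # predict
        p_pred = a * a * p + q
        k = p_pred / (p_pred + r)            # Kalman gain
        s_est = s_pred + k * (y - s_pred)    # update
        p = (1.0 - k) * p_pred
        out.append(s_est)
    return np.array(out)

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 6, 200))
noisy = clean + 0.3 * rng.standard_normal(200)
filtered = kalman_1d(noisy)
print(np.mean((filtered - clean) ** 2), np.mean((noisy - clean) ** 2))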
2210.12375
Marten Lienen
Marten Lienen and Stephan G\"unnemann
torchode: A Parallel ODE Solver for PyTorch
Accepted at The Symbiosis of Deep Learning and Differential Equations Workshop, NeurIPS, 2022
null
null
null
cs.LG cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an ODE solver for the PyTorch ecosystem that can solve multiple ODEs in parallel independently from each other while achieving significant performance gains. Our implementation tracks each ODE's progress separately and is carefully optimized for GPUs and compatibility with PyTorch's JIT compiler. Its design lets researchers easily augment any aspect of the solver and collect and analyze internal solver statistics. In our experiments, our implementation is up to 4.3 times faster per step than other ODE solvers and it is robust against within-batch interactions that lead other solvers to take up to 4 times as many steps. Code available at https://github.com/martenlienen/torchode
[ { "created": "Sat, 22 Oct 2022 07:08:17 GMT", "version": "v1" }, { "created": "Tue, 17 Jan 2023 09:02:47 GMT", "version": "v2" } ]
2023-01-18
[ [ "Lienen", "Marten", "" ], [ "Günnemann", "Stephan", "" ] ]
We introduce an ODE solver for the PyTorch ecosystem that can solve multiple ODEs in parallel independently from each other while achieving significant performance gains. Our implementation tracks each ODE's progress separately and is carefully optimized for GPUs and compatibility with PyTorch's JIT compiler. Its design lets researchers easily augment any aspect of the solver and collect and analyze internal solver statistics. In our experiments, our implementation is up to 4.3 times faster per step than other ODE solvers and it is robust against within-batch interactions that lead other solvers to take up to 4 times as many steps. Code available at https://github.com/martenlienen/torchode
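The batching idea can be illustrated without torchode's own API (which additionally tracks each ODE's progress with per-instance adaptive step sizes): a fixed-step RK4 in plain PyTorch that advances a whole batch of independent ODEs with one set of tensor operations. This is a simplified sketch, not the library's interface.

import torch

def rk4_batch(f, y0, t0, t1, steps=100):
    """Fixed-step RK4 integrating a batch of independent ODEs at once.

    f: callable (t, y) -> dy/dt with y of shape (batch, dim). Unlike
    torchode there is no per-instance adaptive step-size control.
    """
    h = (t1 - t0) / steps
    y, t = y0, t0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return y

# Each batch element decays at its own rate: dy/dt = -lam * y
lam = torch.tensor([[0.5], [1.0], [2.0]])
f = lambda t, y: -lam * y
y1 = rk4_batch(f, torch.ones(3, 1), 0.0, 1.0)
print(torch.allclose(y1, torch.exp(-lam), atol=1e-5))  # True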
1702.02690
Hai Lin
Hai Lin, Feifei Gao, Shi Jin, Geoffrey Ye Li
A New View of Multi-User Hybrid Massive MIMO: Non-Orthogonal Angle Division Multiple Access
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new view of multi-user (MU) hybrid massive multiple-input and multiple-output (MIMO) systems from an array signal processing perspective. We first show that the instantaneous channel vectors corresponding to different users are asymptotically orthogonal if the angles of arrival (AOAs) of users are different. We then decompose the channel matrix into an angle domain basis matrix and a gain matrix. The former can be formulated by steering vectors and the latter has the same size as the number of RF chains, which perfectly matches the structure of hybrid precoding. A novel hybrid channel estimation is proposed by separately estimating the angle information and the gain matrix, which could significantly save the training overhead and substantially improve the channel estimation accuracy compared to the conventional beamspace approach. Moreover, with the aid of the angle domain matrix, the MU massive MIMO system can be viewed as a type of non-orthogonal angle division multiple access (ADMA) to simultaneously serve multiple users in the same frequency band. Finally, the performance of the proposed scheme is validated by computer simulation results.
[ { "created": "Thu, 9 Feb 2017 03:34:14 GMT", "version": "v1" }, { "created": "Sat, 15 Jul 2017 15:02:08 GMT", "version": "v2" } ]
2022-10-18
[ [ "Lin", "Hai", "" ], [ "Gao", "Feifei", "" ], [ "Jin", "Shi", "" ], [ "Li", "Geoffrey Ye", "" ] ]
This paper presents a new view of multi-user (MU) hybrid massive multiple-input and multiple-output (MIMO) systems from an array signal processing perspective. We first show that the instantaneous channel vectors corresponding to different users are asymptotically orthogonal if the angles of arrival (AOAs) of users are different. We then decompose the channel matrix into an angle domain basis matrix and a gain matrix. The former can be formulated by steering vectors and the latter has the same size as the number of RF chains, which perfectly matches the structure of hybrid precoding. A novel hybrid channel estimation is proposed by separately estimating the angle information and the gain matrix, which could significantly save the training overhead and substantially improve the channel estimation accuracy compared to the conventional beamspace approach. Moreover, with the aid of the angle domain matrix, the MU massive MIMO system can be viewed as a type of non-orthogonal angle division multiple access (ADMA) to simultaneously serve multiple users in the same frequency band. Finally, the performance of the proposed scheme is validated by computer simulation results.
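A small NumPy illustration of the asymptotic-orthogonality claim: steering vectors of a uniform linear array for two distinct AoAs have a normalized inner product that shrinks as the number of antennas grows. Half-wavelength element spacing is assumed here; the array geometry in the paper may differ.

import numpy as np

def steering_vector(theta_deg, n_antennas, d_over_lambda=0.5):
    """Unit-norm ULA steering vector a(theta); the angle-domain basis in
    the abstract is built from vectors of this kind.
    """
    theta = np.deg2rad(theta_deg)
    n = np.arange(n_antennas)
    phase = -2j * np.pi * d_over_lambda * n * np.sin(theta)
    return np.exp(phase) / np.sqrt(n_antennas)

# Inner product magnitude decays as the array grows (distinct AoAs)
for m in (8, 64, 512):
    a1, a2 = steering_vector(10, m), steering_vector(25, m)
    print(m, abs(np.vdot(a1, a2)))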
2004.12691
Edward Frady
E. Paxon Frady, Garrick Orchard, David Florey, Nabil Imam, Ruokun Liu, Joyesh Mishra, Jonathan Tse, Andreas Wild, Friedrich T. Sommer, Mike Davies
Neuromorphic Nearest-Neighbor Search Using Intel's Pohoiki Springs
9 pages, 8 figures, 3 tables, submission to NICE 2020
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuromorphic computing applies insights from neuroscience to uncover innovations in computing technology. In the brain, billions of interconnected neurons perform rapid computations at extremely low energy levels by leveraging properties that are foreign to conventional computing systems, such as temporal spiking codes and finely parallelized processing units integrating both memory and computation. Here, we showcase the Pohoiki Springs neuromorphic system, a mesh of 768 interconnected Loihi chips that collectively implement 100 million spiking neurons in silicon. We demonstrate a scalable approximate k-nearest neighbor (k-NN) algorithm for searching large databases that exploits neuromorphic principles. Compared to state-of-the-art conventional CPU-based implementations, we achieve superior latency, index build time, and energy efficiency when evaluated on several standard datasets containing over 1 million high-dimensional patterns. Further, the system supports adding new data points to the indexed database online in O(1) time unlike all but brute force conventional k-NN implementations.
[ { "created": "Mon, 27 Apr 2020 10:23:47 GMT", "version": "v1" } ]
2020-04-28
[ [ "Frady", "E. Paxon", "" ], [ "Orchard", "Garrick", "" ], [ "Florey", "David", "" ], [ "Imam", "Nabil", "" ], [ "Liu", "Ruokun", "" ], [ "Mishra", "Joyesh", "" ], [ "Tse", "Jonathan", "" ], [ "Wild", "Andreas", "" ], [ "Sommer", "Friedrich T.", "" ], [ "Davies", "Mike", "" ] ]
Neuromorphic computing applies insights from neuroscience to uncover innovations in computing technology. In the brain, billions of interconnected neurons perform rapid computations at extremely low energy levels by leveraging properties that are foreign to conventional computing systems, such as temporal spiking codes and finely parallelized processing units integrating both memory and computation. Here, we showcase the Pohoiki Springs neuromorphic system, a mesh of 768 interconnected Loihi chips that collectively implement 100 million spiking neurons in silicon. We demonstrate a scalable approximate k-nearest neighbor (k-NN) algorithm for searching large databases that exploits neuromorphic principles. Compared to state-of-the-art conventional CPU-based implementations, we achieve superior latency, index build time, and energy efficiency when evaluated on several standard datasets containing over 1 million high-dimensional patterns. Further, the system supports adding new data points to the indexed database online in O(1) time unlike all but brute force conventional k-NN implementations.
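For reference, the conventional CPU baseline such systems are compared against is brute-force k-NN. A compact NumPy version is sketched below; it batches the distance computation with the expansion ||q - x||^2 = ||q||^2 - 2 q.x + ||x||^2 and uses argpartition for the top-k selection.

import numpy as np

def knn_search(database, queries, k=5):
    """Exact brute-force k-nearest-neighbor search (Euclidean)."""
    d2 = (np.sum(queries ** 2, axis=1, keepdims=True)
          - 2.0 * queries @ database.T
          + np.sum(database ** 2, axis=1))
    # argpartition finds the k smallest distances in O(n) per query
    idx = np.argpartition(d2, k, axis=1)[:, :k]
    order = np.take_along_axis(d2, idx, axis=1).argsort(axis=1)
    return np.take_along_axis(idx, order, axis=1)

rng = np.random.default_rng(3)
db = rng.standard_normal((100_000, 64))      # toy database of patterns
q = rng.standard_normal((4, 64))             # 4 query vectors
print(knn_search(db, q).shape)               # (4, 5)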
1405.6137
Arun P V
S.K. Katiyar and P.V. Arun
An enhanced neural network based approach towards object extraction
null
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The improvements in the spectral and spatial resolution of satellite images have facilitated the automatic extraction and identification of features from satellite images and aerial photographs. An automatic object extraction method is presented for extracting and identifying the various objects from satellite images, and the accuracy of the system is verified with regard to IRS satellite images. The system is based on a neural network and simulates the process of visual interpretation of remote sensing images, and hence increases the efficiency of image analysis. This approach obtains the basic characteristics of the various features, and the performance is enhanced by the automatic learning approach, intelligent interpretation, and intelligent interpolation. The major advantage of the method is its simplicity and that the system identifies the features not only based on pixel value but also based on the shape, Haralick features, etc. of the objects. Further, the system allows flexibility for identifying features within the same category based on size and shape. The successful application of the system verified its effectiveness, and its accuracy was assessed by ground truth verification.
[ { "created": "Wed, 5 Feb 2014 20:05:34 GMT", "version": "v1" } ]
2014-05-26
[ [ "Katiyar", "S. K.", "" ], [ "Arun", "P. V.", "" ] ]
The improvements in the spectral and spatial resolution of satellite images have facilitated the automatic extraction and identification of features from satellite images and aerial photographs. An automatic object extraction method is presented for extracting and identifying the various objects from satellite images, and the accuracy of the system is verified with regard to IRS satellite images. The system is based on a neural network and simulates the process of visual interpretation of remote sensing images, and hence increases the efficiency of image analysis. This approach obtains the basic characteristics of the various features, and the performance is enhanced by the automatic learning approach, intelligent interpretation, and intelligent interpolation. The major advantage of the method is its simplicity and that the system identifies the features not only based on pixel value but also based on the shape, Haralick features, etc. of the objects. Further, the system allows flexibility for identifying features within the same category based on size and shape. The successful application of the system verified its effectiveness, and its accuracy was assessed by ground truth verification.
2111.15318
Michael Strecke
Michael Strecke and Joerg Stueckler
DiffSDFSim: Differentiable Rigid-Body Dynamics With Implicit Shapes
22 pages, 23 Figures (including supplementary material). Presented 3DV 2021. Project website: https://diffsdfsim.is.tue.mpg.de/
2021 International Conference on 3D Vision (3DV)
10.1109/3DV53792.2021.00020
null
cs.CV cs.GR cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differentiable physics is a powerful tool in computer vision and robotics for scene understanding and reasoning about interactions. Existing approaches have frequently been limited to objects with simple shape or shapes that are known in advance. In this paper, we propose a novel approach to differentiable physics with frictional contacts which represents object shapes implicitly using signed distance fields (SDFs). Our simulation supports contact point calculation even when the involved shapes are nonconvex. Moreover, we propose ways of differentiating the dynamics with respect to the object shape to facilitate shape optimization using gradient-based methods. In our experiments, we demonstrate that our approach allows for model-based inference of physical parameters such as friction coefficients, mass, forces or shape parameters from trajectory and depth image observations in several challenging synthetic scenarios and a real image sequence.
[ { "created": "Tue, 30 Nov 2021 11:56:24 GMT", "version": "v1" }, { "created": "Mon, 10 Jan 2022 15:25:13 GMT", "version": "v2" } ]
2022-01-11
[ [ "Strecke", "Michael", "" ], [ "Stueckler", "Joerg", "" ] ]
Differentiable physics is a powerful tool in computer vision and robotics for scene understanding and reasoning about interactions. Existing approaches have frequently been limited to objects with simple shape or shapes that are known in advance. In this paper, we propose a novel approach to differentiable physics with frictional contacts which represents object shapes implicitly using signed distance fields (SDFs). Our simulation supports contact point calculation even when the involved shapes are nonconvex. Moreover, we propose ways of differentiating the dynamics with respect to the object shape to facilitate shape optimization using gradient-based methods. In our experiments, we demonstrate that our approach allows for model-based inference of physical parameters such as friction coefficients, mass, forces or shape parameters from trajectory and depth image observations in several challenging synthetic scenarios and a real image sequence.
1905.02356
Leonardo Mu\~noz
Leonardo Munoz and Oscar Avila
A model to assess customer alignment through customer experience concepts
12 pages, Preprint version, BIS 2019 International Workshops, Seville, Spain, June 26 to 28, 2019, Revised Papers
Business Information Systems Workshops. BIS 2019. Lecture Notes in Business Information Processing, vol 373. Springer, Cham
10.1007/978-3-030-36691-9_29
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Business and Information Technology Alignment (BITA) has been one of the main concerns of IT and business executives and directors due to its importance to overall company performance, especially today in the age of digital transformation. Several models have been developed for BITA; in general, they have focused on the implementation of alignment strategies for the internal operation of organizations and on the measurement of this internal alignment, but there is still a big gap in measurement models of the alignment with the external environment of organizations. This paper presents the design and application of a maturity measurement model for BITA with customers, where customers are actors in the external environment of companies. The proposed model involves evaluation criteria and business practices that companies ideally follow to improve the relationship with their customers.
[ { "created": "Tue, 7 May 2019 05:08:45 GMT", "version": "v1" }, { "created": "Thu, 9 May 2019 21:08:47 GMT", "version": "v2" }, { "created": "Wed, 18 Dec 2019 17:01:03 GMT", "version": "v3" } ]
2019-12-19
[ [ "Munoz", "Leonardo", "" ], [ "Avila", "Oscar", "" ] ]
Business and Information Technology Alignment (BITA) has been one of the main concerns of IT and business executives and directors due to its importance to overall company performance, especially today in the age of digital transformation. Several models have been developed for BITA; in general, they have focused on the implementation of alignment strategies for the internal operation of organizations and on the measurement of this internal alignment, but there is still a big gap in measurement models of the alignment with the external environment of organizations. This paper presents the design and application of a maturity measurement model for BITA with customers, where customers are actors in the external environment of companies. The proposed model involves evaluation criteria and business practices that companies ideally follow to improve the relationship with their customers.
1310.6686
Carlos Alberto Fernandez-y-Fernandez
Reyes Ju\'arez-Ram\'irez, Karen Cort\'es Verd\'in, Beatriz Ang\'elica Toscano de la Torre, Hanna Oktaba, Carlos Alberto Fern\'andez-y-Fern\'andez, Brenda Leticia Flores R\'ios, Fabiola Angulo Molina
Estado Actual de la Pr\'actica de la Ingenier\'ia de Software en M\'exico
null
Congreso Internacional de Investigaci\'on e Innovaci\'on en Ingenier\'ia de Software (Conisoft 2013), pp. 3-14, Xalapa, Veracruz, M\'exico, 2013. ISBN: 978-0-615-89523-9
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software engineering is a relatively new discipline compared to other sciences, since the origins of the term itself date back to the years 1968 and 1969. At present, the market and the software industry have significant relevance in several countries of the world; however, although Mexico is immersed in this race, it has not yet reached the level of success achieved by other countries in this sector. This paper presents an overview of the current state of software engineering practice in Mexico, with emphasis on the academic realm. It shows a compilation of the scientific research activity carried out in universities, as well as a brief analysis of undergraduate educational programs that include the software engineering discipline. Finally, future work is proposed in order to find a point of convergence between academia and industry, and to support the flourishing of this business, which will have a positive impact on the economy of the country.
[ { "created": "Thu, 24 Oct 2013 17:58:50 GMT", "version": "v1" } ]
2013-10-25
[ [ "Juárez-Ramírez", "Reyes", "" ], [ "Verdín", "Karen Cortés", "" ], [ "de la Torre", "Beatriz Angélica Toscano", "" ], [ "Oktaba", "Hanna", "" ], [ "Fernández-y-Fernández", "Carlos Alberto", "" ], [ "Ríos", "Brenda Leticia Flores", "" ], [ "Molina", "Fabiola Angulo", "" ] ]
Software engineering is a relatively new discipline compared to other sciences, since the origins of the term itself date back to the years 1968 and 1969. At present, the market and the software industry have significant relevance in several countries of the world; however, although Mexico is immersed in this race, it has not yet reached the level of success achieved by other countries in this sector. This paper presents an overview of the current state of software engineering practice in Mexico, with emphasis on the academic realm. It shows a compilation of the scientific research activity carried out in universities, as well as a brief analysis of undergraduate educational programs that include the software engineering discipline. Finally, future work is proposed in order to find a point of convergence between academia and industry, and to support the flourishing of this business, which will have a positive impact on the economy of the country.
2402.17151
Zhengxiang Wang
Zhengxiang Wang, Owen Rambow
Clustering Document Parts: Detecting and Characterizing Influence Campaigns from Documents
12 pages, 2 figures, 5 tables
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We propose a novel clustering pipeline to detect and characterize influence campaigns from documents. This approach clusters parts of documents, detects clusters that likely reflect an influence campaign, and then identifies documents linked to an influence campaign via their association with the high-influence clusters. Our approach outperforms both the direct document-level classification and the direct document-level clustering approach in predicting if a document is part of an influence campaign. We propose various novel techniques to enhance our pipeline, including using an existing event factuality prediction system to obtain document parts, and aggregating multiple clustering experiments to improve the performance of both cluster and document classification. Classifying documents after clustering not only accurately extracts the parts of the documents that are relevant to influence campaigns, but also captures influence campaigns as a coordinated and holistic phenomenon. Our approach makes possible more fine-grained and interpretable characterizations of influence campaigns from documents.
[ { "created": "Tue, 27 Feb 2024 02:36:43 GMT", "version": "v1" }, { "created": "Fri, 26 Apr 2024 20:01:28 GMT", "version": "v2" } ]
2024-04-30
[ [ "Wang", "Zhengxiang", "" ], [ "Rambow", "Owen", "" ] ]
We propose a novel clustering pipeline to detect and characterize influence campaigns from documents. This approach clusters parts of documents, detects clusters that likely reflect an influence campaign, and then identifies documents linked to an influence campaign via their association with the high-influence clusters. Our approach outperforms both the direct document-level classification and the direct document-level clustering approach in predicting if a document is part of an influence campaign. We propose various novel techniques to enhance our pipeline, including using an existing event factuality prediction system to obtain document parts, and aggregating multiple clustering experiments to improve the performance of both cluster and document classification. Classifying documents after clustering not only accurately extracts the parts of the documents that are relevant to influence campaigns, but also captures influence campaigns as a coordinated and holistic phenomenon. Our approach makes possible more fine-grained and interpretable characterizations of influence campaigns from documents.
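A skeletal version of such a cluster-then-classify pipeline is sketched below with scikit-learn. TF-IDF features, KMeans, the randomly chosen "high-influence" clusters, and the 50% per-document threshold are all placeholders for the paper's actual features, clusterer, trained cluster classifier, and aggregation rule.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def detect_documents(doc_parts, part_to_doc, n_clusters=10, influence_frac=0.5):
    """Cluster document parts, flag 'high-influence' clusters, and mark
    documents with enough parts in flagged clusters.
    """
    X = TfidfVectorizer(max_features=2000).fit_transform(doc_parts)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    # Placeholder: in the real pipeline a classifier scores each cluster
    rng = np.random.default_rng(0)
    influence_clusters = set(rng.choice(n_clusters, size=3, replace=False))
    tallies = {}
    for part_label, doc_id in zip(labels, part_to_doc):
        hits, total = tallies.get(doc_id, (0, 0))
        tallies[doc_id] = (hits + (part_label in influence_clusters), total + 1)
    return [d for d, (h, t) in tallies.items() if h / t >= influence_frac]

parts = [f"example sentence number {i} about topic {i % 7}" for i in range(200)]
print(detect_documents(parts, part_to_doc=[i // 5 for i in range(200)]))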
1308.4458
Reza Pournaghi
Reza Pournaghi and Xiaolin Wu
Coded Acquisition of High Frame Rate Video
null
null
10.1109/TIP.2014.2368359
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High frame rate video (HFV) is an important investigational tool in science, engineering, and the military. In ultra-high-speed imaging, the obtainable temporal, spatial, and spectral resolutions are limited by the sustainable throughput of in-camera mass memory, the lower bound of exposure time, and illumination conditions. In order to break these bottlenecks, we propose a new coded video acquisition framework that employs K > 2 conventional cameras, each of which makes random measurements of the 3D video signal in both the temporal and spatial domains. For each of the K cameras, this multi-camera strategy greatly relaxes the stringent requirements on memory speed, shutter speed, and illumination strength. The recovery of HFV from these random measurements is posed and solved as a large-scale l1 minimization problem by exploiting joint temporal and spatial sparsities of the 3D signal. Three coded video acquisition techniques of varied trade-offs between performance and hardware complexity are developed: frame-wise coded acquisition, pixel-wise coded acquisition, and column-row-wise coded acquisition. The performances of these techniques are analyzed in relation to the sparsity of the underlying video signal. Simulations of these new HFV capture techniques are carried out and experimental results are reported.
[ { "created": "Wed, 21 Aug 2013 01:13:46 GMT", "version": "v1" } ]
2015-06-16
[ [ "Pournaghi", "Reza", "" ], [ "Wu", "Xiaolin", "" ] ]
High frame rate video (HFV) is an important investigational tool in science, engineering, and the military. In ultra-high-speed imaging, the obtainable temporal, spatial, and spectral resolutions are limited by the sustainable throughput of in-camera mass memory, the lower bound of exposure time, and illumination conditions. In order to break these bottlenecks, we propose a new coded video acquisition framework that employs K > 2 conventional cameras, each of which makes random measurements of the 3D video signal in both the temporal and spatial domains. For each of the K cameras, this multi-camera strategy greatly relaxes the stringent requirements on memory speed, shutter speed, and illumination strength. The recovery of HFV from these random measurements is posed and solved as a large-scale l1 minimization problem by exploiting joint temporal and spatial sparsities of the 3D signal. Three coded video acquisition techniques of varied trade-offs between performance and hardware complexity are developed: frame-wise coded acquisition, pixel-wise coded acquisition, and column-row-wise coded acquisition. The performances of these techniques are analyzed in relation to the sparsity of the underlying video signal. Simulations of these new HFV capture techniques are carried out and experimental results are reported.
2310.03890
Zhu Wang
Zhu Wang, Praveen Raj Veluswami, Harsh Mishra, Sathya N. Ravi
Accelerated Neural Network Training with Rooted Logistic Objectives
null
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Many neural networks deployed in real-world scenarios are trained using cross-entropy-based loss functions. From the optimization perspective, it is known that the behavior of first-order methods such as gradient descent crucially depends on the separability of datasets. In fact, even in the simplest case of binary classification, the rate of convergence depends on two factors: (1) the condition number of the data matrix, and (2) the separability of the dataset. With no further pre-processing techniques such as over-parametrization, data augmentation, etc., separability is an intrinsic quantity of the data distribution under consideration. We focus on the landscape design of the logistic function and derive a novel sequence of {\em strictly} convex functions that are at least as strict as the logistic loss. The minimizers of these functions coincide with those of the minimum-norm solution wherever possible. The strict convexity of the derived function can be extended to finetuning state-of-the-art models and applications. In our empirical analysis, we apply the proposed rooted logistic objective to multiple deep models, e.g., fully-connected neural networks and transformers, on various classification benchmarks. Our results illustrate that training with the rooted loss function converges faster and yields performance improvements. Furthermore, we illustrate applications of our novel rooted loss function in generative-modeling downstream applications, such as finetuning a StyleGAN model with the rooted loss. The code implementing our losses and models can be found here for open source software development purposes: https://anonymous.4open.science/r/rooted_loss.
[ { "created": "Thu, 5 Oct 2023 20:49:48 GMT", "version": "v1" } ]
2023-10-09
[ [ "Wang", "Zhu", "" ], [ "Veluswami", "Praveen Raj", "" ], [ "Mishra", "Harsh", "" ], [ "Ravi", "Sathya N.", "" ] ]
Many neural networks deployed in real-world scenarios are trained using cross-entropy-based loss functions. From the optimization perspective, it is known that the behavior of first-order methods such as gradient descent crucially depends on the separability of datasets. In fact, even in the simplest case of binary classification, the rate of convergence depends on two factors: (1) the condition number of the data matrix, and (2) the separability of the dataset. With no further pre-processing techniques such as over-parametrization, data augmentation, etc., separability is an intrinsic quantity of the data distribution under consideration. We focus on the landscape design of the logistic function and derive a novel sequence of {\em strictly} convex functions that are at least as strict as the logistic loss. The minimizers of these functions coincide with those of the minimum-norm solution wherever possible. The strict convexity of the derived function can be extended to finetuning state-of-the-art models and applications. In our empirical analysis, we apply the proposed rooted logistic objective to multiple deep models, e.g., fully-connected neural networks and transformers, on various classification benchmarks. Our results illustrate that training with the rooted loss function converges faster and yields performance improvements. Furthermore, we illustrate applications of our novel rooted loss function in generative-modeling downstream applications, such as finetuning a StyleGAN model with the rooted loss. The code implementing our losses and models can be found here for open source software development purposes: https://anonymous.4open.science/r/rooted_loss.
1704.05960
Milad Zafar Nezhad
Milad Zafar Nezhad, Dongxiao Zhu, Xiangrui Li, Kai Yang, Phillip Levy
SAFS: A Deep Feature Selection Approach for Precision Medicine
null
null
10.1109/BIBM.2016.7822569
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a new deep feature selection method based on a deep architecture. Our method uses stacked auto-encoders for feature representation at a higher level of abstraction. We developed and applied a novel feature learning approach to a specific precision medicine problem, which focuses on assessing and prioritizing risk factors for hypertension (HTN) in a vulnerable demographic subgroup (African-American). Our approach is to use deep learning to identify significant risk factors affecting left ventricular mass indexed to body surface area (LVMI) as an indicator of heart damage risk. The results show that our feature learning and representation approach leads to better results in comparison with others.
[ { "created": "Thu, 20 Apr 2017 00:01:28 GMT", "version": "v1" } ]
2017-04-21
[ [ "Nezhad", "Milad Zafar", "" ], [ "Zhu", "Dongxiao", "" ], [ "Li", "Xiangrui", "" ], [ "Yang", "Kai", "" ], [ "Levy", "Phillip", "" ] ]
In this paper, we propose a new deep feature selection method based on a deep architecture. Our method uses stacked auto-encoders for feature representation at a higher level of abstraction. We developed and applied a novel feature learning approach to a specific precision medicine problem, which focuses on assessing and prioritizing risk factors for hypertension (HTN) in a vulnerable demographic subgroup (African-American). Our approach is to use deep learning to identify significant risk factors affecting left ventricular mass indexed to body surface area (LVMI) as an indicator of heart damage risk. The results show that our feature learning and representation approach leads to better results in comparison with others.
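A minimal sketch of the stacked auto-encoder building block: train one layer on reconstruction, then keep its encoder as the higher-level feature extractor that feature selection would operate on. All dimensions and the toy data are arbitrary stand-ins for the clinical risk-factor features used in the paper.

import torch
import torch.nn as nn

class AutoEncoderLayer(nn.Module):
    """One layer of a stacked auto-encoder: reconstruct the input, then
    reuse the encoder as a feature extractor. Layers like this can be
    trained greedily and stacked.
    """

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        return self.dec(self.enc(x))

x = torch.randn(256, 32)                     # toy patient feature matrix
ae = AutoEncoderLayer(32, 8)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                         # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(x), x)
    loss.backward()
    opt.step()
features = ae.enc(x).detach()                # higher-level representation
print(features.shape)                        # torch.Size([256, 8])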
2103.12487
Saeed Masoudian
Saeed Masoudian, Yevgeny Seldin
Improved Analysis of the Tsallis-INF Algorithm in Stochastically Constrained Adversarial Bandits and Stochastic Bandits with Adversarial Corruptions
Published Version in COLT 2021
Conference on Learning Theory 134 (2021) 3330-3350
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We derive improved regret bounds for the Tsallis-INF algorithm of Zimmert and Seldin (2021). We show that in adversarial regimes with a $(\Delta,C,T)$ self-bounding constraint the algorithm achieves a regret bound of $\mathcal{O}\left(\left(\sum_{i\neq i^*} \frac{1}{\Delta_i}\right)\log_+\left(\frac{(K-1)T}{\left(\sum_{i\neq i^*} \frac{1}{\Delta_i}\right)^2}\right)+\sqrt{C\left(\sum_{i\neq i^*}\frac{1}{\Delta_i}\right)\log_+\left(\frac{(K-1)T}{C\sum_{i\neq i^*}\frac{1}{\Delta_i}}\right)}\right)$, where $T$ is the time horizon, $K$ is the number of arms, $\Delta_i$ are the suboptimality gaps, $i^*$ is the best arm, $C$ is the corruption magnitude, and $\log_+(x) = \max\left(1,\log x\right)$. The regime includes stochastic bandits, stochastically constrained adversarial bandits, and stochastic bandits with adversarial corruptions as special cases. Additionally, we provide a general analysis, which allows achieving the same kind of improvement for generalizations of Tsallis-INF to other settings beyond multi-armed bandits.
[ { "created": "Tue, 23 Mar 2021 12:26:39 GMT", "version": "v1" }, { "created": "Mon, 13 Sep 2021 13:07:41 GMT", "version": "v2" } ]
2021-09-14
[ [ "Masoudian", "Saeed", "" ], [ "Seldin", "Yevgeny", "" ] ]
We derive improved regret bounds for the Tsallis-INF algorithm of Zimmert and Seldin (2021). We show that in adversarial regimes with a $(\Delta,C,T)$ self-bounding constraint the algorithm achieves a regret bound of $\mathcal{O}\left(\left(\sum_{i\neq i^*} \frac{1}{\Delta_i}\right)\log_+\left(\frac{(K-1)T}{\left(\sum_{i\neq i^*} \frac{1}{\Delta_i}\right)^2}\right)+\sqrt{C\left(\sum_{i\neq i^*}\frac{1}{\Delta_i}\right)\log_+\left(\frac{(K-1)T}{C\sum_{i\neq i^*}\frac{1}{\Delta_i}}\right)}\right)$, where $T$ is the time horizon, $K$ is the number of arms, $\Delta_i$ are the suboptimality gaps, $i^*$ is the best arm, $C$ is the corruption magnitude, and $\log_+(x) = \max\left(1,\log x\right)$. The regime includes stochastic bandits, stochastically constrained adversarial bandits, and stochastic bandits with adversarial corruptions as special cases. Additionally, we provide a general analysis, which allows achieving the same kind of improvement for generalizations of Tsallis-INF to other settings beyond multi-armed bandits.
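For orientation, the bound specializes cleanly in the uncorrupted regime. Taking the convention that the corruption term vanishes as $C \to 0$ (since $C\log_+(1/C) \to 0$), setting $C = 0$ leaves only the first term, which for fixed gaps recovers the familiar logarithmic stochastic rate:

\mathcal{O}\!\left(
  \Big(\sum_{i\neq i^*} \tfrac{1}{\Delta_i}\Big)
  \log_+\!\Big(\tfrac{(K-1)T}{(\sum_{i\neq i^*} 1/\Delta_i)^2}\Big)
\right)
\;\subseteq\;
\mathcal{O}\!\left(\sum_{i\neq i^*} \tfrac{\log T}{\Delta_i}\right).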
1608.06197
Srinivas S S Kruthiventi
Lokesh Boominathan, Srinivas S S Kruthiventi and R. Venkatesh Babu
CrowdNet: A Deep Convolutional Network for Dense Crowd Counting
Accepted at ACM Multimedia (MM) 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our work proposes a novel deep learning framework for estimating crowd density from static images of highly dense crowds. We use a combination of deep and shallow, fully convolutional networks to predict the density map for a given crowd image. Such a combination is used for effectively capturing both the high-level semantic information (face/body detectors) and the low-level features (blob detectors), that are necessary for crowd counting under large scale variations. As most crowd datasets have limited training samples (<100 images) and deep learning based approaches require large amounts of training data, we perform multi-scale data augmentation. Augmenting the training samples in such a manner helps in guiding the CNN to learn scale invariant representations. Our method is tested on the challenging UCF_CC_50 dataset, and shown to outperform the state of the art methods.
[ { "created": "Mon, 22 Aug 2016 15:43:29 GMT", "version": "v1" } ]
2016-08-23
[ [ "Boominathan", "Lokesh", "" ], [ "Kruthiventi", "Srinivas S S", "" ], [ "Babu", "R. Venkatesh", "" ] ]
Our work proposes a novel deep learning framework for estimating crowd density from static images of highly dense crowds. We use a combination of deep and shallow, fully convolutional networks to predict the density map for a given crowd image. Such a combination is used for effectively capturing both the high-level semantic information (face/body detectors) and the low-level features (blob detectors), that are necessary for crowd counting under large scale variations. As most crowd datasets have limited training samples (<100 images) and deep learning based approaches require large amounts of training data, we perform multi-scale data augmentation. Augmenting the training samples in such a manner helps in guiding the CNN to learn scale invariant representations. Our method is tested on the challenging UCF_CC_50 dataset, and shown to outperform the state of the art methods.
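A minimal PyTorch sketch of the deep-plus-shallow fusion: two fully convolutional branches over the same image, concatenated and mapped to a density map by a 1x1 convolution, whose spatial sum serves as the count. Layer counts, widths, and kernel sizes are simplified stand-ins, not the paper's configuration (which builds the deep branch from a VGG-style network).

import torch
import torch.nn as nn

class TwoBranchCounter(nn.Module):
    """Deep + shallow fully convolutional branches fused into a single
    density map, echoing the CrowdNet design at toy scale.
    """

    def __init__(self):
        super().__init__()
        self.deep = nn.Sequential(               # small kernels, more depth
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.shallow = nn.Sequential(            # large kernels, less depth
            nn.Conv2d(3, 16, 9, padding=4), nn.ReLU(),
        )
        self.head = nn.Conv2d(48, 1, 1)          # 1x1 fusion to density map

    def forward(self, x):
        fused = torch.cat([self.deep(x), self.shallow(x)], dim=1)
        return self.head(fused)

img = torch.randn(1, 3, 224, 224)
density = TwoBranchCounter()(img)
print(density.shape, density.sum().item())       # spatial sum = predicted count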
2305.15817
Yun Yue
Yun Yue, Jiadi Jiang, Zhiling Ye, Ning Gao, Yongchao Liu, Ke Zhang
Sharpness-Aware Minimization Revisited: Weighted Sharpness as a Regularization Term
10 pages. Accepted as a conference paper at KDD '23
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The generalization of Deep Neural Networks (DNNs) is known to be closely related to the flatness of minima, leading to the development of Sharpness-Aware Minimization (SAM) for seeking flatter minima and better generalization. In this paper, we revisit the loss of SAM and propose a more general method, called WSAM, by incorporating sharpness as a regularization term. We prove its generalization bound through the combination of PAC and Bayes-PAC techniques, and evaluate its performance on various public datasets. The results demonstrate that WSAM achieves improved generalization, or is at least highly competitive, compared to the vanilla optimizer, SAM and its variants. The code is available at https://github.com/intelligent-machine-learning/dlrover/tree/master/atorch/atorch/optimizers.
[ { "created": "Thu, 25 May 2023 08:00:34 GMT", "version": "v1" }, { "created": "Fri, 9 Jun 2023 07:58:13 GMT", "version": "v2" } ]
2023-06-12
[ [ "Yue", "Yun", "" ], [ "Jiang", "Jiadi", "" ], [ "Ye", "Zhiling", "" ], [ "Gao", "Ning", "" ], [ "Liu", "Yongchao", "" ], [ "Zhang", "Ke", "" ] ]
The generalization of Deep Neural Networks (DNNs) is known to be closely related to the flatness of minima, leading to the development of Sharpness-Aware Minimization (SAM) for seeking flatter minima and better generalization. In this paper, we revisit the loss of SAM and propose a more general method, called WSAM, by incorporating sharpness as a regularization term. We prove its generalization bound through the combination of PAC and Bayes-PAC techniques, and evaluate its performance on various public datasets. The results demonstrate that WSAM achieves improved generalization, or is at least highly competitive, compared to the vanilla optimizer, SAM and its variants. The code is available at https://github.com/intelligent-machine-learning/dlrover/tree/master/atorch/atorch/optimizers.
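The sketch below implements the standard SAM ascent step (perturb to w + eps with eps = rho * grad/||grad||, then descend using the gradient there). The gamma-weighted mix of the two gradients is only one plausible reading of "sharpness as a regularization term", corresponding to minimizing (1 - gamma) L(w) + gamma L(w + eps); it is not guaranteed to match the paper's exact WSAM update, and rho and gamma are placeholder values.

import torch

def wsam_step(model, loss_fn, data, target, opt, rho=0.05, gamma=0.9):
    """One sharpness-aware update with a weighted sharpness term."""
    # Gradient at w
    loss = loss_fn(model(data), target)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    # Perturb to w + eps, eps = rho * grad / ||grad||
    with torch.no_grad():
        eps = [rho * g / (norm + 1e-12) for g in grads]
        for p, e in zip(model.parameters(), eps):
            p.add_(e)
    # Gradient at w + eps
    loss_adv = loss_fn(model(data), target)
    grads_adv = torch.autograd.grad(loss_adv, list(model.parameters()))
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                        # restore w
        for p, g, ga in zip(model.parameters(), grads, grads_adv):
            p.grad = (1 - gamma) * g + gamma * ga   # weighted combination
    opt.step()
    opt.zero_grad()

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
wsam_step(model, torch.nn.functional.cross_entropy, x, y, opt)

Setting gamma = 1 recovers plain SAM (descend on the gradient at w + eps), while gamma = 0 recovers the vanilla optimizer, which matches the abstract's framing of WSAM as a generalization.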
1712.06442
Marc Hellmuth
Marc Hellmuth, Nicolas Wieseke, Marcus Lechner, Hans-Peter Lenhof, Martin Middendorf and Peter F. Stadler
Phylogenomics with Paralogs
null
PNAS 2015 112 (7) 2058-2063
10.1073/pnas.1412770112
null
cs.DM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenomics heavily relies on well-curated sequence data sets that consist, for each gene, exclusively of 1:1 orthologs. Paralogs are treated as a dangerous nuisance that has to be detected and removed. We show here that this severe restriction of the data sets is not necessary. Building upon recent advances in mathematical phylogenetics, we demonstrate that gene duplications convey meaningful phylogenetic information and allow the inference of plausible phylogenetic trees, provided orthologs and paralogs can be distinguished with a degree of certainty. Starting from tree-free estimates of orthology, cograph editing can sufficiently reduce the noise to find correct event-annotated gene trees. The information of gene trees can then be translated directly into constraints on the species trees. While the resolution is very poor for individual gene families, we show that genome-wide data sets are sufficient to generate fully resolved phylogenetic trees, even in the presence of horizontal gene transfer. We demonstrate that the distribution of paralogs in large gene families contains in itself sufficient phylogenetic signal to infer fully resolved species phylogenies. This source of phylogenetic information is independent of information contained in orthologous sequences and is resilient against horizontal gene transfer. An important consequence is that phylogenomics data sets need not be restricted to 1:1 orthologs.
[ { "created": "Mon, 18 Dec 2017 14:58:20 GMT", "version": "v1" } ]
2017-12-19
[ [ "Hellmuth", "Marc", "" ], [ "Wieseke", "Nicolas", "" ], [ "Lechner", "Marcus", "" ], [ "Lenhof", "Hans-Peter", "" ], [ "Middendorf", "Martin", "" ], [ "Stadler", "Peter F.", "" ] ]
Phylogenomics heavily relies on well-curated sequence data sets that consist, for each gene, exclusively of 1:1 orthologs. Paralogs are treated as a dangerous nuisance that has to be detected and removed. We show here that this severe restriction of the data sets is not necessary. Building upon recent advances in mathematical phylogenetics, we demonstrate that gene duplications convey meaningful phylogenetic information and allow the inference of plausible phylogenetic trees, provided orthologs and paralogs can be distinguished with a degree of certainty. Starting from tree-free estimates of orthology, cograph editing can sufficiently reduce the noise to find correct event-annotated gene trees. The information of gene trees can then be translated directly into constraints on the species trees. While the resolution is very poor for individual gene families, we show that genome-wide data sets are sufficient to generate fully resolved phylogenetic trees, even in the presence of horizontal gene transfer. We demonstrate that the distribution of paralogs in large gene families contains in itself sufficient phylogenetic signal to infer fully resolved species phylogenies. This source of phylogenetic information is independent of information contained in orthologous sequences and is resilient against horizontal gene transfer. An important consequence is that phylogenomics data sets need not be restricted to 1:1 orthologs.
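Cograph editing works because valid orthology graphs are cographs, i.e., graphs with no induced path on four vertices (P4). A brute-force Python sketch of detecting that obstruction on a small orthology-estimate graph:

```python
from itertools import combinations, permutations

def has_induced_p4(adj):
    """Return True if the graph (dict: node -> set of neighbours) contains an
    induced P4. Cographs are exactly the P4-free graphs, so any induced P4 is
    noise that cograph editing must remove from an orthology estimate."""
    nodes = list(adj)
    for quad in combinations(nodes, 4):
        for a, b, c, d in permutations(quad):
            path_edges = {frozenset(e) for e in [(a, b), (b, c), (c, d)]}
            # induced: exactly the three path edges exist among the four nodes
            if all((v in adj[u]) == (frozenset((u, v)) in path_edges)
                   for u, v in combinations((a, b, c, d), 2)):
                return True
    return False

# a path on four genes is itself an induced P4 -> not a valid orthology graph
p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(has_induced_p4(p4))  # True
```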
1908.09345
Bingcong Li
Bingcong Li, Lingda Wang, Georgios B. Giannakis
Almost Tune-Free Variance Reduction
null
ICML 2020
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The variance-reduction class of algorithms, including the representative SVRG and SARAH, has well-documented merits for empirical risk minimization problems. However, these algorithms require grid search to tune parameters (step size and the number of iterations per inner loop) for optimal performance. This work introduces `almost tune-free' SVRG and SARAH schemes equipped with i) Barzilai-Borwein (BB) step sizes; ii) averaging; and, iii) the inner loop length adjusted to the BB step sizes. In particular, SVRG, SARAH, and their BB variants are first reexamined through an `estimate sequence' lens to enable new averaging methods that tighten their convergence rates theoretically, and improve their performance empirically when the step size or the inner loop length is chosen large. Then a simple yet effective means to adjust the number of iterations per inner loop is developed to enhance the merits of the proposed averaging schemes and BB step sizes. Numerical tests corroborate the proposed methods.
[ { "created": "Sun, 25 Aug 2019 15:24:04 GMT", "version": "v1" }, { "created": "Wed, 10 Jun 2020 12:14:42 GMT", "version": "v2" } ]
2020-06-11
[ [ "Li", "Bingcong", "" ], [ "Wang", "Lingda", "" ], [ "Giannakis", "Georgios B.", "" ] ]
The variance-reduction class of algorithms, including the representative SVRG and SARAH, has well-documented merits for empirical risk minimization problems. However, these algorithms require grid search to tune parameters (step size and the number of iterations per inner loop) for optimal performance. This work introduces `almost tune-free' SVRG and SARAH schemes equipped with i) Barzilai-Borwein (BB) step sizes; ii) averaging; and, iii) the inner loop length adjusted to the BB step sizes. In particular, SVRG, SARAH, and their BB variants are first reexamined through an `estimate sequence' lens to enable new averaging methods that tighten their convergence rates theoretically, and improve their performance empirically when the step size or the inner loop length is chosen large. Then a simple yet effective means to adjust the number of iterations per inner loop is developed to enhance the merits of the proposed averaging schemes and BB step sizes. Numerical tests corroborate the proposed methods.
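The Barzilai-Borwein rule the tune-free schemes build on sets the step size from consecutive iterates and full gradients; a sketch following the common SVRG-BB form, where the inner-loop length m appears as a scaling (the paper's exact variants may differ):

```python
import numpy as np

def bb_step_size(w_prev, w_curr, g_prev, g_curr, m):
    """Barzilai-Borwein step size computed once per outer loop:
    eta = ||s||^2 / (m * |s^T y|), with s and y the parameter and
    full-gradient differences between the last two outer iterates."""
    s = w_curr - w_prev            # parameter difference
    y = g_curr - g_prev            # full-gradient difference
    return float(s @ s) / (m * abs(float(s @ y)) + 1e-12)

# toy quadratic with gradient A w
A = np.diag([1.0, 4.0])
w0, w1 = np.array([1.0, 1.0]), np.array([0.8, 0.5])
print(bb_step_size(w0, w1, A @ w0, A @ w1, m=10))
```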
2208.09318
Marco Faroni
Marco Faroni and Nicola Pedrocchi and Manuel Beschi
Adaptive Hybrid Local-Global Sampling for Fast Informed Sampling-Based Optimal Path Planning
Preprint of manuscript accepted for publication on Autonomous Robots, Springer Nature
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper improves the performance of RRT$^*$-like sampling-based path planners by combining admissible informed sampling and local sampling (i.e., sampling the neighborhood of the current solution). An adaptive strategy regulates the trade-off between exploration (admissible informed sampling) and exploitation (local sampling) based on online rewards from previous samples. The paper demonstrates that the algorithm is asymptotically optimal and has a better convergence rate than state-of-the-art path planners (e.g., Informed-RRT*) in several simulated and real-world scenarios. An open-source, ROS-compatible implementation of the algorithm is publicly available.
[ { "created": "Fri, 19 Aug 2022 13:03:52 GMT", "version": "v1" }, { "created": "Mon, 15 Apr 2024 09:03:06 GMT", "version": "v2" } ]
2024-04-16
[ [ "Faroni", "Marco", "" ], [ "Pedrocchi", "Nicola", "" ], [ "Beschi", "Manuel", "" ] ]
This paper improves the performance of RRT$^*$-like sampling-based path planners by combining admissible informed sampling and local sampling (i.e., sampling the neighborhood of the current solution). An adaptive strategy regulates the trade-off between exploration (admissible informed sampling) and exploitation (local sampling) based on online rewards from previous samples. The paper demonstrates that the algorithm is asymptotically optimal and has a better convergence rate than state-of-the-art path planners (e.g., Informed-RRT*) in several simulated and real-world scenarios. An open-source, ROS-compatible implementation of the algorithm is publicly available.
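A toy Python sketch of the adaptive exploration/exploitation mechanism described: keep a running reward estimate per sampling strategy and pick strategies in proportion to those estimates. The reward model and update rule here are illustrative stand-ins, not the paper's exact scheme.

```python
import random

def adaptive_sampler(rounds, informed_sample, local_sample, reward, alpha=0.1):
    """Alternate between admissible informed sampling (exploration) and
    local sampling around the current solution (exploitation), steered by
    online rewards from previous samples."""
    value = {"informed": 1.0, "local": 1.0}            # optimistic initial estimates
    for _ in range(rounds):
        p_informed = value["informed"] / (value["informed"] + value["local"])
        choice = "informed" if random.random() < p_informed else "local"
        sample = informed_sample() if choice == "informed" else local_sample()
        r = reward(sample)                             # e.g. 1.0 if the sample improved the path
        value[choice] += alpha * (r - value[choice])   # exponential moving average
    return value

# toy demo: local samples "improve the solution" twice as often here
values = adaptive_sampler(
    2000,
    informed_sample=lambda: random.random() < 0.2,
    local_sample=lambda: random.random() < 0.4,
    reward=lambda improved: 1.0 if improved else 0.0,
)
print(values)  # the estimate for "local" ends up higher, so it is chosen more
```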
2009.03665
Lyes Khacef
Lyes Khacef, Vincent Gripon, Benoit Miramond
GPU-based Self-Organizing Maps for Post-Labeled Few-Shot Unsupervised Learning
Accepted for publication in the International Conference on Neural Information Processing (ICONIP) 2020. arXiv admin note: text overlap with arXiv:2009.02174
null
null
null
cs.NE cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot classification is a challenge in machine learning where the goal is to train a classifier using a very limited number of labeled examples. This scenario is likely to occur frequently in real life, for example when data acquisition or labeling is expensive. In this work, we consider the problem of post-labeled few-shot unsupervised learning, a classification task where representations are learned in an unsupervised fashion, to be later labeled using very few annotated examples. We argue that this problem is very likely to occur on the edge, when the embedded device directly acquires the data and the expert needed to perform labeling cannot be prompted often. To address this problem, we consider an algorithm consisting of the concatenation of transfer learning with clustering using Self-Organizing Maps (SOMs). We introduce a TensorFlow-based implementation to speed up the process on multi-core CPUs and GPUs. Finally, we demonstrate the effectiveness of the method using standard off-the-shelf few-shot classification benchmarks.
[ { "created": "Fri, 4 Sep 2020 13:22:28 GMT", "version": "v1" } ]
2020-09-09
[ [ "Khacef", "Lyes", "" ], [ "Gripon", "Vincent", "" ], [ "Miramond", "Benoit", "" ] ]
Few-shot classification is a challenge in machine learning where the goal is to train a classifier using a very limited number of labeled examples. This scenario is likely to occur frequently in real life, for example when data acquisition or labeling is expensive. In this work, we consider the problem of post-labeled few-shot unsupervised learning, a classification task where representations are learned in an unsupervised fashion, to be later labeled using very few annotated examples. We argue that this problem is very likely to occur on the edge, when the embedded device directly acquires the data and the expert needed to perform labeling cannot be prompted often. To address this problem, we consider an algorithm consisting of the concatenation of transfer learning with clustering using Self-Organizing Maps (SOMs). We introduce a TensorFlow-based implementation to speed up the process on multi-core CPUs and GPUs. Finally, we demonstrate the effectiveness of the method using standard off-the-shelf few-shot classification benchmarks.
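A minimal NumPy sketch of the SOM training loop at the core of the method (grid size, decay schedules, and data are illustrative; the paper's implementation is TensorFlow-based and GPU-accelerated):

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=10, lr0=0.5, sigma0=2.0):
    """Minimal Self-Organizing Map: prototype vectors live on a 2-D grid;
    for each sample, the best-matching unit (BMU) and its grid neighbours
    are pulled toward the sample, with decaying rate and neighbourhood."""
    h, w = grid
    rng = np.random.default_rng(0)
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in data:
            lr = lr0 * (1 - t / n_steps)
            sigma = sigma0 * (1 - t / n_steps) + 1e-3
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(d.argmin(), d.shape)      # best-matching unit
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)  # grid distance to BMU
            nbh = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * nbh * (x - weights)              # pull neighbourhood
            t += 1
    return weights

codebook = train_som(np.random.default_rng(1).random((200, 16)))
print(codebook.shape)  # (8, 8, 16)
# post-labeling step: each unit is then labeled using the few annotated
# samples whose BMU it is, turning the unsupervised map into a classifier
```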
2404.12474
Michael Shaham
Michael H. Shaham and Taskin Padir
Learning a Stable, Safe, Distributed Feedback Controller for a Heterogeneous Platoon of Vehicles
null
null
null
null
cs.LG cs.AI cs.MA cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Platooning of autonomous vehicles has the potential to increase safety and fuel efficiency on highways. The goal of platooning is to have each vehicle drive at some speed (set by the leader) while maintaining a safe distance from its neighbors. Many prior works have analyzed various controllers for platooning, most commonly linear feedback and distributed model predictive controllers. In this work, we introduce an algorithm for learning a stable, safe, distributed controller for a heterogeneous platoon. Our algorithm relies on recent developments in learning neural network stability and safety certificates. We train a controller for autonomous platooning in simulation and evaluate its performance on hardware with a platoon of four F1Tenth vehicles. We then perform further analysis in simulation with a platoon of 100 vehicles. Experimental results demonstrate the practicality of the algorithm and the learned controller by comparing the performance of the neural network controller to linear feedback and distributed model predictive controllers.
[ { "created": "Thu, 18 Apr 2024 19:11:34 GMT", "version": "v1" } ]
2024-04-22
[ [ "Shaham", "Michael H.", "" ], [ "Padir", "Taskin", "" ] ]
Platooning of autonomous vehicles has the potential to increase safety and fuel efficiency on highways. The goal of platooning is to have each vehicle drive at some speed (set by the leader) while maintaining a safe distance from its neighbors. Many prior works have analyzed various controllers for platooning, most commonly linear feedback and distributed model predictive controllers. In this work, we introduce an algorithm for learning a stable, safe, distributed controller for a heterogeneous platoon. Our algorithm relies on recent developments in learning neural network stability and safety certificates. We train a controller for autonomous platooning in simulation and evaluate its performance on hardware with a platoon of four F1Tenth vehicles. We then perform further analysis in simulation with a platoon of 100 vehicles. Experimental results demonstrate the practicality of the algorithm and the learned controller by comparing the performance of the neural network controller to linear feedback and distributed model predictive controllers.
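For context, a NumPy sketch of the linear feedback baseline the learned controller is compared against: each follower regulates its gap and relative speed to the vehicle ahead. Gains, spacing, and timestep are illustrative, and this is the baseline, not the paper's neural controller.

```python
import numpy as np

def platoon_step(pos, vel, v_leader, dt=0.05, d_des=10.0, kp=1.0, kd=2.0):
    """One simulation step of predecessor-following linear feedback:
    the leader tracks the set speed; each follower closes its gap error
    and relative-velocity error to the vehicle directly ahead."""
    acc = np.zeros_like(pos)
    acc[0] = kd * (v_leader - vel[0])
    for i in range(1, len(pos)):
        gap = pos[i - 1] - pos[i]
        acc[i] = kp * (gap - d_des) + kd * (vel[i - 1] - vel[i])
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

pos = np.array([0.0, -12.0, -25.0, -40.0])  # four vehicles, leader first
vel = np.zeros(4)
for _ in range(2000):
    pos, vel = platoon_step(pos, vel, v_leader=5.0)
print(np.diff(pos[::-1]))  # inter-vehicle gaps settle near d_des
```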
2106.01885
Ayushi Rastogi
Ayushi Rastogi, Georgios Gousios
How does Software Change?
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software evolves with changes to its codebase over time. Internally, software changes in response to decisions to include some code change into the codebase and discard others. Explaining the mechanism of software evolution, this paper presents a theory of software change. Our theory is grounded in multiple evidence sources (e.g., GitHub documentation and relevant scientific literature) relating to the pull-based development model in GitHub. The resulting theory explains the influence of project-related core concepts (e.g., people and governance) as well as its ecosystem on the decision of software change.
[ { "created": "Thu, 3 Jun 2021 14:31:37 GMT", "version": "v1" } ]
2021-06-04
[ [ "Rastogi", "Ayushi", "" ], [ "Gousios", "Georgios", "" ] ]
Software evolves with changes to its codebase over time. Internally, software changes in response to decisions to include some code change into the codebase and discard others. Explaining the mechanism of software evolution, this paper presents a theory of software change. Our theory is grounded in multiple evidence sources (e.g., GitHub documentation and relevant scientific literature) relating to the pull-based development model in GitHub. The resulting theory explains the influence of project-related core concepts (e.g., people and governance) as well as its ecosystem on the decision of software change.
2207.04106
Joseph Fisher
Tom Ayoola, Joseph Fisher, Andrea Pierleoni
Improving Entity Disambiguation by Reasoning over a Knowledge Base
Accepted at NAACL 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work in entity disambiguation (ED) has typically neglected structured knowledge base (KB) facts, and instead relied on a limited subset of KB information, such as entity descriptions or types. This limits the range of contexts in which entities can be disambiguated. To allow the use of all KB facts, as well as descriptions and types, we introduce an ED model which links entities by reasoning over a symbolic knowledge base in a fully differentiable fashion. Our model surpasses state-of-the-art baselines on six well-established ED datasets by 1.3 F1 on average. By allowing access to all KB information, our model is less reliant on popularity-based entity priors, and improves performance on the challenging ShadowLink dataset (which emphasises infrequent and ambiguous entities) by 12.7 F1.
[ { "created": "Fri, 8 Jul 2022 19:13:53 GMT", "version": "v1" } ]
2022-07-12
[ [ "Ayoola", "Tom", "" ], [ "Fisher", "Joseph", "" ], [ "Pierleoni", "Andrea", "" ] ]
Recent work in entity disambiguation (ED) has typically neglected structured knowledge base (KB) facts, and instead relied on a limited subset of KB information, such as entity descriptions or types. This limits the range of contexts in which entities can be disambiguated. To allow the use of all KB facts, as well as descriptions and types, we introduce an ED model which links entities by reasoning over a symbolic knowledge base in a fully differentiable fashion. Our model surpasses state-of-the-art baselines on six well-established ED datasets by 1.3 F1 on average. By allowing access to all KB information, our model is less reliant on popularity-based entity priors, and improves performance on the challenging ShadowLink dataset (which emphasises infrequent and ambiguous entities) by 12.7 F1.
1905.10214
Th\'eo Ryffel
Theo Ryffel, Edouard Dufour-Sans, Romain Gay, Francis Bach, David Pointcheval
Partially Encrypted Machine Learning using Functional Encryption
null
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning on encrypted data has received a lot of attention thanks to recent breakthroughs in homomorphic encryption and secure multi-party computation. It allows outsourcing computation to untrusted servers without sacrificing privacy of sensitive data. We propose a practical framework to perform partially encrypted and privacy-preserving predictions which combines adversarial training and functional encryption. We first present a new functional encryption scheme to efficiently compute quadratic functions so that the data owner controls what can be computed but is not involved in the calculation: it provides a decryption key which allows one to learn a specific function evaluation of some encrypted data. We then show how to use it in machine learning to partially encrypt neural networks with quadratic activation functions at evaluation time, and we provide a thorough analysis of the information leaks based on indistinguishability of data items of the same label. Lastly, since most encryption schemes cannot deal with the last thresholding operation used for classification, we propose a training method to prevent selected sensitive features from leaking, which adversarially optimizes the network against an adversary trying to identify these features. This is interesting for several existing works using partially encrypted machine learning as it comes with little reduction in the model's accuracy and significantly improves data privacy.
[ { "created": "Fri, 24 May 2019 13:06:53 GMT", "version": "v1" }, { "created": "Tue, 28 May 2019 08:41:02 GMT", "version": "v2" }, { "created": "Wed, 29 May 2019 17:14:29 GMT", "version": "v3" }, { "created": "Tue, 22 Oct 2019 10:02:43 GMT", "version": "v4" }, { "created": "Thu, 23 Sep 2021 09:23:44 GMT", "version": "v5" } ]
2021-09-24
[ [ "Ryffel", "Theo", "" ], [ "Dufour-Sans", "Edouard", "" ], [ "Gay", "Romain", "" ], [ "Bach", "Francis", "" ], [ "Pointcheval", "David", "" ] ]
Machine learning on encrypted data has received a lot of attention thanks to recent breakthroughs in homomorphic encryption and secure multi-party computation. It allows outsourcing computation to untrusted servers without sacrificing privacy of sensitive data. We propose a practical framework to perform partially encrypted and privacy-preserving predictions which combines adversarial training and functional encryption. We first present a new functional encryption scheme to efficiently compute quadratic functions so that the data owner controls what can be computed but is not involved in the calculation: it provides a decryption key which allows one to learn a specific function evaluation of some encrypted data. We then show how to use it in machine learning to partially encrypt neural networks with quadratic activation functions at evaluation time, and we provide a thorough analysis of the information leaks based on indistinguishability of data items of the same label. Lastly, since most encryption schemes cannot deal with the last thresholding operation used for classification, we propose a training method to prevent selected sensitive features from leaking, which adversarially optimizes the network against an adversary trying to identify these features. This is interesting for several existing works using partially encrypted machine learning as it comes with little reduction in the model's accuracy and significantly improves data privacy.
2203.10289
Christian Haase
Christian Haase, Timo R\"oseler, Mattias Seidel
METL: a modern ETL pipeline with a dynamic mapping matrix
version 6: clean up
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern ETL streaming pipelines extract data from various sources and forward it to multiple consumers, such as data warehouses (DW) and analytical systems that leverage machine learning (ML). However, the increasing number of systems that are connected to such pipelines requires new solutions for data integration. The canonical (or common) data model (CDM) offers such an integration. It is particularly useful for integrating microservice systems into ETL pipelines. (Villaca et al 2020, Oliveira et al 2019) However, a mapping to a CDM is complex. (Lemcke et al 2012) There are three complexity problems, namely the size of the required mapping matrix, the automation of updates of the matrix in response to changes in the extraction sources, and the time efficiency of the mapping. In this paper, we present a new solution for these problems. More precisely, we present a new dynamic mapping matrix (DMM), which is based on permutation matrices that are obtained by block-partitioning the full mapping matrix. We show that the DMM can be used for automated updates in response to schema changes, for parallel computation in near real-time, and for highly efficient compacting. For the solution, we draw on research into matrix partitioning (Quinn 2004) and dynamic networks (Haase et al 2021). The DMM has been implemented into an app called Message ETL (METL). METL is the key part of a new ETL streaming pipeline at EOS that conducts the transformation to a CDM. The ETL pipeline is based on Kafka-streams. It extracts data from more than 80 microservices with log-based Change Data Capture (CDC) with Debezium and loads the data to a DW and an ML platform. EOS is part of the Otto-Group, the second-largest e-commerce provider in Europe.
[ { "created": "Sat, 19 Mar 2022 10:18:51 GMT", "version": "v1" }, { "created": "Tue, 22 Mar 2022 17:28:58 GMT", "version": "v2" }, { "created": "Wed, 23 Mar 2022 11:24:43 GMT", "version": "v3" }, { "created": "Tue, 29 Mar 2022 13:54:22 GMT", "version": "v4" }, { "created": "Wed, 30 Mar 2022 14:45:26 GMT", "version": "v5" }, { "created": "Thu, 31 Mar 2022 11:16:25 GMT", "version": "v6" } ]
2022-04-01
[ [ "Haase", "Christian", "" ], [ "Röseler", "Timo", "" ], [ "Seidel", "Mattias", "" ] ]
Modern ETL streaming pipelines extract data from various sources and forward it to multiple consumers, such as data warehouses (DW) and analytical systems that leverage machine learning (ML). However, the increasing number of systems that are connected to such pipelines requires new solutions for data integration. The canonical (or common) data model (CDM) offers such an integration. It is particularly useful for integrating microservice systems into ETL pipelines. (Villaca et al 2020, Oliveira et al 2019) However, a mapping to a CDM is complex. (Lemcke et al 2012) There are three complexity problems, namely the size of the required mapping matrix, the automation of updates of the matrix in response to changes in the extraction sources, and the time efficiency of the mapping. In this paper, we present a new solution for these problems. More precisely, we present a new dynamic mapping matrix (DMM), which is based on permutation matrices that are obtained by block-partitioning the full mapping matrix. We show that the DMM can be used for automated updates in response to schema changes, for parallel computation in near real-time, and for highly efficient compacting. For the solution, we draw on research into matrix partitioning (Quinn 2004) and dynamic networks (Haase et al 2021). The DMM has been implemented into an app called Message ETL (METL). METL is the key part of a new ETL streaming pipeline at EOS that conducts the transformation to a CDM. The ETL pipeline is based on Kafka-streams. It extracts data from more than 80 microservices with log-based Change Data Capture (CDC) with Debezium and loads the data to a DW and an ML platform. EOS is part of the Otto-Group, the second-largest e-commerce provider in Europe.
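A NumPy sketch of the block-partitioned mapping idea: the full mapping matrix is split into small per-block permutations, so each block can be updated and applied independently (and in parallel). The field names and block layout below are illustrative, not METL's actual schema.

```python
import numpy as np

def apply_dmm(message, blocks):
    """Apply a block-partitioned permutation mapping: each block is a small
    permutation acting on one slice of the source record. A schema change
    touching one source only requires replacing that block's permutation."""
    out = np.empty_like(message)
    for (start, stop), perm in blocks:
        out[start:stop] = message[start:stop][perm]  # permute one block
    return out

# a source record with 6 fields, mapped by two independent 3x3 permutations
msg = np.array(["id", "name", "city", "amount", "ccy", "date"])
blocks = [((0, 3), np.array([2, 0, 1])),   # block 1 reorders its fields
          ((3, 6), np.array([1, 2, 0]))]   # block 2 reorders its fields
print(apply_dmm(msg, blocks))
```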
2405.05787
Tianpeng Zhang
Tianpeng Zhang (1), Sekeun Kim (2), Jerome Charton (2), Haitong Ma (1), Kyungsang Kim (2), Na Li (1), Quanzheng Li (2)((1) SEAS, Harvard University (2) CAMCA, Massachusetts General Hospital and Harvard Medical School)
Autonomous Robotic Ultrasound System for Liver Follow-up Diagnosis: Pilot Phantom Study
null
null
null
null
cs.RO cs.CV cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper introduces a novel autonomous robot ultrasound (US) system targeting liver follow-up scans for outpatients in local communities. Given a computed tomography (CT) image with specific target regions of interest, the proposed system carries out the autonomous follow-up scan in three steps: (i) initial robot contact to surface, (ii) coordinate mapping between CT image and robot, and (iii) target US scan. Utilizing 3D US-CT registration and deep learning-based segmentation networks, we can achieve precise imaging of 3D hepatic veins, facilitating accurate coordinate mapping between CT and the robot. This enables the automatic localization of follow-up targets within the CT image, allowing the robot to navigate precisely to the target's surface. Evaluation of the ultrasound phantom confirms the quality of the US-CT registration and shows the robot reliably locates the targets in repeated trials. The proposed framework holds the potential to significantly reduce time and costs for healthcare providers, clinicians, and follow-up patients, thereby addressing the increasing healthcare burden associated with chronic disease in local communities.
[ { "created": "Thu, 9 May 2024 14:11:20 GMT", "version": "v1" } ]
2024-05-10
[ [ "Zhang", "Tianpeng", "" ], [ "Kim", "Sekeun", "" ], [ "Charton", "Jerome", "" ], [ "Ma", "Haitong", "" ], [ "Kim", "Kyungsang", "" ], [ "Li", "Na", "" ], [ "Li", "Quanzheng", "" ] ]
The paper introduces a novel autonomous robot ultrasound (US) system targeting liver follow-up scans for outpatients in local communities. Given a computed tomography (CT) image with specific target regions of interest, the proposed system carries out the autonomous follow-up scan in three steps: (i) initial robot contact to surface, (ii) coordinate mapping between CT image and robot, and (iii) target US scan. Utilizing 3D US-CT registration and deep learning-based segmentation networks, we can achieve precise imaging of 3D hepatic veins, facilitating accurate coordinate mapping between CT and the robot. This enables the automatic localization of follow-up targets within the CT image, allowing the robot to navigate precisely to the target's surface. Evaluation of the ultrasound phantom confirms the quality of the US-CT registration and shows the robot reliably locates the targets in repeated trials. The proposed framework holds the potential to significantly reduce time and costs for healthcare providers, clinicians, and follow-up patients, thereby addressing the increasing healthcare burden associated with chronic disease in local communities.
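The coordinate-mapping step reduces to applying the rigid transform produced by the 3D US-CT registration; a NumPy sketch, with an invented rotation and translation for illustration:

```python
import numpy as np

def ct_to_robot(p_ct, R, t):
    """Map a CT-frame target into the robot frame using the rigid
    transform (R, t) estimated by the US-CT registration stage."""
    return R @ p_ct + t

# illustrative transform: 90-degree rotation about z plus a translation
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([100.0, 50.0, 20.0])        # mm, hypothetical offset
target_ct = np.array([10.0, 0.0, 5.0])   # follow-up target located in CT
print(ct_to_robot(target_ct, R, t))      # where the probe should navigate
```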
cs/0210009
Maxim Makatchev
Maxim Makatchev
On the Cell-based Complexity of Recognition of Bounded Configurations by Finite Dynamic Cellular Automata
11 pages, 1 figure
null
null
null
cs.CC cs.CV
null
This paper studies the complexity of recognition of classes of bounded configurations by a generalization of conventional cellular automata (CA) -- finite dynamic cellular automata (FDCA). Inspired by CA-based models of biological and computer vision, this study attempts to derive the properties of a complexity measure and of the classes of input configurations that make it beneficial to realize the recognition via a two-layered automaton rather than a one-layered automaton. A formalized model of an image pattern recognition task is utilized to demonstrate that the derived conditions can be satisfied for a non-empty set of practical problems.
[ { "created": "Fri, 11 Oct 2002 19:55:16 GMT", "version": "v1" } ]
2007-05-23
[ [ "Makatchev", "Maxim", "" ] ]
This paper studies the complexity of recognition of classes of bounded configurations by a generalization of conventional cellular automata (CA) -- finite dynamic cellular automata (FDCA). Inspired by CA-based models of biological and computer vision, this study attempts to derive the properties of a complexity measure and of the classes of input configurations that make it beneficial to realize the recognition via a two-layered automaton rather than a one-layered automaton. A formalized model of an image pattern recognition task is utilized to demonstrate that the derived conditions can be satisfied for a non-empty set of practical problems.
1806.11509
Chengbo Yang
Chengbo Yang
An Efficient Dispatcher for Large Scale GraphProcessing on OpenCL-based FPGAs
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Highly parallel frameworks have proved very suitable for graph processing, and various works optimize their implementation on FPGAs, a pipeline-parallel device. The key to exploiting the parallel performance of FPGAs is to process graph data in a pipelined fashion and to use on-chip memory to obtain the necessary locality. This paper proposes a modular graph processing framework that addresses the whole execution procedure, whose stages exhibit very different degrees of parallelism. The framework makes three contributions. First, the combination of vertex-centric and edge-centric processing can be adjusted during execution to accommodate both top-down and bottom-up algorithms. Second, to suit a pipeline-parallel accelerator with finite on-chip memory, we introduce the edge-block, a block of edges sharing vertices, which organizes on-chip memory use by grouping edges and streaming each block, yielding a streaming access pattern amenable to pipeline-parallel processing. Third, based on an analysis of the block structure of natural graphs and of the execution characteristics of graph processing, we design a novel conversion dispatcher that switches the processing module at the corresponding exchange point.
[ { "created": "Sun, 3 Jun 2018 11:14:38 GMT", "version": "v1" } ]
2018-07-02
[ [ "Yang", "Chengbo", "" ] ]
Highly parallel frameworks have proved very suitable for graph processing, and various works optimize their implementation on FPGAs, a pipeline-parallel device. The key to exploiting the parallel performance of FPGAs is to process graph data in a pipelined fashion and to use on-chip memory to obtain the necessary locality. This paper proposes a modular graph processing framework that addresses the whole execution procedure, whose stages exhibit very different degrees of parallelism. The framework makes three contributions. First, the combination of vertex-centric and edge-centric processing can be adjusted during execution to accommodate both top-down and bottom-up algorithms. Second, to suit a pipeline-parallel accelerator with finite on-chip memory, we introduce the edge-block, a block of edges sharing vertices, which organizes on-chip memory use by grouping edges and streaming each block, yielding a streaming access pattern amenable to pipeline-parallel processing. Third, based on an analysis of the block structure of natural graphs and of the execution characteristics of graph processing, we design a novel conversion dispatcher that switches the processing module at the corresponding exchange point.
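A toy Python sketch of the edge-block idea: group edges by source-vertex range so that each block's vertex data can stay resident in on-chip memory while the block's edges are streamed sequentially. The block sizing and layout are illustrative, not the paper's hardware design.

```python
from collections import defaultdict

def build_edge_blocks(edges, block_size):
    """Group edges by source-vertex range so each block touches only a
    bounded set of vertices (the set a finite on-chip memory can hold)."""
    blocks = defaultdict(list)
    for src, dst in edges:
        blocks[src // block_size].append((src, dst))
    return [blocks[k] for k in sorted(blocks)]

def stream_blocks(blocks, process_edge):
    for block in blocks:            # one block's vertex range stays on chip
        for edge in block:
            process_edge(edge)      # sequential, pipeline-friendly access

edges = [(0, 3), (5, 1), (2, 7), (6, 2), (1, 4)]
for b in build_edge_blocks(edges, block_size=4):
    print(b)
```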
1510.07357
Bogdan Chlebus
Bogdan S. Chlebus and Dariusz R. Kowalski and Shailesh Vaya
Distributed Bare-Bones Communication in Wireless Networks
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider wireless networks operating under the SINR model of interference. Nodes have limited individual knowledge and capabilities: they do not know their positions in a coordinate system in the plane, they do not know their neighborhoods, they do not know the size of the network $n$, and they cannot sense collisions resulting from simultaneous transmissions by at least two neighbors. Each node is equipped with a unique integer name, where $N$ is an upper bound on the range of names. We refer to a subnetwork induced by a diameter-preserving dominating set of nodes as a backbone. Let $\Delta$ denote the maximum number of nodes that can successfully receive a message transmitted by a node when no other nodes transmit concurrently. We study distributed algorithms for communication problems in three settings. In the single-node-start case, when one node starts an execution and other nodes are awoken by receiving messages from already awoken nodes, we present a randomized broadcast algorithm that wakes up all nodes in $O(n \log^2 N)$ rounds with high probability. For the synchronized-start case, when all nodes start an execution simultaneously, we give a randomized algorithm computing a backbone in $O(\Delta\log^{7} N)$ rounds with high probability. In the partly-coordinated-start case, when a number of nodes start an execution together and other nodes are awoken by receiving messages from the already awoken nodes, we develop an algorithm that creates a backbone in time $O(n\log^2 N +\Delta\log^{7} N)$ with high probability.
[ { "created": "Mon, 26 Oct 2015 02:56:40 GMT", "version": "v1" }, { "created": "Sat, 28 Jul 2018 17:34:10 GMT", "version": "v2" }, { "created": "Sun, 13 Jun 2021 17:57:49 GMT", "version": "v3" } ]
2021-06-15
[ [ "Chlebus", "Bogdan S.", "" ], [ "Kowalski", "Dariusz R.", "" ], [ "Vaya", "Shailesh", "" ] ]
We consider wireless networks operating under the SINR model of interference. Nodes have limited individual knowledge and capabilities: they do not know their positions in a coordinate system in the plane, they do not know their neighborhoods, they do not know the size of the network $n$, and they cannot sense collisions resulting from simultaneous transmissions by at least two neighbors. Each node is equipped with a unique integer name, where $N$ is an upper bound on the range of names. We refer to a subnetwork induced by a diameter-preserving dominating set of nodes as a backbone. Let $\Delta$ denote the maximum number of nodes that can successfully receive a message transmitted by a node when no other nodes transmit concurrently. We study distributed algorithms for communication problems in three settings. In the single-node-start case, when one node starts an execution and other nodes are awoken by receiving messages from already awoken nodes, we present a randomized broadcast algorithm that wakes up all nodes in $O(n \log^2 N)$ rounds with high probability. For the synchronized-start case, when all nodes start an execution simultaneously, we give a randomized algorithm computing a backbone in $O(\Delta\log^{7} N)$ rounds with high probability. In the partly-coordinated-start case, when a number of nodes start an execution together and other nodes are awoken by receiving messages from the already awoken nodes, we develop an algorithm that creates a backbone in time $O(n\log^2 N +\Delta\log^{7} N)$ with high probability.
2206.06022
Juan G\'omez-Luna
Juan G\'omez-Luna, Yuxin Guo, Sylvan Brocard, Julien Legriel, Remy Cimadomo, Geraldo F. Oliveira, Gagandeep Singh, Onur Mutlu
Machine Learning Training on a Real Processing-in-Memory System
This extended abstract appears as an invited paper at the 2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)
null
null
null
cs.AR cs.LG
http://creativecommons.org/licenses/by/4.0/
Training machine learning algorithms is a computationally intensive process, which is frequently memory-bound due to repeatedly accessing large training datasets. As a result, processor-centric systems (e.g., CPU, GPU) suffer from costly data movement between memory units and processing units, which consumes large amounts of energy and execution cycles. Memory-centric computing systems, i.e., computing systems with processing-in-memory (PIM) capabilities, can alleviate this data movement bottleneck. Our goal is to understand the potential of modern general-purpose PIM architectures to accelerate machine learning training. To do so, we (1) implement several representative classic machine learning algorithms (namely, linear regression, logistic regression, decision tree, K-means clustering) on a real-world general-purpose PIM architecture, (2) characterize them in terms of accuracy, performance and scaling, and (3) compare to their counterpart implementations on CPU and GPU. Our experimental evaluation on a memory-centric computing system with more than 2500 PIM cores shows that general-purpose PIM architectures can greatly accelerate memory-bound machine learning workloads, when the necessary operations and datatypes are natively supported by PIM hardware. To our knowledge, our work is the first one to evaluate training of machine learning algorithms on a real-world general-purpose PIM architecture.
[ { "created": "Mon, 13 Jun 2022 10:20:23 GMT", "version": "v1" }, { "created": "Wed, 3 Aug 2022 15:21:34 GMT", "version": "v2" } ]
2022-08-04
[ [ "Gómez-Luna", "Juan", "" ], [ "Guo", "Yuxin", "" ], [ "Brocard", "Sylvan", "" ], [ "Legriel", "Julien", "" ], [ "Cimadomo", "Remy", "" ], [ "Oliveira", "Geraldo F.", "" ], [ "Singh", "Gagandeep", "" ], [ "Mutlu", "Onur", "" ] ]
Training machine learning algorithms is a computationally intensive process, which is frequently memory-bound due to repeatedly accessing large training datasets. As a result, processor-centric systems (e.g., CPU, GPU) suffer from costly data movement between memory units and processing units, which consumes large amounts of energy and execution cycles. Memory-centric computing systems, i.e., computing systems with processing-in-memory (PIM) capabilities, can alleviate this data movement bottleneck. Our goal is to understand the potential of modern general-purpose PIM architectures to accelerate machine learning training. To do so, we (1) implement several representative classic machine learning algorithms (namely, linear regression, logistic regression, decision tree, K-means clustering) on a real-world general-purpose PIM architecture, (2) characterize them in terms of accuracy, performance and scaling, and (3) compare to their counterpart implementations on CPU and GPU. Our experimental evaluation on a memory-centric computing system with more than 2500 PIM cores shows that general-purpose PIM architectures can greatly accelerate memory-bound machine learning workloads, when the necessary operations and datatypes are natively supported by PIM hardware. To our knowledge, our work is the first one to evaluate training of machine learning algorithms on a real-world general-purpose PIM architecture.
1601.00019
Hyoungju Ji
Hyoungju Ji, Younsun Kim, Juho Lee, Eko Onggosanusi, Younghan Nam, Jianzhong Zhang, Byungju Lee, Byonghyo Shim
Overview of Full-Dimension MIMO in LTE-Advanced Pro
null
IEEE Communications Magazine ( Volume: 55, Issue: 2, February 2017 )
10.1109/MCOM.2016.1500743RP
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple-input multiple-output (MIMO) systems with a large number of base-station antennas, often called massive MIMO, have received much attention in academia and industry as a means to improve the spectral efficiency, energy efficiency, and processing complexity of next-generation cellular systems. The mobile communication industry has initiated a feasibility study of massive MIMO systems to meet the increasing demand of future wireless systems. Field trials of proof-of-concept systems have demonstrated the potential gain of Full-Dimension MIMO (FD-MIMO), the official name for the MIMO enhancement in the 3rd Generation Partnership Project (3GPP). 3GPP initiated standardization activity for the seamless integration of this technology into current 4G LTE systems. In this article, we provide an overview of the FD-MIMO system, with emphasis on the discussion and debate conducted during the standardization process of Release 13. We present key features of FD-MIMO systems, a summary of the major issues for standardization and practical system design, and performance evaluations for typical FD-MIMO scenarios.
[ { "created": "Thu, 31 Dec 2015 22:07:41 GMT", "version": "v1" }, { "created": "Fri, 6 May 2016 07:36:20 GMT", "version": "v2" }, { "created": "Mon, 8 Aug 2016 04:51:04 GMT", "version": "v3" }, { "created": "Wed, 10 Aug 2016 04:27:08 GMT", "version": "v4" } ]
2017-04-20
[ [ "Ji", "Hyoungju", "" ], [ "Kim", "Younsun", "" ], [ "Lee", "Juho", "" ], [ "Onggosanusi", "Eko", "" ], [ "Nam", "Younghan", "" ], [ "Zhang", "Jianzhong", "" ], [ "Lee", "Byungju", "" ], [ "Shim", "Byonghyo", "" ] ]
Multiple-input multiple-output (MIMO) systems with a large number of base-station antennas, often called massive MIMO, have received much attention in academia and industry as a means to improve the spectral efficiency, energy efficiency, and processing complexity of next-generation cellular systems. The mobile communication industry has initiated a feasibility study of massive MIMO systems to meet the increasing demand of future wireless systems. Field trials of proof-of-concept systems have demonstrated the potential gain of Full-Dimension MIMO (FD-MIMO), the official name for the MIMO enhancement in the 3rd Generation Partnership Project (3GPP). 3GPP initiated standardization activity for the seamless integration of this technology into current 4G LTE systems. In this article, we provide an overview of the FD-MIMO system, with emphasis on the discussion and debate conducted during the standardization process of Release 13. We present key features of FD-MIMO systems, a summary of the major issues for standardization and practical system design, and performance evaluations for typical FD-MIMO scenarios.
2406.18629
Xin Lai
Xin Lai, Zhuotao Tian, Yukang Chen, Senqiao Yang, Xiangru Peng, Jiaya Jia
Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs
Code, data, and models are available at https://github.com/dvlab-research/Step-DPO
null
null
null
cs.LG cs.AI cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Mathematical reasoning presents a significant challenge for Large Language Models (LLMs) due to the extensive and precise chain of reasoning required for accuracy. Ensuring the correctness of each reasoning step is critical. To address this, we aim to enhance the robustness and factuality of LLMs by learning from human feedback. However, Direct Preference Optimization (DPO) has shown limited benefits for long-chain mathematical reasoning, as models employing DPO struggle to identify detailed errors in incorrect answers. This limitation stems from a lack of fine-grained process supervision. We propose a simple, effective, and data-efficient method called Step-DPO, which treats individual reasoning steps as units for preference optimization rather than evaluating answers holistically. Additionally, we have developed a data construction pipeline for Step-DPO, enabling the creation of a high-quality dataset containing 10K step-wise preference pairs. We also observe that in DPO, self-generated data is more effective than data generated by humans or GPT-4, due to the latter's out-of-distribution nature. Our findings demonstrate that as few as 10K preference data pairs and fewer than 500 Step-DPO training steps can yield a nearly 3% gain in accuracy on MATH for models with over 70B parameters. Notably, Step-DPO, when applied to Qwen2-72B-Instruct, achieves scores of 70.8% and 94.0% on the test sets of MATH and GSM8K, respectively, surpassing a series of closed-source models, including GPT-4-1106, Claude-3-Opus, and Gemini-1.5-Pro. Our code, data, and models are available at https://github.com/dvlab-research/Step-DPO.
[ { "created": "Wed, 26 Jun 2024 17:43:06 GMT", "version": "v1" } ]
2024-06-28
[ [ "Lai", "Xin", "" ], [ "Tian", "Zhuotao", "" ], [ "Chen", "Yukang", "" ], [ "Yang", "Senqiao", "" ], [ "Peng", "Xiangru", "" ], [ "Jia", "Jiaya", "" ] ]
Mathematical reasoning presents a significant challenge for Large Language Models (LLMs) due to the extensive and precise chain of reasoning required for accuracy. Ensuring the correctness of each reasoning step is critical. To address this, we aim to enhance the robustness and factuality of LLMs by learning from human feedback. However, Direct Preference Optimization (DPO) has shown limited benefits for long-chain mathematical reasoning, as models employing DPO struggle to identify detailed errors in incorrect answers. This limitation stems from a lack of fine-grained process supervision. We propose a simple, effective, and data-efficient method called Step-DPO, which treats individual reasoning steps as units for preference optimization rather than evaluating answers holistically. Additionally, we have developed a data construction pipeline for Step-DPO, enabling the creation of a high-quality dataset containing 10K step-wise preference pairs. We also observe that in DPO, self-generated data is more effective than data generated by humans or GPT-4, due to the latter's out-of-distribution nature. Our findings demonstrate that as few as 10K preference data pairs and fewer than 500 Step-DPO training steps can yield a nearly 3% gain in accuracy on MATH for models with over 70B parameters. Notably, Step-DPO, when applied to Qwen2-72B-Instruct, achieves scores of 70.8% and 94.0% on the test sets of MATH and GSM8K, respectively, surpassing a series of closed-source models, including GPT-4-1106, Claude-3-Opus, and Gemini-1.5-Pro. Our code, data, and models are available at https://github.com/dvlab-research/Step-DPO.
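The core change from DPO to Step-DPO is the granularity of the preference pair: the log-probabilities compared are those of a single preferred vs. dispreferred reasoning step given the same prefix, not whole answers. A sketch of the resulting loss with illustrative numbers:

```python
import math

def step_dpo_loss(logp_win, logp_lose, ref_win, ref_lose, beta=0.1):
    """DPO-style loss applied at step granularity: logp_* are policy
    log-probs and ref_* are frozen reference log-probs for a preferred /
    dispreferred reasoning step under the same problem-plus-prefix."""
    margin = beta * ((logp_win - ref_win) - (logp_lose - ref_lose))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# the policy prefers the correct step slightly more than the reference does,
# so the loss is a bit below log(2) ~ 0.693
print(step_dpo_loss(logp_win=-5.0, logp_lose=-7.0,
                    ref_win=-5.5, ref_lose=-6.5))
```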
2201.03339
Jinqi Huang
Jinqi Huang, Spyros Stathopoulos, Alex Serb, and Themis Prodromakis
NeuroPack: An Algorithm-level Python-based Simulator for Memristor-empowered Neuro-inspired Computing
null
null
10.3389/fnano.2022.851856
null
cs.ET cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Emerging two-terminal nanoscale memory devices, known as memristors, have over the past decade demonstrated great potential for implementing energy-efficient neuro-inspired computing architectures. As a result, a wide range of technologies have been developed that, in turn, are described via distinct empirical models. This diversity of technologies requires the establishment of versatile tools that can enable designers to translate memristors' attributes into novel neuro-inspired topologies. In this paper, we present NeuroPack, a modular, algorithm-level Python-based simulation platform that can support studies of memristor neuro-inspired architectures for performing online learning or offline classification. The NeuroPack environment is designed with versatility being central, allowing the user to choose from a variety of neuron models, learning rules, and memristor models. Its hierarchical structure empowers NeuroPack to predict any memristor state changes and the corresponding neural network behavior across a variety of design decisions and user parameter options. The use of NeuroPack is demonstrated herein via an application example of performing handwritten digit classification with the MNIST dataset and an existing empirical model for metal-oxide memristors.
[ { "created": "Mon, 10 Jan 2022 13:35:25 GMT", "version": "v1" }, { "created": "Wed, 16 Feb 2022 16:05:47 GMT", "version": "v2" } ]
2022-07-29
[ [ "Huang", "Jinqi", "" ], [ "Stathopoulos", "Spyros", "" ], [ "Serb", "Alex", "" ], [ "Prodromakis", "Themis", "" ] ]
Emerging two-terminal nanoscale memory devices, known as memristors, have over the past decade demonstrated great potential for implementing energy-efficient neuro-inspired computing architectures. As a result, a wide range of technologies have been developed that, in turn, are described via distinct empirical models. This diversity of technologies requires the establishment of versatile tools that can enable designers to translate memristors' attributes into novel neuro-inspired topologies. In this paper, we present NeuroPack, a modular, algorithm-level Python-based simulation platform that can support studies of memristor neuro-inspired architectures for performing online learning or offline classification. The NeuroPack environment is designed with versatility being central, allowing the user to choose from a variety of neuron models, learning rules, and memristor models. Its hierarchical structure empowers NeuroPack to predict any memristor state changes and the corresponding neural network behavior across a variety of design decisions and user parameter options. The use of NeuroPack is demonstrated herein via an application example of performing handwritten digit classification with the MNIST dataset and an existing empirical model for metal-oxide memristors.
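A toy Python version of the kind of loop such a simulator runs: a leaky integrate-and-fire neuron driven through one memristive synapse whose conductance is nudged on each output spike. The neuron model, learning rule, and memristor model here are deliberately simplified stand-ins, not NeuroPack's actual models.

```python
import numpy as np

def lif_with_memristor(inputs, g=0.5, v_th=1.0, leak=0.9, dg=0.02):
    """Leaky integrate-and-fire neuron with a single memristive synapse:
    the membrane potential integrates g * input with leak; each output
    spike potentiates the memristor conductance g (clipped at 1.0)."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + g * x            # integrate weighted input
        if v >= v_th:                   # threshold crossing -> spike
            spikes.append(1)
            v = 0.0                     # reset membrane potential
            g = min(1.0, g + dg)        # potentiate the synapse
        else:
            spikes.append(0)
    return spikes, g

spikes, g = lif_with_memristor(np.ones(20))
print(sum(spikes), round(g, 3))  # spike count and updated conductance
```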
1203.5188
Martin Monperrus
Stefan Hen{\ss}, Martin Monperrus (INRIA Lille - Nord Europe), Mira Mezini
Semi-Automatically Extracting FAQs to Improve Accessibility of Software Development Knowledge
ICSE - 34th International Conference on Software Engineering (2012)
ICSE - 34th International Conference on Software Engineering, 2012
10.1109/ICSE.2012.6227139
null
cs.SE cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Frequently asked questions (FAQs) are a popular way to document software development knowledge. As creating such documents is expensive, this paper presents an approach for automatically extracting FAQs from sources of software development discussion, such as mailing lists and Internet forums, by combining techniques of text mining and natural language processing. We apply the approach to popular mailing lists and carry out a survey among software developers to show that it is able to extract high-quality FAQs that may be further improved by experts.
[ { "created": "Fri, 23 Mar 2012 07:13:06 GMT", "version": "v1" } ]
2018-07-06
[ [ "Henß", "Stefan", "", "INRIA Lille - Nord Europe" ], [ "Monperrus", "Martin", "", "INRIA Lille - Nord Europe" ], [ "Mezini", "Mira", "" ] ]
Frequently asked questions (FAQs) are a popular way to document software development knowledge. As creating such documents is expensive, this paper presents an approach for automatically extracting FAQs from sources of software development discussion, such as mailing lists and Internet forums, by combining techniques of text mining and natural language processing. We apply the approach to popular mailing lists and carry out a survey among software developers to show that it is able to extract high-quality FAQs that may be further improved by experts.
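A toy Python sketch of the first stage of such a pipeline: pull question-like sentences out of mailing-list messages and rank them by frequency. A real system would add NLP normalization and cluster paraphrases; this only groups exact duplicates after light cleanup.

```python
import re
from collections import Counter

def extract_candidate_questions(messages):
    """Split each message into sentences, keep those ending in '?', and
    rank normalized questions by how often they recur across messages."""
    questions = []
    for msg in messages:
        for sent in re.split(r"(?<=[.?!])\s+", msg):
            if sent.strip().endswith("?"):
                questions.append(re.sub(r"\s+", " ", sent.strip().lower()))
    return Counter(questions).most_common()

posts = ["How do I set the classpath? It fails.",
         "how do I set the classpath?",
         "Why does the build break on Windows?"]
print(extract_candidate_questions(posts))  # recurring question ranks first
```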
1904.10159
Liyong Lin
Liyong Lin, Yuting Zhu, Rong Su
Synthesis of Covert Actuator Attackers for Free
The paper has been accepted for the journal Discrete Event Dynamic Systems
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we shall formulate and address a problem of covert actuator attacker synthesis for cyber-physical systems that are modelled by discrete-event systems. We assume the actuator attacker partially observes the execution of the closed-loop system and is able to modify each control command issued by the supervisor on a specified attackable subset of controllable events. We provide straightforward but in general exponential-time reductions, due to the use of the subset construction procedure, from the covert actuator attacker synthesis problems to the Ramadge-Wonham supervisor synthesis problems. It then follows that it is possible to use the many techniques and tools already developed for solving the supervisor synthesis problem to solve the covert actuator attacker synthesis problem for free. In particular, we show that, if the attacker cannot attack events unobservable to the supervisor, then the reductions can be carried out in polynomial time. We also provide a brief discussion of some other conditions under which the exponential blowup in state size can be avoided. Finally, we show how the reduction-based synthesis procedure can be extended for the synthesis of successful covert actuator attackers that also eavesdrop on the control commands issued by the supervisor.
[ { "created": "Tue, 23 Apr 2019 05:41:31 GMT", "version": "v1" }, { "created": "Sat, 20 Mar 2021 15:06:47 GMT", "version": "v2" } ]
2021-03-23
[ [ "Lin", "Liyong", "" ], [ "Zhu", "Yuting", "" ], [ "Su", "Rong", "" ] ]
In this paper, we shall formulate and address a problem of covert actuator attacker synthesis for cyber-physical systems that are modelled by discrete-event systems. We assume the actuator attacker partially observes the execution of the closed-loop system and is able to modify each control command issued by the supervisor on a specified attackable subset of controllable events. We provide straightforward but in general exponential-time reductions, due to the use of the subset construction procedure, from the covert actuator attacker synthesis problems to the Ramadge-Wonham supervisor synthesis problems. It then follows that it is possible to use the many techniques and tools already developed for solving the supervisor synthesis problem to solve the covert actuator attacker synthesis problem for free. In particular, we show that, if the attacker cannot attack events unobservable to the supervisor, then the reductions can be carried out in polynomial time. We also provide a brief discussion of some other conditions under which the exponential blowup in state size can be avoided. Finally, we show how the reduction-based synthesis procedure can be extended for the synthesis of successful covert actuator attackers that also eavesdrop on the control commands issued by the supervisor.
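The source of the exponential blowup is the classic subset construction; a self-contained Python sketch of that procedure on a small nondeterministic transition structure (the mapping from the attacker-synthesis reduction to this determinization step is simplified here):

```python
from collections import deque

def subset_construction(alphabet, delta, initial):
    """Determinize a nondeterministic transition structure: DFA states are
    subsets of original states; delta maps (state, event) to a set of
    successors. Worst case, exponentially many subsets are reachable."""
    start = frozenset(initial)
    dfa, queue = {}, deque([start])
    while queue:
        current = queue.popleft()
        if current in dfa:
            continue
        dfa[current] = {}
        for e in alphabet:
            nxt = frozenset(s2 for s in current for s2 in delta.get((s, e), ()))
            if nxt:
                dfa[current][e] = nxt
                queue.append(nxt)
    return dfa

delta = {(0, "a"): {0, 1}, (1, "b"): {2}, (0, "b"): {0}}
dfa = subset_construction({"a", "b"}, delta, {0})
print(len(dfa))  # 3 reachable subset-states: {0}, {0,1}, {0,2}
```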