Dataset schema (field: type, value length or list length range):

id: string, 9 to 10 chars
submitter: string, 1 to 64 chars
authors: string, 4 to 20.7k chars
title: string, 4 to 246 chars
comments: string, 1 to 523 chars
journal-ref: string, 4 to 404 chars
doi: string, 11 to 153 chars
report-no: string, 2 to 254 chars
categories: string, 5 to 98 chars
license: string class, 9 distinct values
orig_abstract: string, 14 to 3.35k chars
versions: list, 1 to 60 entries
update_date: string, 10 chars
authors_parsed: list, 1 to 1.35k entries
abstract: string, 11 to 3.34k chars
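The column statistics above imply a fixed record layout. As a quick sanity check, a parsed record can be validated against that field list; a minimal sketch (the sample record here is a stand-in, not real data):

```python
# Field names taken from the schema above, in order.
FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def validate_record(record: dict) -> list[str]:
    """Return the names of schema fields missing from `record`."""
    return [f for f in FIELDS if f not in record]

record = {f: None for f in FIELDS}  # toy record with every field present
assert validate_record(record) == []
assert validate_record({"id": "1908.07899"}) == FIELDS[1:]
```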
id: 1908.07899
submitter: Tobias Hinz
authors: Marcus Soll, Tobias Hinz, Sven Magg, Stefan Wermter
title: Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples
comments: Published at the International Conference on Artificial Neural Networks (ICANN) 2019
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.CR cs.LG cs.NE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Adversarial examples are artificially modified input samples which lead to misclassifications while not being detectable by humans. These adversarial examples are a challenge for many tasks, such as image and text classification, especially as research shows that many adversarial examples are transferable between different classifiers. In this work, we evaluate the performance of a popular defensive strategy for adversarial examples called defensive distillation, which can be successful in hardening neural networks against adversarial examples in the image domain. However, instead of applying defensive distillation to networks for image classification, we examine, for the first time, its performance on text classification tasks and also evaluate its effect on the transferability of adversarial text examples. Our results indicate that defensive distillation has only a minimal impact on text-classifying neural networks: it neither helps increase their robustness against adversarial examples nor prevents the transferability of adversarial examples between neural networks.
versions: [ { "created": "Wed, 21 Aug 2019 14:50:13 GMT", "version": "v1" } ]
update_date: 2019-08-22
authors_parsed: [ [ "Soll", "Marcus", "" ], [ "Hinz", "Tobias", "" ], [ "Magg", "Sven", "" ], [ "Wermter", "Stefan", "" ] ]
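The defensive distillation evaluated in this record rests on training with a temperature-softened softmax; a minimal numpy sketch of that ingredient (illustrative only, not the authors' code):

```python
import numpy as np

def softmax_T(logits, T=1.0):
    # Softmax at temperature T; T > 1 softens the distribution,
    # which is the core ingredient of defensive distillation.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [8.0, 2.0, 1.0]
hard = softmax_T(logits, T=1.0)
soft = softmax_T(logits, T=20.0)
# Higher temperature -> less confident teacher labels for the student.
assert hard.max() > soft.max()
assert abs(soft.sum() - 1.0) < 1e-9
```

The distilled student is then trained on such softened teacher outputs instead of one-hot labels.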
id: 2204.03738
submitter: Felipe Oviedo
authors: Felipe Oviedo, Srinivas Vinnakota, Eugene Seleznev, Hemant Malhotra, Saqib Shaikh, Juan Lavista Ferres
title: BankNote-Net: Open dataset for assistive universal currency recognition
comments: Pre-print
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.HC cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Millions of people around the world have low or no vision. Assistive software applications have been developed for a variety of day-to-day tasks, including optical character recognition, scene identification, person recognition, and currency recognition. This last task, the recognition of banknotes from different denominations, has been addressed by the use of computer vision models for image recognition. However, the datasets and models available for this task are limited, both in terms of dataset size and in the variety of currencies covered. In this work, we collect a total of 24,826 images of banknotes in a variety of assistive settings, spanning 17 currencies and 112 denominations. Using supervised contrastive learning, we develop a machine learning model for universal currency recognition. This model learns compliant embeddings of banknote images in a variety of contexts, which can be shared publicly (as a compressed vector representation), and can be used to train and test specialized downstream models for any currency, including those not covered by our dataset or for which only a few real images per denomination are available (few-shot learning). We deploy a variation of this model for public use in the latest version of the Seeing AI app developed by Microsoft. We share our encoder model and the embeddings as an open dataset in our BankNote-Net repository.
versions: [ { "created": "Thu, 7 Apr 2022 21:16:54 GMT", "version": "v1" } ]
update_date: 2022-04-11
authors_parsed: [ [ "Oviedo", "Felipe", "" ], [ "Vinnakota", "Srinivas", "" ], [ "Seleznev", "Eugene", "" ], [ "Malhotra", "Hemant", "" ], [ "Shaikh", "Saqib", "" ], [ "Ferres", "Juan Lavista", "" ] ]
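The supervised contrastive objective mentioned in this record can be sketched in plain numpy (a simplified SupCon-style loss over normalized embeddings; not the BankNote-Net training code):

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized embeddings z (n x d):
    each anchor pulls same-label samples together against all others."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(labels)
    total = 0.0
    for i in range(n):
        pos = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not pos:
            continue  # anchors without positives contribute nothing
        denom = sum(np.exp(sim[i, a]) for a in range(n) if a != i)
        total += -np.mean([np.log(np.exp(sim[i, p]) / denom) for p in pos])
    return total / n

# Two perfectly collapsed classes: 4 copies of e0 and 4 copies of e1.
z = np.array([[1.0, 0.0]] * 4 + [[0.0, 1.0]] * 4)
labels = [0] * 4 + [1] * 4
loss = supcon_loss(z, labels)
# Closed form for this configuration: log(3 + 4*exp(-1/tau)).
assert abs(loss - np.log(3 + 4 * np.exp(-10))) < 1e-9
```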
id: 2004.07610
submitter: Roshan Singh
authors: Roshan Singh, Pranav Kumar Singh
title: Connecting the Dots of COVID-19 Transmissions in India
comments: Withdrawing for improving research scope
journal-ref: null
doi: null
report-no: null
categories: cs.SI physics.soc-ph q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Having witnessed its first case in late January 2020, India has seen a sharp rise in the number of positive COVID-19 cases. 34 States/UTs of the country have been affected by the pandemic to date. In this work, we study the progress of the COVID-19 pandemic in India. We aim to create transmission network visualizations of COVID-19 in India and perform analysis upon them. Using the transmission networks obtained, we attempt to find the possible Super Spreader Individuals and Super Spreader Events (SSE) responsible for the outbreak in their respective regions. We discuss the potential of network analysis in mitigating the further spread of the disease. This is one of the initial studies of the outbreak in India and the first attempt to study the pandemic in the country from a transmission network perspective.
versions: [ { "created": "Thu, 16 Apr 2020 11:40:04 GMT", "version": "v1" }, { "created": "Sat, 25 Jul 2020 13:18:39 GMT", "version": "v2" } ]
update_date: 2020-07-28
authors_parsed: [ [ "Singh", "Roshan", "" ], [ "Singh", "Pranav Kumar", "" ] ]
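A toy illustration of the super-spreader idea in this record: given a transmission edge list, the node with the highest out-degree is a natural candidate (hypothetical data, not the paper's dataset):

```python
from collections import Counter

# (infector, infectee) pairs; illustrative data only.
edges = [
    ("P1", "P2"), ("P1", "P3"), ("P1", "P4"),
    ("P2", "P5"), ("P4", "P6"), ("P1", "P7"),
]
# Out-degree = number of secondary cases attributed to each individual.
out_degree = Counter(src for src, _ in edges)
super_spreader, secondary_cases = out_degree.most_common(1)[0]
assert super_spreader == "P1" and secondary_cases == 4
```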
id: 2408.01163
submitter: Alexander Olza
authors: Alexander Olza, David Soto, Roberto Santana
title: Domain Adaptation-Enhanced Searchlight: Enabling brain decoding from visual perception to mental imagery
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG q-bio.NC
license: http://creativecommons.org/licenses/by-sa/4.0/
orig_abstract: In cognitive neuroscience and brain-computer interface research, accurately predicting imagined stimuli is crucial. This study investigates the effectiveness of Domain Adaptation (DA) in enhancing imagery prediction using primarily visual data from fMRI scans of 18 subjects. Initially, we train a baseline model on visual stimuli to predict imagined stimuli, utilizing data from 14 brain regions. We then develop several models to improve imagery prediction, comparing different DA methods. Our results demonstrate that DA significantly enhances imagery prediction, especially with the Regular Transfer approach. We then conduct a DA-enhanced searchlight analysis using Regular Transfer, followed by permutation-based statistical tests to identify brain regions where imagery decoding is consistently above chance across subjects. Our DA-enhanced searchlight predicts imagery contents in a highly distributed set of brain regions, including the visual cortex and the frontoparietal cortex, thereby outperforming standard cross-domain classification methods. The complete code and data for this paper have been made openly available for the use of the scientific community.
versions: [ { "created": "Fri, 2 Aug 2024 10:25:19 GMT", "version": "v1" } ]
update_date: 2024-08-05
authors_parsed: [ [ "Olza", "Alexander", "" ], [ "Soto", "David", "" ], [ "Santana", "Roberto", "" ] ]
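In its linear form, the Regular Transfer approach named in this record biases a target-domain fit toward source-domain weights; a hedged numpy sketch (assumed objective ||Xw - y||^2 + lam*||w - w_src||^2, not the authors' implementation):

```python
import numpy as np

def regular_transfer_ridge(X, y, w_src, lam):
    """Least squares with a penalty pulling the weights toward the
    source-domain solution w_src:
    minimize ||Xw - y||^2 + lam * ||w - w_src||^2.
    Closed form: w = (X^T X + lam*I)^-1 (X^T y + lam*w_src)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w_src)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
w_src = np.array([0.9, -1.8, 0.4])  # hypothetical source-domain weights
# A huge penalty collapses the solution onto the source weights...
assert np.allclose(regular_transfer_ridge(X, y, w_src, 1e8), w_src, atol=1e-3)
# ...and no penalty recovers the ordinary least-squares fit.
w0 = regular_transfer_ridge(X, y, w_src, 0.0)
assert np.allclose(w0, np.linalg.lstsq(X, y, rcond=None)[0], atol=1e-8)
```

The penalty strength lam interpolates between pure target-domain fitting and reusing the source model unchanged.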
id: 2007.03680
submitter: Ioannis Mavromatis Dr
authors: Ioannis Mavromatis, Robert J. Piechocki, Mahesh Sooriyabandara, Arjun Parekh
title: DRIVE: A Digital Network Oracle for Cooperative Intelligent Transportation Systems
comments: Accepted for publication at IEEE ISCC 2020
journal-ref: null
doi: 10.1109/ISCC50000.2020.9219683
report-no: null
categories: cs.NI cs.SY eess.SY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In a world where Artificial Intelligence revolutionizes inference, prediction and decision-making tasks, Digital Twins emerge as game-changing tools. A case in point is the development and optimization of Cooperative Intelligent Transportation Systems (C-ITSs): a confluence of cyber-physical digital infrastructure and (semi)automated mobility. Herein we introduce Digital Twin for self-dRiving Intelligent VEhicles (DRIVE). The developed framework tackles shortcomings of traditional vehicular and network simulators. It provides a flexible, modular, and scalable implementation to ensure large-scale, city-wide experimentation with a moderate computational cost. The defining feature of our Digital Twin is a unique architecture allowing for submission of sequential queries, to which the Digital Twin provides instantaneous responses with the "state of the world", and hence is an Oracle. With such bidirectional interaction with external intelligent agents and realistic mobility traces, DRIVE provides the environment for development, training and optimization of Machine Learning based C-ITS solutions.
versions: [ { "created": "Tue, 7 Jul 2020 09:34:09 GMT", "version": "v1" } ]
update_date: 2022-09-05
authors_parsed: [ [ "Mavromatis", "Ioannis", "" ], [ "Piechocki", "Robert J.", "" ], [ "Sooriyabandara", "Mahesh", "" ], [ "Parekh", "Arjun", "" ] ]
id: 2304.05895
submitter: Damien Dablain
authors: Damien A. Dablain and Nitesh V. Chawla
title: Towards Understanding How Data Augmentation Works with Imbalanced Data
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Data augmentation forms the cornerstone of many modern machine learning training pipelines; yet, the mechanisms by which it works are not clearly understood. Much of the research on data augmentation (DA) has focused on improving existing techniques, examining its regularization effects in the context of neural network over-fitting, or investigating its impact on features. Here, we undertake a holistic examination of the effect of DA on three different classifiers, convolutional neural networks, support vector machines, and logistic regression models, which are commonly used in supervised classification of imbalanced data. We support our examination with testing on three image and five tabular datasets. Our research indicates that DA, when applied to imbalanced data, produces substantial changes in model weights, support vectors and feature selection; even though it may only yield relatively modest changes to global metrics, such as balanced accuracy or F1 measure. We hypothesize that DA works by facilitating variances in data, so that machine learning models can associate changes in the data with labels. By diversifying the range of feature amplitudes that a model must recognize to predict a label, DA improves a model's capacity to generalize when learning with imbalanced data.
versions: [ { "created": "Wed, 12 Apr 2023 15:01:22 GMT", "version": "v1" } ]
update_date: 2023-04-13
authors_parsed: [ [ "Dablain", "Damien A.", "" ], [ "Chawla", "Nitesh V.", "" ] ]
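One simple instance of the DA schemes studied in this record is jitter-based oversampling of a minority class; a minimal sketch (illustrative, not the paper's augmentation pipeline):

```python
import numpy as np

def augment_minority(X_min, n_new, noise_scale=0.05, seed=0):
    """Oversample the minority class by resampling existing rows and
    adding small Gaussian noise, diversifying feature amplitudes."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_min), size=n_new)
    return X_min[idx] + noise_scale * rng.normal(size=(n_new, X_min.shape[1]))

X_min = np.array([[0.0, 1.0], [0.2, 0.8], [0.1, 0.9]])  # toy minority class
X_aug = augment_minority(X_min, n_new=10)
assert X_aug.shape == (10, 2)
```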
id: 2001.05581
submitter: Andreas Züfle
authors: Tobias Emrich, Hans-Peter Kriegel, Andreas Züfle, Peer Kröger, Matthias Renz
title: Complete and Sufficient Spatial Domination of Multidimensional Rectangles
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Rectangles are used to approximate objects, or sets of objects, in a plethora of applications, systems and index structures. Many tasks, such as nearest neighbor search and similarity ranking, require deciding whether objects in one rectangle A may, must, or must not be closer to objects in a second rectangle B than objects in a third rectangle R. It can be shown that minimum and maximum distances alone are often insufficient to decide this relation of "spatial domination". This spatial gem provides a necessary and sufficient decision criterion for spatial domination that can be computed efficiently even in higher-dimensional space. In addition, this spatial gem provides an example, pseudocode and an implementation in Python.
versions: [ { "created": "Wed, 15 Jan 2020 22:24:40 GMT", "version": "v1" } ]
update_date: 2020-01-17
authors_parsed: [ [ "Emrich", "Tobias", "" ], [ "Kriegel", "Hans-Peter", "" ], [ "Züfle", "Andreas", "" ], [ "Kröger", "Peer", "" ], [ "Renz", "Matthias", "" ] ]
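The min/max-distance test that this record says is often inconclusive can be sketched as follows for axis-aligned rectangles (one common formalization of the domination question; not the paper's criterion, which resolves more of the inconclusive cases):

```python
import math

def min_dist(a, b):
    """Minimum Euclidean distance between axis-aligned rectangles
    a = (lo, hi) and b = (lo, hi), each a pair of coordinate lists."""
    s = 0.0
    for al, ah, bl, bh in zip(a[0], a[1], b[0], b[1]):
        gap = max(0.0, bl - ah, al - bh)  # 0 if the intervals overlap
        s += gap * gap
    return math.sqrt(s)

def max_dist(a, b):
    """Maximum Euclidean distance between the two rectangles."""
    s = 0.0
    for al, ah, bl, bh in zip(a[0], a[1], b[0], b[1]):
        d = max(abs(ah - bl), abs(bh - al))
        s += d * d
    return math.sqrt(s)

def naive_domination(a, b, r):
    """'must' if every point of a is certainly closer to r than any
    point of b, 'cannot' for the reverse, else 'inconclusive'."""
    if max_dist(a, r) < min_dist(b, r):
        return "must"
    if min_dist(a, r) > max_dist(b, r):
        return "cannot"
    return "inconclusive"

A = ([0, 0], [1, 1])
B = ([10, 10], [11, 11])
R = ([0, 0], [2, 2])
assert naive_domination(A, B, R) == "must"
```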
id: 1306.5838
submitter: Elena Khramtcova
authors: Panagiotis Cheilaris, Elena Khramtcova, Evanthia Papadopoulou
title: Randomized incremental construction of the Hausdorff Voronoi diagram of non-crossing clusters
comments: This paper has been withdrawn by the author because the substantially updated version (improved results, major text revision) is now submitted (arXiv:1312.3904)
journal-ref: null
doi: null
report-no: null
categories: cs.CG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In the Hausdorff Voronoi diagram of a set of clusters of points in the plane, the distance between a point t and a cluster P is the maximum Euclidean distance between t and a point in P. This diagram has direct applications in VLSI design. We consider so-called "non-crossing" clusters. The complexity of the Hausdorff diagram of m such clusters is linear in the total number n of points in the convex hulls of all clusters. We present randomized incremental constructions for computing efficiently the diagram, improving considerably previous results. Our best complexity algorithm runs in expected time O((n + m(log log(n))^2)log^2(n)) and worst-case space O(n). We also provide a more practical algorithm whose expected running time is O((n + m log(n))log^2(n)) and expected space complexity is O(n). To achieve these bounds, we augment the randomized incremental paradigm for the construction of Voronoi diagrams with the ability to efficiently handle non-standard characteristics of generalized Voronoi diagrams, such as sites of non-constant complexity, sites that are not enclosed in their Voronoi regions, and empty Voronoi regions.
versions: [ { "created": "Tue, 25 Jun 2013 03:12:56 GMT", "version": "v1" }, { "created": "Mon, 16 Dec 2013 12:31:12 GMT", "version": "v2" } ]
update_date: 2013-12-17
authors_parsed: [ [ "Cheilaris", "Panagiotis", "" ], [ "Khramtcova", "Elena", "" ], [ "Papadopoulou", "Evanthia", "" ] ]
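The point-to-cluster distance underlying the Hausdorff Voronoi diagram in this record is simply the maximum point-to-point distance; a tiny sketch:

```python
import math

def cluster_dist(t, P):
    """Distance from point t to cluster P in the Hausdorff Voronoi
    sense: the maximum Euclidean distance from t to a point of P."""
    return max(math.dist(t, p) for p in P)

P = [(0.0, 0.0), (4.0, 0.0)]
assert cluster_dist((0.0, 0.0), P) == 4.0  # farthest point of P is (4, 0)
assert cluster_dist((2.0, 0.0), P) == 2.0  # midpoint: both ends at distance 2
```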
id: 2008.06471
submitter: Gal Metzer
authors: Gal Metzer, Rana Hanocka, Raja Giryes, Daniel Cohen-Or
title: Self-Sampling for Neural Point Cloud Consolidation
comments: TOG 2021
journal-ref: null
doi: null
report-no: null
categories: cs.GR cs.CV cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We introduce a novel technique for neural point cloud consolidation which learns from only the input point cloud. Unlike other point upsampling methods which analyze shapes via local patches, in this work, we learn from global subsets. We repeatedly self-sample the input point cloud with global subsets that are used to train a deep neural network. Specifically, we define source and target subsets according to the desired consolidation criteria (e.g., generating sharp points or points in sparse regions). The network learns a mapping from source to target subsets, and implicitly learns to consolidate the point cloud. During inference, the network is fed with random subsets of points from the input, which it displaces to synthesize a consolidated point set. We leverage the inductive bias of neural networks to eliminate noise and outliers, a notoriously difficult problem in point cloud consolidation. The shared weights of the network are optimized over the entire shape, learning non-local statistics and exploiting the recurrence of local-scale geometries. Specifically, the network encodes the distribution of the underlying shape surface within a fixed set of local kernels, which results in the best explanation of the underlying shape surface. We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
versions: [ { "created": "Fri, 14 Aug 2020 17:16:02 GMT", "version": "v1" }, { "created": "Thu, 11 Mar 2021 17:09:31 GMT", "version": "v2" }, { "created": "Fri, 13 May 2022 09:19:55 GMT", "version": "v3" } ]
update_date: 2022-05-16
authors_parsed: [ [ "Metzer", "Gal", "" ], [ "Hanocka", "Rana", "" ], [ "Giryes", "Raja", "" ], [ "Cohen-Or", "Daniel", "" ] ]
id: 1708.09540
submitter: Sukhdev Singh
authors: Ayush Sharma, Piyush Bajpai, Sukhdev Singh and Kiran Khatter
title: Virtual Reality: Blessings and Risk Assessment
comments: 22 pages and 1 table
journal-ref: null
doi: null
report-no: null
categories: cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Objectives: This paper presents an up-to-date overview of research performed in the Virtual Reality (VR) environment, ranging from definitions and VR's presence in various fields to existing market players and their projects in VR technology. Further, an attempt is made to gain insight into the psychological mechanisms underlying the experience of using a VR device. Methods: Our literature survey is based on research articles and on analysis of the projects of various companies and their findings for different areas of interest. Findings: In our literature survey we observed that recent advances in virtual reality enabling technologies have led to a variety of virtual devices that facilitate people's interaction with the digital world. In fact, in the past two decades researchers have tried to integrate reality and VR in the form of intuitive computer interfaces. Improvements: This has led to a variety of potential benefits of VR in many applications such as news, healthcare, entertainment, tourism, military and defence. However, despite the extensive research efforts in creating virtual system environments, VR has yet to become a part of normal daily life.
versions: [ { "created": "Thu, 31 Aug 2017 02:34:22 GMT", "version": "v1" } ]
update_date: 2017-09-01
authors_parsed: [ [ "Sharma", "Ayush", "" ], [ "Bajpai", "Piyush", "" ], [ "Singh", "Sukhdev", "" ], [ "Khatter", "Kiran", "" ] ]
id: 0905.1307
submitter: Michael Goodrich
authors: Michael T. Goodrich, Roberto Tamassia, Jasminka Hasic
title: An Efficient Dynamic and Distributed RSA Accumulator
comments: Expanded version of a paper appearing in the 5th International Information Security Conference (ISC)
journal-ref: null
doi: null
report-no: null
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We show how to use the RSA one-way accumulator to realize an efficient and dynamic authenticated dictionary, where untrusted directories provide cryptographically verifiable answers to membership queries on a set maintained by a trusted source. Our accumulator-based scheme for authenticated dictionaries supports efficient incremental updates of the underlying set by insertions and deletions of elements. Also, the user can optimally verify in constant time the authenticity of the answer provided by a directory with a simple and practical algorithm. We have also implemented this scheme and we give empirical results that can be used to determine the best strategy for systems implementation with respect to resources that are available. This work has applications to certificate revocation in public key infrastructure and end-to-end integrity of data collections published by third parties on the Internet.
versions: [ { "created": "Fri, 8 May 2009 17:49:57 GMT", "version": "v1" } ]
update_date: 2009-05-11
authors_parsed: [ [ "Goodrich", "Michael T.", "" ], [ "Tamassia", "Roberto", "" ], [ "Hasic", "Jasminka", "" ] ]
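The RSA accumulator in this record can be illustrated with toy numbers; this shows only the algebra (a real deployment uses a large RSA modulus and maps elements to prime representatives):

```python
# Toy RSA accumulator: the accumulator commits to a set, and a witness
# lets anyone verify membership of one element in constant time.
N = 61 * 53            # toy RSA modulus p*q (far too small in practice)
g = 3                  # public base
elements = [5, 7, 11]  # set members, represented as primes

# Accumulate all members: acc = g^(5*7*11) mod N.
acc = g
for x in elements:
    acc = pow(acc, x, N)

# Witness for 7: accumulate every member except 7.
wit = g
for x in elements:
    if x != 7:
        wit = pow(wit, x, N)

# Membership verification: wit^7 == acc (mod N).
assert pow(wit, 7, N) == acc
```

Insertions update the accumulator with one extra exponentiation; this is the efficiency the dynamic scheme in the record builds on.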
id: 2202.13114
submitter: Hoang Lam Nguyen
authors: Hoang Lam Nguyen, Lars Grunske
title: BeDivFuzz: Integrating Behavioral Diversity into Generator-based Fuzzing
comments: To appear in the proceedings of the 44th International Conference on Software Engineering (ICSE 2022)
journal-ref: null
doi: 10.1145/3510003.3510182
report-no: null
categories: cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: A popular metric to evaluate the performance of fuzzers is branch coverage. However, we argue that focusing solely on covering many different branches (i.e., the richness) is not sufficient since the majority of the covered branches may have been exercised only once, which does not inspire a high confidence in the reliability of the covered code. Instead, the distribution of the executed branches (i.e., the evenness) should also be considered. That is, behavioral diversity is only given if the generated inputs not only trigger many different branches, but also trigger them evenly often with diverse inputs. We introduce BeDivFuzz, a feedback-driven fuzzing technique for generator-based fuzzers. BeDivFuzz distinguishes between structure-preserving and structure-changing mutations in the space of syntactically valid inputs, and biases its mutation strategy towards validity and behavioral diversity based on the received program feedback. We have evaluated BeDivFuzz on Ant, Maven, Rhino, Closure, Nashorn, and Tomcat. The results show that BeDivFuzz achieves better behavioral diversity than the state of the art, measured by established biodiversity metrics, namely the Hill numbers, from the field of ecology.
versions: [ { "created": "Sat, 26 Feb 2022 11:03:35 GMT", "version": "v1" } ]
update_date: 2022-03-01
authors_parsed: [ [ "Nguyen", "Hoang Lam", "" ], [ "Grunske", "Lars", "" ] ]
2311.08552
Mahmoud Salem
Mahmoud G. Salem, Jiayu Ye, Chu-Cheng Lin, Frederick Liu
UT5: Pretraining Non autoregressive T5 with unrolled denoising
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent advances in Transformer-based Large Language Models have made great strides in natural language generation. However, to decode K tokens, an autoregressive model needs K sequential forward passes, which may be a performance bottleneck for large language models. Much non-autoregressive (NAR) research aims to address this sequentiality bottleneck, although many works have focused on dedicated architectures evaluated on supervised benchmarks. In this work, we study unsupervised pretraining for non-autoregressive T5 models via unrolled denoising and show its SoTA results in downstream generation tasks such as SQuAD question generation and XSum.
[ { "created": "Tue, 14 Nov 2023 21:28:10 GMT", "version": "v1" } ]
2023-11-16
[ [ "Salem", "Mahmoud G.", "" ], [ "Ye", "Jiayu", "" ], [ "Lin", "Chu-Cheng", "" ], [ "Liu", "Frederick", "" ] ]
Recent advances in Transformer-based Large Language Models have made great strides in natural language generation. However, to decode K tokens, an autoregressive model needs K sequential forward passes, which may be a performance bottleneck for large language models. Much non-autoregressive (NAR) research aims to address this sequentiality bottleneck, although many works have focused on dedicated architectures evaluated on supervised benchmarks. In this work, we study unsupervised pretraining for non-autoregressive T5 models via unrolled denoising and show its SoTA results in downstream generation tasks such as SQuAD question generation and XSum.
1706.08675
Dorien Herremans
Dorien Herremans, Ching-Hua Chuan
Proceedings of the First International Workshop on Deep Learning and Music
null
Proceedings of the First International Workshop on Deep Learning and Music, joint with IJCNN, Anchorage, US, May 17-18, 2017
10.13140/RG.2.2.22227.99364/1
null
cs.NE cs.LG cs.MM cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proceedings of the First International Workshop on Deep Learning and Music, joint with IJCNN, Anchorage, US, May 17-18, 2017
[ { "created": "Tue, 27 Jun 2017 05:28:06 GMT", "version": "v1" } ]
2017-06-28
[ [ "Herremans", "Dorien", "" ], [ "Chuan", "Ching-Hua", "" ] ]
Proceedings of the First International Workshop on Deep Learning and Music, joint with IJCNN, Anchorage, US, May 17-18, 2017
2210.07474
Xiaojian Ma
Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, Siyuan Huang
SQA3D: Situated Question Answering in 3D Scenes
ICLR 2023. First two authors contributed equally. Project website: https://sqa3d.github.io
null
null
null
cs.CV cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
We propose a new task to benchmark scene understanding of embodied agents: Situated Question Answering in 3D Scenes (SQA3D). Given a scene context (e.g., 3D scan), SQA3D requires the tested agent to first understand its situation (position, orientation, etc.) in the 3D scene as described by text, then reason about its surrounding environment and answer a question under that situation. Based upon 650 scenes from ScanNet, we provide a dataset centered around 6.8k unique situations, along with 20.4k descriptions and 33.4k diverse reasoning questions for these situations. These questions examine a wide spectrum of reasoning capabilities for an intelligent agent, ranging from spatial relation comprehension to commonsense understanding, navigation, and multi-hop reasoning. SQA3D imposes a significant challenge to current multi-modal, and especially 3D, reasoning models. We evaluate various state-of-the-art approaches and find that the best one only achieves an overall score of 47.20%, while amateur human participants can reach 90.06%. We believe SQA3D could facilitate future embodied AI research with stronger situation understanding and reasoning capability.
[ { "created": "Fri, 14 Oct 2022 02:52:26 GMT", "version": "v1" }, { "created": "Sat, 22 Oct 2022 15:25:26 GMT", "version": "v2" }, { "created": "Sat, 11 Feb 2023 01:57:41 GMT", "version": "v3" }, { "created": "Wed, 22 Feb 2023 08:25:24 GMT", "version": "v4" }, { "created": "Wed, 12 Apr 2023 20:05:41 GMT", "version": "v5" } ]
2023-04-14
[ [ "Ma", "Xiaojian", "" ], [ "Yong", "Silong", "" ], [ "Zheng", "Zilong", "" ], [ "Li", "Qing", "" ], [ "Liang", "Yitao", "" ], [ "Zhu", "Song-Chun", "" ], [ "Huang", "Siyuan", "" ] ]
We propose a new task to benchmark scene understanding of embodied agents: Situated Question Answering in 3D Scenes (SQA3D). Given a scene context (e.g., 3D scan), SQA3D requires the tested agent to first understand its situation (position, orientation, etc.) in the 3D scene as described by text, then reason about its surrounding environment and answer a question under that situation. Based upon 650 scenes from ScanNet, we provide a dataset centered around 6.8k unique situations, along with 20.4k descriptions and 33.4k diverse reasoning questions for these situations. These questions examine a wide spectrum of reasoning capabilities for an intelligent agent, ranging from spatial relation comprehension to commonsense understanding, navigation, and multi-hop reasoning. SQA3D imposes a significant challenge to current multi-modal, and especially 3D, reasoning models. We evaluate various state-of-the-art approaches and find that the best one only achieves an overall score of 47.20%, while amateur human participants can reach 90.06%. We believe SQA3D could facilitate future embodied AI research with stronger situation understanding and reasoning capability.
2308.02874
Zijie Wu
Zijie Wu, Yaonan Wang, Mingtao Feng, He Xie, Ajmal Mian
Sketch and Text Guided Diffusion Model for Colored Point Cloud Generation
Accepted by ICCV 2023
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
Diffusion probabilistic models have achieved remarkable success in text-guided image generation. However, generating 3D shapes is still challenging due to the lack of sufficient data containing 3D models along with their descriptions. Moreover, text-based descriptions of 3D shapes are inherently ambiguous and lack details. In this paper, we propose a sketch and text guided probabilistic diffusion model for colored point cloud generation that conditions the denoising process jointly with a hand-drawn sketch of the object and its textual description. We incrementally diffuse the point coordinates and color values in a joint diffusion process to reach a Gaussian distribution. Colored point cloud generation thus amounts to learning the reverse diffusion process, conditioned by the sketch and text, to iteratively recover the desired shape and color. Specifically, to learn effective sketch-text embedding, our model adaptively aggregates the joint embedding of text prompt and the sketch based on a capsule attention network. Our model uses staged diffusion to generate the shape and then assign colors to different parts conditioned on the appearance prompt while preserving precise shapes from the first stage. This gives our model the flexibility to extend to multiple tasks, such as appearance re-editing and part segmentation. Experimental results demonstrate that our model outperforms recent state-of-the-art methods in point cloud generation.
[ { "created": "Sat, 5 Aug 2023 13:10:43 GMT", "version": "v1" } ]
2023-08-08
[ [ "Wu", "Zijie", "" ], [ "Wang", "Yaonan", "" ], [ "Feng", "Mingtao", "" ], [ "Xie", "He", "" ], [ "Mian", "Ajmal", "" ] ]
Diffusion probabilistic models have achieved remarkable success in text-guided image generation. However, generating 3D shapes is still challenging due to the lack of sufficient data containing 3D models along with their descriptions. Moreover, text-based descriptions of 3D shapes are inherently ambiguous and lack details. In this paper, we propose a sketch and text guided probabilistic diffusion model for colored point cloud generation that conditions the denoising process jointly with a hand-drawn sketch of the object and its textual description. We incrementally diffuse the point coordinates and color values in a joint diffusion process to reach a Gaussian distribution. Colored point cloud generation thus amounts to learning the reverse diffusion process, conditioned by the sketch and text, to iteratively recover the desired shape and color. Specifically, to learn effective sketch-text embedding, our model adaptively aggregates the joint embedding of text prompt and the sketch based on a capsule attention network. Our model uses staged diffusion to generate the shape and then assign colors to different parts conditioned on the appearance prompt while preserving precise shapes from the first stage. This gives our model the flexibility to extend to multiple tasks, such as appearance re-editing and part segmentation. Experimental results demonstrate that our model outperforms recent state-of-the-art methods in point cloud generation.
1102.5197
Moez Hizem
Moez Hizem and Ridha Bouallegue
Fine Synchronization through UWB TH-PPM Impulse Radios
11 pages, 7 figures
International Journal of Wireless & Mobile Networks (IJWMN) Vol. 3, No. 1, February 2011
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a novel fine timing algorithm has been developed and tested to synchronize Ultra-Wideband (UWB) signals with pulse position modulation (PPM). By applying this algorithm, we evaluate timing algorithms in both data-aided (DA) and non-data-aided (NDA) modes. Based on correlation operations, our algorithm remains operational in practical UWB settings. The proposed timing scheme consists of two complementary floors, or steps. The first floor consists of a coarse synchronization, which is based on the recently proposed acquisition scheme of timing with dirty templates (TDT). In the second floor, we investigate a new fine synchronization algorithm which gives an improved estimate of the timing offset. Simulations confirm the performance improvement of our timing synchronization compared to the original TDT algorithm in terms of mean square error.
[ { "created": "Fri, 25 Feb 2011 09:31:28 GMT", "version": "v1" } ]
2011-02-28
[ [ "Hizem", "Moez", "" ], [ "Bouallegue", "Ridha", "" ] ]
In this paper, a novel fine timing algorithm has been developed and tested to synchronize Ultra-Wideband (UWB) signals with pulse position modulation (PPM). By applying this algorithm, we evaluate timing algorithms in both data-aided (DA) and non-data-aided (NDA) modes. Based on correlation operations, our algorithm remains operational in practical UWB settings. The proposed timing scheme consists of two complementary floors, or steps. The first floor consists of a coarse synchronization, which is based on the recently proposed acquisition scheme of timing with dirty templates (TDT). In the second floor, we investigate a new fine synchronization algorithm which gives an improved estimate of the timing offset. Simulations confirm the performance improvement of our timing synchronization compared to the original TDT algorithm in terms of mean square error.
1304.0270
Jun Zhu Professor
Fan Wang, Jun Zhu and Lin Zhang
An optimal problem for relative entropy
10 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relative entropy is an essential tool in quantum information theory, and many problems are related to it. In this article, we obtain the optimal values $\displaystyle\max_{U\in{U(\mathcal{X}_{d})}} S(U\rho{U^{\ast}}\parallel\sigma)$ and $\displaystyle\min_{U\in{U(\mathcal{X}_{d})}} S(U\rho{U^{\ast}}\parallel\sigma)$ for two positive definite operators $\rho,\sigma\in{\mathrm{Pd}(\mathcal{X})}$. Moreover, as $U$ ranges over all unitary operators, $S(U\rho{U^{\ast}}\parallel\sigma)$ fills the entire interval $[\displaystyle\min_{U\in{U(\mathcal{X}_{d})}} S(U\rho{U^{\ast}}\parallel\sigma),\displaystyle\max_{U\in{U(\mathcal{X}_{d})}} S(U\rho{U^{\ast}}\parallel\sigma)]$.
[ { "created": "Mon, 1 Apr 2013 00:53:08 GMT", "version": "v1" } ]
2013-04-02
[ [ "Wang", "Fan", "" ], [ "Zhu", "Jun", "" ], [ "Zhang", "Lin", "" ] ]
Relative entropy is an essential tool in quantum information theory, and many problems are related to it. In this article, we obtain the optimal values $\displaystyle\max_{U\in{U(\mathcal{X}_{d})}} S(U\rho{U^{\ast}}\parallel\sigma)$ and $\displaystyle\min_{U\in{U(\mathcal{X}_{d})}} S(U\rho{U^{\ast}}\parallel\sigma)$ for two positive definite operators $\rho,\sigma\in{\mathrm{Pd}(\mathcal{X})}$. Moreover, as $U$ ranges over all unitary operators, $S(U\rho{U^{\ast}}\parallel\sigma)$ fills the entire interval $[\displaystyle\min_{U\in{U(\mathcal{X}_{d})}} S(U\rho{U^{\ast}}\parallel\sigma),\displaystyle\max_{U\in{U(\mathcal{X}_{d})}} S(U\rho{U^{\ast}}\parallel\sigma)]$.
0909.5649
Vitaly Osipov
Nikolaj Leischner, Vitaly Osipov, Peter Sanders
GPU sample sort
null
null
null
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present the design of a sample sort algorithm for manycore GPUs. Despite being one of the most efficient comparison-based sorting algorithms for distributed memory architectures, its performance on GPUs was previously unknown. For uniformly distributed keys, our sample sort is at least 25% and on average 68% faster than the best comparison-based sorting algorithm, GPU Thrust merge sort, and on average more than 2 times faster than GPU quicksort. Moreover, for 64-bit integer keys it is at least 63% and on average 2 times faster than the highly optimized GPU Thrust radix sort that directly manipulates the binary representation of keys. Our implementation is robust to different distributions and entropy levels of keys and scales almost linearly with the input size. These results indicate that multi-way techniques in general, and sample sort in particular, achieve substantially better performance than two-way merge sort and quicksort.
[ { "created": "Wed, 30 Sep 2009 15:58:53 GMT", "version": "v1" } ]
2009-10-01
[ [ "Leischner", "Nikolaj", "" ], [ "Osipov", "Vitaly", "" ], [ "Sanders", "Peter", "" ] ]
In this paper, we present the design of a sample sort algorithm for manycore GPUs. Despite being one of the most efficient comparison-based sorting algorithms for distributed memory architectures, its performance on GPUs was previously unknown. For uniformly distributed keys, our sample sort is at least 25% and on average 68% faster than the best comparison-based sorting algorithm, GPU Thrust merge sort, and on average more than 2 times faster than GPU quicksort. Moreover, for 64-bit integer keys it is at least 63% and on average 2 times faster than the highly optimized GPU Thrust radix sort that directly manipulates the binary representation of keys. Our implementation is robust to different distributions and entropy levels of keys and scales almost linearly with the input size. These results indicate that multi-way techniques in general, and sample sort in particular, achieve substantially better performance than two-way merge sort and quicksort.
2408.05746
Nianzu Li
Nianzu Li, Weidong Mei, Boyu Ning, Peiran Wu
Movable Antenna Enhanced AF Relaying: Two-Stage Antenna Position Optimization
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The movable antenna (MA) technology has attracted increasing attention in wireless communications due to its capability for flexibly adjusting the positions of multiple antennas in a local region to reconfigure channel conditions. In this paper, we investigate its application in an amplify-and-forward (AF) relay system, where a multi-MA AF relay is deployed to assist in the wireless communications from a source to a destination. In particular, we aim to maximize the achievable rate at the destination, by jointly optimizing the AF weight matrix at the relay and its MAs' positions in two stages for receiving the signal from the source and transmitting its amplified version to the destination, respectively. However, compared to the existing one-stage antenna position optimization, the two-stage position optimization is more challenging due to its intricate coupling in the achievable rate at the destination. To tackle this challenge, we decompose the considered problem into several subproblems by invoking the alternating optimization (AO) and solve them by using the semidefinite programming and the gradient ascent. Numerical results demonstrate the superiority of our proposed system over the conventional relaying system with fixed-position antennas (FPAs) and also provide essential insights.
[ { "created": "Sun, 11 Aug 2024 10:58:13 GMT", "version": "v1" } ]
2024-08-13
[ [ "Li", "Nianzu", "" ], [ "Mei", "Weidong", "" ], [ "Ning", "Boyu", "" ], [ "Wu", "Peiran", "" ] ]
The movable antenna (MA) technology has attracted increasing attention in wireless communications due to its capability for flexibly adjusting the positions of multiple antennas in a local region to reconfigure channel conditions. In this paper, we investigate its application in an amplify-and-forward (AF) relay system, where a multi-MA AF relay is deployed to assist in the wireless communications from a source to a destination. In particular, we aim to maximize the achievable rate at the destination, by jointly optimizing the AF weight matrix at the relay and its MAs' positions in two stages for receiving the signal from the source and transmitting its amplified version to the destination, respectively. However, compared to the existing one-stage antenna position optimization, the two-stage position optimization is more challenging due to its intricate coupling in the achievable rate at the destination. To tackle this challenge, we decompose the considered problem into several subproblems by invoking the alternating optimization (AO) and solve them by using the semidefinite programming and the gradient ascent. Numerical results demonstrate the superiority of our proposed system over the conventional relaying system with fixed-position antennas (FPAs) and also provide essential insights.
2002.08859
Eitan Richardson
Eitan Richardson and Yair Weiss
A Bayes-Optimal View on Adversarial Examples
Minor revision per journal review, 28 pages
null
null
null
cs.LG cs.CR cs.CV stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Since the discovery of adversarial examples - the ability to fool modern CNN classifiers with tiny perturbations of the input - there has been much discussion about whether they are a "bug" that is specific to current neural architectures and training methods or an inevitable "feature" of high dimensional geometry. In this paper, we argue for examining adversarial examples from the perspective of Bayes-Optimal classification. We construct realistic image datasets for which the Bayes-Optimal classifier can be efficiently computed and derive analytic conditions on the distributions under which these classifiers are provably robust against any adversarial attack even in high dimensions. Our results show that even when these "gold standard" optimal classifiers are robust, CNNs trained on the same datasets consistently learn a vulnerable classifier, indicating that adversarial examples are often an avoidable "bug". We further show that RBF SVMs trained on the same data consistently learn a robust classifier. The same trend is observed in experiments with real images in different datasets.
[ { "created": "Thu, 20 Feb 2020 16:43:47 GMT", "version": "v1" }, { "created": "Wed, 17 Mar 2021 09:47:10 GMT", "version": "v2" } ]
2021-03-18
[ [ "Richardson", "Eitan", "" ], [ "Weiss", "Yair", "" ] ]
Since the discovery of adversarial examples - the ability to fool modern CNN classifiers with tiny perturbations of the input - there has been much discussion about whether they are a "bug" that is specific to current neural architectures and training methods or an inevitable "feature" of high dimensional geometry. In this paper, we argue for examining adversarial examples from the perspective of Bayes-Optimal classification. We construct realistic image datasets for which the Bayes-Optimal classifier can be efficiently computed and derive analytic conditions on the distributions under which these classifiers are provably robust against any adversarial attack even in high dimensions. Our results show that even when these "gold standard" optimal classifiers are robust, CNNs trained on the same datasets consistently learn a vulnerable classifier, indicating that adversarial examples are often an avoidable "bug". We further show that RBF SVMs trained on the same data consistently learn a robust classifier. The same trend is observed in experiments with real images in different datasets.
2111.12077
Jonathan Barron
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
https://jonbarron.info/mipnerf360/
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF-like models often produce blurry or low-resolution renderings (due to the unbalanced detail and scale of nearby and distant objects), are slow to train, and may exhibit artifacts due to the inherent ambiguity of the task of reconstructing a large scene from a small set of images. We present an extension of mip-NeRF (a NeRF variant that addresses sampling and aliasing) that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes. Our model, which we dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees around a point, reduces mean-squared error by 57% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps for highly intricate, unbounded real-world scenes.
[ { "created": "Tue, 23 Nov 2021 18:51:18 GMT", "version": "v1" }, { "created": "Wed, 24 Nov 2021 18:51:06 GMT", "version": "v2" }, { "created": "Fri, 25 Mar 2022 23:05:20 GMT", "version": "v3" } ]
2022-03-29
[ [ "Barron", "Jonathan T.", "" ], [ "Mildenhall", "Ben", "" ], [ "Verbin", "Dor", "" ], [ "Srinivasan", "Pratul P.", "" ], [ "Hedman", "Peter", "" ] ]
Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF-like models often produce blurry or low-resolution renderings (due to the unbalanced detail and scale of nearby and distant objects), are slow to train, and may exhibit artifacts due to the inherent ambiguity of the task of reconstructing a large scene from a small set of images. We present an extension of mip-NeRF (a NeRF variant that addresses sampling and aliasing) that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes. Our model, which we dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees around a point, reduces mean-squared error by 57% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps for highly intricate, unbounded real-world scenes.
1103.4080
Armel Ulrich Kemloh Wagoum
A. U. Kemloh Wagoum, A. Seyfried and S. Holl
Modelling dynamic route choice of pedestrians to assess the criticality of building evacuation
15 pages, 34 figures
null
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an event-driven wayfinding algorithm for pedestrians in an evacuation scenario, which operates on a graph-based structure. The motivation of each pedestrian is to leave the facility. The events used to redirect pedestrians include the identification of a jam situation and/or the identification of a better route than the current one. This study considers two types of pedestrians: familiar and unfamiliar with the facility. Four strategies are modelled to cover those groups. The modelled strategies are the shortest path (local and global); they are combined with a quickest path approach, which is based on an observation principle. In the quickest path approach, pedestrians take their decisions based on the observed environment and are routed dynamically in the network using an appropriate cost-benefit analysis function. The dynamic modelling of route choice with different strategies and types of pedestrians considers the manifold of influences which appears in the real system and raises questions about the criticality of an evacuation process. To address this question, criteria are elaborated. The criteria we focus on in this contribution are the evacuation time, the individual times spent in a jam, the jam size evolution, and the overall jam size itself. The influences of the different strategies on those evaluation criteria are investigated. The sensitivity of the system to disturbances (e.g. a broken escape route) is also analysed. Keywords: pedestrian dynamics, routing, quickest path, evacuation, jam, critical state
[ { "created": "Mon, 21 Mar 2011 17:04:08 GMT", "version": "v1" } ]
2011-03-22
[ [ "Wagoum", "A. U. Kemloh", "" ], [ "Seyfried", "A.", "" ], [ "Holl", "S.", "" ] ]
This paper presents an event-driven wayfinding algorithm for pedestrians in an evacuation scenario, which operates on a graph-based structure. The motivation of each pedestrian is to leave the facility. The events used to redirect pedestrians include the identification of a jam situation and/or the identification of a better route than the current one. This study considers two types of pedestrians: familiar and unfamiliar with the facility. Four strategies are modelled to cover those groups. The modelled strategies are the shortest path (local and global); they are combined with a quickest path approach, which is based on an observation principle. In the quickest path approach, pedestrians take their decisions based on the observed environment and are routed dynamically in the network using an appropriate cost-benefit analysis function. The dynamic modelling of route choice with different strategies and types of pedestrians considers the manifold of influences which appears in the real system and raises questions about the criticality of an evacuation process. To address this question, criteria are elaborated. The criteria we focus on in this contribution are the evacuation time, the individual times spent in a jam, the jam size evolution, and the overall jam size itself. The influences of the different strategies on those evaluation criteria are investigated. The sensitivity of the system to disturbances (e.g. a broken escape route) is also analysed. Keywords: pedestrian dynamics, routing, quickest path, evacuation, jam, critical state
2211.10409
Chi Zhang
Chi Zhang, Paul Scheffler, Thomas Benz, Matteo Perotti, Luca Benini
AXI-Pack: Near-Memory Bus Packing for Bandwidth-Efficient Irregular Workloads
6 pages, 5 figures. Submitted to DATE 2023
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data-intensive applications involving irregular memory streams are inefficiently handled by modern processors and memory systems highly optimized for regular, contiguous data. Recent work tackles these inefficiencies in hardware through core-side stream extensions or memory-side prefetchers and accelerators, but fails to provide end-to-end solutions which also achieve high efficiency in on-chip interconnects. We propose AXI-Pack, an extension to ARM's AXI4 protocol introducing bandwidth-efficient strided and indirect bursts to enable end-to-end irregular streams. AXI-Pack adds irregular stream semantics to memory requests and avoids inefficient narrow-bus transfers by packing multiple narrow data elements onto a wide bus. It retains full compatibility with AXI4 and does not require modifications to non-burst-reshaping interconnect IPs. To demonstrate our approach end-to-end, we extend an open-source RISC-V vector processor to leverage AXI-Pack at its memory interface for strided and indexed accesses. On the memory side, we design a banked memory controller efficiently handling AXI-Pack requests. On a system with a 256-bit-wide interconnect running FP32 workloads, AXI-Pack achieves near-ideal peak on-chip bus utilizations of 87% and 39%, speedups of 5.4x and 2.4x, and energy efficiency improvements of 5.3x and 2.1x over a baseline using an AXI4 bus on strided and indirect benchmarks, respectively.
[ { "created": "Fri, 18 Nov 2022 18:23:47 GMT", "version": "v1" } ]
2022-11-21
[ [ "Zhang", "Chi", "" ], [ "Scheffler", "Paul", "" ], [ "Benz", "Thomas", "" ], [ "Perotti", "Matteo", "" ], [ "Benini", "Luca", "" ] ]
Data-intensive applications involving irregular memory streams are inefficiently handled by modern processors and memory systems highly optimized for regular, contiguous data. Recent work tackles these inefficiencies in hardware through core-side stream extensions or memory-side prefetchers and accelerators, but fails to provide end-to-end solutions which also achieve high efficiency in on-chip interconnects. We propose AXI-Pack, an extension to ARM's AXI4 protocol introducing bandwidth-efficient strided and indirect bursts to enable end-to-end irregular streams. AXI-Pack adds irregular stream semantics to memory requests and avoids inefficient narrow-bus transfers by packing multiple narrow data elements onto a wide bus. It retains full compatibility with AXI4 and does not require modifications to non-burst-reshaping interconnect IPs. To demonstrate our approach end-to-end, we extend an open-source RISC-V vector processor to leverage AXI-Pack at its memory interface for strided and indexed accesses. On the memory side, we design a banked memory controller efficiently handling AXI-Pack requests. On a system with a 256-bit-wide interconnect running FP32 workloads, AXI-Pack achieves near-ideal peak on-chip bus utilizations of 87% and 39%, speedups of 5.4x and 2.4x, and energy efficiency improvements of 5.3x and 2.1x over a baseline using an AXI4 bus on strided and indirect benchmarks, respectively.
2303.13514
Mehmet Ayg\"un
Mehmet Ayg\"un and Oisin Mac Aodha
SAOR: Single-View Articulated Object Reconstruction
Accepted to CVPR 2024, website: https://mehmetaygun.github.io/saor
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We introduce SAOR, a novel approach for estimating the 3D shape, texture, and viewpoint of an articulated object from a single image captured in the wild. Unlike prior approaches that rely on pre-defined category-specific 3D templates or tailored 3D skeletons, SAOR learns to articulate shapes from single-view image collections with a skeleton-free part-based model without requiring any 3D object shape priors. To prevent ill-posed solutions, we propose a cross-instance consistency loss that exploits disentangled object shape deformation and articulation. This is helped by a new silhouette-based sampling mechanism to enhance viewpoint diversity during training. Our method only requires estimated object silhouettes and relative depth maps from off-the-shelf pre-trained networks during training. At inference time, given a single-view image, it efficiently outputs an explicit mesh representation. We obtain improved qualitative and quantitative results on challenging quadruped animals compared to relevant existing work.
[ { "created": "Thu, 23 Mar 2023 17:59:35 GMT", "version": "v1" }, { "created": "Fri, 1 Dec 2023 19:25:10 GMT", "version": "v2" }, { "created": "Mon, 8 Apr 2024 11:22:05 GMT", "version": "v3" } ]
2024-04-09
[ [ "Aygün", "Mehmet", "" ], [ "Mac Aodha", "Oisin", "" ] ]
We introduce SAOR, a novel approach for estimating the 3D shape, texture, and viewpoint of an articulated object from a single image captured in the wild. Unlike prior approaches that rely on pre-defined category-specific 3D templates or tailored 3D skeletons, SAOR learns to articulate shapes from single-view image collections with a skeleton-free part-based model without requiring any 3D object shape priors. To prevent ill-posed solutions, we propose a cross-instance consistency loss that exploits disentangled object shape deformation and articulation. This is helped by a new silhouette-based sampling mechanism to enhance viewpoint diversity during training. Our method only requires estimated object silhouettes and relative depth maps from off-the-shelf pre-trained networks during training. At inference time, given a single-view image, it efficiently outputs an explicit mesh representation. We obtain improved qualitative and quantitative results on challenging quadruped animals compared to relevant existing work.
1611.03453
Abhishek Gupta
Abhishek Gupta, M. Farhan Habib, Uttam Mandal, Pulak Chowdhury, Massimo Tornatore, and Biswanath Mukherjee
On Service-Chaining Strategies using Virtual Network Functions in Operator Networks
null
https://doi.org/10.1016/j.comnet.2018.01.028
10.1016/j.comnet.2018.01.028
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network functions (e.g., firewalls, load balancers, etc.) have been traditionally provided through proprietary hardware appliances. Often, hardware appliances need to be hardwired back to back to form a service chain providing chained network functions. Hardware appliances cannot be provisioned on demand since they are statically embedded in the network topology, making creation, insertion, modification, upgrade, and removal of service chains complex, and slowing down service innovation. Hence, network operators are starting to deploy Virtual Network Functions (VNFs), which are virtualized over commodity hardware. VNFs can be deployed in Data Centers (DCs) or in Network Function Virtualization (NFV) capable network elements (nodes) such as routers and switches. NFV capable nodes and DCs together form a Network enabled Cloud (NeC) that facilitates the dynamic service chaining required to support evolving network traffic and its service demands. In this study, we focus on the VNF service chain placement and traffic routing problem, and build a model for placing a VNF service chain while minimizing network resource consumption. Our results indicate that a NeC having a DC and NFV capable nodes can significantly reduce network-resource consumption.
[ { "created": "Thu, 10 Nov 2016 19:15:45 GMT", "version": "v1" } ]
2018-04-23
[ [ "Gupta", "Abhishek", "" ], [ "Habib", "M. Farhan", "" ], [ "Mandal", "Uttam", "" ], [ "Chowdhury", "Pulak", "" ], [ "Tornatore", "Massimo", "" ], [ "Mukherjee", "Biswanath", "" ] ]
Network functions (e.g., firewalls, load balancers, etc.) have been traditionally provided through proprietary hardware appliances. Often, hardware appliances need to be hardwired back to back to form a service chain providing chained network functions. Hardware appliances cannot be provisioned on demand since they are statically embedded in the network topology, making creation, insertion, modification, upgrade, and removal of service chains complex, and slowing down service innovation. Hence, network operators are starting to deploy Virtual Network Functions (VNFs), which are virtualized over commodity hardware. VNFs can be deployed in Data Centers (DCs) or in Network Function Virtualization (NFV) capable network elements (nodes) such as routers and switches. NFV capable nodes and DCs together form a Network enabled Cloud (NeC) that facilitates the dynamic service chaining required to support evolving network traffic and its service demands. In this study, we focus on the VNF service chain placement and traffic routing problem, and build a model for placing a VNF service chain while minimizing network resource consumption. Our results indicate that a NeC having a DC and NFV capable nodes can significantly reduce network-resource consumption.
2209.09668
Max Klimm
Max Klimm and Martin Knaack
Maximizing a Submodular Function with Bounded Curvature under an Unknown Knapsack Constraint
null
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the problem of maximizing a monotone submodular function under an unknown knapsack constraint. A solution to this problem is a policy that decides which item to pack next based on the past packing history. The robustness factor of a policy is the worst case ratio of the solution obtained by following the policy and an optimal solution that knows the knapsack capacity. We develop an algorithm with a robustness factor that is decreasing in the curvature $c$ of the submodular function. For the extreme case $c=0$, corresponding to a modular objective, it matches a previously known and best possible robustness factor of $1/2$. For the other extreme case of $c=1$, it yields a robustness factor of $\approx 0.35$, improving over the best previously known robustness factor of $\approx 0.06$.
[ { "created": "Tue, 20 Sep 2022 12:04:59 GMT", "version": "v1" } ]
2022-09-21
[ [ "Klimm", "Max", "" ], [ "Knaack", "Martin", "" ] ]
This paper studies the problem of maximizing a monotone submodular function under an unknown knapsack constraint. A solution to this problem is a policy that decides which item to pack next based on the past packing history. The robustness factor of a policy is the worst case ratio of the solution obtained by following the policy and an optimal solution that knows the knapsack capacity. We develop an algorithm with a robustness factor that is decreasing in the curvature $c$ of the submodular function. For the extreme case $c=0$, corresponding to a modular objective, it matches a previously known and best possible robustness factor of $1/2$. For the other extreme case of $c=1$, it yields a robustness factor of $\approx 0.35$, improving over the best previously known robustness factor of $\approx 0.06$.
1511.05768
Andreas Bulling
Marc Tonsen, Xucong Zhang, Yusuke Sugano, Andreas Bulling
Labeled pupils in the wild: A dataset for studying pupil detection in unconstrained environments
null
null
10.1145/2857491.2857520
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present labelled pupils in the wild (LPW), a novel dataset of 66 high-quality, high-speed eye region videos for the development and evaluation of pupil detection algorithms. The videos in our dataset were recorded from 22 participants in everyday locations at about 95 FPS using a state-of-the-art dark-pupil head-mounted eye tracker. They cover people with different ethnicities, a diverse set of everyday indoor and outdoor illumination environments, as well as natural gaze direction distributions. The dataset also includes participants wearing glasses, contact lenses, as well as make-up. We benchmark five state-of-the-art pupil detection algorithms on our dataset with respect to robustness and accuracy. We further study the influence of image resolution, vision aids, as well as recording location (indoor, outdoor) on pupil detection performance. Our evaluations provide valuable insights into the general pupil detection problem and allow us to identify key challenges for robust pupil detection on head-mounted eye trackers.
[ { "created": "Wed, 18 Nov 2015 13:30:55 GMT", "version": "v1" } ]
2017-02-07
[ [ "Tonsen", "Marc", "" ], [ "Zhang", "Xucong", "" ], [ "Sugano", "Yusuke", "" ], [ "Bulling", "Andreas", "" ] ]
We present labelled pupils in the wild (LPW), a novel dataset of 66 high-quality, high-speed eye region videos for the development and evaluation of pupil detection algorithms. The videos in our dataset were recorded from 22 participants in everyday locations at about 95 FPS using a state-of-the-art dark-pupil head-mounted eye tracker. They cover people with different ethnicities, a diverse set of everyday indoor and outdoor illumination environments, as well as natural gaze direction distributions. The dataset also includes participants wearing glasses, contact lenses, as well as make-up. We benchmark five state-of-the-art pupil detection algorithms on our dataset with respect to robustness and accuracy. We further study the influence of image resolution, vision aids, as well as recording location (indoor, outdoor) on pupil detection performance. Our evaluations provide valuable insights into the general pupil detection problem and allow us to identify key challenges for robust pupil detection on head-mounted eye trackers.
2407.03596
Xuerong Zhang
Xuerong Zhang, Li Huang, Jing Lv, Ming Yang
Self Adaptive Threshold Pseudo-labeling and Unreliable Sample Contrastive Loss for Semi-supervised Image Classification
ICANN24 accepted
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semi-supervised learning is attracting growing attention, due to its success in combining unlabeled data. However, pseudo-labeling-based semi-supervised approaches suffer from two problems in image classification: (1) Existing methods might fail to adopt suitable thresholds since they either use a pre-defined/fixed threshold or an ad-hoc threshold adjusting scheme, resulting in inferior performance and slow convergence. (2) Discarding unlabeled data with confidence below the thresholds results in the loss of discriminating information. To solve these issues, we develop an effective method to make sufficient use of unlabeled data. Specifically, we design a self adaptive threshold pseudo-labeling strategy, in which the thresholds for each class can be dynamically adjusted to increase the number of reliable samples. Meanwhile, in order to effectively utilise unlabeled data with confidence below the thresholds, we propose an unreliable sample contrastive loss to mine the discriminative information in low-confidence samples by learning the similarities and differences between sample features. We evaluate our method on several classification benchmarks under partially labeled settings and demonstrate its superiority over the other approaches.
[ { "created": "Thu, 4 Jul 2024 03:04:56 GMT", "version": "v1" } ]
2024-07-08
[ [ "Zhang", "Xuerong", "" ], [ "Huang", "Li", "" ], [ "Lv", "Jing", "" ], [ "Yang", "Ming", "" ] ]
Semi-supervised learning is attracting growing attention, due to its success in combining unlabeled data. However, pseudo-labeling-based semi-supervised approaches suffer from two problems in image classification: (1) Existing methods might fail to adopt suitable thresholds since they either use a pre-defined/fixed threshold or an ad-hoc threshold adjusting scheme, resulting in inferior performance and slow convergence. (2) Discarding unlabeled data with confidence below the thresholds results in the loss of discriminating information. To solve these issues, we develop an effective method to make sufficient use of unlabeled data. Specifically, we design a self adaptive threshold pseudo-labeling strategy, in which the thresholds for each class can be dynamically adjusted to increase the number of reliable samples. Meanwhile, in order to effectively utilise unlabeled data with confidence below the thresholds, we propose an unreliable sample contrastive loss to mine the discriminative information in low-confidence samples by learning the similarities and differences between sample features. We evaluate our method on several classification benchmarks under partially labeled settings and demonstrate its superiority over the other approaches.
2303.04392
Amaael Antonini
Amaael Antonini, Rita Gimelshein, and Richard Wesel
Achievable Rates and Low-Complexity Encoding of Posterior Matching for the BSC
This paper consists of 26 pages and contains 6 figures. An earlier version of the algorithm included in this paper was published at the 2020 IEEE International Symposium on Information Theory (ISIT), (DOI: 10.1109/ISIT44484.2020.9174232)
null
null
null
cs.IT cs.IR math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Horstein, Burnashev, Shayevitz and Feder, Naghshvar et al. and others have studied sequential transmission of a K-bit message over the binary symmetric channel (BSC) with full, noiseless feedback using posterior matching. Yang et al. provide an improved lower bound on the achievable rate using martingale analysis that relies on the small-enough difference (SED) partitioning introduced by Naghshvar et al. SED requires a relatively complex encoder and decoder. To reduce complexity, this paper replaces SED with relaxed constraints that admit the small enough absolute difference (SEAD) partitioning rule. The main analytical results show that achievable-rate bounds higher than those found by Yang et al. are possible even under the new constraints, which are less restrictive than SED. The new analysis does not use martingale theory for the confirmation phase and applies a surrogate channel technique to tighten the results. An initial systematic transmission further increases the achievable rate bound. The simplified encoder associated with SEAD has a complexity below order O(K^2) and allows simulations for message sizes of at least 1000 bits. For example, simulations achieve 99% of the channel's 0.50-bit capacity with an average block size of 200 bits for a target codeword error rate of 10^(-3).
[ { "created": "Wed, 8 Mar 2023 05:53:33 GMT", "version": "v1" }, { "created": "Fri, 10 Mar 2023 01:31:11 GMT", "version": "v2" } ]
2023-03-13
[ [ "Antonini", "Amaael", "" ], [ "Gimelshein", "Rita", "" ], [ "Wesel", "Richard", "" ] ]
Horstein, Burnashev, Shayevitz and Feder, Naghshvar et al. and others have studied sequential transmission of a K-bit message over the binary symmetric channel (BSC) with full, noiseless feedback using posterior matching. Yang et al. provide an improved lower bound on the achievable rate using martingale analysis that relies on the small-enough difference (SED) partitioning introduced by Naghshvar et al. SED requires a relatively complex encoder and decoder. To reduce complexity, this paper replaces SED with relaxed constraints that admit the small enough absolute difference (SEAD) partitioning rule. The main analytical results show that achievable-rate bounds higher than those found by Yang et al. are possible even under the new constraints, which are less restrictive than SED. The new analysis does not use martingale theory for the confirmation phase and applies a surrogate channel technique to tighten the results. An initial systematic transmission further increases the achievable rate bound. The simplified encoder associated with SEAD has a complexity below order O(K^2) and allows simulations for message sizes of at least 1000 bits. For example, simulations achieve 99% of the channel's 0.50-bit capacity with an average block size of 200 bits for a target codeword error rate of 10^(-3).
2401.10353
Mian Zhang
Mian Zhang, Lifeng Jin, Linfeng Song, Haitao Mi and Dong Yu
Inconsistent dialogue responses and how to recover from them
Accepted in EACL 2024. Code and dataset available at https://github.com/mianzhang/CIDER
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
One critical issue for chat systems is to stay consistent about their own preferences, opinions, beliefs, and facts, which has been shown to be a difficult problem. In this work, we study methods to assess and bolster utterance consistency of chat systems. A dataset is first developed for studying the inconsistencies, where inconsistent dialogue responses, explanations of the inconsistencies, and recovery utterances are authored by annotators. This covers the life span of inconsistencies, namely introduction, understanding, and resolution. Building on this, we introduce a set of tasks centered on dialogue consistency, specifically focused on its detection and resolution. Our experimental findings indicate that our dataset significantly helps the progress in identifying and resolving conversational inconsistencies, and that current popular large language models like ChatGPT, while good at resolving inconsistencies, still struggle with detection.
[ { "created": "Thu, 18 Jan 2024 19:46:04 GMT", "version": "v1" } ]
2024-01-22
[ [ "Zhang", "Mian", "" ], [ "Jin", "Lifeng", "" ], [ "Song", "Linfeng", "" ], [ "Mi", "Haitao", "" ], [ "Yu", "Dong", "" ] ]
One critical issue for chat systems is to stay consistent about their own preferences, opinions, beliefs, and facts, which has been shown to be a difficult problem. In this work, we study methods to assess and bolster utterance consistency of chat systems. A dataset is first developed for studying the inconsistencies, where inconsistent dialogue responses, explanations of the inconsistencies, and recovery utterances are authored by annotators. This covers the life span of inconsistencies, namely introduction, understanding, and resolution. Building on this, we introduce a set of tasks centered on dialogue consistency, specifically focused on its detection and resolution. Our experimental findings indicate that our dataset significantly helps the progress in identifying and resolving conversational inconsistencies, and that current popular large language models like ChatGPT, while good at resolving inconsistencies, still struggle with detection.
2307.10447
Yumeng Xue
Yumeng Xue, Patrick Paetzold, Rebecca Kehlbeck, Bin Chen, Kin Chung Kwan, Yunhai Wang, and Oliver Deussen
Reducing Ambiguities in Line-based Density Plots by Image-space Colorization
Published in IEEE Transactions on Visualization and Computer Graphics (Supplementary Material: https://osf.io/jm5yz/)
null
10.1109/TVCG.2023.3327149
null
cs.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Line-based density plots are used to reduce visual clutter in line charts with a multitude of individual lines. However, these traditional density plots are often perceived ambiguously, which obstructs the user's identification of underlying trends in complex datasets. Thus, we propose a novel image space coloring method for line-based density plots that enhances their interpretability. Our method employs color not only to visually communicate data density but also to highlight similar regions in the plot, allowing users to identify and distinguish trends easily. We achieve this by performing hierarchical clustering based on the lines passing through each region and mapping the identified clusters to the hue circle using circular MDS. Additionally, we propose a heuristic approach to assign each line to the most probable cluster, enabling users to analyze density and individual lines. We motivate our method by conducting a small-scale user study, demonstrating the effectiveness of our method using synthetic and real-world datasets, and providing an interactive online tool for generating colored line-based density plots.
[ { "created": "Sun, 16 Jul 2023 15:15:00 GMT", "version": "v1" }, { "created": "Wed, 22 Nov 2023 13:22:48 GMT", "version": "v2" } ]
2023-11-23
[ [ "Xue", "Yumeng", "" ], [ "Paetzold", "Patrick", "" ], [ "Kehlbeck", "Rebecca", "" ], [ "Chen", "Bin", "" ], [ "Kwan", "Kin Chung", "" ], [ "Wang", "Yunhai", "" ], [ "Deussen", "Oliver", "" ] ]
Line-based density plots are used to reduce visual clutter in line charts with a multitude of individual lines. However, these traditional density plots are often perceived ambiguously, which obstructs the user's identification of underlying trends in complex datasets. Thus, we propose a novel image space coloring method for line-based density plots that enhances their interpretability. Our method employs color not only to visually communicate data density but also to highlight similar regions in the plot, allowing users to identify and distinguish trends easily. We achieve this by performing hierarchical clustering based on the lines passing through each region and mapping the identified clusters to the hue circle using circular MDS. Additionally, we propose a heuristic approach to assign each line to the most probable cluster, enabling users to analyze density and individual lines. We motivate our method by conducting a small-scale user study, demonstrating the effectiveness of our method using synthetic and real-world datasets, and providing an interactive online tool for generating colored line-based density plots.
2401.15289
Xi Tan
Xi Tan, Zheyuan Ma, Sandro Pinto, Le Guan, Ning Zhang, Jun Xu, Zhiqiang Lin, Hongxin Hu, Ziming Zhao
SoK: Where's the "up"?! A Comprehensive (bottom-up) Study on the Security of Arm Cortex-M Systems
To Appear in the 18th USENIX WOOT Conference on Offensive Technologies, August 12-13, 2024
null
null
null
cs.CR cs.AR
http://creativecommons.org/licenses/by/4.0/
Arm Cortex-M processors are the most widely used 32-bit microcontrollers among embedded and Internet-of-Things devices. Despite the widespread usage, there has been little effort in summarizing their hardware security features, characterizing the limitations and vulnerabilities of their hardware and software stack, and systematizing the research on securing these systems. The goals and contributions of this paper are multi-fold. First, we analyze the hardware security limitations and issues of Cortex-M systems. Second, we conduct a deep study of the software stack designed for Cortex-M and reveal its limitations, which is accompanied by an empirical analysis of 1,797 real-world firmware images. Third, we categorize the reported bugs in Cortex-M software systems. Finally, we systematize the efforts that aim at securing Cortex-M systems and evaluate them in terms of the protections they offer, runtime performance, required hardware features, etc. Based on the insights, we develop a set of recommendations for the research community and MCU software developers.
[ { "created": "Sat, 27 Jan 2024 04:09:29 GMT", "version": "v1" }, { "created": "Wed, 31 Jan 2024 17:20:26 GMT", "version": "v2" }, { "created": "Mon, 13 May 2024 21:09:28 GMT", "version": "v3" } ]
2024-05-15
[ [ "Tan", "Xi", "" ], [ "Ma", "Zheyuan", "" ], [ "Pinto", "Sandro", "" ], [ "Guan", "Le", "" ], [ "Zhang", "Ning", "" ], [ "Xu", "Jun", "" ], [ "Lin", "Zhiqiang", "" ], [ "Hu", "Hongxin", "" ], [ "Zhao", "Ziming", "" ] ]
Arm Cortex-M processors are the most widely used 32-bit microcontrollers among embedded and Internet-of-Things devices. Despite the widespread usage, there has been little effort in summarizing their hardware security features, characterizing the limitations and vulnerabilities of their hardware and software stack, and systematizing the research on securing these systems. The goals and contributions of this paper are multi-fold. First, we analyze the hardware security limitations and issues of Cortex-M systems. Second, we conduct a deep study of the software stack designed for Cortex-M and reveal its limitations, which is accompanied by an empirical analysis of 1,797 real-world firmware images. Third, we categorize the reported bugs in Cortex-M software systems. Finally, we systematize the efforts that aim at securing Cortex-M systems and evaluate them in terms of the protections they offer, runtime performance, required hardware features, etc. Based on the insights, we develop a set of recommendations for the research community and MCU software developers.
2205.01873
Yuanfei Dai
Yuanfei Dai, Wenzhong Guo and Carsten Eickhoff
Wasserstein Adversarial Learning based Temporal Knowledge Graph Embedding
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on knowledge graph embedding (KGE) has emerged as an active field in which most existing KGE approaches mainly focus on static structural data and ignore the influence of temporal variation involved in time-aware triples. In order to deal with this issue, several temporal knowledge graph embedding (TKGE) approaches have been proposed to integrate temporal and structural information in recent years. However, these methods only employ a uniformly random sampling to construct negative facts. As a consequence, the corrupted samples are often too simplistic for training an effective model. In this paper, we propose a new temporal knowledge graph embedding framework by introducing adversarial learning to further refine the performance of traditional TKGE models. In our framework, a generator is utilized to construct high-quality plausible quadruples and a discriminator learns to obtain the embeddings of entities and relations based on both positive and negative samples. Meanwhile, we also apply a Gumbel-Softmax relaxation and the Wasserstein distance to prevent vanishing gradient problems on discrete data; an inherent flaw in traditional generative adversarial networks. Through comprehensive experimentation on temporal datasets, the results indicate that our proposed framework can attain significant improvements based on benchmark models and also demonstrate the effectiveness and applicability of our framework.
[ { "created": "Wed, 4 May 2022 03:28:49 GMT", "version": "v1" } ]
2022-05-05
[ [ "Dai", "Yuanfei", "" ], [ "Guo", "Wenzhong", "" ], [ "Eickhoff", "Carsten", "" ] ]
Research on knowledge graph embedding (KGE) has emerged as an active field in which most existing KGE approaches mainly focus on static structural data and ignore the influence of temporal variation involved in time-aware triples. In order to deal with this issue, several temporal knowledge graph embedding (TKGE) approaches have been proposed to integrate temporal and structural information in recent years. However, these methods only employ a uniformly random sampling to construct negative facts. As a consequence, the corrupted samples are often too simplistic for training an effective model. In this paper, we propose a new temporal knowledge graph embedding framework by introducing adversarial learning to further refine the performance of traditional TKGE models. In our framework, a generator is utilized to construct high-quality plausible quadruples and a discriminator learns to obtain the embeddings of entities and relations based on both positive and negative samples. Meanwhile, we also apply a Gumbel-Softmax relaxation and the Wasserstein distance to prevent vanishing gradient problems on discrete data; an inherent flaw in traditional generative adversarial networks. Through comprehensive experimentation on temporal datasets, the results indicate that our proposed framework can attain significant improvements based on benchmark models and also demonstrate the effectiveness and applicability of our framework.
1612.05877
Zhiwu Huang
Zhiwu Huang, Chengde Wan, Thomas Probst, Luc Van Gool
Deep Learning on Lie Groups for Skeleton-based Action Recognition
Accepted to CVPR 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, skeleton-based action recognition has become a popular 3D classification problem. State-of-the-art methods typically first represent each motion sequence as a high-dimensional trajectory on a Lie group with an additional dynamic time warping, and then shallowly learn favorable Lie group features. In this paper we incorporate the Lie group structure into a deep network architecture to learn more appropriate Lie group features for 3D action recognition. Within the network structure, we design rotation mapping layers to transform the input Lie group features into desirable ones, which are aligned better in the temporal domain. To reduce the high feature dimensionality, the architecture is equipped with rotation pooling layers for the elements on the Lie group. Furthermore, we propose a logarithm mapping layer to map the resulting manifold data into a tangent space that facilitates the application of regular output layers for the final classification. Evaluations of the proposed network for standard 3D human action recognition datasets clearly demonstrate its superiority over existing shallow Lie group feature learning methods as well as most conventional deep learning methods.
[ { "created": "Sun, 18 Dec 2016 09:08:29 GMT", "version": "v1" }, { "created": "Tue, 11 Apr 2017 08:47:00 GMT", "version": "v2" } ]
2017-04-12
[ [ "Huang", "Zhiwu", "" ], [ "Wan", "Chengde", "" ], [ "Probst", "Thomas", "" ], [ "Van Gool", "Luc", "" ] ]
In recent years, skeleton-based action recognition has become a popular 3D classification problem. State-of-the-art methods typically first represent each motion sequence as a high-dimensional trajectory on a Lie group with an additional dynamic time warping, and then shallowly learn favorable Lie group features. In this paper we incorporate the Lie group structure into a deep network architecture to learn more appropriate Lie group features for 3D action recognition. Within the network structure, we design rotation mapping layers to transform the input Lie group features into desirable ones, which are aligned better in the temporal domain. To reduce the high feature dimensionality, the architecture is equipped with rotation pooling layers for the elements on the Lie group. Furthermore, we propose a logarithm mapping layer to map the resulting manifold data into a tangent space that facilitates the application of regular output layers for the final classification. Evaluations of the proposed network for standard 3D human action recognition datasets clearly demonstrate its superiority over existing shallow Lie group feature learning methods as well as most conventional deep learning methods.
1806.04836
Noam Buckman
Noam Buckman, Han-Lim Choi, Jonathan P. How
Partial Replanning for Decentralized Dynamic Task Allocation
11 pages, Accepted to AIAA GNC 2019
null
10.2514/6.2019-0915
null
cs.MA
http://creativecommons.org/licenses/by-nc-sa/4.0/
In time-sensitive and dynamic missions, multi-UAV teams must respond quickly to new information and objectives. This paper presents a dynamic decentralized task allocation algorithm for allocating new tasks that appear online during the solving of the task allocation problem. Our algorithm extends the Consensus-Based Bundle Algorithm (CBBA), a decentralized task allocation algorithm, allowing for the fast allocation of new tasks without a full reallocation of existing tasks. CBBA with Partial Replanning (CBBA-PR) enables the team to trade off between convergence time and increased coordination by resetting a portion of their previous allocation at every round of bidding on tasks. By resetting the last tasks allocated by each agent, we are able to ensure the convergence of the team to a conflict-free solution. CBBA-PR can be further improved by reducing the team size involved in the replanning, further reducing the communication burden of the team and runtime of CBBA-PR. Finally, we validate the faster convergence and improved solution quality of CBBA-PR in multi-UAV simulations.
[ { "created": "Wed, 13 Jun 2018 03:18:40 GMT", "version": "v1" }, { "created": "Thu, 25 Oct 2018 22:45:04 GMT", "version": "v2" } ]
2023-10-02
[ [ "Buckman", "Noam", "" ], [ "Choi", "Han-Lim", "" ], [ "How", "Jonathan P.", "" ] ]
In time-sensitive and dynamic missions, multi-UAV teams must respond quickly to new information and objectives. This paper presents a dynamic decentralized task allocation algorithm for allocating new tasks that appear online during the solving of the task allocation problem. Our algorithm extends the Consensus-Based Bundle Algorithm (CBBA), a decentralized task allocation algorithm, allowing for the fast allocation of new tasks without a full reallocation of existing tasks. CBBA with Partial Replanning (CBBA-PR) enables the team to trade off between convergence time and increased coordination by resetting a portion of their previous allocation at every round of bidding on tasks. By resetting the last tasks allocated by each agent, we are able to ensure the convergence of the team to a conflict-free solution. CBBA-PR can be further improved by reducing the team size involved in the replanning, further reducing the communication burden of the team and runtime of CBBA-PR. Finally, we validate the faster convergence and improved solution quality of CBBA-PR in multi-UAV simulations.
2102.07833
Aleksei Sorokin
Sou-Cheng T. Choi, Fred J. Hickernell, R. Jagadeeswaran, Michael J. McCourt, and Aleksei G. Sorokin
Quasi-Monte Carlo Software
25 pages, 7 figures, to be published in the MCQMC2020 Proceedings
null
null
null
cs.MS cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Practitioners wishing to experience the efficiency gains from using low discrepancy sequences need correct, robust, well-written software. This article, based on our MCQMC 2020 tutorial, describes some of the better quasi-Monte Carlo (QMC) software available. We highlight the key software components required by QMC to approximate multivariate integrals or expectations of functions of vector random variables. We have combined these components in QMCPy, a Python open-source library, which we hope will draw the support of the QMC community. Here we introduce QMCPy.
[ { "created": "Mon, 15 Feb 2021 20:21:05 GMT", "version": "v1" }, { "created": "Wed, 29 Sep 2021 14:52:31 GMT", "version": "v2" }, { "created": "Thu, 14 Oct 2021 17:44:05 GMT", "version": "v3" } ]
2021-10-15
[ [ "Choi", "Sou-Cheng T.", "" ], [ "Hickernell", "Fred J.", "" ], [ "Jagadeeswaran", "R.", "" ], [ "McCourt", "Michael J.", "" ], [ "Sorokin", "Aleksei G.", "" ] ]
Practitioners wishing to experience the efficiency gains from using low discrepancy sequences need correct, robust, well-written software. This article, based on our MCQMC 2020 tutorial, describes some of the better quasi-Monte Carlo (QMC) software available. We highlight the key software components required by QMC to approximate multivariate integrals or expectations of functions of vector random variables. We have combined these components in QMCPy, a Python open-source library, which we hope will draw the support of the QMC community. Here we introduce QMCPy.
1807.11929
Mengmi Zhang
Mengmi Zhang, Keng Teck Ma, Shih-Cheng Yen, Joo Hwee Lim, Qi Zhao, and Jiashi Feng
Egocentric Spatial Memory
8 pages, 6 figures, accepted in IROS 2018
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Egocentric spatial memory (ESM) defines a memory system with encoding, storing, recognizing and recalling the spatial information about the environment from an egocentric perspective. We introduce an integrated deep neural network architecture for modeling ESM. It learns to estimate the occupancy state of the world and progressively construct top-down 2D global maps from egocentric views in a spatially extended environment. During the exploration, our proposed ESM model updates belief of the global map based on local observations using a recurrent neural network. It also augments the local mapping with a novel external memory to encode and store latent representations of the visited places over long-term exploration in large environments which enables agents to perform place recognition and hence, loop closure. Our proposed ESM network contributes in the following aspects: (1) without feature engineering, our model predicts free space based on egocentric views efficiently in an end-to-end manner; (2) different from other deep learning-based mapping systems, ESMN deals with continuous actions and states which is vitally important for robotic control in real applications. In the experiments, we demonstrate its accurate and robust global mapping capacities in 3D virtual mazes and realistic indoor environments by comparing with several competitive baselines.
[ { "created": "Tue, 31 Jul 2018 17:27:19 GMT", "version": "v1" } ]
2018-08-01
[ [ "Zhang", "Mengmi", "" ], [ "Ma", "Keng Teck", "" ], [ "Yen", "Shih-Cheng", "" ], [ "Lim", "Joo Hwee", "" ], [ "Zhao", "Qi", "" ], [ "Feng", "Jiashi", "" ] ]
Egocentric spatial memory (ESM) defines a memory system with encoding, storing, recognizing and recalling the spatial information about the environment from an egocentric perspective. We introduce an integrated deep neural network architecture for modeling ESM. It learns to estimate the occupancy state of the world and progressively construct top-down 2D global maps from egocentric views in a spatially extended environment. During the exploration, our proposed ESM model updates belief of the global map based on local observations using a recurrent neural network. It also augments the local mapping with a novel external memory to encode and store latent representations of the visited places over long-term exploration in large environments which enables agents to perform place recognition and hence, loop closure. Our proposed ESM network contributes in the following aspects: (1) without feature engineering, our model predicts free space based on egocentric views efficiently in an end-to-end manner; (2) different from other deep learning-based mapping systems, ESMN deals with continuous actions and states which is vitally important for robotic control in real applications. In the experiments, we demonstrate its accurate and robust global mapping capacities in 3D virtual mazes and realistic indoor environments by comparing with several competitive baselines.
2110.14460
Maciej Drozdowski
Thomas Robertazzi, Maciej Drozdowski
Interaction Maxima in Distributed Systems
10 pages, 1 figure
null
null
null
cs.DM
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper we study the maximum degree of interaction which may emerge in distributed systems. It is assumed that a distributed system is represented by a graph of nodes interacting over edges. Each node has some amount of data. The intensity of interaction over an edge is proportional to the product of the amounts of data in each node at either end of the edge. The maximum sum of interactions over the edges is searched for. This model can be extended to other interacting entities. For bipartite graphs and odd-length cycles we prove that the greatest degree of interaction emerges when the whole data is concentrated in an arbitrary pair of neighbors. Equal partitioning of the load is shown to be optimum for complete graphs. Finally, we show that in general graphs for maximum interaction the data should be distributed equally between the nodes of the largest clique in the graph. We also present in this context a result of Motzkin and Straus from 1965 for the maximal interaction objective.
[ { "created": "Wed, 27 Oct 2021 14:28:11 GMT", "version": "v1" } ]
2021-10-28
[ [ "Robertazzi", "Thomas", "" ], [ "Drozdowski", "Maciej", "" ] ]
In this paper we study the maximum degree of interaction which may emerge in distributed systems. It is assumed that a distributed system is represented by a graph of nodes interacting over edges. Each node has some amount of data. The intensity of interaction over an edge is proportional to the product of the amounts of data in each node at either end of the edge. The maximum sum of interactions over the edges is searched for. This model can be extended to other interacting entities. For bipartite graphs and odd-length cycles we prove that the greatest degree of interaction emerges when the whole data is concentrated in an arbitrary pair of neighbors. Equal partitioning of the load is shown to be optimum for complete graphs. Finally, we show that in general graphs for maximum interaction the data should be distributed equally between the nodes of the largest clique in the graph. We also present in this context a result of Motzkin and Straus from 1965 for the maximal interaction objective.
2208.07464
Brendon G. Anderson
Brendon G. Anderson, Tanmay Gautam, Somayeh Sojoudi
An Overview and Prospective Outlook on Robust Training and Certification of Machine Learning Models
null
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this discussion paper, we survey recent research surrounding robustness of machine learning models. As learning algorithms become increasingly more popular in data-driven control systems, their robustness to data uncertainty must be ensured in order to maintain reliable safety-critical operations. We begin by reviewing common formalisms for such robustness, and then move on to discuss popular and state-of-the-art techniques for training robust machine learning models as well as methods for provably certifying such robustness. From this unification of robust machine learning, we identify and discuss pressing directions for future research in the area.
[ { "created": "Mon, 15 Aug 2022 23:09:54 GMT", "version": "v1" }, { "created": "Tue, 27 Sep 2022 16:55:39 GMT", "version": "v2" } ]
2022-09-28
[ [ "Anderson", "Brendon G.", "" ], [ "Gautam", "Tanmay", "" ], [ "Sojoudi", "Somayeh", "" ] ]
In this discussion paper, we survey recent research surrounding robustness of machine learning models. As learning algorithms become increasingly more popular in data-driven control systems, their robustness to data uncertainty must be ensured in order to maintain reliable safety-critical operations. We begin by reviewing common formalisms for such robustness, and then move on to discuss popular and state-of-the-art techniques for training robust machine learning models as well as methods for provably certifying such robustness. From this unification of robust machine learning, we identify and discuss pressing directions for future research in the area.
2011.05507
Chun-Na Li
Yan-Ru Guo, Yan-Qin Bai, Chun-Na Li, Lan Bai, Yuan-Hai Shao
Two-dimensional Bhattacharyya bound linear discriminant analysis with its applications
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently proposed L2-norm linear discriminant analysis criterion via the Bhattacharyya error bound estimation (L2BLDA) is an effective improvement of linear discriminant analysis (LDA) for feature extraction. However, L2BLDA is only proposed to cope with vector input samples. When faced with two-dimensional (2D) inputs, such as images, it will lose some useful information, since it does not consider the intrinsic structure of images. In this paper, we extend L2BLDA to a two-dimensional Bhattacharyya bound linear discriminant analysis (2DBLDA). 2DBLDA maximizes the matrix-based between-class distance which is measured by the weighted pairwise distances of class means and meanwhile minimizes the matrix-based within-class distance. The weighting constant between the between-class and within-class terms is determined by the involved data, which makes the proposed 2DBLDA adaptive. In addition, the criterion of 2DBLDA is equivalent to optimizing an upper bound of the Bhattacharyya error. The construction of 2DBLDA makes it avoid the small sample size problem while also possessing robustness, and it can be solved through a simple standard eigenvalue decomposition problem. The experimental results on image recognition and face image reconstruction demonstrate the effectiveness of the proposed methods.
[ { "created": "Wed, 11 Nov 2020 01:56:42 GMT", "version": "v1" } ]
2020-11-12
[ [ "Guo", "Yan-Ru", "" ], [ "Bai", "Yan-Qin", "" ], [ "Li", "Chun-Na", "" ], [ "Bai", "Lan", "" ], [ "Shao", "Yuan-Hai", "" ] ]
Recently proposed L2-norm linear discriminant analysis criterion via the Bhattacharyya error bound estimation (L2BLDA) is an effective improvement of linear discriminant analysis (LDA) for feature extraction. However, L2BLDA is only proposed to cope with vector input samples. When faced with two-dimensional (2D) inputs, such as images, it will lose some useful information, since it does not consider the intrinsic structure of images. In this paper, we extend L2BLDA to a two-dimensional Bhattacharyya bound linear discriminant analysis (2DBLDA). 2DBLDA maximizes the matrix-based between-class distance which is measured by the weighted pairwise distances of class means and meanwhile minimizes the matrix-based within-class distance. The weighting constant between the between-class and within-class terms is determined by the involved data, which makes the proposed 2DBLDA adaptive. In addition, the criterion of 2DBLDA is equivalent to optimizing an upper bound of the Bhattacharyya error. The construction of 2DBLDA makes it avoid the small sample size problem while also possessing robustness, and it can be solved through a simple standard eigenvalue decomposition problem. The experimental results on image recognition and face image reconstruction demonstrate the effectiveness of the proposed methods.
2310.20212
Yuan Wei
Zhiyuan Wei, Jing Sun, Zijian Zhang, Xianhao Zhang, Meng Li, Liehuang Zhu
A Comparative Evaluation of Automated Analysis Tools for Solidity Smart Contracts
24 pages, 6 figure, IEEE Communications Surveys & Tutorials
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Blockchain smart contracts have emerged as a transformative force in the digital realm, spawning a diverse range of compelling applications. Since Solidity smart contracts across various domains manage trillions of dollars in virtual coins, they become a prime target for attacks. One of the primary challenges is keeping abreast of the latest techniques and tools for developing secure smart contracts and examining those already deployed. In this paper, we seek to address these challenges from four aspects: (1) We begin by examining ten automatic tools, specifically focusing on their methodologies and their ability to identify vulnerabilities in Solidity smart contracts. (2) We propose a novel criterion for evaluating these tools, based on the ISO/IEC 25010 standard. (3) To facilitate the evaluation of the selected tools, we construct a benchmark that encompasses two distinct datasets: a collection of 389 labelled smart contracts and a scaled set of 20,000 unique cases from real-world contracts. (4) We provide a comparison of the selected tools, offering insights into their strengths and weaknesses and highlighting areas where further improvements are needed. Through this evaluation, we hope to provide developers and researchers with valuable guidance on selecting and using smart contract analysis tools and contribute to the ongoing efforts to improve the security and reliability of smart contracts.
[ { "created": "Tue, 31 Oct 2023 06:20:42 GMT", "version": "v1" }, { "created": "Wed, 1 Nov 2023 15:54:52 GMT", "version": "v2" }, { "created": "Thu, 2 Nov 2023 00:33:22 GMT", "version": "v3" } ]
2023-11-03
[ [ "Wei", "Zhiyuan", "" ], [ "Sun", "Jing", "" ], [ "Zhang", "Zijian", "" ], [ "Zhang", "Xianhao", "" ], [ "Li", "Meng", "" ], [ "Zhu", "Liehuang", "" ] ]
Blockchain smart contracts have emerged as a transformative force in the digital realm, spawning a diverse range of compelling applications. Since Solidity smart contracts across various domains manage trillions of dollars in virtual coins, they become a prime target for attacks. One of the primary challenges is keeping abreast of the latest techniques and tools for developing secure smart contracts and examining those already deployed. In this paper, we seek to address these challenges from four aspects: (1) We begin by examining ten automatic tools, specifically focusing on their methodologies and their ability to identify vulnerabilities in Solidity smart contracts. (2) We propose a novel criterion for evaluating these tools, based on the ISO/IEC 25010 standard. (3) To facilitate the evaluation of the selected tools, we construct a benchmark that encompasses two distinct datasets: a collection of 389 labelled smart contracts and a scaled set of 20,000 unique cases from real-world contracts. (4) We provide a comparison of the selected tools, offering insights into their strengths and weaknesses and highlighting areas where further improvements are needed. Through this evaluation, we hope to provide developers and researchers with valuable guidance on selecting and using smart contract analysis tools and contribute to the ongoing efforts to improve the security and reliability of smart contracts.
2206.01202
Chieh Hubert Lin
Chieh Hubert Lin, Hsin-Ying Lee, Hung-Yu Tseng, Maneesh Singh, Ming-Hsuan Yang
Unveiling The Mask of Position-Information Pattern Through the Mist of Image Features
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies show that paddings in convolutional neural networks encode absolute position information which can negatively affect the model performance for certain tasks. However, existing metrics for quantifying the strength of positional information remain unreliable and frequently lead to erroneous results. To address this issue, we propose novel metrics for measuring (and visualizing) the encoded positional information. We formally define the encoded information as PPP (Position-information Pattern from Padding) and conduct a series of experiments to study its properties as well as its formation. The proposed metrics measure the presence of positional information more reliably than the existing metrics based on PosENet and a test in F-Conv. We also demonstrate that for any extant (and proposed) padding schemes, PPP is primarily a learning artifact and is less dependent on the characteristics of the underlying padding schemes.
[ { "created": "Thu, 2 Jun 2022 17:59:57 GMT", "version": "v1" } ]
2022-06-03
[ [ "Lin", "Chieh Hubert", "" ], [ "Lee", "Hsin-Ying", "" ], [ "Tseng", "Hung-Yu", "" ], [ "Singh", "Maneesh", "" ], [ "Yang", "Ming-Hsuan", "" ] ]
Recent studies show that paddings in convolutional neural networks encode absolute position information which can negatively affect the model performance for certain tasks. However, existing metrics for quantifying the strength of positional information remain unreliable and frequently lead to erroneous results. To address this issue, we propose novel metrics for measuring (and visualizing) the encoded positional information. We formally define the encoded information as PPP (Position-information Pattern from Padding) and conduct a series of experiments to study its properties as well as its formation. The proposed metrics measure the presence of positional information more reliably than the existing metrics based on PosENet and a test in F-Conv. We also demonstrate that for any extant (and proposed) padding schemes, PPP is primarily a learning artifact and is less dependent on the characteristics of the underlying padding schemes.
1011.3571
Kristina Lerman
Rumi Ghosh and Kristina Lerman
A Framework for Quantitative Analysis of Cascades on Networks
In Proceedings of 4th ACM Conference on Web Search and Data Mining
null
null
null
cs.SI cs.CY physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How does information flow in online social networks? How does the structure and size of the information cascade evolve in time? How can we efficiently mine the information contained in cascade dynamics? We approach these questions empirically and present an efficient and scalable mathematical framework for quantitative analysis of cascades on networks. We define a cascade generating function that captures the details of the microscopic dynamics of the cascades. We show that this function can also be used to compute the macroscopic properties of cascades, such as their size, spread, diameter, number of paths, and average path length. We present an algorithm to efficiently compute the cascade generating function and demonstrate that while significantly compressing information within a cascade, it nevertheless allows us to accurately reconstruct its structure. We use this framework to study information dynamics on the social network of Digg. Digg allows users to post and vote on stories, and easily see the stories that friends have voted on. As a story spreads on Digg through voting, it generates cascades. We extract cascades of more than 3,500 Digg stories and calculate their macroscopic and microscopic properties. We identify several trends in cascade dynamics: spreading via chaining, branching and community. We discuss how these affect the spread of the story through the Digg social network. Our computational framework is general and offers a practical solution to quantitative analysis of the microscopic structure of even very large cascades.
[ { "created": "Tue, 16 Nov 2010 01:54:16 GMT", "version": "v1" }, { "created": "Wed, 17 Nov 2010 20:14:51 GMT", "version": "v2" } ]
2010-11-18
[ [ "Ghosh", "Rumi", "" ], [ "Lerman", "Kristina", "" ] ]
How does information flow in online social networks? How does the structure and size of the information cascade evolve in time? How can we efficiently mine the information contained in cascade dynamics? We approach these questions empirically and present an efficient and scalable mathematical framework for quantitative analysis of cascades on networks. We define a cascade generating function that captures the details of the microscopic dynamics of the cascades. We show that this function can also be used to compute the macroscopic properties of cascades, such as their size, spread, diameter, number of paths, and average path length. We present an algorithm to efficiently compute the cascade generating function and demonstrate that while significantly compressing information within a cascade, it nevertheless allows us to accurately reconstruct its structure. We use this framework to study information dynamics on the social network of Digg. Digg allows users to post and vote on stories, and easily see the stories that friends have voted on. As a story spreads on Digg through voting, it generates cascades. We extract cascades of more than 3,500 Digg stories and calculate their macroscopic and microscopic properties. We identify several trends in cascade dynamics: spreading via chaining, branching and community. We discuss how these affect the spread of the story through the Digg social network. Our computational framework is general and offers a practical solution to quantitative analysis of the microscopic structure of even very large cascades.
1403.5315
Emrah Akyol
Mustafa Mehmetoglu, Emrah Akyol, Kenneth Rose
A Deterministic Annealing Optimization Approach for Witsenhausen's and Related Decentralized Control Settings
submitted to CDC'14
null
null
null
cs.SY cs.IT math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the problem of mapping optimization in decentralized control problems. A global optimization algorithm is proposed based on the ideas of "deterministic annealing" - a powerful non-convex optimization framework derived from information theoretic principles with analogies to statistical physics. The key idea is to randomize the mappings and control the Shannon entropy of the system during optimization. The entropy constraint is gradually relaxed in a deterministic annealing process while tracking the minimum, to obtain the ultimate deterministic mappings. Deterministic annealing has been successfully employed in several problems including clustering, vector quantization, regression, as well as the Witsenhausen's counterexample in our recent work[1]. We extend our method to a more involved setting, a variation of Witsenhausen's counterexample, where there is a side channel between the two controllers. The problem can be viewed as a two stage cancellation problem. We demonstrate that there exist complex strategies that can exploit the side channel efficiently, obtaining significant gains over the best affine and known non-linear strategies.
[ { "created": "Thu, 20 Mar 2014 22:15:24 GMT", "version": "v1" } ]
2014-03-24
[ [ "Mehmetoglu", "Mustafa", "" ], [ "Akyol", "Emrah", "" ], [ "Rose", "Kenneth", "" ] ]
This paper studies the problem of mapping optimization in decentralized control problems. A global optimization algorithm is proposed based on the ideas of "deterministic annealing" - a powerful non-convex optimization framework derived from information theoretic principles with analogies to statistical physics. The key idea is to randomize the mappings and control the Shannon entropy of the system during optimization. The entropy constraint is gradually relaxed in a deterministic annealing process while tracking the minimum, to obtain the ultimate deterministic mappings. Deterministic annealing has been successfully employed in several problems including clustering, vector quantization, regression, as well as the Witsenhausen's counterexample in our recent work[1]. We extend our method to a more involved setting, a variation of Witsenhausen's counterexample, where there is a side channel between the two controllers. The problem can be viewed as a two stage cancellation problem. We demonstrate that there exist complex strategies that can exploit the side channel efficiently, obtaining significant gains over the best affine and known non-linear strategies.
cs/0610159
Vaneet Aggarwal
Vaneet Aggarwal, A. Robert Calderbank
Boolean Functions, Projection Operators and Quantum Error Correcting Codes
Submitted to IEEE Transactions on Information Theory, October 2006, to appear in IEEE Transactions on Information Theory, 2008
IEEE Trans. Inf. Theory, vol. 54, no. 4, pp.1700-1707, Apr. 2008.
10.1109/TIT.2008.917720
null
cs.IT math.IT quant-ph
null
This paper describes a fundamental correspondence between Boolean functions and projection operators in Hilbert space. The correspondence is widely applicable, and it is used in this paper to provide a common mathematical framework for the design of both additive and non-additive quantum error correcting codes. The new framework leads to the construction of a variety of codes including an infinite class of codes that extend the original ((5,6,2)) code found by Rains [21]. It also extends to operator quantum error correcting codes.
[ { "created": "Fri, 27 Oct 2006 16:50:41 GMT", "version": "v1" }, { "created": "Thu, 1 Mar 2007 20:58:22 GMT", "version": "v2" }, { "created": "Mon, 24 Sep 2007 13:20:15 GMT", "version": "v3" } ]
2009-04-14
[ [ "Aggarwal", "Vaneet", "" ], [ "Calderbank", "A. Robert", "" ] ]
This paper describes a fundamental correspondence between Boolean functions and projection operators in Hilbert space. The correspondence is widely applicable, and it is used in this paper to provide a common mathematical framework for the design of both additive and non-additive quantum error correcting codes. The new framework leads to the construction of a variety of codes including an infinite class of codes that extend the original ((5,6,2)) code found by Rains [21]. It also extends to operator quantum error correcting codes.
2303.13126
Jing Zhao
Jing Zhao, Heliang Zheng, Chaoyue Wang, Long Lan, Wenjing Yang
MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models
Accepted by ICCV 2023
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of open-source AI communities has produced a cornucopia of powerful text-guided diffusion models that are trained on various datasets. However, few explorations have been conducted on ensembling such models to combine their strengths. In this work, we propose a simple yet effective method called Saliency-aware Noise Blending (SNB) that can empower the fused text-guided diffusion models to achieve more controllable generation. Specifically, we experimentally find that the responses of classifier-free guidance are highly related to the saliency of generated images. Thus we propose to trust different models in their areas of expertise by blending the predicted noises of two diffusion models in a saliency-aware manner. SNB is training-free and can be completed within a DDIM sampling process. Additionally, it can automatically align the semantics of two noise spaces without requiring additional annotations such as masks. Extensive experiments show the impressive effectiveness of SNB in various applications. Project page is available at https://magicfusion.github.io/.
[ { "created": "Thu, 23 Mar 2023 09:30:39 GMT", "version": "v1" }, { "created": "Sat, 25 Mar 2023 14:38:16 GMT", "version": "v2" }, { "created": "Fri, 14 Jul 2023 09:36:35 GMT", "version": "v3" } ]
2023-07-20
[ [ "Zhao", "Jing", "" ], [ "Zheng", "Heliang", "" ], [ "Wang", "Chaoyue", "" ], [ "Lan", "Long", "" ], [ "Yang", "Wenjing", "" ] ]
The advent of open-source AI communities has produced a cornucopia of powerful text-guided diffusion models that are trained on various datasets. However, few explorations have been conducted on ensembling such models to combine their strengths. In this work, we propose a simple yet effective method called Saliency-aware Noise Blending (SNB) that can empower the fused text-guided diffusion models to achieve more controllable generation. Specifically, we experimentally find that the responses of classifier-free guidance are highly related to the saliency of generated images. Thus we propose to trust different models in their areas of expertise by blending the predicted noises of two diffusion models in a saliency-aware manner. SNB is training-free and can be completed within a DDIM sampling process. Additionally, it can automatically align the semantics of two noise spaces without requiring additional annotations such as masks. Extensive experiments show the impressive effectiveness of SNB in various applications. Project page is available at https://magicfusion.github.io/.
2212.00881
Saeed Mohammadzadeh
Saeed Mohammadzadeh, Peerasait Prachaseree, Emma Lejeune
Investigating Deep Learning Model Calibration for Classification Problems in Mechanics
21 pages, 9 figures
null
null
null
cs.LG physics.data-an
http://creativecommons.org/licenses/by-sa/4.0/
Recently, there has been a growing interest in applying machine learning methods to problems in engineering mechanics. In particular, there has been significant interest in applying deep learning techniques to predicting the mechanical behavior of heterogeneous materials and structures. Researchers have shown that deep learning methods are able to effectively predict mechanical behavior with low error for systems ranging from engineered composites, to geometrically complex metamaterials, to heterogeneous biological tissue. However, there has been comparatively little attention paid to deep learning model calibration, i.e., the match between predicted probabilities of outcomes and the true probabilities of outcomes. In this work, we perform a comprehensive investigation into ML model calibration across seven open access engineering mechanics datasets that cover three distinct types of mechanical problems. Specifically, we evaluate both model and model calibration error for multiple machine learning methods, and investigate the influence of ensemble averaging and post hoc model calibration via temperature scaling. Overall, we find that ensemble averaging of deep neural networks is both an effective and consistent tool for improving model calibration, while temperature scaling has comparatively limited benefits. Looking forward, we anticipate that this investigation will lay the foundation for future work in developing mechanics specific approaches to deep learning model calibration.
[ { "created": "Thu, 1 Dec 2022 21:39:48 GMT", "version": "v1" }, { "created": "Tue, 14 Mar 2023 17:22:41 GMT", "version": "v2" } ]
2023-03-15
[ [ "Mohammadzadeh", "Saeed", "" ], [ "Prachaseree", "Peerasait", "" ], [ "Lejeune", "Emma", "" ] ]
Recently, there has been a growing interest in applying machine learning methods to problems in engineering mechanics. In particular, there has been significant interest in applying deep learning techniques to predicting the mechanical behavior of heterogeneous materials and structures. Researchers have shown that deep learning methods are able to effectively predict mechanical behavior with low error for systems ranging from engineered composites, to geometrically complex metamaterials, to heterogeneous biological tissue. However, there has been comparatively little attention paid to deep learning model calibration, i.e., the match between predicted probabilities of outcomes and the true probabilities of outcomes. In this work, we perform a comprehensive investigation into ML model calibration across seven open access engineering mechanics datasets that cover three distinct types of mechanical problems. Specifically, we evaluate both model error and model calibration error for multiple machine learning methods, and investigate the influence of ensemble averaging and post hoc model calibration via temperature scaling. Overall, we find that ensemble averaging of deep neural networks is both an effective and consistent tool for improving model calibration, while temperature scaling has comparatively limited benefits. Looking forward, we anticipate that this investigation will lay the foundation for future work in developing mechanics specific approaches to deep learning model calibration.
2102.09086
Robi Bhattacharjee
Robi Bhattacharjee and Kamalika Chaudhuri
Consistent Non-Parametric Methods for Maximizing Robustness
Accepted to NeurIPS 2021
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Learning classifiers that are robust to adversarial examples has received a great deal of recent attention. A major drawback of the standard robust learning framework is that there is an artificial robustness radius $r$ that applies to all inputs. This ignores the fact that data may be highly heterogeneous, in which case it is plausible that robustness regions should be larger in some regions of data, and smaller in others. In this paper, we address this limitation by proposing a new limit classifier, called the neighborhood optimal classifier, that extends the Bayes optimal classifier outside its support by using the label of the closest in-support point. We then argue that this classifier maximizes the size of its robustness regions subject to the constraint of having accuracy equal to the Bayes optimal. We then present sufficient conditions under which general non-parametric methods that can be represented as weight functions converge towards this limit, and show that both nearest neighbors and kernel classifiers satisfy them under certain conditions.
[ { "created": "Thu, 18 Feb 2021 00:44:07 GMT", "version": "v1" }, { "created": "Mon, 8 Nov 2021 04:14:49 GMT", "version": "v2" }, { "created": "Wed, 18 Jan 2023 18:02:20 GMT", "version": "v3" } ]
2023-01-19
[ [ "Bhattacharjee", "Robi", "" ], [ "Chaudhuri", "Kamalika", "" ] ]
Learning classifiers that are robust to adversarial examples has received a great deal of recent attention. A major drawback of the standard robust learning framework is that there is an artificial robustness radius $r$ that applies to all inputs. This ignores the fact that data may be highly heterogeneous, in which case it is plausible that robustness regions should be larger in some regions of data, and smaller in others. In this paper, we address this limitation by proposing a new limit classifier, called the neighborhood optimal classifier, that extends the Bayes optimal classifier outside its support by using the label of the closest in-support point. We then argue that this classifier maximizes the size of its robustness regions subject to the constraint of having accuracy equal to the Bayes optimal. We then present sufficient conditions under which general non-parametric methods that can be represented as weight functions converge towards this limit, and show that both nearest neighbors and kernel classifiers satisfy them under certain conditions.
2310.13583
Ofir Arviv
Ofir Arviv, Dmitry Nikolaev, Taelin Karidi and Omri Abend
Improving Cross-Lingual Transfer through Subtree-Aware Word Reordering
Accepted to EMNLP Findings 2023
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the impressive growth of the abilities of multilingual language models, such as XLM-R and mT5, it has been shown that they still face difficulties when tackling typologically-distant languages, particularly in the low-resource setting. One obstacle for effective cross-lingual transfer is variability in word-order patterns. It can be potentially mitigated via source- or target-side word reordering, and numerous approaches to reordering have been proposed. However, they rely on language-specific rules, work on the level of POS tags, or only target the main clause, leaving subordinate clauses intact. To address these limitations, we present a new powerful reordering method, defined in terms of Universal Dependencies, that is able to learn fine-grained word-order patterns conditioned on the syntactic context from a small amount of annotated data and can be applied at all levels of the syntactic tree. We conduct experiments on a diverse set of tasks and show that our method consistently outperforms strong baselines over different language pairs and model architectures. This performance advantage holds true in both zero-shot and few-shot scenarios.
[ { "created": "Fri, 20 Oct 2023 15:25:53 GMT", "version": "v1" } ]
2023-10-23
[ [ "Arviv", "Ofir", "" ], [ "Nikolaev", "Dmitry", "" ], [ "Karidi", "Taelin", "" ], [ "Abend", "Omri", "" ] ]
Despite the impressive growth of the abilities of multilingual language models, such as XLM-R and mT5, it has been shown that they still face difficulties when tackling typologically-distant languages, particularly in the low-resource setting. One obstacle for effective cross-lingual transfer is variability in word-order patterns. It can be potentially mitigated via source- or target-side word reordering, and numerous approaches to reordering have been proposed. However, they rely on language-specific rules, work on the level of POS tags, or only target the main clause, leaving subordinate clauses intact. To address these limitations, we present a new powerful reordering method, defined in terms of Universal Dependencies, that is able to learn fine-grained word-order patterns conditioned on the syntactic context from a small amount of annotated data and can be applied at all levels of the syntactic tree. We conduct experiments on a diverse set of tasks and show that our method consistently outperforms strong baselines over different language pairs and model architectures. This performance advantage holds true in both zero-shot and few-shot scenarios.
2008.08931
Liyi Guo
Liyi Guo, Rui Lu, Haoqi Zhang, Junqi Jin, Zhenzhe Zheng, Fan Wu, Jin Li, Haiyang Xu, Han Li, Wenkai Lu, Jian Xu, Kun Gai
A Deep Prediction Network for Understanding Advertiser Intent and Satisfaction
null
CIKM 2020, Virtual Event, Ireland
10.1145/3340531.3412681
null
cs.SI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For e-commerce platforms such as Taobao and Amazon, advertisers play an important role in the entire digital ecosystem: their behaviors explicitly influence users' browsing and shopping experience; more importantly, advertisers' expenditure on advertising constitutes a primary source of platform revenue. Therefore, providing better services for advertisers is essential for the long-term prosperity of e-commerce platforms. To achieve this goal, the ad platform needs to have an in-depth understanding of advertisers in terms of both their marketing intents and satisfaction over the advertising performance, based on which further optimization could be carried out to service the advertisers in the correct direction. In this paper, we propose a novel Deep Satisfaction Prediction Network (DSPN), which models advertiser intent and satisfaction simultaneously. It employs a two-stage network structure where advertiser intent vector and satisfaction are jointly learned by considering the features of advertiser's action information and advertising performance indicators. Experiments on an Alibaba advertisement dataset and online evaluations show that our proposed DSPN outperforms state-of-the-art baselines and has stable performance in terms of AUC in the online environment. Further analyses show that DSPN not only predicts advertisers' satisfaction accurately but also learns an explainable advertiser intent, revealing the opportunities to optimize the advertising performance further.
[ { "created": "Thu, 20 Aug 2020 15:08:50 GMT", "version": "v1" } ]
2020-09-01
[ [ "Guo", "Liyi", "" ], [ "Lu", "Rui", "" ], [ "Zhang", "Haoqi", "" ], [ "Jin", "Junqi", "" ], [ "Zheng", "Zhenzhe", "" ], [ "Wu", "Fan", "" ], [ "Li", "Jin", "" ], [ "Xu", "Haiyang", "" ], [ "Li", "Han", "" ], [ "Lu", "Wenkai", "" ], [ "Xu", "Jian", "" ], [ "Gai", "Kun", "" ] ]
For e-commerce platforms such as Taobao and Amazon, advertisers play an important role in the entire digital ecosystem: their behaviors explicitly influence users' browsing and shopping experience; more importantly, advertisers' expenditure on advertising constitutes a primary source of platform revenue. Therefore, providing better services for advertisers is essential for the long-term prosperity of e-commerce platforms. To achieve this goal, the ad platform needs to have an in-depth understanding of advertisers in terms of both their marketing intents and satisfaction over the advertising performance, based on which further optimization could be carried out to service the advertisers in the correct direction. In this paper, we propose a novel Deep Satisfaction Prediction Network (DSPN), which models advertiser intent and satisfaction simultaneously. It employs a two-stage network structure where advertiser intent vector and satisfaction are jointly learned by considering the features of advertiser's action information and advertising performance indicators. Experiments on an Alibaba advertisement dataset and online evaluations show that our proposed DSPN outperforms state-of-the-art baselines and has stable performance in terms of AUC in the online environment. Further analyses show that DSPN not only predicts advertisers' satisfaction accurately but also learns an explainable advertiser intent, revealing the opportunities to optimize the advertising performance further.
1304.3249
Paolo Parisen Toldin
Jean-Yves Moyen, Paolo Parisen Toldin
A polytime complexity analyser for Probabilistic Polynomial Time over imperative stack programs
null
null
null
null
cs.LO cs.CC
http://creativecommons.org/licenses/by/3.0/
We present iSAPP (Imperative Static Analyser for Probabilistic Polynomial Time), a complexity verifier tool that is sound and extensionally complete for the Probabilistic Polynomial Time (PP) complexity class. iSAPP works on an imperative programming language for stack machines. The certificate of polynomiality can be built in polytime, with respect to the number of stacks used.
[ { "created": "Thu, 11 Apr 2013 10:20:24 GMT", "version": "v1" } ]
2013-04-12
[ [ "Moyen", "Jean-Yves", "" ], [ "Toldin", "Paolo Parisen", "" ] ]
We present iSAPP (Imperative Static Analyser for Probabilistic Polynomial Time), a complexity verifier tool that is sound and extensionally complete for the Probabilistic Polynomial Time (PP) complexity class. iSAPP works on an imperative programming language for stack machines. The certificate of polynomiality can be built in polytime, with respect to the number of stacks used.
2404.00686
Srinjoy Roy
Srinjoy Roy, Swagatam Das
Utilizing Maximum Mean Discrepancy Barycenter for Propagating the Uncertainty of Value Functions in Reinforcement Learning
We found some flaws in our analysis and we are in the process of rectifying those
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Accounting for the uncertainty of value functions boosts exploration in Reinforcement Learning (RL). Our work introduces Maximum Mean Discrepancy Q-Learning (MMD-QL) to improve Wasserstein Q-Learning (WQL) for uncertainty propagation during Temporal Difference (TD) updates. MMD-QL uses the MMD barycenter for this purpose, as MMD provides a tighter estimate of closeness between probability measures than the Wasserstein distance. Firstly, we establish that MMD-QL is Probably Approximately Correct in MDP (PAC-MDP) under the average loss metric. Concerning the accumulated rewards, experiments on tabular environments show that MMD-QL outperforms WQL and other algorithms. Secondly, we incorporate deep networks into MMD-QL to create MMD Q-Network (MMD-QN). Making reasonable assumptions, we analyze the convergence rates of MMD-QN using function approximation. Empirical results on challenging Atari games demonstrate that MMD-QN performs well compared to benchmark deep RL algorithms, highlighting its effectiveness in handling large state-action spaces.
[ { "created": "Sun, 31 Mar 2024 13:41:56 GMT", "version": "v1" }, { "created": "Wed, 3 Apr 2024 14:32:17 GMT", "version": "v2" } ]
2024-04-04
[ [ "Roy", "Srinjoy", "" ], [ "Das", "Swagatam", "" ] ]
Accounting for the uncertainty of value functions boosts exploration in Reinforcement Learning (RL). Our work introduces Maximum Mean Discrepancy Q-Learning (MMD-QL) to improve Wasserstein Q-Learning (WQL) for uncertainty propagation during Temporal Difference (TD) updates. MMD-QL uses the MMD barycenter for this purpose, as MMD provides a tighter estimate of closeness between probability measures than the Wasserstein distance. Firstly, we establish that MMD-QL is Probably Approximately Correct in MDP (PAC-MDP) under the average loss metric. Concerning the accumulated rewards, experiments on tabular environments show that MMD-QL outperforms WQL and other algorithms. Secondly, we incorporate deep networks into MMD-QL to create MMD Q-Network (MMD-QN). Making reasonable assumptions, we analyze the convergence rates of MMD-QN using function approximation. Empirical results on challenging Atari games demonstrate that MMD-QN performs well compared to benchmark deep RL algorithms, highlighting its effectiveness in handling large state-action spaces.
2406.03893
Anushka Singh
Anushka Singh, Ananya B. Sai, Raj Dabre, Ratish Puduppully, Anoop Kunchukuttan, Mitesh M Khapra
How Good is Zero-Shot MT Evaluation for Low Resource Indian Languages?
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
While machine translation evaluation has been studied primarily for high-resource languages, there has been a recent interest in evaluation for low-resource languages due to the increasing availability of data and models. In this paper, we focus on a zero-shot evaluation setting for low-resource Indian languages, namely Assamese, Kannada, Maithili, and Punjabi. We collect sufficient Multi-Dimensional Quality Metrics (MQM) and Direct Assessment (DA) annotations to create test sets and meta-evaluate a plethora of automatic evaluation metrics. We observe that even for learned metrics, which are known to exhibit zero-shot performance, the Kendall Tau and Pearson correlations with human annotations are only as high as 0.32 and 0.45. Synthetic data approaches show mixed results and overall do not help close the gap by much for these languages. This indicates that there is still a long way to go for low-resource evaluation.
[ { "created": "Thu, 6 Jun 2024 09:28:08 GMT", "version": "v1" } ]
2024-06-07
[ [ "Singh", "Anushka", "" ], [ "Sai", "Ananya B.", "" ], [ "Dabre", "Raj", "" ], [ "Puduppully", "Ratish", "" ], [ "Kunchukuttan", "Anoop", "" ], [ "Khapra", "Mitesh M", "" ] ]
While machine translation evaluation has been studied primarily for high-resource languages, there has been a recent interest in evaluation for low-resource languages due to the increasing availability of data and models. In this paper, we focus on a zero-shot evaluation setting for low-resource Indian languages, namely Assamese, Kannada, Maithili, and Punjabi. We collect sufficient Multi-Dimensional Quality Metrics (MQM) and Direct Assessment (DA) annotations to create test sets and meta-evaluate a plethora of automatic evaluation metrics. We observe that even for learned metrics, which are known to exhibit zero-shot performance, the Kendall Tau and Pearson correlations with human annotations are only as high as 0.32 and 0.45. Synthetic data approaches show mixed results and overall do not help close the gap by much for these languages. This indicates that there is still a long way to go for low-resource evaluation.
2008.05440
Jie Yang
Jie Yang, Kaichun Mo, Yu-Kun Lai, Leonidas J. Guibas, Lin Gao
DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation
Accepted to ACM Transactions on Graphics 2022, 26 pages
null
null
null
cs.GR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D shape generation is a fundamental operation in computer graphics. While significant progress has been made, especially with recent deep generative models, it remains a challenge to synthesize high-quality shapes with rich geometric details and complex structure, in a controllable manner. To tackle this, we introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes, where two key aspects of shapes, geometry, and structure, are encoded in a synergistic manner to ensure plausibility of the generated shapes, while also being disentangled as much as possible. This supports a range of novel shape generation applications with disentangled control, such as interpolation of structure (geometry) while keeping geometry (structure) unchanged. To achieve this, we simultaneously learn structure and geometry through variational autoencoders (VAEs) in a hierarchical manner for both, with bijective mappings at each level. In this manner, we effectively encode geometry and structure in separate latent spaces, while ensuring their compatibility: the structure is used to guide the geometry and vice versa. At the leaf level, the part geometry is represented using a conditional part VAE, to encode high-quality geometric details, guided by the structure context as the condition. Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods. The code has been released at https://github.com/IGLICT/DSG-Net.
[ { "created": "Wed, 12 Aug 2020 17:06:51 GMT", "version": "v1" }, { "created": "Fri, 14 Aug 2020 02:38:45 GMT", "version": "v2" }, { "created": "Mon, 24 May 2021 14:45:26 GMT", "version": "v3" }, { "created": "Sat, 28 May 2022 17:40:15 GMT", "version": "v4" } ]
2022-05-31
[ [ "Yang", "Jie", "" ], [ "Mo", "Kaichun", "" ], [ "Lai", "Yu-Kun", "" ], [ "Guibas", "Leonidas J.", "" ], [ "Gao", "Lin", "" ] ]
3D shape generation is a fundamental operation in computer graphics. While significant progress has been made, especially with recent deep generative models, it remains a challenge to synthesize high-quality shapes with rich geometric details and complex structure, in a controllable manner. To tackle this, we introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes, where two key aspects of shapes, geometry, and structure, are encoded in a synergistic manner to ensure plausibility of the generated shapes, while also being disentangled as much as possible. This supports a range of novel shape generation applications with disentangled control, such as interpolation of structure (geometry) while keeping geometry (structure) unchanged. To achieve this, we simultaneously learn structure and geometry through variational autoencoders (VAEs) in a hierarchical manner for both, with bijective mappings at each level. In this manner, we effectively encode geometry and structure in separate latent spaces, while ensuring their compatibility: the structure is used to guide the geometry and vice versa. At the leaf level, the part geometry is represented using a conditional part VAE, to encode high-quality geometric details, guided by the structure context as the condition. Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods. The code has been released at https://github.com/IGLICT/DSG-Net.
1004.2079
Yashodhan Kanoria
Mohsen Bayati, Christian Borgs, Jennifer Chayes, Yashodhan Kanoria and Andrea Montanari
Bargaining dynamics in exchange networks
47 pages, SODA 2011, invited to Journal of Economic Theory
Proc. ACM-SIAM Symp. on Discrete Algorithms (2011) 1518-1537
null
null
cs.GT cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a one-sided assignment market or exchange network with transferable utility and propose a model for the dynamics of bargaining in such a market. Our dynamical model is local, involving iterative updates of 'offers' based on estimated best alternative matches, in the spirit of pairwise Nash bargaining. We establish that when a balanced outcome (a generalization of the pairwise Nash bargaining solution to networks) exists, our dynamics converges rapidly to such an outcome. We extend our results to the cases of (i) general agent 'capacity constraints', i.e., an agent may be allowed to participate in multiple matches, and (ii) 'unequal bargaining powers' (where we also find a surprising change in rate of convergence).
[ { "created": "Mon, 12 Apr 2010 23:11:16 GMT", "version": "v1" }, { "created": "Tue, 6 Dec 2011 19:40:38 GMT", "version": "v2" } ]
2015-03-14
[ [ "Bayati", "Mohsen", "" ], [ "Borgs", "Christian", "" ], [ "Chayes", "Jennifer", "" ], [ "Kanoria", "Yashodhan", "" ], [ "Montanari", "Andrea", "" ] ]
We consider a one-sided assignment market or exchange network with transferable utility and propose a model for the dynamics of bargaining in such a market. Our dynamical model is local, involving iterative updates of 'offers' based on estimated best alternative matches, in the spirit of pairwise Nash bargaining. We establish that when a balanced outcome (a generalization of the pairwise Nash bargaining solution to networks) exists, our dynamics converges rapidly to such an outcome. We extend our results to the cases of (i) general agent 'capacity constraints', i.e., an agent may be allowed to participate in multiple matches, and (ii) 'unequal bargaining powers' (where we also find a surprising change in rate of convergence).
2208.04159
Ningning Wang
Ningning Wang, Guodong Li, Sihuang Hu, Min Ye
Constructing MSR codes with subpacketization $2^{n/3}$ for $k+1$ helper nodes
null
IEEE Transactions on Information Theory (Volume: 69, Issue: 6, June 2023)
10.1109/TIT.2023.3238759
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Wang et al. (IEEE Transactions on Information Theory, vol. 62, no. 8, 2016) proposed an explicit construction of an $(n=k+2,k)$ Minimum Storage Regenerating (MSR) code with $2$ parity nodes and subpacketization $2^{k/3}$. The number of helper nodes for this code is $d=k+1=n-1$, and this code has the smallest subpacketization among all the existing explicit constructions of MSR codes with the same $n,k$ and $d$. In this paper, we present a new construction of MSR codes for a wider range of parameters. More precisely, we still fix $d=k+1$, but we allow the code length $n$ to be any integer satisfying $n\ge k+2$. The field size of our code is linear in $n$, and the subpacketization of our code is $2^{n/3}$. This value is slightly larger than the subpacketization of the construction by Wang et al. because their code construction only guarantees optimal repair for all the systematic nodes while our code construction guarantees optimal repair for all nodes.
[ { "created": "Mon, 8 Aug 2022 13:59:11 GMT", "version": "v1" }, { "created": "Thu, 11 May 2023 14:58:30 GMT", "version": "v2" } ]
2023-05-23
[ [ "Wang", "Ningning", "" ], [ "Li", "Guodong", "" ], [ "Hu", "Sihuang", "" ], [ "Ye", "Min", "" ] ]
Wang et al. (IEEE Transactions on Information Theory, vol. 62, no. 8, 2016) proposed an explicit construction of an $(n=k+2,k)$ Minimum Storage Regenerating (MSR) code with $2$ parity nodes and subpacketization $2^{k/3}$. The number of helper nodes for this code is $d=k+1=n-1$, and this code has the smallest subpacketization among all the existing explicit constructions of MSR codes with the same $n,k$ and $d$. In this paper, we present a new construction of MSR codes for a wider range of parameters. More precisely, we still fix $d=k+1$, but we allow the code length $n$ to be any integer satisfying $n\ge k+2$. The field size of our code is linear in $n$, and the subpacketization of our code is $2^{n/3}$. This value is slightly larger than the subpacketization of the construction by Wang et al. because their code construction only guarantees optimal repair for all the systematic nodes while our code construction guarantees optimal repair for all nodes.
2002.04095
Juan-Manuel Torres-Moreno
R\'emy Saksik, Alejandro Molina-Villegas, Andr\'ea Carneiro Linhares, Juan-Manuel Torres-Moreno
Automatic Discourse Segmentation: an evaluation in French
7 pages, 2 figures, 2 tables
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we describe some discursive segmentation methods as well as a preliminary evaluation of the segmentation quality. Although our experiments were carried out for documents in French, we have developed three discursive segmentation models solely based on resources simultaneously available in several languages: marker lists and statistical POS labeling. We have also carried out automatic evaluations of these systems against the Annodis corpus, which is a manually annotated reference. The results obtained are very encouraging.
[ { "created": "Mon, 10 Feb 2020 21:35:39 GMT", "version": "v1" }, { "created": "Thu, 11 Jun 2020 20:27:29 GMT", "version": "v2" } ]
2020-06-15
[ [ "Saksik", "Rémy", "" ], [ "Molina-Villegas", "Alejandro", "" ], [ "Linhares", "Andréa Carneiro", "" ], [ "Torres-Moreno", "Juan-Manuel", "" ] ]
In this article, we describe some discursive segmentation methods as well as a preliminary evaluation of the segmentation quality. Although our experiments were carried out for documents in French, we have developed three discursive segmentation models solely based on resources simultaneously available in several languages: marker lists and statistical POS labeling. We have also carried out automatic evaluations of these systems against the Annodis corpus, which is a manually annotated reference. The results obtained are very encouraging.
0907.4547
EPTCS
Janusz Brzozowski
Quotient Complexity of Regular Languages
null
EPTCS 3, 2009, pp. 17-28
10.4204/EPTCS.3.2
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The past research on the state complexity of operations on regular languages is examined, and a new approach based on an old method (derivatives of regular expressions) is presented. Since state complexity is a property of a language, it is appropriate to define it in formal-language terms as the number of distinct quotients of the language, and to call it "quotient complexity". The problem of finding the quotient complexity of a language f(K,L) is considered, where K and L are regular languages and f is a regular operation, for example, union or concatenation. Since quotients can be represented by derivatives, one can find a formula for the typical quotient of f(K,L) in terms of the quotients of K and L. To obtain an upper bound on the number of quotients of f(K,L) all one has to do is count how many such quotients are possible, and this makes automaton constructions unnecessary. The advantages of this point of view are illustrated by many examples. Moreover, new general observations are presented to help in the estimation of the upper bounds on quotient complexity of regular operations.
[ { "created": "Mon, 27 Jul 2009 06:19:09 GMT", "version": "v1" } ]
2009-07-28
[ [ "Brzozowski", "Janusz", "" ] ]
The past research on the state complexity of operations on regular languages is examined, and a new approach based on an old method (derivatives of regular expressions) is presented. Since state complexity is a property of a language, it is appropriate to define it in formal-language terms as the number of distinct quotients of the language, and to call it "quotient complexity". The problem of finding the quotient complexity of a language f(K,L) is considered, where K and L are regular languages and f is a regular operation, for example, union or concatenation. Since quotients can be represented by derivatives, one can find a formula for the typical quotient of f(K,L) in terms of the quotients of K and L. To obtain an upper bound on the number of quotients of f(K,L) all one has to do is count how many such quotients are possible, and this makes automaton constructions unnecessary. The advantages of this point of view are illustrated by many examples. Moreover, new general observations are presented to help in the estimation of the upper bounds on quotient complexity of regular operations.
1912.02858
Victor Lecomte
Victor Lecomte and Omri Weinstein
Settling the relationship between Wilber's bounds for dynamic optimality
ESA 2020; 25 pages, 18 figures; v3 addresses reviewers' comments
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In FOCS 1986, Wilber proposed two combinatorial lower bounds on the operational cost of any binary search tree (BST) for a given access sequence $X \in [n]^m$. Both bounds play a central role in the ongoing pursuit of the dynamic optimality conjecture (Sleator and Tarjan, 1985), but their relationship remained unknown for more than three decades. We show that Wilber's Funnel bound dominates his Alternation bound for all $X$, and give a tight $\Theta(\lg\lg n)$ separation for some $X$, answering Wilber's conjecture and an open problem of Iacono, Demaine et al. The main ingredient of the proof is a new "symmetric" characterization of Wilber's Funnel bound, which proves that it is invariant under rotations of $X$. We use this characterization to provide initial indication that the Funnel bound matches the Independent Rectangle bound (Demaine et al., 2009), by proving that when the Funnel bound is constant, $\mathsf{IRB}_{\diagup\hspace{-.6em}\square}$ is linear. To the best of our knowledge, our results provide the first progress on Wilber's conjecture that the Funnel bound is dynamically optimal (1986).
[ { "created": "Thu, 5 Dec 2019 20:17:15 GMT", "version": "v1" }, { "created": "Thu, 12 Dec 2019 20:49:57 GMT", "version": "v2" }, { "created": "Sun, 28 Jun 2020 20:51:41 GMT", "version": "v3" } ]
2020-06-30
[ [ "Lecomte", "Victor", "" ], [ "Weinstein", "Omri", "" ] ]
In FOCS 1986, Wilber proposed two combinatorial lower bounds on the operational cost of any binary search tree (BST) for a given access sequence $X \in [n]^m$. Both bounds play a central role in the ongoing pursuit of the dynamic optimality conjecture (Sleator and Tarjan, 1985), but their relationship remained unknown for more than three decades. We show that Wilber's Funnel bound dominates his Alternation bound for all $X$, and give a tight $\Theta(\lg\lg n)$ separation for some $X$, answering Wilber's conjecture and an open problem of Iacono, Demaine et al. The main ingredient of the proof is a new "symmetric" characterization of Wilber's Funnel bound, which proves that it is invariant under rotations of $X$. We use this characterization to provide initial indication that the Funnel bound matches the Independent Rectangle bound (Demaine et al., 2009), by proving that when the Funnel bound is constant, $\mathsf{IRB}_{\diagup\hspace{-.6em}\square}$ is linear. To the best of our knowledge, our results provide the first progress on Wilber's conjecture that the Funnel bound is dynamically optimal (1986).
2110.04946
Hieu-Thi Luong
Hieu-Thi Luong, Junichi Yamagishi
LaughNet: synthesizing laughter utterances from waveform silhouettes and a single laughter example
null
null
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emotional and controllable speech synthesis is a topic that has received much attention. However, most studies have focused on improving expressiveness and controllability in the context of linguistic content, even though natural verbal human communication is inseparable from spontaneous non-speech expressions such as laughter, crying, or grunting. We propose a model called LaughNet for synthesizing laughter by using waveform silhouettes as inputs. The motivation is not simply to synthesize new laughter utterances, but to test a novel synthesis-control paradigm that uses an abstract representation of the waveform. We conducted basic listening test experiments, and the results showed that LaughNet can synthesize laughter utterances with moderate quality and retain the characteristics of the training example. More importantly, the generated waveforms have shapes similar to the input silhouettes. In future work, we will test the same method on other types of human nonverbal expressions and integrate it into more elaborate synthesis systems.
[ { "created": "Mon, 11 Oct 2021 00:45:07 GMT", "version": "v1" }, { "created": "Wed, 26 Jan 2022 01:40:13 GMT", "version": "v2" } ]
2022-01-27
[ [ "Luong", "Hieu-Thi", "" ], [ "Yamagishi", "Junichi", "" ] ]
Emotional and controllable speech synthesis is a topic that has received much attention. However, most studies have focused on improving expressiveness and controllability in the context of linguistic content, even though natural verbal human communication is inseparable from spontaneous non-speech expressions such as laughter, crying, or grunting. We propose a model called LaughNet for synthesizing laughter by using waveform silhouettes as inputs. The motivation is not simply to synthesize new laughter utterances, but to test a novel synthesis-control paradigm that uses an abstract representation of the waveform. We conducted basic listening test experiments, and the results showed that LaughNet can synthesize laughter utterances with moderate quality and retain the characteristics of the training example. More importantly, the generated waveforms have shapes similar to the input silhouettes. In future work, we will test the same method on other types of human nonverbal expressions and integrate it into more elaborate synthesis systems.
2006.03179
Garrett Bingham
Garrett Bingham and Risto Miikkulainen
Discovering Parametric Activation Functions
Published in Neural Networks. 34 pages, 10 figures, 11 tables
Neural Networks, Volume 148, 2022, Pages 48-65, ISSN 0893-6080
10.1016/j.neunet.2022.01.001
null
cs.LG cs.CV cs.NE stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent studies have shown that the choice of activation function can significantly affect the performance of deep learning networks. However, the benefits of novel activation functions have been inconsistent and task dependent, and therefore the rectified linear unit (ReLU) is still the most commonly used. This paper proposes a technique for customizing activation functions automatically, resulting in reliable improvements in performance. Evolutionary search is used to discover the general form of the function, and gradient descent to optimize its parameters for different parts of the network and over the learning process. Experiments with four different neural network architectures on the CIFAR-10 and CIFAR-100 image classification datasets show that this approach is effective. It discovers both general activation functions and specialized functions for different architectures, consistently improving accuracy over ReLU and other activation functions by significant margins. The approach can therefore be used as an automated optimization step in applying deep learning to new tasks.
[ { "created": "Fri, 5 Jun 2020 00:25:33 GMT", "version": "v1" }, { "created": "Tue, 6 Oct 2020 15:33:14 GMT", "version": "v2" }, { "created": "Tue, 8 Dec 2020 19:28:47 GMT", "version": "v3" }, { "created": "Sat, 30 Jan 2021 02:17:20 GMT", "version": "v4" }, { "created": "Fri, 21 Jan 2022 19:39:36 GMT", "version": "v5" } ]
2022-01-25
[ [ "Bingham", "Garrett", "" ], [ "Miikkulainen", "Risto", "" ] ]
Recent studies have shown that the choice of activation function can significantly affect the performance of deep learning networks. However, the benefits of novel activation functions have been inconsistent and task dependent, and therefore the rectified linear unit (ReLU) is still the most commonly used. This paper proposes a technique for customizing activation functions automatically, resulting in reliable improvements in performance. Evolutionary search is used to discover the general form of the function, and gradient descent to optimize its parameters for different parts of the network and over the learning process. Experiments with four different neural network architectures on the CIFAR-10 and CIFAR-100 image classification datasets show that this approach is effective. It discovers both general activation functions and specialized functions for different architectures, consistently improving accuracy over ReLU and other activation functions by significant margins. The approach can therefore be used as an automated optimization step in applying deep learning to new tasks.
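The division of labor the abstract describes — a fixed functional form whose parameters are then tuned by gradient descent — can be sketched with a Swish-like parametric activation f(x) = x * sigmoid(beta * x). This is an illustrative stand-in, not one of the functions discovered in the paper; the "true" parameter 1.5, the learning rate, and the squared-error objective are all arbitrary choices for the demo:

```python
import numpy as np

def swish(x, beta):
    # Parametric Swish-like activation: f(x) = x * sigmoid(beta * x).
    return x / (1.0 + np.exp(-beta * x))

def swish_dbeta(x, beta):
    # Partial derivative of the activation w.r.t. its shape parameter beta.
    s = 1.0 / (1.0 + np.exp(-beta * x))
    return x * x * s * (1.0 - s)

# Toy illustration: recover a "good" beta by gradient descent on a
# squared-error loss against outputs generated with beta = 1.5.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
target = swish(x, 1.5)
beta, lr = 0.1, 0.5
loss_before = np.mean((swish(x, beta) - target) ** 2)
for _ in range(300):
    err = swish(x, beta) - target
    beta -= lr * np.mean(2.0 * err * swish_dbeta(x, beta))
loss_after = np.mean((swish(x, beta) - target) ** 2)
```

In the paper's setting the outer evolutionary search would choose the functional form itself, while this inner loop adapts its parameters per layer and over training.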
1410.0382
Mircea Andrecut Dr
M. Andrecut
A String-Based Public Key Cryptosystem
In this revised version of the paper we show that the eavesdropper's problem of the proposed cryptosystem has a solution, and we give the details of the solution
null
null
null
cs.CR physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional methods in public key cryptography are based on number theory and suffer from problems such as dealing with very large numbers, making key creation cumbersome. Here, we propose a new public key cryptosystem based on strings only, which avoids the difficulties of the traditional number theory approach. The security mechanism for public and secret key generation is ensured by a recursive encoding mechanism embedded in a quasi-commutative-random function, resulting from the composition of a quasi-commutative function with a pseudo-random function. In this revised version of the paper we show that the eavesdropper's problem of the proposed cryptosystem has a solution, and we give the details of the solution.
[ { "created": "Fri, 5 Sep 2014 18:44:31 GMT", "version": "v1" }, { "created": "Mon, 19 Jan 2015 18:53:35 GMT", "version": "v2" } ]
2015-01-20
[ [ "Andrecut", "M.", "" ] ]
Traditional methods in public key cryptography are based on number theory and suffer from problems such as dealing with very large numbers, making key creation cumbersome. Here, we propose a new public key cryptosystem based on strings only, which avoids the difficulties of the traditional number theory approach. The security mechanism for public and secret key generation is ensured by a recursive encoding mechanism embedded in a quasi-commutative-random function, resulting from the composition of a quasi-commutative function with a pseudo-random function. In this revised version of the paper we show that the eavesdropper's problem of the proposed cryptosystem has a solution, and we give the details of the solution.
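The property the construction relies on is quasi-commutativity: f(f(x, a), b) = f(f(x, b), a). The textbook example of such a function is modular exponentiation; the sketch below only illustrates the property and is not the paper's string-based construction (the Mersenne-prime modulus is an arbitrary choice for the demo):

```python
def qc(x, a, n=2**61 - 1):
    # Modular exponentiation f(x, a) = x^a mod n is quasi-commutative:
    # applying exponents a then b equals applying b then a, since
    # (x^a)^b = x^(a*b) = (x^b)^a (mod n).
    return pow(x, a, n)
```

Quasi-commutativity is what lets parties apply their secret parameters to a shared value in any order — the mechanism behind commutative encryption schemes and one-way accumulators.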
2210.10226
Zillur Rahman
Zillur Rahman, Amit Mazumder Ami, Muhammad Ahsan Ullah
A Real-Time Wrong-Way Vehicle Detection Based on YOLO and Centroid Tracking
5 pages
2020 IEEE Region 10 Symposium (TENSYMP), page:916-920
10.1109/TENSYMP50017.2020.9230463
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Wrong-way driving is one of the main causes of road accidents and traffic jams all over the world. By detecting wrong-way vehicles, the number of accidents can be minimized and traffic jams can be reduced. With the increasing popularity of real-time traffic management systems and the availability of cheaper cameras, surveillance video has become a big source of data. In this paper, we propose an automatic wrong-way vehicle detection system for on-road surveillance camera footage. Our system works in three stages: detecting vehicles in the video frame using the You Only Look Once (YOLO) algorithm, tracking each vehicle in a specified region of interest using a centroid tracking algorithm, and detecting wrong-way driving vehicles. YOLO is very accurate in object detection, and the centroid tracking algorithm can track any moving object efficiently. Experiments with several traffic videos show that our proposed system can detect and identify any wrong-way vehicle in different light and weather conditions. The system is very simple and easy to implement.
[ { "created": "Wed, 19 Oct 2022 00:53:28 GMT", "version": "v1" } ]
2022-10-20
[ [ "Rahman", "Zillur", "" ], [ "Ami", "Amit Mazumder", "" ], [ "Ullah", "Muhammad Ahsan", "" ] ]
Wrong-way driving is one of the main causes of road accidents and traffic jams all over the world. By detecting wrong-way vehicles, the number of accidents can be minimized and traffic jams can be reduced. With the increasing popularity of real-time traffic management systems and the availability of cheaper cameras, surveillance video has become a big source of data. In this paper, we propose an automatic wrong-way vehicle detection system for on-road surveillance camera footage. Our system works in three stages: detecting vehicles in the video frame using the You Only Look Once (YOLO) algorithm, tracking each vehicle in a specified region of interest using a centroid tracking algorithm, and detecting wrong-way driving vehicles. YOLO is very accurate in object detection, and the centroid tracking algorithm can track any moving object efficiently. Experiments with several traffic videos show that our proposed system can detect and identify any wrong-way vehicle in different light and weather conditions. The system is very simple and easy to implement.
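The second and third stages — centroid tracking and direction checking — can be sketched as follows. This is a minimal greedy matcher, not the authors' implementation; the distance threshold and the allowed-direction vector are illustrative assumptions:

```python
import math

def update_tracks(tracks, detections, max_dist=50.0):
    """Greedy centroid tracker: match each detected centroid to the
    nearest existing track within max_dist; unmatched detections start
    new tracks. `tracks` maps track id -> last centroid (x, y)."""
    next_id = max(tracks, default=-1) + 1
    updated, used = {}, set()
    for cx, cy in detections:
        best, best_d = None, max_dist
        for tid, (tx, ty) in tracks.items():
            d = math.hypot(cx - tx, cy - ty)
            if tid not in used and d < best_d:
                best, best_d = tid, d
        if best is None:
            best, next_id = next_id, next_id + 1
        used.add(best)
        updated[best] = (cx, cy)
    return updated

def wrong_way(prev, curr, allowed_dir=(0.0, 1.0)):
    # Flag a vehicle whose displacement opposes the lane's allowed
    # direction of travel (negative dot product).
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    return dx * allowed_dir[0] + dy * allowed_dir[1] < 0
```

Per frame, YOLO would supply `detections`; track ids persist across frames, so comparing a track's consecutive centroids against the lane direction flags wrong-way motion.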
2202.13134
Fu Song
Qi Qin and JulianAndres JiYang and Fu Song and Taolue Chen and Xinyu Xing
Preventing Timing Side-Channels via Security-Aware Just-In-Time Compilation
null
null
null
null
cs.PL cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has shown that Just-In-Time (JIT) compilation can introduce timing side-channels into constant-time programs, which would otherwise be a principled and effective means to counter timing attacks. In this paper, we propose a novel approach to eliminate JIT-induced leaks from these programs. Specifically, we present an operational semantics and a formal definition of constant-time programs under JIT compilation, laying the foundation for reasoning about programs with JIT compilation. We then propose to eliminate JIT-induced leaks via a fine-grained JIT compilation, for which we provide an automated approach to generate policies and a novel type system to show its soundness. We develop a tool, DeJITLeak, for Java based on our approach and implement the fine-grained JIT compilation in HotSpot. Experimental results show that DeJITLeak can effectively and efficiently eliminate JIT-induced leaks on three datasets used in side-channel detection.
[ { "created": "Sat, 26 Feb 2022 13:06:15 GMT", "version": "v1" } ]
2022-03-01
[ [ "Qin", "Qi", "" ], [ "JiYang", "JulianAndres", "" ], [ "Song", "Fu", "" ], [ "Chen", "Taolue", "" ], [ "Xing", "Xinyu", "" ] ]
Recent work has shown that Just-In-Time (JIT) compilation can introduce timing side-channels into constant-time programs, which would otherwise be a principled and effective means to counter timing attacks. In this paper, we propose a novel approach to eliminate JIT-induced leaks from these programs. Specifically, we present an operational semantics and a formal definition of constant-time programs under JIT compilation, laying the foundation for reasoning about programs with JIT compilation. We then propose to eliminate JIT-induced leaks via a fine-grained JIT compilation, for which we provide an automated approach to generate policies and a novel type system to show its soundness. We develop a tool, DeJITLeak, for Java based on our approach and implement the fine-grained JIT compilation in HotSpot. Experimental results show that DeJITLeak can effectively and efficiently eliminate JIT-induced leaks on three datasets used in side-channel detection.
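For context, a "constant-time program" avoids secret-dependent branches and early exits. A standard source-level example is a comparison that always scans every byte (Python's standard library provides `hmac.compare_digest` for exactly this); the paper's point is that a JIT compiler may re-introduce secret-dependent timing even when the source looks like this sketch:

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    # Source-level constant-time comparison: never return early on a
    # mismatch; accumulate byte differences with XOR/OR instead.
    # (The length check leaks only the lengths, not the contents.)
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y
    return acc == 0
```

The guarantee here is purely at the source level: speculative inlining, profile-guided branch layout, or on-stack replacement by a JIT can make the executed machine code's timing depend on the data again, which is the gap DeJITLeak targets.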
1703.09807
Nhien-An Le-Khac
Lamine M. Aouad, Nhien-An Le-Khac, Tahar Kechadi
Grid-based Approaches for Distributed Data Mining Applications
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The data mining field is an important source of large-scale applications and datasets, which are getting more and more common. In this paper, we present grid-based approaches for two basic data mining applications, and a performance evaluation on an experimental grid environment that provides interesting monitoring capabilities and configuration tools. We propose a new distributed clustering approach and a distributed frequent itemset generation method, both well-adapted to grid environments. Performance evaluation is done using the Condor system and its workflow manager DAGMan. We also compare this performance analysis to a simple analytical model to evaluate the overheads related to the workflow engine and the underlying grid system. This will specifically show that realistic performance expectations are currently difficult to achieve on the grid.
[ { "created": "Tue, 28 Mar 2017 21:19:24 GMT", "version": "v1" } ]
2017-03-30
[ [ "Aouad", "Lamine M.", "" ], [ "Le-Khac", "Nhien-An", "" ], [ "Kechadi", "Tahar", "" ] ]
The data mining field is an important source of large-scale applications and datasets, which are getting more and more common. In this paper, we present grid-based approaches for two basic data mining applications, and a performance evaluation on an experimental grid environment that provides interesting monitoring capabilities and configuration tools. We propose a new distributed clustering approach and a distributed frequent itemset generation method, both well-adapted to grid environments. Performance evaluation is done using the Condor system and its workflow manager DAGMan. We also compare this performance analysis to a simple analytical model to evaluate the overheads related to the workflow engine and the underlying grid system. This will specifically show that realistic performance expectations are currently difficult to achieve on the grid.
1605.06154
Michael Nelson
Herbert Van de Sompel, David S. H. Rosenthal, Michael L. Nelson
Web Infrastructure to Support e-Journal Preservation (and More)
23 pages, 5 figures
null
null
null
cs.DL
http://creativecommons.org/licenses/by/4.0/
E-journal preservation systems have to ingest millions of articles each year. Ingest, especially of the "long tail" of journals from small publishers, is the largest element of their cost. Cost is the major reason that archives contain less than half the content they should. Automation is essential to minimize these costs. This paper examines the potential for automation beyond the status quo based on the API provided by CrossRef, ANSI/NISO Z39.99 ResourceSync, and the provision of typed links in publishers' HTTP response headers. These changes would not merely assist e-journal preservation and other cross-venue scholarly applications, but would help remedy the gap that research has revealed between DOIs' potential and actual benefits.
[ { "created": "Thu, 19 May 2016 21:44:01 GMT", "version": "v1" } ]
2016-05-23
[ [ "Van de Sompel", "Herbert", "" ], [ "Rosenthal", "David S. H.", "" ], [ "Nelson", "Michael L.", "" ] ]
E-journal preservation systems have to ingest millions of articles each year. Ingest, especially of the "long tail" of journals from small publishers, is the largest element of their cost. Cost is the major reason that archives contain less than half the content they should. Automation is essential to minimize these costs. This paper examines the potential for automation beyond the status quo based on the API provided by CrossRef, ANSI/NISO Z39.99 ResourceSync, and the provision of typed links in publishers' HTTP response headers. These changes would not merely assist e-journal preservation and other cross-venue scholarly applications, but would help remedy the gap that research has revealed between DOIs' potential and actual benefits.
1802.01448
Deepak Vijaykeerthy
Deepak Vijaykeerthy, Anshuman Suri, Sameep Mehta, Ponnurangam Kumaraguru
Hardening Deep Neural Networks via Adversarial Model Cascades
null
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) are vulnerable to malicious inputs crafted by an adversary to produce erroneous outputs. Works on securing neural networks against adversarial examples achieve high empirical robustness on simple datasets such as MNIST. However, these techniques are inadequate when empirically tested on complex datasets such as CIFAR-10 and SVHN. Further, existing techniques are designed to target specific attacks and fail to generalize across attacks. We propose Adversarial Model Cascades (AMC) as a way to tackle the above inadequacies. Our approach trains a cascade of models sequentially, where each model is optimized to be robust towards a mixture of multiple attacks. Ultimately, it yields a single model which is secure against a wide range of attacks, namely FGSM, Elastic, Virtual Adversarial Perturbations and Madry. On average, AMC increases the model's empirical robustness against various attacks simultaneously by a significant margin (of 6.225% for MNIST, 5.075% for SVHN and 2.65% for CIFAR10). At the same time, the model's performance on non-adversarial inputs is comparable to that of state-of-the-art models.
[ { "created": "Fri, 2 Feb 2018 09:02:38 GMT", "version": "v1" }, { "created": "Tue, 6 Feb 2018 16:38:56 GMT", "version": "v2" }, { "created": "Mon, 12 Feb 2018 06:28:25 GMT", "version": "v3" }, { "created": "Sun, 4 Nov 2018 11:16:23 GMT", "version": "v4" } ]
2018-11-06
[ [ "Vijaykeerthy", "Deepak", "" ], [ "Suri", "Anshuman", "" ], [ "Mehta", "Sameep", "" ], [ "Kumaraguru", "Ponnurangam", "" ] ]
Deep neural networks (DNNs) are vulnerable to malicious inputs crafted by an adversary to produce erroneous outputs. Works on securing neural networks against adversarial examples achieve high empirical robustness on simple datasets such as MNIST. However, these techniques are inadequate when empirically tested on complex datasets such as CIFAR-10 and SVHN. Further, existing techniques are designed to target specific attacks and fail to generalize across attacks. We propose Adversarial Model Cascades (AMC) as a way to tackle the above inadequacies. Our approach trains a cascade of models sequentially, where each model is optimized to be robust towards a mixture of multiple attacks. Ultimately, it yields a single model which is secure against a wide range of attacks, namely FGSM, Elastic, Virtual Adversarial Perturbations and Madry. On average, AMC increases the model's empirical robustness against various attacks simultaneously by a significant margin (of 6.225% for MNIST, 5.075% for SVHN and 2.65% for CIFAR10). At the same time, the model's performance on non-adversarial inputs is comparable to that of state-of-the-art models.
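Of the attacks named, FGSM is the simplest to state: perturb the input by epsilon in the direction of the sign of the loss gradient with respect to the input. The sketch below applies it to a tiny hand-rolled logistic model; the weights, input, and epsilon are arbitrary illustrative values, not anything from the paper:

```python
import numpy as np

def fgsm(x, grad_x, eps):
    # Fast Gradient Sign Method: move each input coordinate by eps in
    # the direction that increases the loss.
    return x + eps * np.sign(grad_x)

# Victim: a fixed logistic model p = sigmoid(w.x) classifying x as 1.
# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
w = np.array([2.0, -1.0])
x = np.array([0.5, 0.5])
y = 1.0
p = 1.0 / (1.0 + np.exp(-(w @ x)))   # confident "class 1" prediction
grad_x = (p - y) * w
x_adv = fgsm(x, grad_x, eps=0.3)     # small perturbation flips the label
```

A cascade in the AMC sense would fold adversarial examples from several such attack generators into the training mixture of each successive model.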
1908.02999
Fabian Schilling
Fabian Schilling and Julien Lecoeur and Fabrizio Schiano and Dario Floreano
Learning Vision-based Flight in Drone Swarms by Imitation
8 pages, 8 figures, accepted for publication in the IEEE Robotics and Automation Letters (RA-L) on July 28, 2019. arXiv admin note: substantial text overlap with arXiv:1809.00543
null
null
null
cs.RO cs.CV cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decentralized drone swarms deployed today either rely on sharing of positions among agents or detecting swarm members with the help of visual markers. This work proposes an entirely visual approach to coordinate markerless drone swarms based on imitation learning. Each agent is controlled by a small and efficient convolutional neural network that takes raw omnidirectional images as inputs and predicts 3D velocity commands that match those computed by a flocking algorithm. We start training in simulation and propose a simple yet effective unsupervised domain adaptation approach to transfer the learned controller to the real world. We further train the controller with data collected in our motion capture hall. We show that the convolutional neural network trained on the visual inputs of the drone can learn not only robust inter-agent collision avoidance but also cohesion of the swarm in a sample-efficient manner. The neural controller effectively learns to localize other agents in the visual input, which we show by visualizing the regions with the most influence on the motion of an agent. We remove the dependence on sharing positions among swarm members by taking only local visual information into account for control. Our work can therefore be seen as the first step towards a fully decentralized, vision-based swarm without the need for communication or visual markers.
[ { "created": "Thu, 8 Aug 2019 10:19:48 GMT", "version": "v1" } ]
2019-08-09
[ [ "Schilling", "Fabian", "" ], [ "Lecoeur", "Julien", "" ], [ "Schiano", "Fabrizio", "" ], [ "Floreano", "Dario", "" ] ]
Decentralized drone swarms deployed today either rely on sharing of positions among agents or detecting swarm members with the help of visual markers. This work proposes an entirely visual approach to coordinate markerless drone swarms based on imitation learning. Each agent is controlled by a small and efficient convolutional neural network that takes raw omnidirectional images as inputs and predicts 3D velocity commands that match those computed by a flocking algorithm. We start training in simulation and propose a simple yet effective unsupervised domain adaptation approach to transfer the learned controller to the real world. We further train the controller with data collected in our motion capture hall. We show that the convolutional neural network trained on the visual inputs of the drone can learn not only robust inter-agent collision avoidance but also cohesion of the swarm in a sample-efficient manner. The neural controller effectively learns to localize other agents in the visual input, which we show by visualizing the regions with the most influence on the motion of an agent. We remove the dependence on sharing positions among swarm members by taking only local visual information into account for control. Our work can therefore be seen as the first step towards a fully decentralized, vision-based swarm without the need for communication or visual markers.
1911.11547
Dat Quoc Nguyen
Dai Quoc Nguyen, Dat Quoc Nguyen, Son Bao Pham
A Vietnamese Text-Based Conversational Agent
In Proceedings of the 25th International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE 2012)
null
10.1007/978-3-642-31087-4_71
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a Vietnamese text-based conversational agent architecture for a specific knowledge domain, integrated into a question answering system. When the question answering system fails to answer users' input, our conversational agent can step in to interact with users and provide answers. Experimental results are promising: our Vietnamese text-based conversational agent achieves positive feedback in a study conducted in the university academic regulation domain.
[ { "created": "Tue, 26 Nov 2019 14:11:50 GMT", "version": "v1" } ]
2019-11-27
[ [ "Nguyen", "Dai Quoc", "" ], [ "Nguyen", "Dat Quoc", "" ], [ "Pham", "Son Bao", "" ] ]
This paper introduces a Vietnamese text-based conversational agent architecture for a specific knowledge domain, integrated into a question answering system. When the question answering system fails to answer users' input, our conversational agent can step in to interact with users and provide answers. Experimental results are promising: our Vietnamese text-based conversational agent achieves positive feedback in a study conducted in the university academic regulation domain.
1312.0525
Kiryung Lee
Kiryung Lee, Yihong Wu, and Yoram Bresler
Near Optimal Compressed Sensing of a Class of Sparse Low-Rank Matrices via Sparse Power Factorization
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compressed sensing of simultaneously sparse and low-rank matrices enables recovery of sparse signals from a few linear measurements of their bilinear form. One important question is how many measurements are needed for a stable reconstruction in the presence of measurement noise. Unlike conventional compressed sensing for sparse vectors, where convex relaxation via the $\ell_1$-norm achieves near optimal performance, for compressed sensing of sparse low-rank matrices it has been shown recently by Oymak et al. that convex programs using the nuclear norm and the mixed norm are highly suboptimal even in the noise-free scenario. We propose an alternating minimization algorithm called sparse power factorization (SPF) for compressed sensing of sparse rank-one matrices. For a class of signals whose sparse representation coefficients are fast-decaying, SPF achieves stable recovery of the rank-1 matrix formed by their outer product and requires a number of measurements within a logarithmic factor of the information-theoretic fundamental limit. For the recovery of general sparse low-rank matrices, we propose subspace-concatenated SPF (SCSPF), which has analogous near optimal performance guarantees to SPF in the rank-1 case. Numerical results show that SPF and SCSPF empirically outperform convex programs using the best known combinations of mixed norm and nuclear norm.
[ { "created": "Mon, 2 Dec 2013 17:37:00 GMT", "version": "v1" }, { "created": "Thu, 30 Jun 2016 02:43:34 GMT", "version": "v2" } ]
2016-07-01
[ [ "Lee", "Kiryung", "" ], [ "Wu", "Yihong", "" ], [ "Bresler", "Yoram", "" ] ]
Compressed sensing of simultaneously sparse and low-rank matrices enables recovery of sparse signals from a few linear measurements of their bilinear form. One important question is how many measurements are needed for a stable reconstruction in the presence of measurement noise. Unlike conventional compressed sensing for sparse vectors, where convex relaxation via the $\ell_1$-norm achieves near optimal performance, for compressed sensing of sparse low-rank matrices it has been shown recently by Oymak et al. that convex programs using the nuclear norm and the mixed norm are highly suboptimal even in the noise-free scenario. We propose an alternating minimization algorithm called sparse power factorization (SPF) for compressed sensing of sparse rank-one matrices. For a class of signals whose sparse representation coefficients are fast-decaying, SPF achieves stable recovery of the rank-1 matrix formed by their outer product and requires a number of measurements within a logarithmic factor of the information-theoretic fundamental limit. For the recovery of general sparse low-rank matrices, we propose subspace-concatenated SPF (SCSPF), which has analogous near optimal performance guarantees to SPF in the rank-1 case. Numerical results show that SPF and SCSPF empirically outperform convex programs using the best known combinations of mixed norm and nuclear norm.
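The alternating structure of SPF can be sketched in the rank-1 case: holding one factor fixed, the measurements $y_i = u^\top A_i v$ become linear in the other factor, so each half-step is a least-squares solve followed by hard thresholding to the $k$ largest entries. This is a schematic version under Gaussian measurements, without the paper's initialization or recovery guarantees:

```python
import numpy as np

def hard_threshold(z, k):
    # Keep only the k largest-magnitude entries of z.
    z = z.copy()
    z[np.argsort(np.abs(z))[:-k]] = 0.0
    return z

def spf_rank1(A, y, k, iters=30):
    """Schematic Sparse Power Factorization, rank-1 case.
    A: (m, n1, n2) measurement matrices with y_i = u^T A_i v.
    Alternate least squares in u and v, hard-thresholding each factor."""
    m, n1, n2 = A.shape
    v = np.random.default_rng(0).normal(size=n2)
    for _ in range(iters):
        # With v fixed, y_i = (A_i v)^T u is linear in u.
        u = np.linalg.lstsq(A @ v, y, rcond=None)[0]
        u = hard_threshold(u, k)
        # With u fixed, y_i = (A_i^T u)^T v is linear in v.
        v = np.linalg.lstsq(np.einsum('mij,i->mj', A, u), y, rcond=None)[0]
        v = hard_threshold(v, k)
    return np.outer(u, v)
```

By construction the output is rank-1 with at most $k$ nonzero rows and $k$ nonzero columns; the paper's analysis concerns when this iteration actually recovers the true outer product.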
2011.07200
Yingtao Luo
Ziyang Zhang and Yingtao Luo
Deep Spatial Learning with Molecular Vibration
NeurIPS 2020 Machine Learning for Molecules Workshop, Vancouver, Canada
null
null
null
cs.LG physics.chem-ph
http://creativecommons.org/licenses/by/4.0/
Machine learning over-fitting caused by data scarcity greatly limits the application of machine learning to molecules. Due to differences in manufacturing processes, big data is not always available through computational chemistry methods for some tasks, causing a data scarcity problem for machine learning algorithms. Here we propose to extract the natural features of molecular structures and rationally distort them to augment data availability. This method allows a machine learning project to leverage physics-informed augmentation, providing a significant boost to predictive accuracy. Successfully verified on the prediction of rejection rate and flux of thin-film polyamide nanofiltration membranes, with the relative error dropping from 16.34% to 6.71% and the coefficient of determination rising from 0.16 to 0.75, the proposed deep spatial learning with molecular vibration is widely instructive for molecular science. Experimental comparison unequivocally demonstrates its superiority over common learning algorithms.
[ { "created": "Sat, 14 Nov 2020 02:46:43 GMT", "version": "v1" } ]
2020-11-20
[ [ "Zhang", "Ziyang", "" ], [ "Luo", "Yingtao", "" ] ]
Machine learning over-fitting caused by data scarcity greatly limits the application of machine learning to molecules. Due to differences in manufacturing processes, big data is not always available through computational chemistry methods for some tasks, causing a data scarcity problem for machine learning algorithms. Here we propose to extract the natural features of molecular structures and rationally distort them to augment data availability. This method allows a machine learning project to leverage physics-informed augmentation, providing a significant boost to predictive accuracy. Successfully verified on the prediction of rejection rate and flux of thin-film polyamide nanofiltration membranes, with the relative error dropping from 16.34% to 6.71% and the coefficient of determination rising from 0.16 to 0.75, the proposed deep spatial learning with molecular vibration is widely instructive for molecular science. Experimental comparison unequivocally demonstrates its superiority over common learning algorithms.
2001.00522
Na-Young Ahn
Na-Young Ahn, Dong Hoon Lee
Schemes for Privacy Data Destruction in a NAND Flash Memory
Pages 181305 - 181313
null
10.1109/ACCESS.2019.2958628
null
cs.CR cs.SY eess.SP eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose schemes for efficiently destroying privacy data in a NAND flash memory. Generally, even if privacy data is discarded from NAND flash memories, there is a high probability that the data will remain in an invalid block. This is a management problem that arises from the specificity of the program and erase operations of NAND flash memories. When updating pages or performing garbage collection, there is a problem that valid data remains in at least one unmapped memory block. Is it possible to apply the obligation to delete privacy data to existing NAND flash memory? This paper is the answer to this question. We propose a partial overwriting scheme, an SLC programming scheme, and a deletion duty pulse application scheme for invalid pages to effectively solve privacy data destruction issues due to the remaining data. Such privacy data destruction schemes basically utilize at least one state in which data can be written to the programmed cells based on a multi-level cell program operation. Our privacy data destruction schemes have advantages in terms of block management as compared with conventional erase schemes, and are very economical in terms of time and cost. The proposed privacy data destruction schemes can be easily applied to many storage devices and data centers using NAND flash memories.
[ { "created": "Sat, 28 Dec 2019 03:52:02 GMT", "version": "v1" } ]
2020-11-20
[ [ "Ahn", "Na-Young", "" ], [ "Lee", "Dong Hoon", "" ] ]
We propose schemes for efficiently destroying privacy data in a NAND flash memory. Generally, even if privacy data is discarded from NAND flash memories, there is a high probability that the data will remain in an invalid block. This is a management problem that arises from the specificity of the program and erase operations of NAND flash memories. When updating pages or performing garbage collection, there is a problem that valid data remains in at least one unmapped memory block. Is it possible to apply the obligation to delete privacy data to existing NAND flash memory? This paper is the answer to this question. We propose a partial overwriting scheme, an SLC programming scheme, and a deletion duty pulse application scheme for invalid pages to effectively solve privacy data destruction issues due to the remaining data. Such privacy data destruction schemes basically utilize at least one state in which data can be written to the programmed cells based on a multi-level cell program operation. Our privacy data destruction schemes have advantages in terms of block management as compared with conventional erase schemes, and are very economical in terms of time and cost. The proposed privacy data destruction schemes can be easily applied to many storage devices and data centers using NAND flash memories.
2301.04696
Joberto Martins Prof. Dr.
Eduardo S. Xavier and Nazim Agoulmine and Joberto S. B. Martins
On Modeling Network Slicing Communication Resources with SARSA Optimization
8 pages, 9 figures, ADVANCE conference paper
null
10.5281/zenodo.7513695
null
cs.NI cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Network slicing is a crucial enabler to support the composition and deployment of virtual network infrastructures required by the dynamic behavior of networks like 5G/6G mobile networks, IoT-aware networks, e-health systems, and industry verticals like the internet of vehicles (IoV) and industry 4.0. The communication slices and their allocated communication resources are essential in slicing architectures for resource orchestration and allocation, virtual network function (VNF) deployment, and slice operation functionalities. The communication slices provide the communications capabilities required to support slice operation, SLA guarantees, and QoS/ QoE application requirements. Therefore, this contribution proposes a networking slicing conceptual model to formulate the optimization problem related to the sharing of communication resources among communication slices. First, we present a conceptual model of network slicing, we then formulate analytically some aspects of the model and the optimization problem to address. Next, we proposed to use a SARSA agent to solve the problem and implement a proof of concept prototype. Finally, we present the obtained results and discuss them.
[ { "created": "Wed, 11 Jan 2023 20:00:42 GMT", "version": "v1" } ]
2023-01-13
[ [ "Xavier", "Eduardo S.", "" ], [ "Agoulmine", "Nazim", "" ], [ "Martins", "Joberto S. B.", "" ] ]
Network slicing is a crucial enabler to support the composition and deployment of virtual network infrastructures required by the dynamic behavior of networks like 5G/6G mobile networks, IoT-aware networks, e-health systems, and industry verticals like the internet of vehicles (IoV) and industry 4.0. The communication slices and their allocated communication resources are essential in slicing architectures for resource orchestration and allocation, virtual network function (VNF) deployment, and slice operation functionalities. The communication slices provide the communications capabilities required to support slice operation, SLA guarantees, and QoS/QoE application requirements. Therefore, this contribution proposes a network slicing conceptual model to formulate the optimization problem related to the sharing of communication resources among communication slices. First, we present a conceptual model of network slicing; we then formulate analytically some aspects of the model and the optimization problem to address. Next, we propose to use a SARSA agent to solve the problem and implement a proof-of-concept prototype. Finally, we present the obtained results and discuss them.
2107.02378
Jun Shu
Jun Shu, Deyu Meng, Zongben Xu
Learning an Explicit Hyperparameter Prediction Function Conditioned on Tasks
74 pages
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Meta learning has attracted much attention recently in machine learning community. Contrary to conventional machine learning aiming to learn inherent prediction rules to predict labels for new query data, meta learning aims to learn the learning methodology for machine learning from observed tasks, so as to generalize to new query tasks by leveraging the meta-learned learning methodology. In this study, we interpret such learning methodology as learning an explicit hyper-parameter prediction function shared by all training tasks. Specifically, this function is represented as a parameterized function called meta-learner, mapping from a training/test task to its suitable hyper-parameter setting, extracted from a pre-specified function set called meta learning machine. Such setting guarantees that the meta-learned learning methodology is able to flexibly fit diverse query tasks, instead of only obtaining fixed hyper-parameters by many current meta learning methods, with less adaptability to query task's variations. Such understanding of meta learning also makes it easily succeed from traditional learning theory for analyzing its generalization bounds with general losses/tasks/models. The theory naturally leads to some feasible controlling strategies for ameliorating the quality of the extracted meta-learner, verified to be able to finely ameliorate its generalization capability in some typical meta learning applications, including few-shot regression, few-shot classification and domain generalization.
[ { "created": "Tue, 6 Jul 2021 04:05:08 GMT", "version": "v1" }, { "created": "Sat, 13 May 2023 09:41:42 GMT", "version": "v2" }, { "created": "Sat, 1 Jul 2023 09:27:29 GMT", "version": "v3" } ]
2023-07-04
[ [ "Shu", "Jun", "" ], [ "Meng", "Deyu", "" ], [ "Xu", "Zongben", "" ] ]
Meta learning has attracted much attention recently in the machine learning community. Contrary to conventional machine learning, which aims to learn inherent prediction rules to predict labels for new query data, meta learning aims to learn the learning methodology for machine learning from observed tasks, so as to generalize to new query tasks by leveraging the meta-learned learning methodology. In this study, we interpret such learning methodology as learning an explicit hyper-parameter prediction function shared by all training tasks. Specifically, this function is represented as a parameterized function called meta-learner, mapping from a training/test task to its suitable hyper-parameter setting, extracted from a pre-specified function set called meta learning machine. Such setting guarantees that the meta-learned learning methodology is able to flexibly fit diverse query tasks, instead of only obtaining fixed hyper-parameters as many current meta learning methods do, with less adaptability to query task variations. Such understanding of meta learning also makes it easy to inherit tools from traditional learning theory for analyzing its generalization bounds with general losses/tasks/models. The theory naturally leads to some feasible controlling strategies for ameliorating the quality of the extracted meta-learner, verified to be able to finely ameliorate its generalization capability in some typical meta learning applications, including few-shot regression, few-shot classification and domain generalization.
1709.08360
Jiaqi Zhang
Jiaqi Zhang, Keyou You, and Tamer Ba\c{s}ar
Distributed Discrete-time Optimization in Multi-agent Networks Using only Sign of Relative State
Part of this work has been presented in American Control Conference (ACC) 2018, first version posted on arxiv on Sep. 2017, IEEE Transactions on Automatic Control, 2018
null
10.1109/TAC.2018.2884998
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes distributed discrete-time algorithms to cooperatively solve an additive cost optimization problem in multi-agent networks. The striking feature lies in the use of only the sign of relative state information between neighbors, which substantially differentiates our algorithms from others in the existing literature. We first interpret the proposed algorithms in terms of the penalty method in optimization theory and then perform non-asymptotic analysis to study convergence for static network graphs. Compared with the celebrated distributed subgradient algorithms, which however use the exact relative state information, the convergence speed is essentially not affected by the loss of information. We also study how introducing noise into the relative state information and randomly activated graphs affect the performance of our algorithms. Finally, we validate the theoretical results on a class of distributed quantile regression problems.
[ { "created": "Mon, 25 Sep 2017 08:05:04 GMT", "version": "v1" }, { "created": "Thu, 2 Nov 2017 16:17:55 GMT", "version": "v2" }, { "created": "Mon, 10 Dec 2018 07:01:54 GMT", "version": "v3" } ]
2018-12-11
[ [ "Zhang", "Jiaqi", "" ], [ "You", "Keyou", "" ], [ "Başar", "Tamer", "" ] ]
This paper proposes distributed discrete-time algorithms to cooperatively solve an additive cost optimization problem in multi-agent networks. The striking feature lies in the use of only the sign of relative state information between neighbors, which substantially differentiates our algorithms from others in the existing literature. We first interpret the proposed algorithms in terms of the penalty method in optimization theory and then perform non-asymptotic analysis to study convergence for static network graphs. Compared with the celebrated distributed subgradient algorithms, which however use the exact relative state information, the convergence speed is essentially not affected by the loss of information. We also study how introducing noise into the relative state information and randomly activated graphs affect the performance of our algorithms. Finally, we validate the theoretical results on a class of distributed quantile regression problems.
cs/0611058
Marie Cottrell
Marie Cottrell (CES, SAMOS), Michel Verleysen (DICE)
Advances in Self Organising Maps
Special Issue of the Neural Networks Journal after WSOM 05 in Paris
Neural Networks Volume 19, Issues 6-7 (2006) 721-722
10.1016/j.neunet.2006.05.011
null
cs.NE math.ST nlin.AO stat.TH
null
The Self-Organizing Map (SOM) with its related extensions is the most popular artificial neural algorithm for use in unsupervised learning, clustering, classification and data visualization. Over 5,000 publications have been reported in the open literature, and many commercial projects employ the SOM as a tool for solving hard real-world problems. Each two years, the "Workshop on Self-Organizing Maps" (WSOM) covers the new developments in the field. The WSOM series of conferences was initiated in 1997 by Prof. Teuvo Kohonen, and has been successfully organized in 1997 and 1999 by the Helsinki University of Technology, in 2001 by the University of Lincolnshire and Humberside, and in 2003 by the Kyushu Institute of Technology. The Universit\'{e} Paris I Panth\'{e}on Sorbonne (SAMOS-MATISSE research centre) organized WSOM 2005 in Paris on September 5-8, 2005.
[ { "created": "Tue, 14 Nov 2006 13:19:46 GMT", "version": "v1" } ]
2011-11-09
[ [ "Cottrell", "Marie", "", "CES, SAMOS" ], [ "Verleysen", "Michel", "", "DICE" ] ]
The Self-Organizing Map (SOM) with its related extensions is the most popular artificial neural algorithm for use in unsupervised learning, clustering, classification and data visualization. Over 5,000 publications have been reported in the open literature, and many commercial projects employ the SOM as a tool for solving hard real-world problems. Each two years, the "Workshop on Self-Organizing Maps" (WSOM) covers the new developments in the field. The WSOM series of conferences was initiated in 1997 by Prof. Teuvo Kohonen, and has been successfully organized in 1997 and 1999 by the Helsinki University of Technology, in 2001 by the University of Lincolnshire and Humberside, and in 2003 by the Kyushu Institute of Technology. The Universit\'{e} Paris I Panth\'{e}on Sorbonne (SAMOS-MATISSE research centre) organized WSOM 2005 in Paris on September 5-8, 2005.
2005.00198
EPTCS
Artjoms {\v{S}}inkarovs (Heriot-Watt University)
Multi-dimensional Arrays with Levels
In Proceedings MSFP 2020, arXiv:2004.14735
EPTCS 317, 2020, pp. 57-71
10.4204/EPTCS.317.4
null
cs.DS cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore a data structure that generalises rectangular multi-dimensional arrays. The shape of an n-dimensional array is typically given by a tuple of n natural numbers. Each element in that tuple defines the length of the corresponding axis. If we treat this tuple as an array, the shape of that array is described by the single natural number n. A natural number itself can be also treated as an array with the shape described by the natural number 1 (or the element of any singleton set). This observation gives rise to the hierarchy of array types where the shape of an array of level l+1 is a level-l array of natural numbers. Such a hierarchy occurs naturally when treating arrays as containers, which makes it possible to define both rank- and level-polymorphic operations. The former can be found in most array languages, whereas the latter gives rise to partial selections on a large set of hyperplanes, which is often useful in practice. In this paper we present an Agda formalisation of arrays with levels. We show that the proposed formalism supports standard rank-polymorphic array operations, while type system gives static guarantees that indexing is within bounds. We generalise the notion of ranked operator so that it becomes applicable on arrays of arbitrary levels and we show why this may be useful in practice.
[ { "created": "Fri, 1 May 2020 03:42:41 GMT", "version": "v1" } ]
2020-05-04
[ [ "Šinkarovs", "Artjoms", "", "Heriot-Watt University" ] ]
We explore a data structure that generalises rectangular multi-dimensional arrays. The shape of an n-dimensional array is typically given by a tuple of n natural numbers. Each element in that tuple defines the length of the corresponding axis. If we treat this tuple as an array, the shape of that array is described by the single natural number n. A natural number itself can be also treated as an array with the shape described by the natural number 1 (or the element of any singleton set). This observation gives rise to the hierarchy of array types where the shape of an array of level l+1 is a level-l array of natural numbers. Such a hierarchy occurs naturally when treating arrays as containers, which makes it possible to define both rank- and level-polymorphic operations. The former can be found in most array languages, whereas the latter gives rise to partial selections on a large set of hyperplanes, which is often useful in practice. In this paper we present an Agda formalisation of arrays with levels. We show that the proposed formalism supports standard rank-polymorphic array operations, while type system gives static guarantees that indexing is within bounds. We generalise the notion of ranked operator so that it becomes applicable on arrays of arbitrary levels and we show why this may be useful in practice.
2310.02029
Gianluca Bontempi
Gianluca Bontempi
Between accurate prediction and poor decision making: the AI/ML gap
Position paper presented in the BENELEARN 2022 conference
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Intelligent agents rely on AI/ML functionalities to predict the consequence of possible actions and optimise the policy. However, the effort of the research community in addressing prediction accuracy has been so intense (and successful) that it created the illusion that the more accurate the learner prediction (or classification) the better would have been the final decision. Now, such an assumption is valid only if the (human or artificial) decision maker has complete knowledge of the utility of the possible actions. This paper argues that AI/ML community has taken so far a too unbalanced approach by devoting excessive attention to the estimation of the state (or target) probability to the detriment of accurate and reliable estimations of the utility. In particular, few evidence exists about the impact of a wrong utility assessment on the resulting expected utility of the decision strategy. This situation is creating a substantial gap between the expectations and the effective impact of AI solutions, as witnessed by recent criticisms and emphasised by the regulatory legislative efforts. This paper aims to study this gap by quantifying the sensitivity of the expected utility to the utility uncertainty and comparing it to the one due to probability estimation. Theoretical and simulated results show that an inaccurate utility assessment may as (and sometimes) more harmful than a poor probability estimation. The final recommendation to the community is then to undertake a focus shift from a pure accuracy-driven (or obsessed) approach to a more utility-aware methodology.
[ { "created": "Tue, 3 Oct 2023 13:15:02 GMT", "version": "v1" } ]
2023-10-04
[ [ "Bontempi", "Gianluca", "" ] ]
Intelligent agents rely on AI/ML functionalities to predict the consequence of possible actions and optimise the policy. However, the effort of the research community in addressing prediction accuracy has been so intense (and successful) that it created the illusion that the more accurate the learner prediction (or classification) the better would have been the final decision. Now, such an assumption is valid only if the (human or artificial) decision maker has complete knowledge of the utility of the possible actions. This paper argues that the AI/ML community has so far taken a too unbalanced approach by devoting excessive attention to the estimation of the state (or target) probability to the detriment of accurate and reliable estimations of the utility. In particular, little evidence exists about the impact of a wrong utility assessment on the resulting expected utility of the decision strategy. This situation is creating a substantial gap between the expectations and the effective impact of AI solutions, as witnessed by recent criticisms and emphasised by the regulatory legislative efforts. This paper aims to study this gap by quantifying the sensitivity of the expected utility to the utility uncertainty and comparing it to the one due to probability estimation. Theoretical and simulated results show that an inaccurate utility assessment may be as harmful as (and sometimes more harmful than) a poor probability estimation. The final recommendation to the community is then to undertake a focus shift from a pure accuracy-driven (or obsessed) approach to a more utility-aware methodology.
2108.11717
Soroush Seifi
Soroush Seifi, Abhishek Jha, Tinne Tuytelaars
Glimpse-Attend-and-Explore: Self-Attention for Active Visual Exploration
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Active visual exploration aims to assist an agent with a limited field of view to understand its environment based on partial observations made by choosing the best viewing directions in the scene. Recent methods have tried to address this problem either by using reinforcement learning, which is difficult to train, or by uncertainty maps, which are task-specific and can only be implemented for dense prediction tasks. In this paper, we propose the Glimpse-Attend-and-Explore model which: (a) employs self-attention to guide the visual exploration instead of task-specific uncertainty maps; (b) can be used for both dense and sparse prediction tasks; and (c) uses a contrastive stream to further improve the representations learned. Unlike previous works, we show the application of our model on multiple tasks like reconstruction, segmentation and classification. Our model provides encouraging results while being less dependent on dataset bias in driving the exploration. We further perform an ablation study to investigate the features and attention learned by our model. Finally, we show that our self-attention module learns to attend different regions of the scene by minimizing the loss on the downstream task. Code: https://github.com/soroushseifi/glimpse-attend-explore.
[ { "created": "Thu, 26 Aug 2021 11:41:03 GMT", "version": "v1" } ]
2021-08-27
[ [ "Seifi", "Soroush", "" ], [ "Jha", "Abhishek", "" ], [ "Tuytelaars", "Tinne", "" ] ]
Active visual exploration aims to assist an agent with a limited field of view to understand its environment based on partial observations made by choosing the best viewing directions in the scene. Recent methods have tried to address this problem either by using reinforcement learning, which is difficult to train, or by uncertainty maps, which are task-specific and can only be implemented for dense prediction tasks. In this paper, we propose the Glimpse-Attend-and-Explore model which: (a) employs self-attention to guide the visual exploration instead of task-specific uncertainty maps; (b) can be used for both dense and sparse prediction tasks; and (c) uses a contrastive stream to further improve the representations learned. Unlike previous works, we show the application of our model on multiple tasks like reconstruction, segmentation and classification. Our model provides encouraging results while being less dependent on dataset bias in driving the exploration. We further perform an ablation study to investigate the features and attention learned by our model. Finally, we show that our self-attention module learns to attend different regions of the scene by minimizing the loss on the downstream task. Code: https://github.com/soroushseifi/glimpse-attend-explore.
1401.0092
Shraddha Shinde
Shraddha S. Shinde and Prof. Anagha P. Khedkar
A Novel Approach For Generating Face Template Using Bda
11 pages, ITCSE 2013 conference
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In identity management system, commonly used biometric recognition system needs attention towards issue of biometric template protection as far as more reliable solution is concerned. In view of this biometric template protection algorithm should satisfy security, discriminability and cancelability. As no single template protection method is capable of satisfying the basic requirements, a novel technique for face template generation and protection is proposed. The novel approach is proposed to provide security and accuracy in new user enrollment as well as authentication process. This novel technique takes advantage of both the hybrid approach and the binary discriminant analysis algorithm. This algorithm is designed on the basis of random projection, binary discriminant analysis and fuzzy commitment scheme. Three publicly available benchmark face databases are used for evaluation. The proposed novel technique enhances the discriminability and recognition accuracy by 80% in terms of matching score of the face images and provides high security.
[ { "created": "Tue, 31 Dec 2013 04:48:43 GMT", "version": "v1" } ]
2014-01-03
[ [ "Shinde", "Shraddha S.", "" ], [ "Khedkar", "Prof. Anagha P.", "" ] ]
In identity management systems, the commonly used biometric recognition systems need attention towards the issue of biometric template protection as far as a more reliable solution is concerned. In view of this, a biometric template protection algorithm should satisfy security, discriminability and cancelability. As no single template protection method is capable of satisfying the basic requirements, a novel technique for face template generation and protection is proposed. The novel approach is proposed to provide security and accuracy in new user enrollment as well as in the authentication process. This novel technique takes advantage of both the hybrid approach and the binary discriminant analysis algorithm. This algorithm is designed on the basis of random projection, binary discriminant analysis and a fuzzy commitment scheme. Three publicly available benchmark face databases are used for evaluation. The proposed novel technique enhances the discriminability and recognition accuracy by 80% in terms of matching score of the face images and provides high security.
1709.06668
Daniel Seita
Daniel Seita, Sanjay Krishnan, Roy Fox, Stephen McKinley, John Canny, Ken Goldberg
Fast and Reliable Autonomous Surgical Debridement with Cable-Driven Robots Using a Two-Phase Calibration Procedure
Code, data, and videos are available at https://sites.google.com/view/calib-icra/. Final version for ICRA 2018
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automating precision subtasks such as debridement (removing dead or diseased tissue fragments) with Robotic Surgical Assistants (RSAs) such as the da Vinci Research Kit (dVRK) is challenging due to inherent non-linearities in cable-driven systems. We propose and evaluate a novel two-phase coarse-to-fine calibration method. In Phase I (coarse), we place a red calibration marker on the end effector and let it randomly move through a set of open-loop trajectories to obtain a large sample set of camera pixels and internal robot end-effector configurations. This coarse data is then used to train a Deep Neural Network (DNN) to learn the coarse transformation bias. In Phase II (fine), the bias from Phase I is applied to move the end-effector toward a small set of specific target points on a printed sheet. For each target, a human operator manually adjusts the end-effector position by direct contact (not through teleoperation) and the residual compensation bias is recorded. This fine data is then used to train a Random Forest (RF) to learn the fine transformation bias. Subsequent experiments suggest that without calibration, position errors average 4.55mm. Phase I can reduce average error to 2.14mm and the combination of Phase I and Phase II can reduces average error to 1.08mm. We apply these results to debridement of raisins and pumpkin seeds as fragment phantoms. Using an endoscopic stereo camera with standard edge detection, experiments with 120 trials achieved average success rates of 94.5%, exceeding prior results with much larger fragments (89.4%) and achieving a speedup of 2.1x, decreasing time per fragment from 15.8 seconds to 7.3 seconds. Source code, data, and videos are available at https://sites.google.com/view/calib-icra/.
[ { "created": "Tue, 19 Sep 2017 22:51:36 GMT", "version": "v1" }, { "created": "Sat, 24 Feb 2018 08:34:58 GMT", "version": "v2" } ]
2018-02-27
[ [ "Seita", "Daniel", "" ], [ "Krishnan", "Sanjay", "" ], [ "Fox", "Roy", "" ], [ "McKinley", "Stephen", "" ], [ "Canny", "John", "" ], [ "Goldberg", "Ken", "" ] ]
Automating precision subtasks such as debridement (removing dead or diseased tissue fragments) with Robotic Surgical Assistants (RSAs) such as the da Vinci Research Kit (dVRK) is challenging due to inherent non-linearities in cable-driven systems. We propose and evaluate a novel two-phase coarse-to-fine calibration method. In Phase I (coarse), we place a red calibration marker on the end effector and let it randomly move through a set of open-loop trajectories to obtain a large sample set of camera pixels and internal robot end-effector configurations. This coarse data is then used to train a Deep Neural Network (DNN) to learn the coarse transformation bias. In Phase II (fine), the bias from Phase I is applied to move the end-effector toward a small set of specific target points on a printed sheet. For each target, a human operator manually adjusts the end-effector position by direct contact (not through teleoperation) and the residual compensation bias is recorded. This fine data is then used to train a Random Forest (RF) to learn the fine transformation bias. Subsequent experiments suggest that without calibration, position errors average 4.55mm. Phase I can reduce average error to 2.14mm and the combination of Phase I and Phase II can reduce average error to 1.08mm. We apply these results to debridement of raisins and pumpkin seeds as fragment phantoms. Using an endoscopic stereo camera with standard edge detection, experiments with 120 trials achieved average success rates of 94.5%, exceeding prior results with much larger fragments (89.4%) and achieving a speedup of 2.1x, decreasing time per fragment from 15.8 seconds to 7.3 seconds. Source code, data, and videos are available at https://sites.google.com/view/calib-icra/.
2312.15608
Yupei Zhang
Yupei Zhang, Yuxin Li, Yifei Wang, Shuangshuang Wei, Yunan Xu, and Xuequn Shang
Federated learning-outcome prediction with multi-layer privacy protection
10 pages, 9 figures, 3 tables. This preprint will be published in Frontiers of Computer Science on Dec 15, 2024
Frontiers of Computer Science, 2024,18(6):186604
10.1007/s11704-023-2791-8
null
cs.LG cs.CR cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Learning-outcome prediction (LOP) is a long-standing and critical problem in educational routes. Many studies have contributed to developing effective models while often suffering from data shortage and low generalization to various institutions due to the privacy-protection issue. To this end, this study proposes a distributed grade prediction model, dubbed FecMap, by exploiting the federated learning (FL) framework that preserves the private data of local clients and communicates with others through a global generalized model. FecMap considers local subspace learning (LSL), which explicitly learns the local features against the global features, and multi-layer privacy protection (MPP), which hierarchically protects the private features, including model-shareable features and not-allowably shared features, to achieve client-specific classifiers of high performance on LOP per institution. FecMap is then achieved in an iteration manner with all datasets distributed on clients by training a local neural network composed of a global part, a local part, and a classification head in clients and averaging the global parts from clients on the server. To evaluate the FecMap model, we collected three higher-educational datasets of student academic records from engineering majors. Experiment results manifest that FecMap benefits from the proposed LSL and MPP and achieves steady performance on the task of LOP, compared with the state-of-the-art models. This study makes a fresh attempt at the use of federated learning in the learning-analytical task, potentially paving the way to facilitating personalized education with privacy protection.
[ { "created": "Mon, 25 Dec 2023 04:29:05 GMT", "version": "v1" } ]
2023-12-27
[ [ "Zhang", "Yupei", "" ], [ "Li", "Yuxin", "" ], [ "Wang", "Yifei", "" ], [ "Wei", "Shuangshuang", "" ], [ "Xu", "Yunan", "" ], [ "Shang", "Xuequn", "" ] ]
Learning-outcome prediction (LOP) is a long-standing and critical problem in educational routes. Many studies have contributed to developing effective models while often suffering from data shortage and low generalization to various institutions due to the privacy-protection issue. To this end, this study proposes a distributed grade prediction model, dubbed FecMap, by exploiting the federated learning (FL) framework that preserves the private data of local clients and communicates with others through a global generalized model. FecMap considers local subspace learning (LSL), which explicitly learns the local features against the global features, and multi-layer privacy protection (MPP), which hierarchically protects the private features, including model-shareable features and not-allowably shared features, to achieve client-specific classifiers of high performance on LOP per institution. FecMap is then achieved in an iterative manner with all datasets distributed on clients by training a local neural network composed of a global part, a local part, and a classification head in clients and averaging the global parts from clients on the server. To evaluate the FecMap model, we collected three higher-educational datasets of student academic records from engineering majors. Experiment results manifest that FecMap benefits from the proposed LSL and MPP and achieves steady performance on the task of LOP, compared with the state-of-the-art models. This study makes a fresh attempt at the use of federated learning in the learning-analytical task, potentially paving the way to facilitating personalized education with privacy protection.
2110.04683
Alexander Lin
Alexander Lin, Andrew H. Song, Demba Ba
Mixture Model Auto-Encoders: Deep Clustering through Dictionary Learning
5 pages, 3 figures
IEEE ICASSP 2022
null
null
cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
State-of-the-art approaches for clustering high-dimensional data utilize deep auto-encoder architectures. Many of these networks require a large number of parameters and suffer from a lack of interpretability, due to the black-box nature of the auto-encoders. We introduce Mixture Model Auto-Encoders (MixMate), a novel architecture that clusters data by performing inference on a generative model. Derived from the perspective of sparse dictionary learning and mixture models, MixMate comprises several auto-encoders, each tasked with reconstructing data in a distinct cluster, while enforcing sparsity in the latent space. Through experiments on various image datasets, we show that MixMate achieves competitive performance compared to state-of-the-art deep clustering algorithms, while using orders of magnitude fewer parameters.
[ { "created": "Sun, 10 Oct 2021 02:30:31 GMT", "version": "v1" }, { "created": "Fri, 25 Feb 2022 16:35:10 GMT", "version": "v2" } ]
2022-02-28
[ [ "Lin", "Alexander", "" ], [ "Song", "Andrew H.", "" ], [ "Ba", "Demba", "" ] ]
State-of-the-art approaches for clustering high-dimensional data utilize deep auto-encoder architectures. Many of these networks require a large number of parameters and suffer from a lack of interpretability, due to the black-box nature of the auto-encoders. We introduce Mixture Model Auto-Encoders (MixMate), a novel architecture that clusters data by performing inference on a generative model. Derived from the perspective of sparse dictionary learning and mixture models, MixMate comprises several auto-encoders, each tasked with reconstructing data in a distinct cluster, while enforcing sparsity in the latent space. Through experiments on various image datasets, we show that MixMate achieves competitive performance compared to state-of-the-art deep clustering algorithms, while using orders of magnitude fewer parameters.
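The cluster-by-reconstruction idea behind MixMate can be sketched with linear "auto-encoders" (one orthonormal basis per cluster, each point assigned to the cluster whose subspace reconstructs it best); this is a toy analogue under assumed details, not the paper's deep architecture:

```python
import numpy as np

def k_subspace_cluster(X, k, dim=1, iters=15, seed=0):
    """One linear 'auto-encoder' (an orthonormal basis B) per cluster;
    a point joins the cluster whose encode/decode B B^T reconstructs it
    with the smallest error. Toy analogue of the MixMate idea."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    for _ in range(iters):
        bases = []
        for c in range(k):
            Xc = X[labels == c]
            if len(Xc) < dim:                    # refill an empty cluster
                Xc = X[rng.choice(len(X), size=dim, replace=False)]
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            bases.append(Vt[:dim].T)             # top right-singular vectors
        # reconstruction error under each cluster's auto-encoder
        errs = np.stack([np.linalg.norm(X - X @ B @ B.T, axis=1)
                         for B in bases], axis=1)
        labels = errs.argmin(axis=1)
    return labels

# two clusters living on (nearly) orthogonal lines through the origin
rng = np.random.default_rng(1)
t = rng.normal(size=100)
line_x = np.c_[t, 0.05 * rng.normal(size=100)]   # points near the x-axis
line_y = np.c_[0.05 * rng.normal(size=100), t]   # points near the y-axis
labels = k_subspace_cluster(np.vstack([line_x, line_y]), k=2)
```

The alternation between assignment and basis refit mirrors inference in a mixture of dictionaries, which is the generative view the paper takes.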
1511.07275
Wojciech Zaremba
Wojciech Zaremba, Tomas Mikolov, Armand Joulin, Rob Fergus
Learning Simple Algorithms from Examples
null
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach for learning simple algorithms such as copying, multi-digit addition and single digit multiplication directly from examples. Our framework consists of a set of interfaces, accessed by a controller. Typical interfaces are 1-D tapes or 2-D grids that hold the input and output data. For the controller, we explore a range of neural network-based models which vary in their ability to abstract the underlying algorithm from training instances and generalize to test examples with many thousands of digits. The controller is trained using $Q$-learning with several enhancements and we show that the bottleneck is in the capabilities of the controller rather than in the search incurred by $Q$-learning.
[ { "created": "Mon, 23 Nov 2015 15:31:54 GMT", "version": "v1" }, { "created": "Tue, 24 Nov 2015 03:28:35 GMT", "version": "v2" } ]
2015-11-25
[ [ "Zaremba", "Wojciech", "" ], [ "Mikolov", "Tomas", "" ], [ "Joulin", "Armand", "" ], [ "Fergus", "Rob", "" ] ]
We present an approach for learning simple algorithms such as copying, multi-digit addition and single digit multiplication directly from examples. Our framework consists of a set of interfaces, accessed by a controller. Typical interfaces are 1-D tapes or 2-D grids that hold the input and output data. For the controller, we explore a range of neural network-based models which vary in their ability to abstract the underlying algorithm from training instances and generalize to test examples with many thousands of digits. The controller is trained using $Q$-learning with several enhancements and we show that the bottleneck is in the capabilities of the controller rather than in the search incurred by $Q$-learning.
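As a hedged illustration of the $Q$-learning training signal used for the controller, here is tabular Q-learning on a trivial copy task; the environment, state, and reward below are invented for illustration and are far simpler than the paper's tape/grid interfaces:

```python
import numpy as np

rng = np.random.default_rng(0)
n_symbols, n_actions = 2, 2      # action = symbol to emit; head moves right
Q = np.zeros((n_symbols, n_actions))
alpha, eps = 0.5, 0.1            # learning rate, exploration rate

for episode in range(200):
    tape = rng.integers(0, n_symbols, size=8)
    for s in tape:                       # state = symbol under the head
        a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
        r = 1.0 if a == s else -1.0      # reward for emitting the right symbol
        Q[s, a] += alpha * (r - Q[s, a]) # one-step (bandit-style) update

greedy = Q.argmax(1)    # learned policy: copy whatever symbol you observe
```

Even this tiny setup shows the shape of the learning problem: the controller only ever sees scalar rewards, never the target algorithm itself.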
2211.13724
Ali Harakeh
Ali Harakeh, Jordan Hu, Naiqing Guan, Steven L. Waslander, and Liam Paull
Estimating Regression Predictive Distributions with Sample Networks
Accepted for publication in AAAI 2023. Example code at: https://samplenet.github.io/
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Estimating the uncertainty in deep neural network predictions is crucial for many real-world applications. A common approach to model uncertainty is to choose a parametric distribution and fit the data to it using maximum likelihood estimation. The chosen parametric form can be a poor fit to the data-generating distribution, resulting in unreliable uncertainty estimates. In this work, we propose SampleNet, a flexible and scalable architecture for modeling uncertainty that avoids specifying a parametric form on the output distribution. SampleNets do so by defining an empirical distribution using samples that are learned with the Energy Score and regularized with the Sinkhorn Divergence. SampleNets are shown to fit a wide range of distributions well and to outperform baselines on large-scale real-world regression tasks.
[ { "created": "Thu, 24 Nov 2022 17:23:29 GMT", "version": "v1" } ]
2022-11-28
[ [ "Harakeh", "Ali", "" ], [ "Hu", "Jordan", "" ], [ "Guan", "Naiqing", "" ], [ "Waslander", "Steven L.", "" ], [ "Paull", "Liam", "" ] ]
Estimating the uncertainty in deep neural network predictions is crucial for many real-world applications. A common approach to model uncertainty is to choose a parametric distribution and fit the data to it using maximum likelihood estimation. The chosen parametric form can be a poor fit to the data-generating distribution, resulting in unreliable uncertainty estimates. In this work, we propose SampleNet, a flexible and scalable architecture for modeling uncertainty that avoids specifying a parametric form on the output distribution. SampleNets do so by defining an empirical distribution using samples that are learned with the Energy Score and regularized with the Sinkhorn Divergence. SampleNets are shown to be able to well-fit a wide range of distributions and to outperform baselines on large-scale real-world regression tasks.
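The Energy Score that SampleNets are trained with admits a simple Monte-Carlo estimate over the learned samples; below is a sketch using the common definition (the constants and norms may differ from the paper's exact loss):

```python
import numpy as np

def energy_score(samples, y):
    """Monte-Carlo energy score of a sample-based predictive
    distribution at observation y (lower is better):
    E||X - y|| - 0.5 * E||X - X'||."""
    samples = np.atleast_2d(samples)
    term1 = np.linalg.norm(samples - y, axis=1).mean()
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.linalg.norm(diffs, axis=-1).mean()
    return term1 - 0.5 * term2

sharp = np.zeros((50, 1))                          # all mass at the truth
spread = np.random.default_rng(0).normal(0, 3, (50, 1))
y = np.zeros(1)
```

A sharp, well-placed sample set scores better than a diffuse one, which is exactly the pressure that shapes the learned samples into an empirical predictive distribution.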
2201.01219
Sansit Patnaik
Wei Ding and Sansit Patnaik and Fabio Semperlotti
Multiscale Nonlocal Elasticity: A Distributed Order Fractional Formulation
31 pages, 9 images, 3 Tables
null
null
null
cs.CE cs.NA math.NA physics.app-ph
http://creativecommons.org/licenses/by/4.0/
This study presents a generalized multiscale nonlocal elasticity theory that leverages distributed order fractional calculus to accurately capture coexisting multiscale and nonlocal effects within a macroscopic continuum. The nonlocal multiscale behavior is captured via distributed order fractional constitutive relations derived from a nonlocal thermodynamic formulation. The governing equations of the inhomogeneous continuum are obtained via Hamilton's principle. As a generalization of the constant order fractional continuum theory, the distributed order theory can model complex media characterized by inhomogeneous nonlocality and multiscale effects. In order to understand the correspondence between microscopic effects and the properties of the continuum, an equivalent mass-spring lattice model is also developed by direct discretization of the distributed order elastic continuum. Detailed theoretical arguments are provided to show the equivalence between the discrete and the continuum distributed order models in terms of internal nonlocal forces, potential energy distribution, and boundary conditions. These theoretical arguments facilitate the physical interpretation of the role played by the distributed order framework within nonlocal elasticity theories. They also highlight the outstanding potential and opportunities offered by this methodology to account for multiscale nonlocal effects. The capabilities of the methodology are also illustrated via a numerical study that highlights the excellent agreement between the displacement profiles and the total potential energy predicted by the two models under various order distributions. Remarkably, multiscale effects such as displacement distortion, material softening, and energy concentration are well captured at the continuum level by the distributed order theory.
[ { "created": "Fri, 24 Dec 2021 23:38:07 GMT", "version": "v1" } ]
2022-01-05
[ [ "Ding", "Wei", "" ], [ "Patnaik", "Sansit", "" ], [ "Semperlotti", "Fabio", "" ] ]
This study presents a generalized multiscale nonlocal elasticity theory that leverages distributed order fractional calculus to accurately capture coexisting multiscale and nonlocal effects within a macroscopic continuum. The nonlocal multiscale behavior is captured via distributed order fractional constitutive relations derived from a nonlocal thermodynamic formulation. The governing equations of the inhomogeneous continuum are obtained via Hamilton's principle. As a generalization of the constant order fractional continuum theory, the distributed order theory can model complex media characterized by inhomogeneous nonlocality and multiscale effects. In order to understand the correspondence between microscopic effects and the properties of the continuum, an equivalent mass-spring lattice model is also developed by direct discretization of the distributed order elastic continuum. Detailed theoretical arguments are provided to show the equivalence between the discrete and the continuum distributed order models in terms of internal nonlocal forces, potential energy distribution, and boundary conditions. These theoretical arguments facilitate the physical interpretation of the role played by the distributed order framework within nonlocal elasticity theories. They also highlight the outstanding potential and opportunities offered by this methodology to account for multiscale nonlocal effects. The capabilities of the methodology are also illustrated via a numerical study that highlights the excellent agreement between the displacement profiles and the total potential energy predicted by the two models under various order distributions. Remarkably, multiscale effects such as displacement distortion, material softening, and energy concentration are well captured at the continuum level by the distributed order theory.
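For readers unfamiliar with the operator family, a distributed-order fractional derivative weights constant-order fractional derivatives by an order distribution $\phi(\alpha)$; the following is the common textbook form, which may differ in detail from the paper's constitutive relations:

```latex
D^{\phi} f(t) \;=\; \int_{\alpha_1}^{\alpha_2} \phi(\alpha)\, D^{\alpha} f(t)\, \mathrm{d}\alpha,
\qquad \phi(\alpha) \ge 0, \quad \int_{\alpha_1}^{\alpha_2} \phi(\alpha)\, \mathrm{d}\alpha = 1
```

Choosing $\phi(\alpha)=\delta(\alpha-\alpha_0)$ recovers the constant-order theory that the distributed-order formulation generalizes.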
1704.07078
Yunior Ram\'irez-Cruz
Sjouke Mauw, Yunior Ram\'irez-Cruz, Rolando Trujillo-Rasua
Rethinking $(k,\ell)$-anonymity in social graphs: $(k,\ell)$-adjacency anonymity and $(k,\ell)$-(adjacency) anonymous transformations
null
"Conditional adjacency anonymity in social graphs under active attacks", Knowledge and Information Systems 61(1):485-511, 2019
10.1007/s10115-018-1283-x
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper treats the privacy-preserving publication of social graphs in the presence of active adversaries, that is, adversaries with the ability to introduce sybil nodes in the graph prior to publication and leverage them to create unique fingerprints for a set of victim nodes and re-identify them after publication. Stemming from the notion of $(k,\ell)$-anonymity, we introduce $(k,\ell)$-anonymous transformations, characterising graph perturbation methods that ensure protection from active adversaries leveraging up to $\ell$ sybil nodes. Additionally, we introduce a new privacy property: $(k,\ell)$-adjacency anonymity, which relaxes the assumption made by $(k,\ell)$-anonymity that adversaries can control all distances between sybil nodes and the rest of the nodes in the graph. The new privacy property is in turn the basis for a new type of graph perturbation: $(k,\ell)$-adjacency anonymous transformations. We propose algorithms for obtaining $(k,1)$-adjacency anonymous transformations for arbitrary values of $k$, as well as $(2,\ell)$-adjacency anonymous transformations for small values of $\ell$.
[ { "created": "Mon, 24 Apr 2017 08:14:03 GMT", "version": "v1" } ]
2019-09-04
[ [ "Mauw", "Sjouke", "" ], [ "Ramírez-Cruz", "Yunior", "" ], [ "Trujillo-Rasua", "Rolando", "" ] ]
This paper treats the privacy-preserving publication of social graphs in the presence of active adversaries, that is, adversaries with the ability to introduce sybil nodes in the graph prior to publication and leverage them to create unique fingerprints for a set of victim nodes and re-identify them after publication. Stemming from the notion of $(k,\ell)$-anonymity, we introduce $(k,\ell)$-anonymous transformations, characterising graph perturbation methods that ensure protection from active adversaries leveraging up to $\ell$ sybil nodes. Additionally, we introduce a new privacy property: $(k,\ell)$-adjacency anonymity, which relaxes the assumption made by $(k,\ell)$-anonymity that adversaries can control all distances between sybil nodes and the rest of the nodes in the graph. The new privacy property is in turn the basis for a new type of graph perturbation: $(k,\ell)$-adjacency anonymous transformations. We propose algorithms for obtaining $(k,1)$-adjacency anonymous transformations for arbitrary values of $k$, as well as $(2,\ell)$-adjacency anonymous transformations for small values of $\ell$.
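A drastically simplified relative of these notions is $k$-degree anonymity, where every vertex must share its degree with at least $k-1$ others; checking it is straightforward (a sketch only: the paper's $(k,\ell)$-(adjacency) anonymity properties are strictly richer than this degree-based check):

```python
from collections import Counter

def is_k_degree_anonymous(adj, k):
    """True iff every vertex shares its degree with at least k-1 other
    vertices (adj: dict mapping vertex -> set of neighbours). A toy
    proxy for anonymity against a weak structural adversary."""
    degree_counts = Counter(len(nbrs) for nbrs in adj.values())
    return all(count >= k for count in degree_counts.values())

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # 4-cycle: all degree 2
star   = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}        # hub is unique
```

The 4-cycle is anonymous at any $k \le 4$, while the star's hub is uniquely re-identifiable by degree alone, which is the flavour of fingerprinting the active adversary exploits at larger scale.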
1205.3181
Sebastien Bubeck
S\'ebastien Bubeck, Tengyao Wang, Nitin Viswanathan
Multiple Identifications in Multi-Armed Bandits
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of identifying the top $m$ arms in a multi-armed bandit game. Our proposed solution relies on a new algorithm based on successive rejects of the seemingly bad arms, and successive accepts of the good ones. This algorithmic contribution makes it possible to tackle other multiple-identification settings that were previously out of reach. In particular we show that this idea of successive accepts and rejects applies to the multi-bandit best arm identification problem.
[ { "created": "Mon, 14 May 2012 20:10:04 GMT", "version": "v1" } ]
2012-05-16
[ [ "Bubeck", "Sébastien", "" ], [ "Wang", "Tengyao", "" ], [ "Viswanathan", "Nitin", "" ] ]
We study the problem of identifying the top $m$ arms in a multi-armed bandit game. Our proposed solution relies on a new algorithm based on successive rejects of the seemingly bad arms, and successive accepts of the good ones. This algorithmic contribution makes it possible to tackle other multiple-identification settings that were previously out of reach. In particular we show that this idea of successive accepts and rejects applies to the multi-bandit best arm identification problem.
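The successive-accepts-and-rejects idea can be sketched as follows; the phase budgets and the accept/reject rule are simplified relative to the paper's algorithm, and Gaussian rewards are assumed, so treat this as illustrative:

```python
import numpy as np

def successive_accepts_rejects(means, m, pulls_per_phase=500, seed=0):
    """Sketch of successive accepts/rejects for top-m identification:
    each phase pulls all surviving arms equally, then either accepts
    the empirically best arm or rejects the empirically worst one,
    whichever decision has the larger empirical gap."""
    rng = np.random.default_rng(seed)
    n = len(means)
    active, accepted = list(range(n)), []
    sums, counts = np.zeros(n), np.zeros(n)
    while len(accepted) < m and len(active) > m - len(accepted):
        for i in active:                       # pull surviving arms equally
            sums[i] += rng.normal(means[i], 1.0, pulls_per_phase).sum()
            counts[i] += pulls_per_phase
        emp = {i: sums[i] / counts[i] for i in active}
        order = sorted(active, key=emp.get, reverse=True)
        k = m - len(accepted)                  # arms still to be found
        gap_top = emp[order[0]] - emp[order[k]]       # accept-side gap
        gap_bot = emp[order[k - 1]] - emp[order[-1]]  # reject-side gap
        if gap_top >= gap_bot:
            accepted.append(order[0]); active.remove(order[0])
        else:
            active.remove(order[-1])           # drop the empirically worst
    if len(accepted) < m:                      # leftovers complete the set
        accepted += active
    return sorted(accepted)

means = [0.9, 0.8, 0.7, 0.3, 0.2]
top2 = successive_accepts_rejects(means, m=2)
```

Each phase commits to exactly one irreversible decision, which is what lets the sampling budget concentrate on the arms that remain hard to separate.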
1509.02975
Steve Huntsman
Steve Huntsman and Arman Rezaee
De Bruijn entropy and string similarity
Extended version of a paper presented at WORDS 2015; MATLAB source code and scripts for reproducing results are included
null
null
https://nbn-resolving.org/urn:nbn:de:gbv:8:1-zs-00000305-a5
cs.DM cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the notion of de Bruijn entropy of an Eulerian quiver and show how the corresponding relative entropy can be applied to practical string similarity problems. This approach explicitly links the combinatorial and information-theoretical properties of words and its performance is superior to edit distances in many respects and competitive in most others. The computational complexity of our current implementation is parametrically tunable between linear and cubic, and we outline how an optimized linear algebra subroutine can reduce the cubic complexity to approximately linear. Numerous examples are provided, including a realistic application to molecular phylogenetics.
[ { "created": "Wed, 9 Sep 2015 23:27:04 GMT", "version": "v1" } ]
2022-01-24
[ [ "Huntsman", "Steve", "" ], [ "Rezaee", "Arman", "" ] ]
We introduce the notion of de Bruijn entropy of an Eulerian quiver and show how the corresponding relative entropy can be applied to practical string similarity problems. This approach explicitly links the combinatorial and information-theoretical properties of words and its performance is superior to edit distances in many respects and competitive in most others. The computational complexity of our current implementation is parametrically tunable between linear and cubic, and we outline how an optimized linear algebra subroutine can reduce the cubic complexity to approximately linear. Numerous examples are provided, including a realistic application to molecular phylogenetics.
1701.01491
Amina Piemontese Ph.D
Amina Piemontese, and Alexandre Graell i Amat
MDS-Coded Distributed Caching for Low Delay Wireless Content Delivery
submitted to IEEE Transactions on Communications. arXiv admin note: text overlap with arXiv:1607.00880
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the use of maximum distance separable (MDS) codes to cache popular content to reduce the download delay of wireless content delivery. In particular, we consider a cellular system where devices roam in and out of a cell according to a Poisson random process. Popular content is cached in a limited number of the mobile devices using an MDS code and can be downloaded from the mobile devices using device-to-device communication. We derive an analytical expression for the delay incurred in downloading content from the wireless network and show that distributed caching using MDS codes can dramatically reduce the download delay with respect to the scenario where content is always downloaded from the base station and to the case of uncoded distributed caching.
[ { "created": "Thu, 5 Jan 2017 21:59:20 GMT", "version": "v1" } ]
2017-01-09
[ [ "Piemontese", "Amina", "" ], [ "Amat", "Alexandre Graell i", "" ] ]
We investigate the use of maximum distance separable (MDS) codes to cache popular content to reduce the download delay of wireless content delivery. In particular, we consider a cellular system where devices roam in and out of a cell according to a Poisson random process. Popular content is cached in a limited number of the mobile devices using an MDS code and can be downloaded from the mobile devices using device-to-device communication. We derive an analytical expression for the delay incurred in downloading content from the wireless network and show that distributed caching using MDS codes can dramatically reduce the download delay with respect to the scenario where content is always downloaded from the base station and to the case of uncoded distributed caching.
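The key MDS property exploited here, that any $k$ of the $n$ cached fragments suffice to rebuild the content, can be demonstrated with the smallest non-trivial MDS code, a single-parity $(n{=}3, k{=}2)$ code (a toy stand-in for whatever MDS code the system would actually deploy):

```python
import numpy as np

def encode(a, b):
    """(3, 2) single-parity code: two systematic fragments plus their
    XOR. Single-parity codes are MDS, so any 2 of the 3 fragments,
    cached on different devices, rebuild the content."""
    return [a, b, a ^ b]

def decode(fragments):
    """Recover (a, b) from any two distinct fragments, given as a dict
    mapping fragment index -> uint8 array."""
    idx = sorted(fragments)
    if idx == [0, 1]:
        return fragments[0], fragments[1]
    if idx == [0, 2]:
        return fragments[0], fragments[0] ^ fragments[2]
    if idx == [1, 2]:
        return fragments[1] ^ fragments[2], fragments[1]
    raise ValueError("need any 2 distinct fragment indices")

a = np.frombuffer(b"hello-wo", dtype=np.uint8).copy()
b = np.frombuffer(b"rld-data", dtype=np.uint8).copy()
frags = encode(a, b)
```

Because any $k$ devices in range are enough, a roaming user is never forced to wait for one specific replica holder, which is exactly where the delay reduction over uncoded caching comes from.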
1011.0298
Nayyar Mehmood
Nayyar Mehmood, Imran Haider Qureshi
Intuitionistic Fuzzy Ideal Extensions of {\Gamma}-Semigroups
Accepted, 11 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper the concept of the extensions of intuitionistic fuzzy ideals in a semigroup has been extended to {\Gamma}-semigroups. Among other results, a characterization of prime ideals in {\Gamma}-semigroups in terms of intuitionistic fuzzy ideal extensions has been obtained.
[ { "created": "Mon, 1 Nov 2010 11:54:14 GMT", "version": "v1" } ]
2010-11-13
[ [ "Mehmood", "Nayyar", "" ], [ "Qureshi", "Imran Haider", "" ] ]
In this paper the concept of the extensions of intuitionistic fuzzy ideals in a semigroup has been extended to {\Gamma}-semigroups. Among other results, a characterization of prime ideals in {\Gamma}-semigroups in terms of intuitionistic fuzzy ideal extensions has been obtained.
2111.03362
Moran Baruch
Moran Baruch, Nir Drucker, Lev Greenberg and Guy Moshkowich
A methodology for training homomorphic encryption friendly neural networks
null
null
10.1007/978-3-031-16815-4_29
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Privacy-preserving deep neural network (DNN) inference is a necessity in different regulated industries such as healthcare, finance and retail. Recently, homomorphic encryption (HE) has been used as a method to enable analytics while addressing privacy concerns. HE enables secure predictions over encrypted data. However, there are several challenges related to the use of HE, including DNN size limitations and the lack of support for some operation types. Most notably, the commonly used ReLU activation is not supported under some HE schemes. We propose a structured methodology to replace ReLU with a quadratic polynomial activation. To address the accuracy degradation issue, we use a pre-trained model that trains another HE-friendly model, using techniques such as trainable activation functions and knowledge distillation. We demonstrate our methodology on the AlexNet architecture, using the chest X-Ray and CT datasets for COVID-19 detection. Experiments using our approach reduced the gap between the F1 score and accuracy of the models trained with ReLU and the HE-friendly model to within a mere 0.32-5.3 percent degradation. We also demonstrate our methodology using the SqueezeNet architecture, for which we observed 7 percent accuracy and F1 improvements over training similar networks with other HE-friendly training methods.
[ { "created": "Fri, 5 Nov 2021 10:04:15 GMT", "version": "v1" }, { "created": "Wed, 17 Nov 2021 08:22:48 GMT", "version": "v2" }, { "created": "Thu, 7 Jul 2022 19:22:02 GMT", "version": "v3" } ]
2023-06-13
[ [ "Baruch", "Moran", "" ], [ "Drucker", "Nir", "" ], [ "Greenberg", "Lev", "" ], [ "Moshkowich", "Guy", "" ] ]
Privacy-preserving deep neural network (DNN) inference is a necessity in different regulated industries such as healthcare, finance and retail. Recently, homomorphic encryption (HE) has been used as a method to enable analytics while addressing privacy concerns. HE enables secure predictions over encrypted data. However, there are several challenges related to the use of HE, including DNN size limitations and the lack of support for some operation types. Most notably, the commonly used ReLU activation is not supported under some HE schemes. We propose a structured methodology to replace ReLU with a quadratic polynomial activation. To address the accuracy degradation issue, we use a pre-trained model that trains another HE-friendly model, using techniques such as trainable activation functions and knowledge distillation. We demonstrate our methodology on the AlexNet architecture, using the chest X-Ray and CT datasets for COVID-19 detection. Experiments using our approach reduced the gap between the F1 score and accuracy of the models trained with ReLU and the HE-friendly model to within a mere 0.32-5.3 percent degradation. We also demonstrate our methodology using the SqueezeNet architecture, for which we observed 7 percent accuracy and F1 improvements over training similar networks with other HE-friendly training methods.
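The ReLU-to-polynomial substitution at the heart of the methodology can be illustrated by fitting a degree-2 polynomial to ReLU over the range the activations are expected to occupy (a sketch of the idea only; the paper additionally makes the coefficients trainable and distills from a pre-trained model):

```python
import numpy as np

# Fit a degree-2 polynomial to ReLU over an assumed activation range;
# an HE-friendly network uses such a polynomial in place of the
# unsupported ReLU. Here we just least-squares fit for illustration.
x = np.linspace(-4.0, 4.0, 401)
relu = np.maximum(x, 0.0)
a2, a1, a0 = np.polyfit(x, relu, deg=2)   # coefficients, highest degree first
poly = a2 * x**2 + a1 * x + a0

max_err = np.abs(poly - relu).max()       # worst-case substitution error
```

The fit is only good on the range it was computed for, which is why HE-friendly training typically also controls the scale of intermediate activations.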
1402.3613
Pavel Janovsk\'y
Pavel Janovsk\'y and Michal \v{C}\'ap and Ji\v{r}\'i Vok\v{r}\'inek
Finding Coordinated Paths for Multiple Holonomic Agents in 2-d Polygonal Environment
Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014)
null
null
null
cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Avoiding collisions is one of the vital tasks for systems of autonomous mobile agents. We focus on the problem of finding continuous coordinated paths for multiple mobile disc agents in a 2-d environment with polygonal obstacles. The problem is PSPACE-hard, with the state space growing exponentially in the number of agents. Therefore, the state of the art methods include mainly reactive techniques and sampling-based iterative algorithms. We compare the performance of a widely-used reactive method ORCA with three variants of a popular planning algorithm RRT* applied to multi-agent path planning and find that an algorithm combining reactive collision avoidance and RRT* planning, which we call ORCA-RRT*, can be used to solve instances that are out of the reach of either technique. We experimentally show that: 1) the reactive part of the algorithm can efficiently solve many multi-agent path finding problems involving a large number of agents, for which the RRT* algorithm is often unable to find a solution in limited time and 2) the planning component of the algorithm is able to solve many instances containing local minima, where reactive techniques typically fail.
[ { "created": "Fri, 14 Feb 2014 22:09:44 GMT", "version": "v1" } ]
2014-02-18
[ [ "Janovský", "Pavel", "" ], [ "Čáp", "Michal", "" ], [ "Vokřínek", "Jiří", "" ] ]
Avoiding collisions is one of the vital tasks for systems of autonomous mobile agents. We focus on the problem of finding continuous coordinated paths for multiple mobile disc agents in a 2-d environment with polygonal obstacles. The problem is PSPACE-hard, with the state space growing exponentially in the number of agents. Therefore, the state of the art methods include mainly reactive techniques and sampling-based iterative algorithms. We compare the performance of a widely-used reactive method ORCA with three variants of a popular planning algorithm RRT* applied to multi-agent path planning and find that an algorithm combining reactive collision avoidance and RRT* planning, which we call ORCA-RRT*, can be used to solve instances that are out of the reach of either technique. We experimentally show that: 1) the reactive part of the algorithm can efficiently solve many multi-agent path finding problems involving a large number of agents, for which the RRT* algorithm is often unable to find a solution in limited time and 2) the planning component of the algorithm is able to solve many instances containing local minima, where reactive techniques typically fail.
1704.06700
Ahmad Nauman Ghazi
Shoaib Bakhtyar and Ahmad Nauman Ghazi
On Improving Research Methodology Course at Blekinge Institute of Technology
Conference on Higher Education, L\"ararl\"ardom2016 Kristianstad University Press Sweden. 2016
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Research Methodology in Software Engineering and Computer Science (RM) is a compulsory course that must be studied by graduate students at Blekinge Institute of Technology (BTH) prior to undertaking their theses work. The course is focused on teaching research methods and techniques for data collection and analysis in the fields of Computer Science and Software Engineering. It is intended that the course should help students in practically applying appropriate research methods in different courses (in addition to the RM course) including their Master's theses. However, it is believed that there exist deficiencies in the course due to which the course implementation (learning and assessment activities) as well as the performance of different participants (students, teachers, and evaluators) are affected negatively. In this article, our aim is to investigate potential deficiencies in the RM course at BTH in order to provide concrete evidence on the deficiencies faced by students, evaluators, and teachers in the course. Additionally, we suggest recommendations for resolving the identified deficiencies. Our findings gathered through semi-structured interviews with students, teachers, and evaluators in the course are presented in this article. By identifying a total of twenty-one deficiencies from different perspectives, we found that there exist critical deficiencies at different levels within the course. Furthermore, in order to overcome the identified deficiencies, we suggest seven recommendations that may be implemented at different levels within the course and the study program. Our suggested recommendations, if implemented, will help in resolving deficiencies in the course, which may lead to improved teaching and learning in the RM course at BTH.
[ { "created": "Fri, 21 Apr 2017 20:11:57 GMT", "version": "v1" } ]
2017-04-25
[ [ "Bakhtyar", "Shoaib", "" ], [ "Ghazi", "Ahmad Nauman", "" ] ]
The Research Methodology in Software Engineering and Computer Science (RM) is a compulsory course that must be studied by graduate students at Blekinge Institute of Technology (BTH) prior to undertaking their theses work. The course is focused on teaching research methods and techniques for data collection and analysis in the fields of Computer Science and Software Engineering. It is intended that the course should help students in practically applying appropriate research methods in different courses (in addition to the RM course) including their Master's theses. However, it is believed that there exist deficiencies in the course due to which the course implementation (learning and assessment activities) as well as the performance of different participants (students, teachers, and evaluators) are affected negatively. In this article, our aim is to investigate potential deficiencies in the RM course at BTH in order to provide concrete evidence on the deficiencies faced by students, evaluators, and teachers in the course. Additionally, we suggest recommendations for resolving the identified deficiencies. Our findings gathered through semi-structured interviews with students, teachers, and evaluators in the course are presented in this article. By identifying a total of twenty-one deficiencies from different perspectives, we found that there exist critical deficiencies at different levels within the course. Furthermore, in order to overcome the identified deficiencies, we suggest seven recommendations that may be implemented at different levels within the course and the study program. Our suggested recommendations, if implemented, will help in resolving deficiencies in the course, which may lead to improved teaching and learning in the RM course at BTH.
2305.20056
Arvind Pillai
Arvind Pillai, Subigya Nepal and Andrew Campbell
Rare Life Event Detection via Mobile Sensing Using Multi-Task Learning
15 pages, 4 figures, CHIL 2023 (Accepted)
null
null
null
cs.LG cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Rare life events significantly impact mental health, and their detection in behavioral studies is a crucial step towards health-based interventions. We envision that mobile sensing data can be used to detect these anomalies. However, the human-centered nature of the problem, combined with the infrequency and uniqueness of these events, makes it challenging for unsupervised machine learning methods. In this paper, we first investigate Granger causality between life events and human behavior using sensing data. Next, we propose a multi-task framework with an unsupervised autoencoder to capture irregular behavior, and an auxiliary sequence predictor that identifies transitions in workplace performance to contextualize events. We perform experiments using data from a mobile sensing study comprising N=126 information workers from multiple industries, spanning 10106 days with 198 rare events (<2%). Through personalized inference, we detect the exact day of a rare event with an F1 of 0.34, demonstrating that our method outperforms several baselines. Finally, we discuss the implications of our work from the context of real-world deployment.
[ { "created": "Wed, 31 May 2023 17:29:24 GMT", "version": "v1" } ]
2023-06-01
[ [ "Pillai", "Arvind", "" ], [ "Nepal", "Subigya", "" ], [ "Campbell", "Andrew", "" ] ]
Rare life events significantly impact mental health, and their detection in behavioral studies is a crucial step towards health-based interventions. We envision that mobile sensing data can be used to detect these anomalies. However, the human-centered nature of the problem, combined with the infrequency and uniqueness of these events, makes it challenging for unsupervised machine learning methods. In this paper, we first investigate Granger causality between life events and human behavior using sensing data. Next, we propose a multi-task framework with an unsupervised autoencoder to capture irregular behavior, and an auxiliary sequence predictor that identifies transitions in workplace performance to contextualize events. We perform experiments using data from a mobile sensing study comprising N=126 information workers from multiple industries, spanning 10106 days with 198 rare events (<2%). Through personalized inference, we detect the exact day of a rare event with an F1 of 0.34, demonstrating that our method outperforms several baselines. Finally, we discuss the implications of our work from the context of real-world deployment.
2310.10687
Md. Imtiaz Habib
Md. Imtiaz Habib, Abdullah Al Maruf, Md. Jobair Ahmed Nabil
An Exploration Into Web Session Security- A Systematic Literature Review
9 pages, 8 sections, survey article
null
null
null
cs.SE cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper reviews the most common attacks against web sessions, for example, attacks that target honest users who are legitimately establishing a session with a trusted web application through their browsers. We assess existing security solutions that prevent or mitigate these attacks, judging the viability of each solution along four different dimensions. We then point out guidelines that the designers of the reviewed proposals have taken into account. The guidelines we have identified should help in devising new solutions that approach web session security in a more structured and holistic way.
[ { "created": "Sat, 14 Oct 2023 16:22:07 GMT", "version": "v1" } ]
2023-10-18
[ [ "Habib", "Md. Imtiaz", "" ], [ "Maruf", "Abdullah Al", "" ], [ "Nabil", "Md. Jobair Ahmed", "" ] ]
This paper reviews the most common attacks against web sessions, i.e., attacks that target honest users of a web browser attempting to legitimately establish a session with a trusted web application. We review existing security solutions that prevent or mitigate these attacks, and assess the viability of each solution along four different criteria. We then point out some guidelines that the designers of the proposals we reviewed have taken into account. The guidelines we identify will help future solutions advance web security in a more structured and holistic way.
2005.13811
Thomas Studer
Thomas Studer
No-Go Theorems for Data Privacy
null
null
null
null
cs.LO cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Controlled query evaluation (CQE) is an approach to guarantee data privacy for database and knowledge base systems. CQE-systems feature a censor function that may distort the answer to a query in order to hide sensitive information. We introduce a high-level formalization of controlled query evaluation and define several desirable properties of CQE-systems. Finally, we establish two no-go theorems, which show that certain combinations of these properties cannot be obtained.
[ { "created": "Thu, 28 May 2020 07:10:37 GMT", "version": "v1" } ]
2020-05-29
[ [ "Studer", "Thomas", "" ] ]
Controlled query evaluation (CQE) is an approach to guarantee data privacy for database and knowledge base systems. CQE-systems feature a censor function that may distort the answer to a query in order to hide sensitive information. We introduce a high-level formalization of controlled query evaluation and define several desirable properties of CQE-systems. Finally, we establish two no-go theorems, which show that certain combinations of these properties cannot be obtained.
2302.05520
Eli Gafni professor
Eli Gafni and Vasileios Zikas
Synchrony/Asynchrony vs. Stationary/Mobile? The Latter is Superior...in Theory
null
null
null
null
cs.DS
http://creativecommons.org/publicdomain/zero/1.0/
Like Asynchrony, Mobility of faults precludes consensus. Yet a model M in which Consensus is solvable has an analogous relaxed model in which Consensus is not solvable, and for which we can ask whether Consensus is solvable if the system initially behaves like the relaxed analogue model but eventually morphs into M. We consider two relaxed analogues of M. The first is the traditional Asynchronous model, and the second, to be defined, is the Mobile analogue. While for some M we show that Consensus is not solvable in the Asynchronous analogue, it is solvable in all the Mobile analogues. Hence, from this perspective Mobility is superior to Asynchrony. The pie-in-the-sky relationship we envision is: Consensus is solvable in M if and only if binary Commit-Adopt is solvable in the Mobile analogue. The ``only if'' is easy. Here we show, case by case, that the ``if'' holds for all the common fault types.
[ { "created": "Fri, 10 Feb 2023 21:49:55 GMT", "version": "v1" } ]
2023-02-14
[ [ "Gafni", "Eli", "" ], [ "Zikas", "Vasileios", "" ] ]
Like Asynchrony, Mobility of faults precludes consensus. Yet a model M in which Consensus is solvable has an analogous relaxed model in which Consensus is not solvable, and for which we can ask whether Consensus is solvable if the system initially behaves like the relaxed analogue model but eventually morphs into M. We consider two relaxed analogues of M. The first is the traditional Asynchronous model, and the second, to be defined, is the Mobile analogue. While for some M we show that Consensus is not solvable in the Asynchronous analogue, it is solvable in all the Mobile analogues. Hence, from this perspective Mobility is superior to Asynchrony. The pie-in-the-sky relationship we envision is: Consensus is solvable in M if and only if binary Commit-Adopt is solvable in the Mobile analogue. The ``only if'' is easy. Here we show, case by case, that the ``if'' holds for all the common fault types.
2101.08100
Weixuan Zhang
Weixuan Zhang, Marco Tognon, Lionel Ott, Roland Siegwart, and Juan Nieto
Active Model Learning using Informative Trajectories for Improved Closed-Loop Control on Real Robots
null
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Model-based controllers on real robots require accurate knowledge of the system dynamics to perform optimally. For complex dynamics, first-principles modeling is not sufficiently precise, and data-driven approaches can be leveraged to learn a statistical model from real experiments. However, efficient and effective data collection for such a data-driven system on real robots is still an open challenge. This paper introduces an optimization problem formulation to find an informative trajectory that allows for efficient data collection and model learning. We present a sampling-based method that computes an approximation of the trajectory that minimizes the prediction uncertainty of the dynamics model. This trajectory is then executed, collecting the data to update the learned model. In experiments we demonstrate the capabilities of our proposed framework when applied to a complex omnidirectional flying vehicle with tiltable rotors. Using our informative trajectories results in models which outperform models obtained from non-informative trajectories by 13.3\% with the same amount of training data. Furthermore, we show that the model learned from informative trajectories generalizes better than the one learned from non-informative trajectories, achieving better tracking performance on different tasks.
[ { "created": "Wed, 20 Jan 2021 12:54:26 GMT", "version": "v1" }, { "created": "Fri, 14 May 2021 13:34:39 GMT", "version": "v2" } ]
2021-05-17
[ [ "Zhang", "Weixuan", "" ], [ "Tognon", "Marco", "" ], [ "Ott", "Lionel", "" ], [ "Siegwart", "Roland", "" ], [ "Nieto", "Juan", "" ] ]
Model-based controllers on real robots require accurate knowledge of the system dynamics to perform optimally. For complex dynamics, first-principles modeling is not sufficiently precise, and data-driven approaches can be leveraged to learn a statistical model from real experiments. However, efficient and effective data collection for such a data-driven system on real robots is still an open challenge. This paper introduces an optimization problem formulation to find an informative trajectory that allows for efficient data collection and model learning. We present a sampling-based method that computes an approximation of the trajectory that minimizes the prediction uncertainty of the dynamics model. This trajectory is then executed, collecting the data to update the learned model. In experiments we demonstrate the capabilities of our proposed framework when applied to a complex omnidirectional flying vehicle with tiltable rotors. Using our informative trajectories results in models which outperform models obtained from non-informative trajectories by 13.3\% with the same amount of training data. Furthermore, we show that the model learned from informative trajectories generalizes better than the one learned from non-informative trajectories, achieving better tracking performance on different tasks.