Dataset schema (column name, type, and observed length/item ranges):

| column         | type   | range              |
|----------------|--------|--------------------|
| id             | string | 9-10 chars         |
| submitter      | string | 1-64 chars         |
| authors        | string | 4-20.7k chars      |
| title          | string | 4-246 chars        |
| comments       | string | 1-523 chars        |
| journal-ref    | string | 4-404 chars        |
| doi            | string | 11-153 chars       |
| report-no      | string | 2-254 chars        |
| categories     | string | 5-98 chars         |
| license        | string | 9 distinct values  |
| orig_abstract  | string | 14-3.35k chars     |
| versions       | list   | 1-60 items         |
| update_date    | string | 10 chars           |
| authors_parsed | list   | 1-1.35k items      |
| abstract       | string | 11-3.34k chars     |
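Given this schema, each row behaves like a flat dict. A minimal sketch (values copied from the first record in this dump, arXiv 1606.04202, abridged to the fields used) of unpacking the space-separated `categories` string and the `[last, first, suffix]` triples in `authors_parsed`:

```python
# Minimal sketch of working with one row of this dump as a plain dict.
# Values are copied from the first record (arXiv 1606.04202); the dict
# is abridged to the fields actually used here.
record = {
    "id": "1606.04202",
    "categories": "cs.IT cs.NI math.IT",
    "update_date": "2016-06-15",
    "authors_parsed": [["Sengupta", "Avik", ""], ["Tandon", "Ravi", ""]],
}

# `categories` is a single space-separated string, not a list.
categories = record["categories"].split()

# `authors_parsed` holds [last, first, suffix] triples, so display names
# can be rebuilt without parsing the free-form `authors` string.
names = [
    " ".join(part for part in (first, last, suffix) if part)
    for last, first, suffix in record["authors_parsed"]
]

print(categories)  # ['cs.IT', 'cs.NI', 'math.IT']
print(names)       # ['Avik Sengupta', 'Ravi Tandon']
```

Rebuilding names from `authors_parsed` rather than splitting `authors` avoids ambiguity with "and"-joined or comma-joined author strings, which vary across records.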
id: 1606.04202
submitter: Avik Sengupta
authors: Avik Sengupta and Ravi Tandon
title: Improved Approximation of Storage-Rate Tradeoff for Caching with Multiple Demands
comments: Extended version of a submission to IEEE Trans. on Communications
journal-ref: null
doi: null
report-no: null
categories: cs.IT cs.NI math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Caching at the network edge has emerged as a viable solution for alleviating the severe capacity crunch in modern content centric wireless networks by leveraging network load-balancing in the form of localized content storage and delivery. In this work, we consider a cache-aided network where the cache storage phase is assisted by a central server and users can demand multiple files at each transmission interval. To service these demands, we consider two delivery models - $(1)$ centralized content delivery where user demands at each transmission interval are serviced by the central server via multicast transmissions; and $(2)$ device-to-device (D2D) assisted distributed delivery where users multicast to each other in order to service file demands. For such cache-aided networks, we present new results on the fundamental cache storage vs. transmission rate tradeoff. Specifically, we develop a new technique for characterizing information theoretic lower bounds on the storage-rate tradeoff and show that the new lower bounds are strictly tighter than cut-set bounds from literature. Furthermore, using the new lower bounds, we establish the optimal storage-rate tradeoff to within a constant multiplicative gap. We show that, for multiple demands per user, achievable schemes based on repetition of schemes for single demands are order-optimal under both delivery models.
versions: [ { "created": "Tue, 14 Jun 2016 04:53:35 GMT", "version": "v1" } ]
update_date: 2016-06-15
authors_parsed: [ [ "Sengupta", "Avik", "" ], [ "Tandon", "Ravi", "" ] ]
abstract: Caching at the network edge has emerged as a viable solution for alleviating the severe capacity crunch in modern content centric wireless networks by leveraging network load-balancing in the form of localized content storage and delivery. In this work, we consider a cache-aided network where the cache storage phase is assisted by a central server and users can demand multiple files at each transmission interval. To service these demands, we consider two delivery models - $(1)$ centralized content delivery where user demands at each transmission interval are serviced by the central server via multicast transmissions; and $(2)$ device-to-device (D2D) assisted distributed delivery where users multicast to each other in order to service file demands. For such cache-aided networks, we present new results on the fundamental cache storage vs. transmission rate tradeoff. Specifically, we develop a new technique for characterizing information theoretic lower bounds on the storage-rate tradeoff and show that the new lower bounds are strictly tighter than cut-set bounds from literature. Furthermore, using the new lower bounds, we establish the optimal storage-rate tradeoff to within a constant multiplicative gap. We show that, for multiple demands per user, achievable schemes based on repetition of schemes for single demands are order-optimal under both delivery models.

id: 2107.02757
submitter: Zhibin Duan
authors: Zhibin Duan, Dongsheng Wang, Bo Chen, Chaojie Wang, Wenchao Chen, Yewen Li, Jie Ren, Mingyuan Zhou
title: Sawtooth Factorial Topic Embeddings Guided Gamma Belief Network
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.IR cs.CL cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Hierarchical topic models such as the gamma belief network (GBN) have delivered promising results in mining multi-layer document representations and discovering interpretable topic taxonomies. However, they often assume in the prior that the topics at each layer are independently drawn from the Dirichlet distribution, ignoring the dependencies between the topics both at the same layer and across different layers. To relax this assumption, we propose sawtooth factorial topic embedding guided GBN, a deep generative model of documents that captures the dependencies and semantic similarities between the topics in the embedding space. Specifically, both the words and topics are represented as embedding vectors of the same dimension. The topic matrix at a layer is factorized into the product of a factor loading matrix and a topic embedding matrix, the transpose of which is set as the factor loading matrix of the layer above. Repeating this particular type of factorization, which shares components between adjacent layers, leads to a structure referred to as sawtooth factorization. An auto-encoding variational inference network is constructed to optimize the model parameter via stochastic gradient descent. Experiments on big corpora show that our models outperform other neural topic models on extracting deeper interpretable topics and deriving better document representations.
versions: [ { "created": "Wed, 30 Jun 2021 10:14:57 GMT", "version": "v1" } ]
update_date: 2021-07-07
authors_parsed: [ [ "Duan", "Zhibin", "" ], [ "Wang", "Dongsheng", "" ], [ "Chen", "Bo", "" ], [ "Wang", "Chaojie", "" ], [ "Chen", "Wenchao", "" ], [ "Li", "Yewen", "" ], [ "Ren", "Jie", "" ], [ "Zhou", "Mingyuan", "" ] ]
abstract: Hierarchical topic models such as the gamma belief network (GBN) have delivered promising results in mining multi-layer document representations and discovering interpretable topic taxonomies. However, they often assume in the prior that the topics at each layer are independently drawn from the Dirichlet distribution, ignoring the dependencies between the topics both at the same layer and across different layers. To relax this assumption, we propose sawtooth factorial topic embedding guided GBN, a deep generative model of documents that captures the dependencies and semantic similarities between the topics in the embedding space. Specifically, both the words and topics are represented as embedding vectors of the same dimension. The topic matrix at a layer is factorized into the product of a factor loading matrix and a topic embedding matrix, the transpose of which is set as the factor loading matrix of the layer above. Repeating this particular type of factorization, which shares components between adjacent layers, leads to a structure referred to as sawtooth factorization. An auto-encoding variational inference network is constructed to optimize the model parameters via stochastic gradient descent. Experiments on big corpora show that our models outperform other neural topic models on extracting deeper interpretable topics and deriving better document representations.

id: cs/0207007
submitter: Denis Popel
authors: Denis V. Popel and Nawar Al-Hakeem
title: Evolutionary Circuit Design: Information Theory Perspective on Signal Propagation
comments: 5 pages, 3 figures, 2 tables, ISSPIT'2001
journal-ref: ISSPIT'2001
doi: null
report-no: null
categories: cs.OH
license: null
orig_abstract: This paper presents case-study results on the application of information theoretic approach to gate-level evolutionary circuit design. We introduce information measures to provide better estimates of synthesis criteria of digital circuits. For example, the analysis of signal propagation during evolving gate-level synthesis can be improved by using information theoretic measures that will make it possible to find the most effective geometry and therefore predict the cost of the final design solution. The problem is considered from the information engine point of view. That is, the process of evolutionary gate-level circuit design is presented via such measures as entropy, logical work and information vitality. Some examples of geometry driven synthesis are provided to prove the above idea.
versions: [ { "created": "Wed, 3 Jul 2002 16:59:23 GMT", "version": "v1" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Popel", "Denis V.", "" ], [ "Al-Hakeem", "Nawar", "" ] ]
abstract: This paper presents case-study results on the application of an information-theoretic approach to gate-level evolutionary circuit design. We introduce information measures to provide better estimates of synthesis criteria of digital circuits. For example, the analysis of signal propagation during evolving gate-level synthesis can be improved by using information-theoretic measures that will make it possible to find the most effective geometry and therefore predict the cost of the final design solution. The problem is considered from the information engine point of view. That is, the process of evolutionary gate-level circuit design is presented via such measures as entropy, logical work and information vitality. Some examples of geometry-driven synthesis are provided to support the above idea.

id: 2005.04042
submitter: Stefan Hoffmann
authors: Stefan Hoffmann
title: Computational Complexity of Synchronization under Regular Commutative Constraints
comments: Published in COCOON 2020 (The 26th International Computing and Combinatorics Conference); 2nd version is update of the published version and 1st version; both contain a minor error, the assumption of maximality in the NP-c and PSPACE-c results (propositions 5 & 6) is missing, and of incomparability of the vectors in main theorem; fixed in this version. See (new) discussion after main theorem
journal-ref: Computing and Combinatorics, 26th International Conference, COCOON 2020, Proceedings, pages 460-471
doi: 10.1007/978-3-030-58150-3_37
report-no: null
categories: cs.FL cs.CC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Here we study the computational complexity of the constrained synchronization problem for the class of regular commutative constraint languages. Utilizing a vector representation of regular commutative constraint languages, we give a full classification of the computational complexity of the constraint synchronization problem. Depending on the constraint language, our problem becomes PSPACE-complete, NP-complete or polynomial time solvable. In addition, we derive a polynomial time decision procedure for the complexity of the constraint synchronization problem, given some constraint automaton accepting a commutative language as input.
versions: [ { "created": "Fri, 8 May 2020 13:43:23 GMT", "version": "v1" }, { "created": "Wed, 2 Sep 2020 20:12:21 GMT", "version": "v2" } ]
update_date: 2020-09-04
authors_parsed: [ [ "Hoffmann", "Stefan", "" ] ]
abstract: Here we study the computational complexity of the constrained synchronization problem for the class of regular commutative constraint languages. Utilizing a vector representation of regular commutative constraint languages, we give a full classification of the computational complexity of the constraint synchronization problem. Depending on the constraint language, our problem becomes PSPACE-complete, NP-complete or polynomial time solvable. In addition, we derive a polynomial time decision procedure for the complexity of the constraint synchronization problem, given some constraint automaton accepting a commutative language as input.

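The two-version record above illustrates the date formats in this dataset: `versions` entries carry RFC 2822 timestamps while `update_date` is an ISO date. A minimal standard-library sketch of reconciling the two (values copied from this record, arXiv 2005.04042):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# `versions` entries use RFC 2822 dates; `update_date` is ISO (YYYY-MM-DD).
# Values are copied from the record above (arXiv 2005.04042).
versions = [
    {"created": "Fri, 8 May 2020 13:43:23 GMT", "version": "v1"},
    {"created": "Wed, 2 Sep 2020 20:12:21 GMT", "version": "v2"},
]
update_date = "2020-09-04"

# Latest submission timestamp across all versions.
latest = max(parsedate_to_datetime(v["created"]) for v in versions)

# Parse the ISO date as midnight UTC so the two datetimes are comparable.
updated = datetime.strptime(update_date, "%Y-%m-%d").replace(tzinfo=timezone.utc)

print(latest.date())            # 2020-09-02
print((updated - latest).days)  # 1
```

`parsedate_to_datetime` handles the "GMT" zone designator directly, which avoids hand-written `strptime` patterns for the version timestamps.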
id: 2003.01993
submitter: Igor Buzhinsky
authors: Igor Buzhinsky, Arseny Nerinovsky, Stavros Tripakis
title: Metrics and methods for robustness evaluation of neural networks with generative models
comments: 24 pages, 9 figures; data in Table 3 and Fig. 3 corrected (results unchanged), several typos fixed, references updated
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.CV stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Recent studies have shown that modern deep neural network classifiers are easy to fool, assuming that an adversary is able to slightly modify their inputs. Many papers have proposed adversarial attacks, defenses and methods to measure robustness to such adversarial perturbations. However, most commonly considered adversarial examples are based on $\ell_p$-bounded perturbations in the input space of the neural network, which are unlikely to arise naturally. Recently, especially in computer vision, researchers discovered "natural" or "semantic" perturbations, such as rotations, changes of brightness, or more high-level changes, but these perturbations have not yet been systematically utilized to measure the performance of classifiers. In this paper, we propose several metrics to measure robustness of classifiers to natural adversarial examples, and methods to evaluate them. These metrics, called latent space performance metrics, are based on the ability of generative models to capture probability distributions, and are defined in their latent spaces. On three image classification case studies, we evaluate the proposed metrics for several classifiers, including ones trained in conventional and robust ways. We find that the latent counterparts of adversarial robustness are associated with the accuracy of the classifier rather than its conventional adversarial robustness, but the latter is still reflected on the properties of found latent perturbations. In addition, our novel method of finding latent adversarial perturbations demonstrates that these perturbations are often perceptually small.
versions: [ { "created": "Wed, 4 Mar 2020 10:58:59 GMT", "version": "v1" }, { "created": "Sun, 15 Mar 2020 15:55:23 GMT", "version": "v2" } ]
update_date: 2020-03-17
authors_parsed: [ [ "Buzhinsky", "Igor", "" ], [ "Nerinovsky", "Arseny", "" ], [ "Tripakis", "Stavros", "" ] ]
abstract: Recent studies have shown that modern deep neural network classifiers are easy to fool, assuming that an adversary is able to slightly modify their inputs. Many papers have proposed adversarial attacks, defenses and methods to measure robustness to such adversarial perturbations. However, most commonly considered adversarial examples are based on $\ell_p$-bounded perturbations in the input space of the neural network, which are unlikely to arise naturally. Recently, especially in computer vision, researchers discovered "natural" or "semantic" perturbations, such as rotations, changes of brightness, or more high-level changes, but these perturbations have not yet been systematically utilized to measure the performance of classifiers. In this paper, we propose several metrics to measure robustness of classifiers to natural adversarial examples, and methods to evaluate them. These metrics, called latent space performance metrics, are based on the ability of generative models to capture probability distributions, and are defined in their latent spaces. On three image classification case studies, we evaluate the proposed metrics for several classifiers, including ones trained in conventional and robust ways. We find that the latent counterparts of adversarial robustness are associated with the accuracy of the classifier rather than its conventional adversarial robustness, but the latter is still reflected on the properties of found latent perturbations. In addition, our novel method of finding latent adversarial perturbations demonstrates that these perturbations are often perceptually small.

id: 1806.00264
submitter: Ting-Ting Liang
authors: Ting-Ting Liang, Satoshi Tsutsui, Liangcai Gao, Jing-Jing Lu and Mengyan Sun
title: Combining Pyramid Pooling and Attention Mechanism for Pelvic MR Image Semantic Segmentaion
comments: 12 pages
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: One of the time-consuming routine work for a radiologist is to discern anatomical structures from tomographic images. For assisting radiologists, this paper develops an automatic segmentation method for pelvic magnetic resonance (MR) images. The task has three major challenges 1) A pelvic organ can have various sizes and shapes depending on the axial image, which requires local contexts to segment correctly. 2) Different organs often have quite similar appearance in MR images, which requires global context to segment. 3) The number of available annotated images are very small to use the latest segmentation algorithms. To address the challenges, we propose a novel convolutional neural network called Attention-Pyramid network (APNet) that effectively exploits both local and global contexts, in addition to a data-augmentation technique that is particularly effective for MR images. In order to evaluate our method, we construct fine-grained (50 pelvic organs) MR image segmentation dataset, and experimentally confirm the superior performance of our techniques over the state-of-the-art image segmentation methods.
versions: [ { "created": "Fri, 1 Jun 2018 10:13:45 GMT", "version": "v1" }, { "created": "Thu, 28 Jun 2018 16:57:39 GMT", "version": "v2" } ]
update_date: 2018-06-29
authors_parsed: [ [ "Liang", "Ting-Ting", "" ], [ "Tsutsui", "Satoshi", "" ], [ "Gao", "Liangcai", "" ], [ "Lu", "Jing-Jing", "" ], [ "Sun", "Mengyan", "" ] ]
abstract: One of the time-consuming routine tasks for a radiologist is to discern anatomical structures from tomographic images. For assisting radiologists, this paper develops an automatic segmentation method for pelvic magnetic resonance (MR) images. The task has three major challenges: 1) A pelvic organ can have various sizes and shapes depending on the axial image, which requires local contexts to segment correctly. 2) Different organs often have quite similar appearance in MR images, which requires global context to segment. 3) The number of available annotated images is too small to use the latest segmentation algorithms. To address the challenges, we propose a novel convolutional neural network called Attention-Pyramid network (APNet) that effectively exploits both local and global contexts, in addition to a data-augmentation technique that is particularly effective for MR images. In order to evaluate our method, we construct a fine-grained (50 pelvic organs) MR image segmentation dataset, and experimentally confirm the superior performance of our techniques over the state-of-the-art image segmentation methods.

id: 2402.15420
submitter: Simon Holk
authors: Simon Holk, Daniel Marta, Iolanda Leite
title: PREDILECT: Preferences Delineated with Zero-Shot Language-based Reasoning in Reinforcement Learning
comments: 8 pages, 8 Figures, 2 Tables
journal-ref: null
doi: 10.1145/3610977.3634970
report-no: null
categories: cs.RO cs.CL cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Preference-based reinforcement learning (RL) has emerged as a new field in robot learning, where humans play a pivotal role in shaping robot behavior by expressing preferences on different sequences of state-action pairs. However, formulating realistic policies for robots demands responses from humans to an extensive array of queries. In this work, we approach the sample-efficiency challenge by expanding the information collected per query to contain both preferences and optional text prompting. To accomplish this, we leverage the zero-shot capabilities of a large language model (LLM) to reason from the text provided by humans. To accommodate the additional query information, we reformulate the reward learning objectives to contain flexible highlights -- state-action pairs that contain relatively high information and are related to the features processed in a zero-shot fashion from a pretrained LLM. In both a simulated scenario and a user study, we reveal the effectiveness of our work by analyzing the feedback and its implications. Additionally, the collective feedback collected serves to train a robot on socially compliant trajectories in a simulated social navigation landscape. We provide video examples of the trained policies at https://sites.google.com/view/rl-predilect
versions: [ { "created": "Fri, 23 Feb 2024 16:30:05 GMT", "version": "v1" } ]
update_date: 2024-02-26
authors_parsed: [ [ "Holk", "Simon", "" ], [ "Marta", "Daniel", "" ], [ "Leite", "Iolanda", "" ] ]
abstract: Preference-based reinforcement learning (RL) has emerged as a new field in robot learning, where humans play a pivotal role in shaping robot behavior by expressing preferences on different sequences of state-action pairs. However, formulating realistic policies for robots demands responses from humans to an extensive array of queries. In this work, we approach the sample-efficiency challenge by expanding the information collected per query to contain both preferences and optional text prompting. To accomplish this, we leverage the zero-shot capabilities of a large language model (LLM) to reason from the text provided by humans. To accommodate the additional query information, we reformulate the reward learning objectives to contain flexible highlights -- state-action pairs that contain relatively high information and are related to the features processed in a zero-shot fashion from a pretrained LLM. In both a simulated scenario and a user study, we reveal the effectiveness of our work by analyzing the feedback and its implications. Additionally, the collective feedback collected serves to train a robot on socially compliant trajectories in a simulated social navigation landscape. We provide video examples of the trained policies at https://sites.google.com/view/rl-predilect

id: 2305.14749
submitter: Chaitanya K. Joshi
authors: Chaitanya K. Joshi, Arian R. Jamasb, Ramon Vi\~nas, Charles Harris, Simon V. Mathis, Alex Morehead, Rishabh Anand, Pietro Li\`o
title: gRNAde: Geometric Deep Learning for 3D RNA inverse design
comments: Previously titled 'Multi-State RNA Design with Geometric Multi-Graph Neural Networks', presented at ICML 2023 Computational Biology Workshop
journal-ref: null
doi: null
report-no: null
categories: cs.LG q-bio.BM q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Computational RNA design tasks are often posed as inverse problems, where sequences are designed based on adopting a single desired secondary structure without considering 3D geometry and conformational diversity. We introduce gRNAde, a geometric RNA design pipeline operating on 3D RNA backbones to design sequences that explicitly account for structure and dynamics. Under the hood, gRNAde is a multi-state Graph Neural Network that generates candidate RNA sequences conditioned on one or more 3D backbone structures where the identities of the bases are unknown. On a single-state fixed backbone re-design benchmark of 14 RNA structures from the PDB identified by Das et al. [2010], gRNAde obtains higher native sequence recovery rates (56% on average) compared to Rosetta (45% on average), taking under a second to produce designs compared to the reported hours for Rosetta. We further demonstrate the utility of gRNAde on a new benchmark of multi-state design for structurally flexible RNAs, as well as zero-shot ranking of mutational fitness landscapes in a retrospective analysis of a recent RNA polymerase ribozyme structure. Open source code: https://github.com/chaitjo/geometric-rna-design
versions: [ { "created": "Wed, 24 May 2023 05:46:56 GMT", "version": "v1" }, { "created": "Thu, 25 May 2023 14:53:11 GMT", "version": "v2" }, { "created": "Sun, 28 May 2023 22:44:27 GMT", "version": "v3" }, { "created": "Sun, 31 Mar 2024 10:03:17 GMT", "version": "v4" }, { "created": "Sat, 25 May 2024 23:11:45 GMT", "version": "v5" } ]
update_date: 2024-05-28
authors_parsed: [ [ "Joshi", "Chaitanya K.", "" ], [ "Jamasb", "Arian R.", "" ], [ "Viñas", "Ramon", "" ], [ "Harris", "Charles", "" ], [ "Mathis", "Simon V.", "" ], [ "Morehead", "Alex", "" ], [ "Anand", "Rishabh", "" ], [ "Liò", "Pietro", "" ] ]
abstract: Computational RNA design tasks are often posed as inverse problems, where sequences are designed based on adopting a single desired secondary structure without considering 3D geometry and conformational diversity. We introduce gRNAde, a geometric RNA design pipeline operating on 3D RNA backbones to design sequences that explicitly account for structure and dynamics. Under the hood, gRNAde is a multi-state Graph Neural Network that generates candidate RNA sequences conditioned on one or more 3D backbone structures where the identities of the bases are unknown. On a single-state fixed backbone re-design benchmark of 14 RNA structures from the PDB identified by Das et al. [2010], gRNAde obtains higher native sequence recovery rates (56% on average) compared to Rosetta (45% on average), taking under a second to produce designs compared to the reported hours for Rosetta. We further demonstrate the utility of gRNAde on a new benchmark of multi-state design for structurally flexible RNAs, as well as zero-shot ranking of mutational fitness landscapes in a retrospective analysis of a recent RNA polymerase ribozyme structure. Open source code: https://github.com/chaitjo/geometric-rna-design

id: 2006.09785
submitter: Jathushan Rajasegaran
authors: Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Mubarak Shah
title: Self-supervised Knowledge Distillation for Few-shot Learning
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Real-world contains an overwhelmingly large number of object classes, learning all of which at once is infeasible. Few shot learning is a promising learning paradigm due to its ability to learn out of order distributions quickly with only a few samples. Recent works [7, 41] show that simply learning a good feature embedding can outperform more sophisticated meta-learning and metric learning algorithms for few-shot learning. In this paper, we propose a simple approach to improve the representation capacity of deep neural networks for few-shot learning tasks. We follow a two-stage learning process: First, we train a neural network to maximize the entropy of the feature embedding, thus creating an optimal output manifold using a self-supervised auxiliary loss. In the second stage, we minimize the entropy on feature embedding by bringing self-supervised twins together, while constraining the manifold with student-teacher distillation. Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods, with further gains achieved by our second stage distillation process. Our codes are available at: https://github.com/brjathu/SKD.
versions: [ { "created": "Wed, 17 Jun 2020 11:27:00 GMT", "version": "v1" }, { "created": "Tue, 4 Aug 2020 05:22:39 GMT", "version": "v2" } ]
update_date: 2020-08-05
authors_parsed: [ [ "Rajasegaran", "Jathushan", "" ], [ "Khan", "Salman", "" ], [ "Hayat", "Munawar", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Shah", "Mubarak", "" ] ]
abstract: The real world contains an overwhelmingly large number of object classes, learning all of which at once is infeasible. Few-shot learning is a promising learning paradigm due to its ability to learn out of order distributions quickly with only a few samples. Recent works [7, 41] show that simply learning a good feature embedding can outperform more sophisticated meta-learning and metric learning algorithms for few-shot learning. In this paper, we propose a simple approach to improve the representation capacity of deep neural networks for few-shot learning tasks. We follow a two-stage learning process: First, we train a neural network to maximize the entropy of the feature embedding, thus creating an optimal output manifold using a self-supervised auxiliary loss. In the second stage, we minimize the entropy on the feature embedding by bringing self-supervised twins together, while constraining the manifold with student-teacher distillation. Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods, with further gains achieved by our second stage distillation process. Our codes are available at: https://github.com/brjathu/SKD.

id: 2008.02251
submitter: Thomas K\"ustner
authors: Thomas K\"ustner, Tobias Hepp, Marc Fischer, Martin Schwartz, Andreas Fritsche, Hans-Ulrich H\"aring, Konstantin Nikolaou, Fabian Bamberg, Bin Yang, Fritz Schick, Sergios Gatidis, J\"urgen Machann
title: Fully Automated and Standardized Segmentation of Adipose Tissue Compartments by Deep Learning in Three-dimensional Whole-body MRI of Epidemiological Cohort Studies
comments: This manuscript has been accepted for publication in Radiology: Artificial Intelligence (https://pubs.rsna.org/journal/ai), which is published by the Radiological Society of North America (RSNA)
journal-ref: null
doi: null
report-no: null
categories: cs.CV eess.IV physics.med-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Purpose: To enable fast and reliable assessment of subcutaneous and visceral adipose tissue compartments derived from whole-body MRI. Methods: Quantification and localization of different adipose tissue compartments from whole-body MR images is of high interest to examine metabolic conditions. For correct identification and phenotyping of individuals at increased risk for metabolic diseases, a reliable automatic segmentation of adipose tissue into subcutaneous and visceral adipose tissue is required. In this work we propose a 3D convolutional neural network (DCNet) to provide a robust and objective segmentation. In this retrospective study, we collected 1000 cases (66$\pm$ 13 years; 523 women) from the Tuebingen Family Study and from the German Center for Diabetes research (TUEF/DZD), as well as 300 cases (53$\pm$ 11 years; 152 women) from the German National Cohort (NAKO) database for model training, validation, and testing with a transfer learning between the cohorts. These datasets had variable imaging sequences, imaging contrasts, receiver coil arrangements, scanners and imaging field strengths. The proposed DCNet was compared against a comparable 3D UNet segmentation in terms of sensitivity, specificity, precision, accuracy, and Dice overlap. Results: Fast (5-7seconds) and reliable adipose tissue segmentation can be obtained with high Dice overlap (0.94), sensitivity (96.6%), specificity (95.1%), precision (92.1%) and accuracy (98.4%) from 3D whole-body MR datasets (field of view coverage 450x450x2000mm${}^3$). Segmentation masks and adipose tissue profiles are automatically reported back to the referring physician. Conclusion: Automatic adipose tissue segmentation is feasible in 3D whole-body MR data sets and is generalizable to different epidemiological cohort studies with the proposed DCNet.
versions: [ { "created": "Wed, 5 Aug 2020 17:30:14 GMT", "version": "v1" } ]
update_date: 2020-08-06
authors_parsed: [ [ "Küstner", "Thomas", "" ], [ "Hepp", "Tobias", "" ], [ "Fischer", "Marc", "" ], [ "Schwartz", "Martin", "" ], [ "Fritsche", "Andreas", "" ], [ "Häring", "Hans-Ulrich", "" ], [ "Nikolaou", "Konstantin", "" ], [ "Bamberg", "Fabian", "" ], [ "Yang", "Bin", "" ], [ "Schick", "Fritz", "" ], [ "Gatidis", "Sergios", "" ], [ "Machann", "Jürgen", "" ] ]
abstract: Purpose: To enable fast and reliable assessment of subcutaneous and visceral adipose tissue compartments derived from whole-body MRI. Methods: Quantification and localization of different adipose tissue compartments from whole-body MR images is of high interest to examine metabolic conditions. For correct identification and phenotyping of individuals at increased risk for metabolic diseases, a reliable automatic segmentation of adipose tissue into subcutaneous and visceral adipose tissue is required. In this work we propose a 3D convolutional neural network (DCNet) to provide a robust and objective segmentation. In this retrospective study, we collected 1000 cases (66$\pm$13 years; 523 women) from the Tuebingen Family Study and from the German Center for Diabetes Research (TUEF/DZD), as well as 300 cases (53$\pm$11 years; 152 women) from the German National Cohort (NAKO) database for model training, validation, and testing with a transfer learning between the cohorts. These datasets had variable imaging sequences, imaging contrasts, receiver coil arrangements, scanners and imaging field strengths. The proposed DCNet was compared against a comparable 3D UNet segmentation in terms of sensitivity, specificity, precision, accuracy, and Dice overlap. Results: Fast (5-7 seconds) and reliable adipose tissue segmentation can be obtained with high Dice overlap (0.94), sensitivity (96.6%), specificity (95.1%), precision (92.1%) and accuracy (98.4%) from 3D whole-body MR datasets (field of view coverage 450x450x2000mm${}^3$). Segmentation masks and adipose tissue profiles are automatically reported back to the referring physician. Conclusion: Automatic adipose tissue segmentation is feasible in 3D whole-body MR data sets and is generalizable to different epidemiological cohort studies with the proposed DCNet.

2001.07615
Stefan Ultes
Stefan Ultes
Improving Interaction Quality Estimation with BiLSTMs and the Impact on Dialogue Policy Learning
Published at SIGDIAL 2019
null
null
null
cs.CL cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning suitable and well-performing dialogue behaviour in statistical spoken dialogue systems has been a focus of research for many years. While most work based on reinforcement learning employs an objective measure such as task success for modelling the reward signal, we use a reward based on user satisfaction estimation. We propose a novel estimator and show that it outperforms all previous estimators while learning temporal dependencies implicitly. Furthermore, we apply this novel user satisfaction estimation model live in simulated experiments, where the satisfaction estimation model is trained on one domain and applied in many other domains that cover a similar task. We show that applying this model results in higher estimated satisfaction, similar task success rates and higher robustness to noise.
[ { "created": "Tue, 21 Jan 2020 15:39:12 GMT", "version": "v1" } ]
2020-01-22
[ [ "Ultes", "Stefan", "" ] ]
Learning suitable and well-performing dialogue behaviour in statistical spoken dialogue systems has been a focus of research for many years. While most work based on reinforcement learning employs an objective measure such as task success for modelling the reward signal, we use a reward based on user satisfaction estimation. We propose a novel estimator and show that it outperforms all previous estimators while learning temporal dependencies implicitly. Furthermore, we apply this novel user satisfaction estimation model live in simulated experiments, where the satisfaction estimation model is trained on one domain and applied in many other domains that cover a similar task. We show that applying this model results in higher estimated satisfaction, similar task success rates and higher robustness to noise.
2302.14294
Aravindh Raman
Haris Bin Zia, Jiahui He, Aravindh Raman, Ignacio Castro, Nishanth Sastry, Gareth Tyson
Flocking to Mastodon: Tracking the Great Twitter Migration
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
The acquisition of Twitter by Elon Musk has spurred controversy and uncertainty among Twitter users. The move raised as much praise as concern, particularly regarding Musk's views on free speech. As a result, a large number of Twitter users have looked for alternatives to Twitter. Mastodon, a decentralized micro-blogging social network, has attracted the attention of many users and the general media. In this paper, we track and analyze the migration of 136,009 users from Twitter to Mastodon. Our analysis sheds light on the user-driven pressure towards centralization in a decentralized ecosystem and identifies the strong influence of the social network in platform migration. We also characterize the activity of migrated users on both Twitter and Mastodon.
[ { "created": "Tue, 28 Feb 2023 03:59:19 GMT", "version": "v1" } ]
2023-03-01
[ [ "Zia", "Haris Bin", "" ], [ "He", "Jiahui", "" ], [ "Raman", "Aravindh", "" ], [ "Castro", "Ignacio", "" ], [ "Sastry", "Nishanth", "" ], [ "Tyson", "Gareth", "" ] ]
The acquisition of Twitter by Elon Musk has spurred controversy and uncertainty among Twitter users. The move raised as much praise as concern, particularly regarding Musk's views on free speech. As a result, a large number of Twitter users have looked for alternatives to Twitter. Mastodon, a decentralized micro-blogging social network, has attracted the attention of many users and the general media. In this paper, we track and analyze the migration of 136,009 users from Twitter to Mastodon. Our analysis sheds light on the user-driven pressure towards centralization in a decentralized ecosystem and identifies the strong influence of the social network in platform migration. We also characterize the activity of migrated users on both Twitter and Mastodon.
1905.02691
Patrick M. Pilarski
Patrick M. Pilarski, Andrew Butcher, Michael Johanson, Matthew M. Botvinick, Andrew Bolt, Adam S. R. Parker
Learned human-agent decision-making, communication and joint action in a virtual reality environment
5 pages, 3 figures. Accepted to The 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making, July 7-10, 2019, McGill University, Montreal, Quebec, Canada
null
null
null
cs.AI cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans make decisions and act alongside other humans to pursue both short-term and long-term goals. As a result of ongoing progress in areas such as computing science and automation, humans now also interact with non-human agents of varying complexity as part of their day-to-day activities; substantial work is being done to integrate increasingly intelligent machine agents into human work and play. With increases in the cognitive, sensory, and motor capacity of these agents, intelligent machinery for human assistance can now reasonably be considered to engage in joint action with humans---i.e., two or more agents adapting their behaviour and their understanding of each other so as to progress in shared objectives or goals. The mechanisms, conditions, and opportunities for skillful joint action in human-machine partnerships are of great interest to multiple communities. Despite this, human-machine joint action is as yet under-explored, especially in cases where a human and an intelligent machine interact in a persistent way during the course of real-time, daily-life experience. In this work, we contribute a virtual reality environment wherein a human and an agent can adapt their predictions, their actions, and their communication so as to pursue a simple foraging task. In a case study with a single participant, we provide an example of human-agent coordination and decision-making involving prediction learning on the part of the human and the machine agent, and control learning on the part of the machine agent wherein audio communication signals are used to cue its human partner in service of acquiring shared reward. These comparisons suggest the utility of studying human-machine coordination in a virtual reality environment, and identify further research that will expand our understanding of persistent human-machine joint action.
[ { "created": "Tue, 7 May 2019 16:53:48 GMT", "version": "v1" } ]
2019-05-08
[ [ "Pilarski", "Patrick M.", "" ], [ "Butcher", "Andrew", "" ], [ "Johanson", "Michael", "" ], [ "Botvinick", "Matthew M.", "" ], [ "Bolt", "Andrew", "" ], [ "Parker", "Adam S. R.", "" ] ]
Humans make decisions and act alongside other humans to pursue both short-term and long-term goals. As a result of ongoing progress in areas such as computing science and automation, humans now also interact with non-human agents of varying complexity as part of their day-to-day activities; substantial work is being done to integrate increasingly intelligent machine agents into human work and play. With increases in the cognitive, sensory, and motor capacity of these agents, intelligent machinery for human assistance can now reasonably be considered to engage in joint action with humans---i.e., two or more agents adapting their behaviour and their understanding of each other so as to progress in shared objectives or goals. The mechanisms, conditions, and opportunities for skillful joint action in human-machine partnerships are of great interest to multiple communities. Despite this, human-machine joint action is as yet under-explored, especially in cases where a human and an intelligent machine interact in a persistent way during the course of real-time, daily-life experience. In this work, we contribute a virtual reality environment wherein a human and an agent can adapt their predictions, their actions, and their communication so as to pursue a simple foraging task. In a case study with a single participant, we provide an example of human-agent coordination and decision-making involving prediction learning on the part of the human and the machine agent, and control learning on the part of the machine agent wherein audio communication signals are used to cue its human partner in service of acquiring shared reward. These comparisons suggest the utility of studying human-machine coordination in a virtual reality environment, and identify further research that will expand our understanding of persistent human-machine joint action.
2207.05316
Connor Parde
Connor J. Parde, Virginia E. Strehle, Vivekjyoti Banerjee, Ying Hu, Jacqueline G. Cavazos, Carlos D. Castillo, Alice J. O'Toole
Twin identification over viewpoint change: A deep convolutional neural network surpasses humans
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), though it is unclear how accurately they discriminate highly-similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (N=87) viewed pairs of face images of three types: same-identity, general imposter pairs (different identities from similar demographic groups), and twin imposter pairs (identical twin siblings). The task was to determine whether the pairs showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45-degree profile, and frontal to 90-degree profile. Accuracy for discriminating matched-identity pairs from twin-imposters and general imposters was assessed in each viewpoint-disparity condition. Humans were more accurate for general-imposter pairs than twin-imposter pairs, and accuracy declined with increased viewpoint disparity between the images in a pair. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but with performance at or above all humans in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of nine image-pair types [range r=0.38 to r=0.63], suggesting general accord between the perception of face similarity by humans and the DCNN. These findings also contribute to our understanding of DCNN performance for discriminating high-resemblance faces, demonstrate that the DCNN performs at a level at or above humans, and suggest a degree of parity between the features used by humans and the DCNN.
[ { "created": "Tue, 12 Jul 2022 04:59:53 GMT", "version": "v1" } ]
2022-07-13
[ [ "Parde", "Connor J.", "" ], [ "Strehle", "Virginia E.", "" ], [ "Banerjee", "Vivekjyoti", "" ], [ "Hu", "Ying", "" ], [ "Cavazos", "Jacqueline G.", "" ], [ "Castillo", "Carlos D.", "" ], [ "O'Toole", "Alice J.", "" ] ]
Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), though it is unclear how accurately they discriminate highly-similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (N=87) viewed pairs of face images of three types: same-identity, general imposter pairs (different identities from similar demographic groups), and twin imposter pairs (identical twin siblings). The task was to determine whether the pairs showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45-degree profile, and frontal to 90-degree profile. Accuracy for discriminating matched-identity pairs from twin-imposters and general imposters was assessed in each viewpoint-disparity condition. Humans were more accurate for general-imposter pairs than twin-imposter pairs, and accuracy declined with increased viewpoint disparity between the images in a pair. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but with performance at or above all humans in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of nine image-pair types [range r=0.38 to r=0.63], suggesting general accord between the perception of face similarity by humans and the DCNN. These findings also contribute to our understanding of DCNN performance for discriminating high-resemblance faces, demonstrate that the DCNN performs at a level at or above humans, and suggest a degree of parity between the features used by humans and the DCNN.
2008.03444
Xinyi Xu Mr
Xinyi Xu and Tiancheng Huang and Pengfei Wei and Akshay Narayan and Tze-Yun Leong
Hierarchical Reinforcement Learning in StarCraft II with Human Expertise in Subgoals Selection
In Submission to AAMAS 2021
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work is inspired by recent advances in hierarchical reinforcement learning (HRL) (Barto and Mahadevan 2003; Hengst 2010), and improvements in learning efficiency from heuristic-based subgoal selection, experience replay (Lin 1993; Andrychowicz et al. 2017), and task-based curriculum learning (Bengio et al. 2009; Zaremba and Sutskever 2014). We propose a new method to integrate HRL, experience replay and effective subgoal selection through an implicit curriculum design based on human expertise to support sample-efficient learning and enhance interpretability of the agent's behavior. Human expertise remains indispensable in many areas such as medicine (Buch, Ahmed, and Maruthappu 2018) and law (Cath 2018), where interpretability, explainability and transparency are crucial in the decision making process, for ethical and legal reasons. Our method simplifies the complex task sets for achieving the overall objectives by decomposing them into subgoals at different levels of abstraction. Incorporating relevant subjective knowledge also significantly reduces the computational resources spent in exploration for RL, especially in high speed, changing, and complex environments where the transition dynamics cannot be effectively learned and modelled in a short time. Experimental results in two StarCraft II (SC2) (Vinyals et al. 2017) minigames demonstrate that our method can achieve better sample efficiency than flat and end-to-end RL methods, and provides an effective method for explaining the agent's performance.
[ { "created": "Sat, 8 Aug 2020 04:56:30 GMT", "version": "v1" }, { "created": "Sat, 26 Sep 2020 00:15:12 GMT", "version": "v2" }, { "created": "Tue, 29 Sep 2020 01:15:05 GMT", "version": "v3" } ]
2020-09-30
[ [ "Xu", "Xinyi", "" ], [ "Huang", "Tiancheng", "" ], [ "Wei", "Pengfei", "" ], [ "Narayan", "Akshay", "" ], [ "Leong", "Tze-Yun", "" ] ]
This work is inspired by recent advances in hierarchical reinforcement learning (HRL) (Barto and Mahadevan 2003; Hengst 2010), and improvements in learning efficiency from heuristic-based subgoal selection, experience replay (Lin 1993; Andrychowicz et al. 2017), and task-based curriculum learning (Bengio et al. 2009; Zaremba and Sutskever 2014). We propose a new method to integrate HRL, experience replay and effective subgoal selection through an implicit curriculum design based on human expertise to support sample-efficient learning and enhance interpretability of the agent's behavior. Human expertise remains indispensable in many areas such as medicine (Buch, Ahmed, and Maruthappu 2018) and law (Cath 2018), where interpretability, explainability and transparency are crucial in the decision making process, for ethical and legal reasons. Our method simplifies the complex task sets for achieving the overall objectives by decomposing them into subgoals at different levels of abstraction. Incorporating relevant subjective knowledge also significantly reduces the computational resources spent in exploration for RL, especially in high speed, changing, and complex environments where the transition dynamics cannot be effectively learned and modelled in a short time. Experimental results in two StarCraft II (SC2) (Vinyals et al. 2017) minigames demonstrate that our method can achieve better sample efficiency than flat and end-to-end RL methods, and provides an effective method for explaining the agent's performance.
0707.0568
Gesualdo Scutari
Gesualdo Scutari, D.P. Palomar, S. Barbarossa
Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems based on Game Theory-Part I: Nash Equilibria
Paper submitted to IEEE Transactions on Signal Processing, September 22, 2005. Revised March 14, 2007. Accepted June 5, 2007. To be published in IEEE Transactions on Signal Processing, 2007
null
10.1109/TSP.2007.907807
null
cs.IT cs.GT math.IT
null
In this two-part paper, we propose a decentralized strategy, based on a game-theoretic formulation, to find the optimal precoding/multiplexing matrices for a multipoint-to-multipoint communication system composed of a set of wideband links sharing the same physical resources, i.e., time and bandwidth. We assume, as optimality criterion, the achievement of a Nash equilibrium and consider two alternative optimization problems: 1) the competitive maximization of mutual information on each link, given constraints on the transmit power and on the spectral mask imposed by the radio spectrum regulatory bodies; and 2) the competitive maximization of the transmission rate, using finite order constellations, under the same constraints as above, plus a constraint on the average error probability. In Part I of the paper, we start by showing that the solution set of both noncooperative games is always nonempty and contains only pure strategies. Then, we prove that the optimal precoding/multiplexing scheme for both games leads to a channel diagonalizing structure, so that both matrix-valued problems can be recast in a simpler unified vector power control game, with no performance penalty. Thus, we study this simpler game and derive sufficient conditions ensuring the uniqueness of the Nash equilibrium. Interestingly, although derived under stronger constraints, incorporating for example spectral mask constraints, our uniqueness conditions have broader validity than previously known conditions. Finally, we assess the goodness of the proposed decentralized strategy by comparing its performance with the performance of a Pareto-optimal centralized scheme. To reach the Nash equilibria of the game, in Part II, we propose alternative distributed algorithms, along with their convergence conditions.
[ { "created": "Wed, 4 Jul 2007 10:33:25 GMT", "version": "v1" } ]
2009-11-13
[ [ "Scutari", "Gesualdo", "" ], [ "Palomar", "D. P.", "" ], [ "Barbarossa", "S.", "" ] ]
In this two-part paper, we propose a decentralized strategy, based on a game-theoretic formulation, to find the optimal precoding/multiplexing matrices for a multipoint-to-multipoint communication system composed of a set of wideband links sharing the same physical resources, i.e., time and bandwidth. We assume, as optimality criterion, the achievement of a Nash equilibrium and consider two alternative optimization problems: 1) the competitive maximization of mutual information on each link, given constraints on the transmit power and on the spectral mask imposed by the radio spectrum regulatory bodies; and 2) the competitive maximization of the transmission rate, using finite order constellations, under the same constraints as above, plus a constraint on the average error probability. In Part I of the paper, we start by showing that the solution set of both noncooperative games is always nonempty and contains only pure strategies. Then, we prove that the optimal precoding/multiplexing scheme for both games leads to a channel diagonalizing structure, so that both matrix-valued problems can be recast in a simpler unified vector power control game, with no performance penalty. Thus, we study this simpler game and derive sufficient conditions ensuring the uniqueness of the Nash equilibrium. Interestingly, although derived under stronger constraints, incorporating for example spectral mask constraints, our uniqueness conditions have broader validity than previously known conditions. Finally, we assess the goodness of the proposed decentralized strategy by comparing its performance with the performance of a Pareto-optimal centralized scheme. To reach the Nash equilibria of the game, in Part II, we propose alternative distributed algorithms, along with their convergence conditions.
1807.06614
Ayonga Hereid
Ayonga Hereid, Omar Harib, Ross Hartley, Yukai Gong and Jessy W. Grizzle
Rapid Trajectory Optimization Using C-FROST with Illustration on a Cassie-Series Dynamic Walking Biped
null
null
null
null
cs.RO cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the big attractions of low-dimensional models for gait design has been the ability to compute solutions rapidly, whereas one of their drawbacks has been the difficulty in mapping the solutions back to the target robot. This paper presents a set of tools for rapidly determining solutions for ``humanoids'' without removing or lumping degrees of freedom. The main tools are (1) C-FROST, an open-source C++ interface for FROST, a direct collocation optimization tool; and (2) multi-threading. The results will be illustrated on a 20-DoF floating-base model for a Cassie-series bipedal robot through numerical calculations and physical experiments.
[ { "created": "Tue, 17 Jul 2018 18:28:06 GMT", "version": "v1" }, { "created": "Fri, 20 Jul 2018 15:15:58 GMT", "version": "v2" }, { "created": "Fri, 15 Mar 2019 16:39:06 GMT", "version": "v3" } ]
2019-03-18
[ [ "Hereid", "Ayonga", "" ], [ "Harib", "Omar", "" ], [ "Hartley", "Ross", "" ], [ "Gong", "Yukai", "" ], [ "Grizzle", "Jessy W.", "" ] ]
One of the big attractions of low-dimensional models for gait design has been the ability to compute solutions rapidly, whereas one of their drawbacks has been the difficulty in mapping the solutions back to the target robot. This paper presents a set of tools for rapidly determining solutions for ``humanoids'' without removing or lumping degrees of freedom. The main tools are (1) C-FROST, an open-source C++ interface for FROST, a direct collocation optimization tool; and (2) multi-threading. The results will be illustrated on a 20-DoF floating-base model for a Cassie-series bipedal robot through numerical calculations and physical experiments.
1410.6796
Wojciech Mazurczyk
Wojciech Mazurczyk and Luca Caviglione
Steganography in Modern Smartphones and Mitigation Techniques
25 pages, 8 figures, 6 tables
null
null
null
cs.MM cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By offering sophisticated services and centralizing a huge volume of personal data, modern smartphones have changed the way we socialize, entertain and work. To this aim, they rely upon complex hardware/software frameworks, leading to a number of vulnerabilities, attacks and hazards that can be used to profile individuals or gather sensitive information. However, the majority of works evaluating the security of smartphones neglects steganography, which can be mainly used to: i) exfiltrate confidential data via camouflage methods, and ii) conceal valuable or personal information into innocent-looking carriers. Therefore, this paper surveys the state of the art of steganographic techniques for smartphones, with emphasis on methods developed over the period 2005 to the second quarter of 2014. The different approaches are grouped according to the portion of the device used to hide information, leading to three different covert channels, i.e., local, object and network. Also, it reviews the relevant approaches used to detect and mitigate steganographic attacks or threats. Lastly, it showcases the most popular software applications to embed secret data into carriers, as well as possible future directions.
[ { "created": "Wed, 27 Aug 2014 08:46:05 GMT", "version": "v1" } ]
2014-10-27
[ [ "Mazurczyk", "Wojciech", "" ], [ "Caviglione", "Luca", "" ] ]
By offering sophisticated services and centralizing a huge volume of personal data, modern smartphones have changed the way we socialize, entertain and work. To this aim, they rely upon complex hardware/software frameworks, leading to a number of vulnerabilities, attacks and hazards that can be used to profile individuals or gather sensitive information. However, the majority of works evaluating the security of smartphones neglects steganography, which can be mainly used to: i) exfiltrate confidential data via camouflage methods, and ii) conceal valuable or personal information into innocent-looking carriers. Therefore, this paper surveys the state of the art of steganographic techniques for smartphones, with emphasis on methods developed over the period 2005 to the second quarter of 2014. The different approaches are grouped according to the portion of the device used to hide information, leading to three different covert channels, i.e., local, object and network. Also, it reviews the relevant approaches used to detect and mitigate steganographic attacks or threats. Lastly, it showcases the most popular software applications to embed secret data into carriers, as well as possible future directions.
2006.08157
Yunwen Lei
Yunwen Lei and Yiming Ying
Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent
to appear in ICML 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, a considerable amount of work has been devoted to the study of the algorithmic stability and generalization of stochastic gradient descent (SGD). However, the existing stability analysis requires imposing restrictive assumptions on the boundedness of gradients and the strong smoothness and convexity of loss functions. In this paper, we provide a fine-grained analysis of stability and generalization for SGD by substantially relaxing these assumptions. Firstly, we establish stability and generalization for SGD by removing the existing bounded gradient assumptions. The key idea is the introduction of a new stability measure called on-average model stability, for which we develop novel bounds controlled by the risks of SGD iterates. This yields generalization bounds depending on the behavior of the best model, and leads to the first-ever-known fast bounds in the low-noise setting using the stability approach. Secondly, the smoothness assumption is relaxed by considering loss functions with Hölder continuous (sub)gradients, for which we show that optimal bounds are still achieved by balancing computation and stability. To the best of our knowledge, this gives the first-ever-known stability and generalization bounds for SGD with even non-differentiable loss functions. Finally, we study learning problems with (strongly) convex objectives but non-convex loss functions.
[ { "created": "Mon, 15 Jun 2020 06:30:19 GMT", "version": "v1" } ]
2020-06-16
[ [ "Lei", "Yunwen", "" ], [ "Ying", "Yiming", "" ] ]
Recently, a considerable amount of work has been devoted to the study of the algorithmic stability and generalization of stochastic gradient descent (SGD). However, the existing stability analysis requires imposing restrictive assumptions on the boundedness of gradients and the strong smoothness and convexity of loss functions. In this paper, we provide a fine-grained analysis of stability and generalization for SGD by substantially relaxing these assumptions. Firstly, we establish stability and generalization for SGD by removing the existing bounded gradient assumptions. The key idea is the introduction of a new stability measure called on-average model stability, for which we develop novel bounds controlled by the risks of SGD iterates. This yields generalization bounds depending on the behavior of the best model, and leads to the first-ever-known fast bounds in the low-noise setting using the stability approach. Secondly, the smoothness assumption is relaxed by considering loss functions with Hölder continuous (sub)gradients, for which we show that optimal bounds are still achieved by balancing computation and stability. To the best of our knowledge, this gives the first-ever-known stability and generalization bounds for SGD with even non-differentiable loss functions. Finally, we study learning problems with (strongly) convex objectives but non-convex loss functions.
2002.08347
Florian Tram\`er
Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry
On Adaptive Attacks to Adversarial Example Defenses
NeurIPS 2020
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that thirteen defenses recently published at ICLR, ICML and NeurIPS---and chosen for illustrative and pedagogical purposes---can be circumvented despite attempting to perform evaluations using adaptive attacks. While prior evaluation papers focused mainly on the end result---showing that a defense was ineffective---this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.
[ { "created": "Wed, 19 Feb 2020 18:50:29 GMT", "version": "v1" }, { "created": "Fri, 23 Oct 2020 12:07:41 GMT", "version": "v2" } ]
2020-10-26
[ [ "Tramer", "Florian", "" ], [ "Carlini", "Nicholas", "" ], [ "Brendel", "Wieland", "" ], [ "Madry", "Aleksander", "" ] ]
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that thirteen defenses recently published at ICLR, ICML and NeurIPS---and chosen for illustrative and pedagogical purposes---can be circumvented despite attempting to perform evaluations using adaptive attacks. While prior evaluation papers focused mainly on the end result---showing that a defense was ineffective---this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.
2104.04748
Zhengxu Hou
Zhengxu Hou, Bang Liu, Ruihui Zhao, Zijing Ou, Yafei Liu, Xi Chen, Yefeng Zheng
Imperfect also Deserves Reward: Multi-Level and Sequential Reward Modeling for Better Dialog Management
9 pages
NAACL 2021
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For task-oriented dialog systems, training a Reinforcement Learning (RL) based Dialog Management module suffers from low sample efficiency and slow convergence speed due to the sparse rewards in RL. To solve this problem, many strategies have been proposed to give proper rewards when training RL, but their rewards lack interpretability and cannot accurately estimate the distribution of state-action pairs in real dialogs. In this paper, we propose a multi-level reward modeling approach that factorizes a reward into a three-level hierarchy: domain, act, and slot. Based on inverse adversarial reinforcement learning, our designed reward model can provide more accurate and explainable reward signals for state-action pairs. Extensive evaluations show that our approach can be applied to a wide range of reinforcement learning-based dialog systems and significantly improves both the performance and the speed of convergence.
[ { "created": "Sat, 10 Apr 2021 12:20:23 GMT", "version": "v1" } ]
2021-04-13
[ [ "Hou", "Zhengxu", "" ], [ "Liu", "Bang", "" ], [ "Zhao", "Ruihui", "" ], [ "Ou", "Zijing", "" ], [ "Liu", "Yafei", "" ], [ "Chen", "Xi", "" ], [ "Zheng", "Yefeng", "" ] ]
For task-oriented dialog systems, training a Reinforcement Learning (RL) based Dialog Management module suffers from low sample efficiency and slow convergence due to the sparse rewards in RL. To solve this problem, many strategies have been proposed to provide proper rewards when training RL, but their rewards lack interpretability and cannot accurately estimate the distribution of state-action pairs in real dialogs. In this paper, we propose a multi-level reward modeling approach that factorizes a reward into a three-level hierarchy: domain, act, and slot. Based on inverse adversarial reinforcement learning, our designed reward model can provide more accurate and explainable reward signals for state-action pairs. Extensive evaluations show that our approach can be applied to a wide range of reinforcement learning-based dialog systems and significantly improves both the performance and the speed of convergence.
1706.02494
Rong Zhang
Rong Zhang and Lie-Liang Yang and Lajos Hanzo
Physical Layer Security of Generalised Pre-coded Spatial Modulation with Antenna Scrambling
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We advocate a novel physical layer security solution, aided by the proposed antenna scrambling, that is unique to our previously proposed GPSM scheme. The novelty and contribution of our paper lie in three aspects: 1/ principle: we introduce a `security key' generated at Alice that is unknown to both Bob and Eve, where the design goal is that the publicly unknown security key imposes a barrier only for Eve; 2/ approach: we achieve this by conveying useful information only through the activation of RA indices, which is in turn concealed by the unknown security key in the form of randomly scrambled symbols used in place of the conventional modulated symbols of the GPSM scheme; 3/ design: we consider both Circular Antenna Scrambling (CAS) and Gaussian Antenna Scrambling (GAS) in detail, and the resultant security capacity of both designs is quantified and compared.
[ { "created": "Thu, 8 Jun 2017 09:48:08 GMT", "version": "v1" } ]
2017-06-09
[ [ "Zhang", "Rong", "" ], [ "Yang", "Lie-Liang", "" ], [ "Hanzo", "Lajos", "" ] ]
We advocate a novel physical layer security solution, aided by the proposed antenna scrambling, that is unique to our previously proposed GPSM scheme. The novelty and contribution of our paper lie in three aspects: 1/ principle: we introduce a `security key' generated at Alice that is unknown to both Bob and Eve, where the design goal is that the publicly unknown security key imposes a barrier only for Eve; 2/ approach: we achieve this by conveying useful information only through the activation of RA indices, which is in turn concealed by the unknown security key in the form of randomly scrambled symbols used in place of the conventional modulated symbols of the GPSM scheme; 3/ design: we consider both Circular Antenna Scrambling (CAS) and Gaussian Antenna Scrambling (GAS) in detail, and the resultant security capacity of both designs is quantified and compared.
0705.0561
Jingchao Chen
Jing-Chao Chen
Iterative Rounding for the Closest String Problem
This paper has been published in abstract Booklet of CiE09
null
null
null
cs.DS cs.CC
http://creativecommons.org/licenses/by-nc-sa/3.0/
The closest string problem (CSP) is an NP-hard problem whose task is to find a string that minimizes the maximum Hamming distance to a given set of strings. It can be reduced to an integer program (IP). However, to date, no polynomial-time algorithm is known for IP. In 2004, Meneses et al. introduced a branch-and-bound (B&B) method for solving the IP problem. Their algorithm is not always efficient and has exponential time complexity. In this paper, we attempt to solve the IP problem efficiently with a greedy iterative rounding technique. The proposed algorithm runs in polynomial time and is much faster than the existing B&B IP approach for the CSP. If the number of strings is limited to 3, the algorithm is provably at most 1 away from the optimum. The empirical results show that in many cases we can find an exact solution. Even when we fail to find an exact solution, the solution found is very close to the exact one.
[ { "created": "Fri, 4 May 2007 03:01:42 GMT", "version": "v1" }, { "created": "Wed, 11 May 2011 00:18:55 GMT", "version": "v2" } ]
2011-05-12
[ [ "Chen", "Jing-Chao", "" ] ]
The closest string problem (CSP) is an NP-hard problem whose task is to find a string that minimizes the maximum Hamming distance to a given set of strings. It can be reduced to an integer program (IP). However, to date, no polynomial-time algorithm is known for IP. In 2004, Meneses et al. introduced a branch-and-bound (B&B) method for solving the IP problem. Their algorithm is not always efficient and has exponential time complexity. In this paper, we attempt to solve the IP problem efficiently with a greedy iterative rounding technique. The proposed algorithm runs in polynomial time and is much faster than the existing B&B IP approach for the CSP. If the number of strings is limited to 3, the algorithm is provably at most 1 away from the optimum. The empirical results show that in many cases we can find an exact solution. Even when we fail to find an exact solution, the solution found is very close to the exact one.
2208.05810
Minji Kim
Minji Kim, Seungkwan Lee, Jungseul Ok, Bohyung Han, Minsu Cho
Towards Sequence-Level Training for Visual Tracking
ECCV 2022
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the extensive adoption of machine learning for visual object tracking, recent learning-based approaches have largely overlooked the fact that visual tracking is a sequence-level task by nature; they rely heavily on frame-level training, which inevitably induces inconsistency between training and testing in terms of both data distributions and task objectives. This work introduces a sequence-level training strategy for visual tracking based on reinforcement learning and discusses how a sequence-level design of data sampling, learning objectives, and data augmentation can improve the accuracy and robustness of tracking algorithms. Our experiments on standard benchmarks including LaSOT, TrackingNet, and GOT-10k demonstrate that four representative tracking models, SiamRPN++, SiamAttn, TransT, and TrDiMP, consistently improve by incorporating the proposed methods in training without modifying architectures.
[ { "created": "Thu, 11 Aug 2022 13:15:36 GMT", "version": "v1" }, { "created": "Tue, 20 Sep 2022 12:46:53 GMT", "version": "v2" }, { "created": "Sun, 16 Oct 2022 16:05:12 GMT", "version": "v3" } ]
2022-10-18
[ [ "Kim", "Minji", "" ], [ "Lee", "Seungkwan", "" ], [ "Ok", "Jungseul", "" ], [ "Han", "Bohyung", "" ], [ "Cho", "Minsu", "" ] ]
Despite the extensive adoption of machine learning for visual object tracking, recent learning-based approaches have largely overlooked the fact that visual tracking is a sequence-level task by nature; they rely heavily on frame-level training, which inevitably induces inconsistency between training and testing in terms of both data distributions and task objectives. This work introduces a sequence-level training strategy for visual tracking based on reinforcement learning and discusses how a sequence-level design of data sampling, learning objectives, and data augmentation can improve the accuracy and robustness of tracking algorithms. Our experiments on standard benchmarks including LaSOT, TrackingNet, and GOT-10k demonstrate that four representative tracking models, SiamRPN++, SiamAttn, TransT, and TrDiMP, consistently improve by incorporating the proposed methods in training without modifying architectures.
1905.12261
Che-Han Chang
Che-Han Chang, Chun-Hsien Yu, Szu-Ying Chen, Edward Y. Chang
KG-GAN: Knowledge-Guided Generative Adversarial Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Can generative adversarial networks (GANs) generate roses of various colors given only roses of red petals as input? The answer is negative, since GANs' discriminator would reject all roses of unseen petal colors. In this study, we propose knowledge-guided GAN (KG-GAN) to fuse domain knowledge with the GAN framework. KG-GAN trains two generators; one learns from data whereas the other learns from knowledge with a constraint function. Experimental results demonstrate the effectiveness of KG-GAN in generating unseen flower categories from seen categories given textual descriptions of the unseen ones.
[ { "created": "Wed, 29 May 2019 07:55:46 GMT", "version": "v1" }, { "created": "Mon, 23 Sep 2019 09:48:33 GMT", "version": "v2" } ]
2019-09-24
[ [ "Chang", "Che-Han", "" ], [ "Yu", "Chun-Hsien", "" ], [ "Chen", "Szu-Ying", "" ], [ "Chang", "Edward Y.", "" ] ]
Can generative adversarial networks (GANs) generate roses of various colors given only roses of red petals as input? The answer is negative, since GANs' discriminator would reject all roses of unseen petal colors. In this study, we propose knowledge-guided GAN (KG-GAN) to fuse domain knowledge with the GAN framework. KG-GAN trains two generators; one learns from data whereas the other learns from knowledge with a constraint function. Experimental results demonstrate the effectiveness of KG-GAN in generating unseen flower categories from seen categories given textual descriptions of the unseen ones.
2108.13015
Pengguang Chen
Pengguang Chen, Yixin Chen, Shu Liu, Mingchang Yang, Jiaya Jia
Exploring and Improving Mobile Level Vision Transformers
10 pages; 5 figures; preprint
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the vision transformer structure at the mobile level in this paper, and find a dramatic performance drop. We analyze the reason behind this phenomenon, and propose a novel irregular patch embedding module and an adaptive patch fusion module to improve performance. We conjecture that vision transformer blocks (which consist of multi-head attention and a feed-forward network) are more suitable for handling high-level information than low-level features. The irregular patch embedding module extracts patches that contain rich high-level information with different receptive fields. The transformer blocks can obtain the most useful information from these irregular patches. The processed patches then pass through the adaptive patch merging module to produce the final features for the classifier. With our proposed improvements, the traditional uniform vision transformer structure can achieve state-of-the-art results at the mobile level. We improve the DeiT baseline by more than 9\% under mobile-level settings and surpass other transformer architectures such as Swin and CoaT by a large margin.
[ { "created": "Mon, 30 Aug 2021 06:42:49 GMT", "version": "v1" } ]
2021-08-31
[ [ "Chen", "Pengguang", "" ], [ "Chen", "Yixin", "" ], [ "Liu", "Shu", "" ], [ "Yang", "Mingchang", "" ], [ "Jia", "Jiaya", "" ] ]
We study the vision transformer structure at the mobile level in this paper, and find a dramatic performance drop. We analyze the reason behind this phenomenon, and propose a novel irregular patch embedding module and an adaptive patch fusion module to improve performance. We conjecture that vision transformer blocks (which consist of multi-head attention and a feed-forward network) are more suitable for handling high-level information than low-level features. The irregular patch embedding module extracts patches that contain rich high-level information with different receptive fields. The transformer blocks can obtain the most useful information from these irregular patches. The processed patches then pass through the adaptive patch merging module to produce the final features for the classifier. With our proposed improvements, the traditional uniform vision transformer structure can achieve state-of-the-art results at the mobile level. We improve the DeiT baseline by more than 9\% under mobile-level settings and surpass other transformer architectures such as Swin and CoaT by a large margin.
2201.02850
Rayson Laroca
Gabriel Salomon, Rayson Laroca, David Menotti
Image-based Automatic Dial Meter Reading in Unconstrained Scenarios
null
Measurement, vol. 204, p. 112025, 2022
10.1016/j.measurement.2022.112025
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The replacement of analog meters with smart meters is costly, laborious, and far from complete in developing countries. The Energy Company of Parana (Copel) (Brazil) performs more than 4 million meter readings (almost entirely of non-smart devices) per month, and we estimate that 850 thousand of them are from dial meters. Therefore, an image-based automatic reading system can reduce human errors, create a proof of reading, and enable the customers to perform the reading themselves through a mobile application. We propose novel approaches for Automatic Dial Meter Reading (ADMR) and introduce a new dataset for ADMR in unconstrained scenarios, called UFPR-ADMR-v2. Our best-performing method combines YOLOv4 with a novel regression approach (AngReg), and explores several postprocessing techniques. Compared to previous works, it decreased the Mean Absolute Error (MAE) from 1,343 to 129 and achieved a meter recognition rate (MRR) of 98.90% -- with an error tolerance of 1 Kilowatt-hour (kWh).
[ { "created": "Sat, 8 Jan 2022 16:03:46 GMT", "version": "v1" }, { "created": "Sun, 23 Oct 2022 11:56:38 GMT", "version": "v2" } ]
2022-10-25
[ [ "Salomon", "Gabriel", "" ], [ "Laroca", "Rayson", "" ], [ "Menotti", "David", "" ] ]
The replacement of analog meters with smart meters is costly, laborious, and far from complete in developing countries. The Energy Company of Parana (Copel) (Brazil) performs more than 4 million meter readings (almost entirely of non-smart devices) per month, and we estimate that 850 thousand of them are from dial meters. Therefore, an image-based automatic reading system can reduce human errors, create a proof of reading, and enable the customers to perform the reading themselves through a mobile application. We propose novel approaches for Automatic Dial Meter Reading (ADMR) and introduce a new dataset for ADMR in unconstrained scenarios, called UFPR-ADMR-v2. Our best-performing method combines YOLOv4 with a novel regression approach (AngReg), and explores several postprocessing techniques. Compared to previous works, it decreased the Mean Absolute Error (MAE) from 1,343 to 129 and achieved a meter recognition rate (MRR) of 98.90% -- with an error tolerance of 1 Kilowatt-hour (kWh).
1906.10495
Joshua Cook
Joshua Alan Cook
Approximating Unitary Preparations of Orthogonal Black Box States
A Class project Paper for CS395T Quantum Complexity Theory at UT Austin in Spring 2019 under Scott Aaronson
null
null
null
cs.CC quant-ph
http://creativecommons.org/licenses/by/4.0/
In this paper, I take a step toward answering the following question: for m different small circuits that compute m orthogonal n-qubit states, is there a small circuit that will map m computational basis states to these m states without leaving any auxiliary bits changed on any input? While this may seem simple, the constraint that auxiliary bits always be returned to 0 on any input (even ones besides the m we care about) led me to use sophisticated techniques. I give an approximation of such a unitary in the m = 2 case whose size is polynomial in the approximation error and the number of qubits n.
[ { "created": "Sun, 23 Jun 2019 01:21:52 GMT", "version": "v1" } ]
2019-06-26
[ [ "Cook", "Joshua Alan", "" ] ]
In this paper, I take a step toward answering the following question: for m different small circuits that compute m orthogonal n-qubit states, is there a small circuit that will map m computational basis states to these m states without leaving any auxiliary bits changed on any input? While this may seem simple, the constraint that auxiliary bits always be returned to 0 on any input (even ones besides the m we care about) led me to use sophisticated techniques. I give an approximation of such a unitary in the m = 2 case whose size is polynomial in the approximation error and the number of qubits n.
cs/0407066
Tentyukov Mikhail
M.Tentyukov, D.Fliegner, M.Frank, A.Onischenko, A.Retey, H.M.Staudenmaier and J.A.M.Vermaseren
ParFORM: Parallel Version of the Symbolic Manipulation Program FORM
5 pages, 4 Encapsulated postscript figures, LaTeX2e uses casc.cls (included). Presented at CASC'04 http://wwwmayr.in.tum.de/CASC2004/
null
null
TTP04-15
cs.SC cs.DC hep-ph
null
After an introduction to the sequential version of FORM and the mechanisms behind it, we report on the status of our parallelization project. We now have a parallel version of FORM running on cluster and SMP architectures. This version can be used to run arbitrary FORM programs in parallel.
[ { "created": "Fri, 30 Jul 2004 10:06:16 GMT", "version": "v1" } ]
2007-05-23
[ [ "Tentyukov", "M.", "" ], [ "Fliegner", "D.", "" ], [ "Frank", "M.", "" ], [ "Onischenko", "A.", "" ], [ "Retey", "A.", "" ], [ "Staudenmaier", "H. M.", "" ], [ "Vermaseren", "J. A. M.", "" ] ]
After an introduction to the sequential version of FORM and the mechanisms behind it, we report on the status of our parallelization project. We now have a parallel version of FORM running on cluster and SMP architectures. This version can be used to run arbitrary FORM programs in parallel.
2212.01545
Ruihao Zheng
Ruihao Zheng and Zhenkun Wang
A Generalized Scalarization Method for Evolutionary Multi-Objective Optimization
Correct some typos. (Accepted for presentation at Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI-23))
null
10.1609/aaai.v37i10.26474
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The decomposition-based multi-objective evolutionary algorithm (MOEA/D) transforms a multi-objective optimization problem (MOP) into a set of single-objective subproblems for collaborative optimization. Mismatches between subproblems and solutions can lead to severe performance degradation of MOEA/D. Most existing mismatch coping strategies only work when the $L_{\infty}$ scalarization is used. A mismatch coping strategy that can use any $L_{p}$ scalarization, even when facing MOPs with non-convex Pareto fronts, is of great significance for MOEA/D. This paper uses the global replacement (GR) as the backbone. We analyze how GR can no longer avoid mismatches when $L_{\infty}$ is replaced by another $L_{p}$ with $p\in [1,\infty)$, and find that the $L_p$-based ($1\leq p<\infty$) subproblems have inconsistently large preference regions. When $p$ is set to a small value, some middle subproblems have very small preference regions so that their direction vectors cannot pass through their corresponding preference regions. Therefore, we propose a generalized $L_p$ (G$L_p$) scalarization to ensure that the subproblem's direction vector passes through its preference region. Our theoretical analysis shows that GR can always avoid mismatches when using the G$L_p$ scalarization for any $p\geq 1$. The experimental studies on various MOPs conform to the theoretical analysis.
[ { "created": "Sat, 3 Dec 2022 05:55:04 GMT", "version": "v1" }, { "created": "Tue, 7 Nov 2023 00:46:59 GMT", "version": "v2" } ]
2023-11-08
[ [ "Zheng", "Ruihao", "" ], [ "Wang", "Zhenkun", "" ] ]
The decomposition-based multi-objective evolutionary algorithm (MOEA/D) transforms a multi-objective optimization problem (MOP) into a set of single-objective subproblems for collaborative optimization. Mismatches between subproblems and solutions can lead to severe performance degradation of MOEA/D. Most existing mismatch coping strategies only work when the $L_{\infty}$ scalarization is used. A mismatch coping strategy that can use any $L_{p}$ scalarization, even when facing MOPs with non-convex Pareto fronts, is of great significance for MOEA/D. This paper uses the global replacement (GR) as the backbone. We analyze how GR can no longer avoid mismatches when $L_{\infty}$ is replaced by another $L_{p}$ with $p\in [1,\infty)$, and find that the $L_p$-based ($1\leq p<\infty$) subproblems have inconsistently large preference regions. When $p$ is set to a small value, some middle subproblems have very small preference regions so that their direction vectors cannot pass through their corresponding preference regions. Therefore, we propose a generalized $L_p$ (G$L_p$) scalarization to ensure that the subproblem's direction vector passes through its preference region. Our theoretical analysis shows that GR can always avoid mismatches when using the G$L_p$ scalarization for any $p\geq 1$. The experimental studies on various MOPs conform to the theoretical analysis.
2108.09939
Mashiat Mostafa
Mashiat Mostafa and Faheem Hussain
Transcending Old Boundaries: Digital Afterlife in the Age of COVID-19
In proceedings of the 1st Virtual Conference on Implications of Information and Digital Technologies for Development, 2021
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
The primary objective of our exploratory research is to contribute to the ongoing conversation on Digital Afterlife through the lens of the Global South during the COVID-19 period. Digital Afterlife is fast becoming a challenge for our increasingly connected society, and the situation has worsened with the COVID-19 pandemic. This ongoing research addresses the disparity between the Global South, specifically countries like Indonesia, India and The Philippines, and the Global North in Digital Afterlife services such as policies and digital mourning services. By addressing the research question, 'What services and policy frameworks are available for Digital Afterlife in the Global South during COVID-19?', we aim to find the multitude of ways people in the Global South are managing their digital footprints. Our preliminary findings show that considerable research and death-related digital services and innovation have taken place during the pandemic. However, the overwhelming majority of these works are western-centric and deal mainly with post-mortem personal asset management; cultural nuances, socio-economic perspectives, religion, political climate, and regional infrastructures are mostly sidelined. We found significant disparity in Digital Afterlife product and service designs, which worsened during the global pandemic. Our goal is to collect further in-depth data within the three big ICT powerhouses of the Global South (Indonesia, India and The Philippines) and identify the challenges as well as the innovations around Digital Afterlife. We envision proposing a set of recommendations, based on our findings, for developing a more inclusive and equitable digital space in this pandemic-stricken world.
[ { "created": "Mon, 23 Aug 2021 05:21:03 GMT", "version": "v1" } ]
2021-08-24
[ [ "Mostafa", "Mashiat", "" ], [ "Hussain", "Faheem", "" ] ]
The primary objective of our exploratory research is to contribute to the ongoing conversation on Digital Afterlife through the lens of the Global South during the COVID-19 period. Digital Afterlife is fast becoming a challenge for our increasingly connected society, and the situation has worsened with the COVID-19 pandemic. This ongoing research addresses the disparity between the Global South, specifically countries like Indonesia, India and The Philippines, and the Global North in Digital Afterlife services such as policies and digital mourning services. By addressing the research question, 'What services and policy frameworks are available for Digital Afterlife in the Global South during COVID-19?', we aim to find the multitude of ways people in the Global South are managing their digital footprints. Our preliminary findings show that considerable research and death-related digital services and innovation have taken place during the pandemic. However, the overwhelming majority of these works are western-centric and deal mainly with post-mortem personal asset management; cultural nuances, socio-economic perspectives, religion, political climate, and regional infrastructures are mostly sidelined. We found significant disparity in Digital Afterlife product and service designs, which worsened during the global pandemic. Our goal is to collect further in-depth data within the three big ICT powerhouses of the Global South (Indonesia, India and The Philippines) and identify the challenges as well as the innovations around Digital Afterlife. We envision proposing a set of recommendations, based on our findings, for developing a more inclusive and equitable digital space in this pandemic-stricken world.
0803.2365
Ganesh Narayan
V Sriram, Ganesh Narayan, K Gopinath
SAFIUS - A secure and accountable filesystem over untrusted storage
11pt, 12 pages, 16 figures
Fourth International IEEE Security in Storage Workshop, 2007 - SISW '07. Publication Date: 27-27 Sept. 2007 On page(s): 34-45
10.1109/SISW.2007.7
null
cs.OS cs.CR cs.DC cs.NI cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe SAFIUS, a secure accountable file system that resides over untrusted storage. SAFIUS provides strong security guarantees like confidentiality, integrity, prevention from rollback attacks, and accountability. SAFIUS also enables read/write sharing of data and provides the standard UNIX-like interface for applications. To achieve accountability with good performance, it uses asynchronous signatures; to reduce the space required for storing these signatures, a novel signature pruning mechanism is used. SAFIUS has been implemented on a GNU/Linux based system by modifying OpenGFS. Preliminary performance studies show that SAFIUS has a tolerable overhead for providing secure storage: while it has an overhead of about 50% over OpenGFS in data-intensive workloads (due to the overhead of performing encryption/decryption in software), it is comparable (or better in some cases) to OpenGFS in metadata-intensive workloads.
[ { "created": "Sun, 16 Mar 2008 18:24:13 GMT", "version": "v1" } ]
2016-11-18
[ [ "Sriram", "V", "" ], [ "Narayan", "Ganesh", "" ], [ "Gopinath", "K", "" ] ]
We describe SAFIUS, a secure accountable file system that resides over untrusted storage. SAFIUS provides strong security guarantees like confidentiality, integrity, prevention from rollback attacks, and accountability. SAFIUS also enables read/write sharing of data and provides the standard UNIX-like interface for applications. To achieve accountability with good performance, it uses asynchronous signatures; to reduce the space required for storing these signatures, a novel signature pruning mechanism is used. SAFIUS has been implemented on a GNU/Linux based system by modifying OpenGFS. Preliminary performance studies show that SAFIUS has a tolerable overhead for providing secure storage: while it has an overhead of about 50% over OpenGFS in data-intensive workloads (due to the overhead of performing encryption/decryption in software), it is comparable (or better in some cases) to OpenGFS in metadata-intensive workloads.
2105.03389
EPTCS
Patricia Johann (Appalachian State University), Enrico Ghiorzi (Appalachian State University), Daniel Jeffries (Appalachian State University)
GADTs, Functoriality, Parametricity: Pick Two
In Proceedings LSFA 2021, arXiv:2204.03415
EPTCS 357, 2022, pp. 77-92
10.4204/EPTCS.357.6
null
cs.LO cs.PL
http://creativecommons.org/licenses/by/4.0/
GADTs can be represented either as their Church encodings a la Atkey, or as fixpoints a la Johann and Polonsky. While a GADT represented as its Church encoding need not support a map function satisfying the functor laws, the fixpoint representation of a GADT must support such a map function even to be well-defined. The two representations of a GADT thus need not be the same in general. This observation forces a choice of representation of data types in languages supporting GADTs. In this paper we show that choosing whether to represent data types as their Church encodings or as fixpoints determines whether or not a language supporting GADTs can have parametric models. This choice thus has important consequences for how we can program with, and reason about, these advanced data types.
[ { "created": "Fri, 7 May 2021 16:50:42 GMT", "version": "v1" }, { "created": "Tue, 7 Dec 2021 11:06:49 GMT", "version": "v2" }, { "created": "Fri, 8 Apr 2022 07:18:08 GMT", "version": "v3" } ]
2022-04-11
[ [ "Johann", "Patricia", "", "Appalachian State University" ], [ "Ghiorzi", "Enrico", "", "Appalachian State University" ], [ "Jeffries", "Daniel", "", "Appalachian State\n University" ] ]
GADTs can be represented either as their Church encodings a la Atkey, or as fixpoints a la Johann and Polonsky. While a GADT represented as its Church encoding need not support a map function satisfying the functor laws, the fixpoint representation of a GADT must support such a map function even to be well-defined. The two representations of a GADT thus need not be the same in general. This observation forces a choice of representation of data types in languages supporting GADTs. In this paper we show that choosing whether to represent data types as their Church encodings or as fixpoints determines whether or not a language supporting GADTs can have parametric models. This choice thus has important consequences for how we can program with, and reason about, these advanced data types.
1208.0944
Nader Ale Ebrahim
Nader Ale Ebrahim, Shamsuddin Ahmed, Zahari Taha
Establishing Virtual R&D Teams: Obliged Policy
6th IMC (International Management Conference). Tehran, Iran 2008
null
null
null
cs.OH
http://creativecommons.org/licenses/by/3.0/
In a global and technology-oriented world, the requirements that products and services have to fulfill are increasing and getting more complicated. Research and development (R&D) is becoming increasingly important in creating the knowledge that makes research and business more competitive. Companies are obliged to produce more rapidly, more effectively and more efficiently. In order to meet these requirements and to secure the viability of business processes, services and products, R&D teams need to access and retrieve information from as many sources as possible. From another perspective, virtual teams are important mechanisms for organizations seeking to leverage scarce resources across geographic and other boundaries; moreover, virtual collaboration has become vital for most organizations. This is particularly true in the context of designing new product and service innovation. Such collaboration often involves a network of partners located around the world. However, at the R&D project level, dealing with such distributed teams challenges both managers and specialists. In new product development, it is necessary to bring together growing and diverse capabilities and services, with the goal of achieving innovations of high quality through cooperation between suppliers and customers, service providers and scientific institutions. In this paper, based on a comprehensive literature review of recent articles, we first provide a primary definition and characterization of virtual R&D teams; next, the potential value created by virtual R&D teams for new product development is explored; and lastly, along with a guideline for future study, it is argued that the establishment of virtual R&D teams should be given consideration in the management of R&D projects.
[ { "created": "Sat, 4 Aug 2012 16:35:48 GMT", "version": "v1" } ]
2012-08-07
[ [ "Ebrahim", "Nader Ale", "" ], [ "Ahmed", "Shamsuddin", "" ], [ "Taha", "Zahari", "" ] ]
In a global and technology-oriented world, the requirements that products and services have to fulfill are increasing and getting more complicated. Research and development (R&D) is becoming increasingly important in creating the knowledge that makes research and business more competitive. Companies are obliged to produce more rapidly, more effectively and more efficiently. In order to meet these requirements and to secure the viability of business processes, services and products, R&D teams need to access and retrieve information from as many sources as possible. From another perspective, virtual teams are important mechanisms for organizations seeking to leverage scarce resources across geographic and other boundaries; moreover, virtual collaboration has become vital for most organizations. This is particularly true in the context of designing new product and service innovation. Such collaboration often involves a network of partners located around the world. However, at the R&D project level, dealing with such distributed teams challenges both managers and specialists. In new product development, it is necessary to bring together growing and diverse capabilities and services, with the goal of achieving innovations of high quality through cooperation between suppliers and customers, service providers and scientific institutions. In this paper, based on a comprehensive literature review of recent articles, we first provide a primary definition and characterization of virtual R&D teams; next, the potential value created by virtual R&D teams for new product development is explored; and lastly, along with a guideline for future study, it is argued that the establishment of virtual R&D teams should be given consideration in the management of R&D projects.
1802.00771
Vinay Namboodiri
Shashank Sharma and Vinay P. Namboodiri
No Modes left behind: Capturing the data distribution effectively using GANs
accepted to AAAI 2018 conference
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative adversarial networks (GANs), while very versatile in realistic image synthesis, are still sensitive to the input distribution. Given a set of data that has an imbalance in the distribution, the networks are susceptible to missing modes and not capturing the data distribution. While various methods have been tried to improve the training of GANs, these have not addressed the challenges of covering the full data distribution. Specifically, a generator is not penalized for missing a mode. We show that these methods are therefore still susceptible to not capturing the full data distribution. In this paper, we propose a simple approach that combines an encoder-based objective with novel loss functions for the generator and discriminator that improve the solution in terms of capturing missing modes. We validate that the proposed method results in substantial improvements through detailed analysis on toy and real datasets. The quantitative and qualitative results demonstrate that the proposed method addresses the problem of missing modes and improves the training of GANs.
[ { "created": "Fri, 2 Feb 2018 17:10:55 GMT", "version": "v1" } ]
2018-02-05
[ [ "Sharma", "Shashank", "" ], [ "Namboodiri", "Vinay P.", "" ] ]
Generative adversarial networks (GANs), while very versatile in realistic image synthesis, are still sensitive to the input distribution. Given a set of data that has an imbalance in the distribution, the networks are susceptible to missing modes and not capturing the data distribution. While various methods have been tried to improve the training of GANs, these have not addressed the challenges of covering the full data distribution. Specifically, a generator is not penalized for missing a mode. We show that these methods are therefore still susceptible to not capturing the full data distribution. In this paper, we propose a simple approach that combines an encoder-based objective with novel loss functions for the generator and discriminator that improve the solution in terms of capturing missing modes. We validate that the proposed method results in substantial improvements through detailed analysis on toy and real datasets. The quantitative and qualitative results demonstrate that the proposed method addresses the problem of missing modes and improves the training of GANs.
2108.00320
Stefan Konigorski
Alexander M. Zenner, Erwin B\"ottinger, Stefan Konigorski
StudyMe: A New Mobile App for User-Centric N-of-1 Trials
null
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
N-of-1 trials are multi-crossover self-experiments that allow individuals to systematically evaluate the effect of interventions on their personal health goals. Although several tools for N-of-1 trials exist, none support non-experts in conducting their own user-centric trials. In this study we present StudyMe, an open-source mobile application that is freely available from https://play.google.com/store/apps/details?id=health.studyu.me and offers users flexibility and guidance in configuring every component of their trials. We also present research that informed the development of StudyMe. Through an initial survey with 272 participants, we learned that individuals are interested in a variety of personal health aspects and have unique ideas on how to improve them. In an iterative, user-centered development process with intermediate user tests we developed StudyMe that also features an educational part to communicate N-of-1 trial concepts. A final empirical evaluation of StudyMe showed that all participants were able to create their own trials successfully using StudyMe and the app achieved a very good usability rating. Our findings suggest that StudyMe provides a significant step towards enabling individuals to apply a systematic science-oriented approach to personalize health-related interventions and behavior modifications in their everyday lives.
[ { "created": "Sat, 31 Jul 2021 20:43:36 GMT", "version": "v1" } ]
2021-08-03
[ [ "Zenner", "Alexander M.", "" ], [ "Böttinger", "Erwin", "" ], [ "Konigorski", "Stefan", "" ] ]
N-of-1 trials are multi-crossover self-experiments that allow individuals to systematically evaluate the effect of interventions on their personal health goals. Although several tools for N-of-1 trials exist, none support non-experts in conducting their own user-centric trials. In this study we present StudyMe, an open-source mobile application that is freely available from https://play.google.com/store/apps/details?id=health.studyu.me and offers users flexibility and guidance in configuring every component of their trials. We also present research that informed the development of StudyMe. Through an initial survey with 272 participants, we learned that individuals are interested in a variety of personal health aspects and have unique ideas on how to improve them. In an iterative, user-centered development process with intermediate user tests we developed StudyMe that also features an educational part to communicate N-of-1 trial concepts. A final empirical evaluation of StudyMe showed that all participants were able to create their own trials successfully using StudyMe and the app achieved a very good usability rating. Our findings suggest that StudyMe provides a significant step towards enabling individuals to apply a systematic science-oriented approach to personalize health-related interventions and behavior modifications in their everyday lives.
2205.02754
Mulong Luo
Mulong Luo, G. Edward Suh
Accelerating Path Planning for Autonomous Driving with Hardware-Assisted Memoization
null
null
null
null
cs.RO cs.AR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Path planning for autonomous driving with dynamic obstacles poses a challenge because it needs to perform a higher-dimensional search (with time-dimension) while still meeting real-time constraints. This paper proposes an algorithm-hardware co-optimization approach to accelerate path planning with high-dimensional search space. First, we reduce the time for a nearest neighbor search and collision detection by mapping nodes and obstacles to a lower-dimensional space and memoizing recent search results. Then, we propose a hardware extension for efficient memoization. The experimental results on a modern processor and a cycle-level simulator show that the hardware-assisted memoization significantly reduces the execution time of path planning.
[ { "created": "Thu, 5 May 2022 16:31:14 GMT", "version": "v1" }, { "created": "Fri, 27 May 2022 15:35:50 GMT", "version": "v2" } ]
2022-05-30
[ [ "Luo", "Mulong", "" ], [ "Suh", "G. Edward", "" ] ]
Path planning for autonomous driving with dynamic obstacles poses a challenge because it needs to perform a higher-dimensional search (with time-dimension) while still meeting real-time constraints. This paper proposes an algorithm-hardware co-optimization approach to accelerate path planning with high-dimensional search space. First, we reduce the time for a nearest neighbor search and collision detection by mapping nodes and obstacles to a lower-dimensional space and memoizing recent search results. Then, we propose a hardware extension for efficient memoization. The experimental results on a modern processor and a cycle-level simulator show that the hardware-assisted memoization significantly reduces the execution time of path planning.
2404.13798
Jensen Hwa
Jensen Hwa, Qingyu Zhao, Aditya Lahiri, Adnan Masood, Babak Salimi, Ehsan Adeli
Enforcing Conditional Independence for Fair Representation Learning and Causal Image Generation
To appear at the 2024 IEEE CVPR Workshop on Fair, Data-Efficient, and Trusted Computer Vision
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Conditional independence (CI) constraints are critical for defining and evaluating fairness in machine learning, as well as for learning unconfounded or causal representations. Traditional methods for ensuring fairness either blindly learn invariant features with respect to a protected variable (e.g., race when classifying sex from face images) or enforce CI relative to the protected attribute only on the model output (e.g., the sex label). Neither of these methods is effective in enforcing CI in high-dimensional feature spaces. In this paper, we focus on a nascent approach characterizing the CI constraint in terms of two Jensen-Shannon divergence terms, and we extend it to high-dimensional feature spaces using a novel dynamic sampling strategy. In doing so, we introduce a new training paradigm that can be applied to any encoder architecture. We are able to enforce conditional independence of the diffusion autoencoder latent representation with respect to any protected attribute under the equalized odds constraint and show that this approach enables causal image generation with controllable latent spaces. Our experimental results demonstrate that our approach can achieve high accuracy on downstream tasks while upholding equality of odds.
[ { "created": "Sun, 21 Apr 2024 23:34:45 GMT", "version": "v1" } ]
2024-04-23
[ [ "Hwa", "Jensen", "" ], [ "Zhao", "Qingyu", "" ], [ "Lahiri", "Aditya", "" ], [ "Masood", "Adnan", "" ], [ "Salimi", "Babak", "" ], [ "Adeli", "Ehsan", "" ] ]
Conditional independence (CI) constraints are critical for defining and evaluating fairness in machine learning, as well as for learning unconfounded or causal representations. Traditional methods for ensuring fairness either blindly learn invariant features with respect to a protected variable (e.g., race when classifying sex from face images) or enforce CI relative to the protected attribute only on the model output (e.g., the sex label). Neither of these methods is effective in enforcing CI in high-dimensional feature spaces. In this paper, we focus on a nascent approach characterizing the CI constraint in terms of two Jensen-Shannon divergence terms, and we extend it to high-dimensional feature spaces using a novel dynamic sampling strategy. In doing so, we introduce a new training paradigm that can be applied to any encoder architecture. We are able to enforce conditional independence of the diffusion autoencoder latent representation with respect to any protected attribute under the equalized odds constraint and show that this approach enables causal image generation with controllable latent spaces. Our experimental results demonstrate that our approach can achieve high accuracy on downstream tasks while upholding equality of odds.
2405.12945
Rishikesh Gajjala
Rishikesh Gajjala and Jayanth Ravi
Improved upper bounds for the Heilbronn's Problem for $k$-gons
To appear in the Canadian Conference on Computational Geometry (CCCG) 2024
null
null
null
cs.DM cs.CG math.CO
http://creativecommons.org/licenses/by/4.0/
The Heilbronn triangle problem asks for the placement of $n$ points in a unit square that maximizes the smallest area of a triangle formed by any three of those points. In $1972$, Schmidt considered a natural generalization of this problem. He asked for the placement of $n$ points in a unit square that maximizes the smallest area of the convex hull formed by any four of those points. He showed a lower bound of $\Omega(n^{-3/2})$, which was improved to $\Omega(n^{-3/2}\log{n})$ by Leffman. A trivial upper bound of $3/n$ could be obtained, and Schmidt asked if this could be improved asymptotically. However, despite several efforts, no asymptotic improvement over the trivial upper bound was known for the last $50$ years, and the problem started to get the tag of being notoriously hard. Szemer{\'e}di posed the question of whether one can, at least, improve the constant in this trivial upper bound. In this work, we answer this question by proving an upper bound of $2/n+o(1/n)$. We also extend our results to any convex hulls formed by $k\geq 4$ points.
[ { "created": "Tue, 21 May 2024 17:17:25 GMT", "version": "v1" } ]
2024-05-22
[ [ "Gajjala", "Rishikesh", "" ], [ "Ravi", "Jayanth", "" ] ]
The Heilbronn triangle problem asks for the placement of $n$ points in a unit square that maximizes the smallest area of a triangle formed by any three of those points. In $1972$, Schmidt considered a natural generalization of this problem. He asked for the placement of $n$ points in a unit square that maximizes the smallest area of the convex hull formed by any four of those points. He showed a lower bound of $\Omega(n^{-3/2})$, which was improved to $\Omega(n^{-3/2}\log{n})$ by Leffman. A trivial upper bound of $3/n$ could be obtained, and Schmidt asked if this could be improved asymptotically. However, despite several efforts, no asymptotic improvement over the trivial upper bound was known for the last $50$ years, and the problem started to get the tag of being notoriously hard. Szemer{\'e}di posed the question of whether one can, at least, improve the constant in this trivial upper bound. In this work, we answer this question by proving an upper bound of $2/n+o(1/n)$. We also extend our results to any convex hulls formed by $k\geq 4$ points.
1904.02322
Youshan Zhang
Youshan Zhang, Brian D. Davison
Modified Distribution Alignment for Domain Adaptation with Pre-trained Inception ResNet
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks have been widely used in computer vision. There are several well-trained deep neural networks for the ImageNet classification challenge, which has played a significant role in image recognition. However, little work has explored pre-trained neural networks for image recognition in domain adaptation. In this paper, we are the first to extract better-represented features from a pre-trained Inception ResNet model for domain adaptation. We then present a modified distribution alignment method for classification using the extracted features. We test our model using three benchmark datasets (Office+Caltech-10, Office-31, and Office-Home). Extensive experiments demonstrate significant improvements (4.8%, 5.5%, and 10%) in classification accuracy over the state-of-the-art.
[ { "created": "Thu, 4 Apr 2019 03:00:24 GMT", "version": "v1" }, { "created": "Thu, 18 Apr 2019 15:04:36 GMT", "version": "v2" } ]
2019-04-19
[ [ "Zhang", "Youshan", "" ], [ "Davison", "Brian D.", "" ] ]
Deep neural networks have been widely used in computer vision. There are several well-trained deep neural networks for the ImageNet classification challenge, which has played a significant role in image recognition. However, little work has explored pre-trained neural networks for image recognition in domain adaptation. In this paper, we are the first to extract better-represented features from a pre-trained Inception ResNet model for domain adaptation. We then present a modified distribution alignment method for classification using the extracted features. We test our model using three benchmark datasets (Office+Caltech-10, Office-31, and Office-Home). Extensive experiments demonstrate significant improvements (4.8%, 5.5%, and 10%) in classification accuracy over the state-of-the-art.
1904.05394
Marco Huber
Nina Schaaf, Marco F. Huber, and Johannes Maucher
Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization
8 pages, 18th IEEE International Conference on Machine Learning and Applications (ICMLA) 2019
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One obstacle that so far prevents the introduction of machine learning models primarily in critical areas is the lack of explainability. In this work, a practicable approach of gaining explainability of deep artificial neural networks (NN) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the accuracy of the NN, while it can be closely approximated by small decision trees. Tests with different data sets confirm that L1-orthogonal regularization yields models of lower complexity and at the same time higher fidelity compared to other regularizers.
[ { "created": "Wed, 10 Apr 2019 19:11:47 GMT", "version": "v1" }, { "created": "Thu, 3 Oct 2019 19:57:24 GMT", "version": "v2" } ]
2019-10-07
[ [ "Schaaf", "Nina", "" ], [ "Huber", "Marco F.", "" ], [ "Maucher", "Johannes", "" ] ]
One obstacle that so far prevents the introduction of machine learning models primarily in critical areas is the lack of explainability. In this work, a practicable approach of gaining explainability of deep artificial neural networks (NN) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the accuracy of the NN, while it can be closely approximated by small decision trees. Tests with different data sets confirm that L1-orthogonal regularization yields models of lower complexity and at the same time higher fidelity compared to other regularizers.
1706.06810
Jongpil Lee
Jongpil Lee, Juhan Nam
Multi-Level and Multi-Scale Feature Aggregation Using Sample-level Deep Convolutional Neural Networks for Music Classification
ICML Music Discovery Workshop 2017
null
null
null
cs.SD cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Music tag words that describe music audio by text have different levels of abstraction. Taking this issue into account, we propose a music classification approach that aggregates multi-level and multi-scale features using pre-trained feature extractors. In particular, the feature extractors are trained in sample-level deep convolutional neural networks using raw waveforms. We show that this approach achieves state-of-the-art results on several music classification datasets.
[ { "created": "Wed, 21 Jun 2017 09:57:24 GMT", "version": "v1" } ]
2017-06-22
[ [ "Lee", "Jongpil", "" ], [ "Nam", "Juhan", "" ] ]
Music tag words that describe music audio by text have different levels of abstraction. Taking this issue into account, we propose a music classification approach that aggregates multi-level and multi-scale features using pre-trained feature extractors. In particular, the feature extractors are trained in sample-level deep convolutional neural networks using raw waveforms. We show that this approach achieves state-of-the-art results on several music classification datasets.
1108.6123
Aleksandar Nikolov
Jean Bolot, Nadia Fawaz, S. Muthukrishnan, Aleksandar Nikolov, Nina Taft
Private Decayed Sum Estimation under Continual Observation
null
null
10.1145/2448496.2448530
null
cs.DS cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In monitoring applications, recent data is more important than distant data. How does this affect privacy of data analysis? We study a general class of data analyses - computing predicate sums - with privacy. Formally, we study the problem of estimating predicate sums {\em privately}, for sliding windows (and other well-known decay models of data, i.e. exponential and polynomial decay). We extend the recently proposed continual privacy model of Dwork et al. We present algorithms for decayed sum which are $\epsilon$-differentially private, and are accurate. For window and exponential decay sums, our algorithms are accurate up to additive $1/\epsilon$ and polylog terms in the range of the computed function; for polynomial decay sums which are technically more challenging because partial solutions do not compose easily, our algorithms incur additional relative error. Further, we show lower bounds, tight within polylog factors and tight with respect to the dependence on the probability of error.
[ { "created": "Wed, 31 Aug 2011 03:56:50 GMT", "version": "v1" }, { "created": "Sat, 3 Mar 2012 01:06:44 GMT", "version": "v2" } ]
2013-08-05
[ [ "Bolot", "Jean", "" ], [ "Fawaz", "Nadia", "" ], [ "Muthukrishnan", "S.", "" ], [ "Nikolov", "Aleksandar", "" ], [ "Taft", "Nina", "" ] ]
In monitoring applications, recent data is more important than distant data. How does this affect privacy of data analysis? We study a general class of data analyses - computing predicate sums - with privacy. Formally, we study the problem of estimating predicate sums {\em privately}, for sliding windows (and other well-known decay models of data, i.e. exponential and polynomial decay). We extend the recently proposed continual privacy model of Dwork et al. We present algorithms for decayed sum which are $\epsilon$-differentially private, and are accurate. For window and exponential decay sums, our algorithms are accurate up to additive $1/\epsilon$ and polylog terms in the range of the computed function; for polynomial decay sums which are technically more challenging because partial solutions do not compose easily, our algorithms incur additional relative error. Further, we show lower bounds, tight within polylog factors and tight with respect to the dependence on the probability of error.
2407.13490
Alexandre Bonlarron
Florian R\'egin, Elisabetta De Maria and Alexandre Bonlarron
Combining Constraint Programming Reasoning with Large Language Model Predictions
To appear at The 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Constraint Programming (CP) and Machine Learning (ML) face challenges in text generation due to CP's struggle with implementing "meaning" and ML's difficulty with structural constraints. This paper proposes a solution by combining both approaches and embedding a Large Language Model (LLM) in CP. The LLM handles word generation and meaning, while CP manages structural constraints. This approach builds on GenCP, an improved version of On-the-fly Constraint Programming Search (OTFS) using LLM-generated domains. Compared to Beam Search (BS), a standard NLP method, this combined approach (GenCP with LLM) is faster and produces better results, ensuring all constraints are satisfied. This fusion of CP and ML presents new possibilities for enhancing text generation under constraints.
[ { "created": "Thu, 18 Jul 2024 13:15:55 GMT", "version": "v1" } ]
2024-07-19
[ [ "Régin", "Florian", "" ], [ "De Maria", "Elisabetta", "" ], [ "Bonlarron", "Alexandre", "" ] ]
Constraint Programming (CP) and Machine Learning (ML) face challenges in text generation due to CP's struggle with implementing "meaning" and ML's difficulty with structural constraints. This paper proposes a solution by combining both approaches and embedding a Large Language Model (LLM) in CP. The LLM handles word generation and meaning, while CP manages structural constraints. This approach builds on GenCP, an improved version of On-the-fly Constraint Programming Search (OTFS) using LLM-generated domains. Compared to Beam Search (BS), a standard NLP method, this combined approach (GenCP with LLM) is faster and produces better results, ensuring all constraints are satisfied. This fusion of CP and ML presents new possibilities for enhancing text generation under constraints.
1602.02698
Yehia Elkhatib PhD
Yehia Elkhatib
Defining Cross-Cloud Systems
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have seen an increasing number of cross-cloud architectures, i.e. systems that span cloud provisioning boundaries. However, the cloud computing world still lacks any standards in terms of programming interfaces, which has a knock-on effect on the costs associated with interoperability and severely limits the flexibility and portability of applications and virtual infrastructures. This paper outlines the different types of cross-cloud systems, and the associated design decisions.
[ { "created": "Mon, 8 Feb 2016 19:13:32 GMT", "version": "v1" } ]
2016-02-09
[ [ "Elkhatib", "Yehia", "" ] ]
Recent years have seen an increasing number of cross-cloud architectures, i.e. systems that span cloud provisioning boundaries. However, the cloud computing world still lacks any standards in terms of programming interfaces, which has a knock-on effect on the costs associated with interoperability and severely limits the flexibility and portability of applications and virtual infrastructures. This paper outlines the different types of cross-cloud systems, and the associated design decisions.
2105.05517
Nicky Williams
Nicky Williams (LSL)
Towards exhaustive branch coverage with PathCrawler
null
2nd ACM/IEEE International Conference on Automation of Software Test AST 2021, May 2021, Madrid, Spain
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Branch coverage of source code is a very widely used test criterion. Moreover, branch coverage is a similar problem to line coverage, MC/DC and the coverage of assertion violations, certain runtime errors and various other types of test objective. Indeed, establishing that a large number of test objectives are unreachable, or conversely, providing the test inputs which reach them, is at the heart of many verification tasks. However, automatic test generation for exhaustive branch coverage remains an elusive goal: many modern tools obtain high coverage scores without being able to provide an explanation for why some branches are not covered, such as a demonstration that they are unreachable. Concolic test generation offers the promise of exhaustive coverage but covers paths more efficiently than branches. In this paper, I explain why, and propose different strategies to improve its performance on exhaustive branch coverage. A comparison of these strategies on examples of real code shows promising results.
[ { "created": "Wed, 12 May 2021 08:55:13 GMT", "version": "v1" } ]
2021-05-13
[ [ "Williams", "Nicky", "", "LSL" ] ]
Branch coverage of source code is a very widely used test criterion. Moreover, branch coverage is a similar problem to line coverage, MC/DC and the coverage of assertion violations, certain runtime errors and various other types of test objective. Indeed, establishing that a large number of test objectives are unreachable, or conversely, providing the test inputs which reach them, is at the heart of many verification tasks. However, automatic test generation for exhaustive branch coverage remains an elusive goal: many modern tools obtain high coverage scores without being able to provide an explanation for why some branches are not covered, such as a demonstration that they are unreachable. Concolic test generation offers the promise of exhaustive coverage but covers paths more efficiently than branches. In this paper, I explain why, and propose different strategies to improve its performance on exhaustive branch coverage. A comparison of these strategies on examples of real code shows promising results.
2209.09729
Andr\'as Kov\'acs
Andr\'as Kov\'acs
Staged Compilation with Two-Level Type Theory
null
null
10.1145/3547641
null
cs.PL cs.LO
http://creativecommons.org/licenses/by/4.0/
The aim of staged compilation is to enable metaprogramming in a way such that we have guarantees about the well-formedness of code output, and we can also mix together object-level and meta-level code in a concise and convenient manner. In this work, we observe that two-level type theory (2LTT), a system originally devised for the purpose of developing synthetic homotopy theory, also serves as a system for staged compilation with dependent types. 2LTT has numerous good properties for this use case: it has a concise specification, well-behaved model theory, and it supports a wide range of language features both at the object and the meta level. First, we give an overview of 2LTT's features and applications in staging. Then, we present a staging algorithm and prove its correctness. Our algorithm is "staging-by-evaluation", analogously to the technique of normalization-by-evaluation, in that staging is given by the evaluation of 2LTT syntax in a semantic domain. The staging algorithm together with its correctness constitutes a proof of strong conservativity of 2LTT over the object theory. To our knowledge, this is the first description of staged compilation which supports full dependent types and unrestricted staging for types.
[ { "created": "Tue, 20 Sep 2022 14:00:15 GMT", "version": "v1" } ]
2022-09-21
[ [ "Kovács", "András", "" ] ]
The aim of staged compilation is to enable metaprogramming in a way such that we have guarantees about the well-formedness of code output, and we can also mix together object-level and meta-level code in a concise and convenient manner. In this work, we observe that two-level type theory (2LTT), a system originally devised for the purpose of developing synthetic homotopy theory, also serves as a system for staged compilation with dependent types. 2LTT has numerous good properties for this use case: it has a concise specification, well-behaved model theory, and it supports a wide range of language features both at the object and the meta level. First, we give an overview of 2LTT's features and applications in staging. Then, we present a staging algorithm and prove its correctness. Our algorithm is "staging-by-evaluation", analogously to the technique of normalization-by-evaluation, in that staging is given by the evaluation of 2LTT syntax in a semantic domain. The staging algorithm together with its correctness constitutes a proof of strong conservativity of 2LTT over the object theory. To our knowledge, this is the first description of staged compilation which supports full dependent types and unrestricted staging for types.
2311.01279
Samie Mostafavi
Samie Mostafavi, Vishnu Narayanan Moothedath, Stefan R\"onngren, Neelabhro Roy, Gourav Prateek Sharma, Sangwon Seo, Manuel Olgu\'in Mu\~noz, James Gross
ExPECA: An Experimental Platform for Trustworthy Edge Computing Applications
null
null
10.1145/3583740.3626819
null
cs.NI eess.SP
http://creativecommons.org/licenses/by/4.0/
This paper presents ExPECA, an edge computing and wireless communication research testbed designed to tackle two pressing challenges: comprehensive end-to-end experimentation and high levels of experimental reproducibility. Leveraging OpenStack-based Chameleon Infrastructure (CHI) framework for its proven flexibility and ease of operation, ExPECA is located in a unique, isolated underground facility, providing a highly controlled setting for wireless experiments. The testbed is engineered to facilitate integrated studies of both communication and computation, offering a diverse array of Software-Defined Radios (SDR) and Commercial Off-The-Shelf (COTS) wireless and wired links, as well as containerized computational environments. We exemplify the experimental possibilities of the testbed using OpenRTiST, a latency-sensitive, bandwidth-intensive application, and analyze its performance. Lastly, we highlight an array of research domains and experimental setups that stand to gain from ExPECA's features, including closed-loop applications and time-sensitive networking.
[ { "created": "Thu, 2 Nov 2023 14:50:01 GMT", "version": "v1" } ]
2023-11-03
[ [ "Mostafavi", "Samie", "" ], [ "Moothedath", "Vishnu Narayanan", "" ], [ "Rönngren", "Stefan", "" ], [ "Roy", "Neelabhro", "" ], [ "Sharma", "Gourav Prateek", "" ], [ "Seo", "Sangwon", "" ], [ "Muñoz", "Manuel Olguín", "" ], [ "Gross", "James", "" ] ]
This paper presents ExPECA, an edge computing and wireless communication research testbed designed to tackle two pressing challenges: comprehensive end-to-end experimentation and high levels of experimental reproducibility. Leveraging OpenStack-based Chameleon Infrastructure (CHI) framework for its proven flexibility and ease of operation, ExPECA is located in a unique, isolated underground facility, providing a highly controlled setting for wireless experiments. The testbed is engineered to facilitate integrated studies of both communication and computation, offering a diverse array of Software-Defined Radios (SDR) and Commercial Off-The-Shelf (COTS) wireless and wired links, as well as containerized computational environments. We exemplify the experimental possibilities of the testbed using OpenRTiST, a latency-sensitive, bandwidth-intensive application, and analyze its performance. Lastly, we highlight an array of research domains and experimental setups that stand to gain from ExPECA's features, including closed-loop applications and time-sensitive networking.
2103.11520
Gabriel Bertocco
Gabriel Bertocco and Fernanda Andal\'o and Anderson Rocha
Unsupervised and self-adaptative techniques for cross-domain person re-identification
Published on IEEE Transactions on Information Forensics and Security
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task and, for this reason, most works in the prior art rely on supervised feature learning from a labeled dataset to match the same person in different views. However, it demands the time-consuming task of labeling the acquired data, prohibiting its fast deployment, especially in forensic scenarios. Unsupervised Domain Adaptation (UDA) emerges as a promising alternative, as it performs feature-learning adaptation from a model trained on a source to a target domain without identity-label annotation. However, most UDA-based algorithms rely upon a complex loss function with several hyper-parameters, which hinders the generalization to different scenarios. Moreover, as UDA depends on the translation between domains, it is important to select the most reliable data from the unseen domain, thus avoiding error propagation caused by noisy examples on the target data -- an often overlooked problem. In this sense, we propose a novel UDA-based ReID method that optimizes a simple loss function with only one hyper-parameter and that takes advantage of triplets of samples created by a new offline strategy based on the diversity of cameras within a cluster. This new strategy adapts the model and also regularizes it, avoiding overfitting on the target domain. We also introduce a new self-ensembling strategy, in which weights from different iterations are aggregated to create a final model combining knowledge from distinct moments of the adaptation. For evaluation, we consider three well-known deep learning architectures and combine them for final decision-making. The proposed method does not use person re-ranking nor any label on the target domain, and outperforms the state of the art, with a much simpler setup, on the Market to Duke, the challenging Market1501 to MSMT17, and Duke to MSMT17 adaptation scenarios.
[ { "created": "Sun, 21 Mar 2021 23:58:39 GMT", "version": "v1" }, { "created": "Fri, 26 Mar 2021 18:22:33 GMT", "version": "v2" }, { "created": "Mon, 7 Feb 2022 13:29:38 GMT", "version": "v3" } ]
2022-02-08
[ [ "Bertocco", "Gabriel", "" ], [ "Andaló", "Fernanda", "" ], [ "Rocha", "Anderson", "" ] ]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task and, for this reason, most works in the prior art rely on supervised feature learning from a labeled dataset to match the same person in different views. However, it demands the time-consuming task of labeling the acquired data, prohibiting its fast deployment, especially in forensic scenarios. Unsupervised Domain Adaptation (UDA) emerges as a promising alternative, as it performs feature-learning adaptation from a model trained on a source to a target domain without identity-label annotation. However, most UDA-based algorithms rely upon a complex loss function with several hyper-parameters, which hinders the generalization to different scenarios. Moreover, as UDA depends on the translation between domains, it is important to select the most reliable data from the unseen domain, thus avoiding error propagation caused by noisy examples on the target data -- an often overlooked problem. In this sense, we propose a novel UDA-based ReID method that optimizes a simple loss function with only one hyper-parameter and that takes advantage of triplets of samples created by a new offline strategy based on the diversity of cameras within a cluster. This new strategy adapts the model and also regularizes it, avoiding overfitting on the target domain. We also introduce a new self-ensembling strategy, in which weights from different iterations are aggregated to create a final model combining knowledge from distinct moments of the adaptation. For evaluation, we consider three well-known deep learning architectures and combine them for final decision-making. The proposed method does not use person re-ranking nor any label on the target domain, and outperforms the state of the art, with a much simpler setup, on the Market to Duke, the challenging Market1501 to MSMT17, and Duke to MSMT17 adaptation scenarios.
1107.5743
Siddhartha Jonnalagadda
Siddhartha Jonnalagadda, Philip Topham
NEMO: Extraction and normalization of organization names from PubMed affiliation strings
null
Siddhartha Jonnalagadda, Philip Topham. NEMO: Extraction and normalization of organization names from PubMed affiliation strings. Journal of Biomedical Discovery and Collaboration, 2010 Oct 4;5:50-75
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose NEMO, a system for extracting organization names from the affiliation field and normalizing them to a canonical organization name. Our parsing process involves multi-layered rule matching with multiple dictionaries. The system achieves more than 98% f-score in extracting organization names. Our normalization process involves clustering based on local sequence alignment metrics and local learning based on finding connected components. High precision was also observed in normalization. NEMO is the missing link in associating each biomedical paper and its authors to an organization name in its canonical form and the geopolitical location of the organization. This research could potentially help in analyzing large social networks of organizations for landscaping a particular topic, improving the performance of author disambiguation, adding weak links in the co-author network of authors, augmenting NLM's MARS system for correcting errors in OCR output of the affiliation field, and automatically indexing PubMed citations with the normalized organization name and country. Our system is available as a graphical user interface for download along with this paper.
[ { "created": "Thu, 28 Jul 2011 15:37:56 GMT", "version": "v1" } ]
2011-07-29
[ [ "Jonnalagadda", "Siddhartha", "" ], [ "Topham", "Philip", "" ] ]
We propose NEMO, a system for extracting organization names from the affiliation field and normalizing them to a canonical organization name. Our parsing process involves multi-layered rule matching with multiple dictionaries. The system achieves more than 98% f-score in extracting organization names. Our normalization process involves clustering based on local sequence alignment metrics and local learning based on finding connected components. High precision was also observed in normalization. NEMO is the missing link in associating each biomedical paper and its authors to an organization name in its canonical form and the geopolitical location of the organization. This research could potentially help in analyzing large social networks of organizations for landscaping a particular topic, improving the performance of author disambiguation, adding weak links in the co-author network of authors, augmenting NLM's MARS system for correcting errors in OCR output of the affiliation field, and automatically indexing PubMed citations with the normalized organization name and country. Our system is available as a graphical user interface for download along with this paper.
1807.01079
Anna Latour
Anna L.D. Latour, Behrouz Babaki, Siegfried Nijssen
Stochastic Constraint Optimization using Propagation on Ordered Binary Decision Diagrams
Eighth International Workshop on Statistical Relational AI, in conjunction with the 2018 International Joint Conference on Artificial Intelligence (IJCAI 2018)
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
A number of problems in relational Artificial Intelligence can be viewed as Stochastic Constraint Optimization Problems (SCOPs). These are constraint optimization problems that involve objectives or constraints with a stochastic component. Building on the recently proposed language SC-ProbLog for modeling SCOPs, we propose a new method for solving these problems. Earlier methods used Probabilistic Logic Programming (PLP) techniques to create Ordered Binary Decision Diagrams (OBDDs), which were decomposed into smaller constraints in order to exploit existing constraint programming (CP) solvers. We argue that this approach has as drawback that a decomposed representation of an OBDD does not guarantee domain consistency during search, and hence limits the efficiency of the solver. For the specific case of monotonic distributions, we suggest an alternative method for using CP in SCOP, based on the development of a new propagator; we show that this propagator is linear in the size of the OBDD, and has the potential to be more efficient than the decomposition method, as it maintains domain consistency.
[ { "created": "Tue, 3 Jul 2018 10:58:38 GMT", "version": "v1" } ]
2018-07-04
[ [ "Latour", "Anna L. D.", "" ], [ "Babaki", "Behrouz", "" ], [ "Nijssen", "Siegfried", "" ] ]
A number of problems in relational Artificial Intelligence can be viewed as Stochastic Constraint Optimization Problems (SCOPs). These are constraint optimization problems that involve objectives or constraints with a stochastic component. Building on the recently proposed language SC-ProbLog for modeling SCOPs, we propose a new method for solving these problems. Earlier methods used Probabilistic Logic Programming (PLP) techniques to create Ordered Binary Decision Diagrams (OBDDs), which were decomposed into smaller constraints in order to exploit existing constraint programming (CP) solvers. We argue that this approach has as drawback that a decomposed representation of an OBDD does not guarantee domain consistency during search, and hence limits the efficiency of the solver. For the specific case of monotonic distributions, we suggest an alternative method for using CP in SCOP, based on the development of a new propagator; we show that this propagator is linear in the size of the OBDD, and has the potential to be more efficient than the decomposition method, as it maintains domain consistency.
1711.00913
Mohit Dubey
Mohit Dubey, Garrett Kenyon, Nils Carlson, Austin Thresher
Does Phase Matter For Monaural Source Separation?
4 pages, 2 figures, NIPS format
null
null
null
cs.SD cs.NE eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The "cocktail party" problem of fully separating multiple sources from a single channel audio waveform remains unsolved. Current biological understanding of neural encoding suggests that phase information is preserved and utilized at every stage of the auditory pathway. However, current computational approaches primarily discard phase information in order to mask amplitude spectrograms of sound. In this paper, we seek to address whether preserving phase information in spectral representations of sound provides better results in monaural separation of vocals from a musical track by using a neurally plausible sparse generative model. Our results demonstrate that preserving phase information reduces artifacts in the separated tracks, as quantified by the signal to artifact ratio (GSAR). Furthermore, our proposed method achieves state-of-the-art performance for source separation, as quantified by a mean signal to interference ratio (GSIR) of 19.46.
[ { "created": "Thu, 2 Nov 2017 20:10:00 GMT", "version": "v1" } ]
2017-11-06
[ [ "Dubey", "Mohit", "" ], [ "Kenyon", "Garrett", "" ], [ "Carlson", "Nils", "" ], [ "Thresher", "Austin", "" ] ]
The "cocktail party" problem of fully separating multiple sources from a single channel audio waveform remains unsolved. Current biological understanding of neural encoding suggests that phase information is preserved and utilized at every stage of the auditory pathway. However, current computational approaches primarily discard phase information in order to mask amplitude spectrograms of sound. In this paper, we seek to address whether preserving phase information in spectral representations of sound provides better results in monaural separation of vocals from a musical track by using a neurally plausible sparse generative model. Our results demonstrate that preserving phase information reduces artifacts in the separated tracks, as quantified by the signal to artifact ratio (GSAR). Furthermore, our proposed method achieves state-of-the-art performance for source separation, as quantified by a mean signal to interference ratio (GSIR) of 19.46.
2401.02180
Ivo Sbalzarini
Johannes Pahlke, Ivo F. Sbalzarini
Proven Distributed Memory Parallelization of Particle Methods
40 pages, 4 figures
null
null
null
cs.DC cs.DS cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
We provide a mathematically proven parallelization scheme for particle methods on distributed-memory computer systems. Particle methods are a versatile and widely used class of algorithms for computer simulations and numerical predictions in various applications, ranging from continuum fluid dynamics and granular flows, using methods such as Smoothed Particle Hydrodynamics (SPH) and Discrete Element Methods (DEM) to Molecular Dynamics (MD) simulations in molecular modeling. Particle methods naturally lend themselves to implementation on parallel-computing hardware. So far, however, a mathematical proof of correctness and equivalence to sequential implementations was only available for shared-memory parallelism. Here, we leverage a formal definition of the algorithmic class of particle methods to provide a proven parallelization scheme for distributed-memory computers. We prove that these parallelized particle methods on distributed memory computers are formally equivalent to their sequential counterpart for a well-defined class of particle methods. Notably, the here analyzed parallelization scheme is well-known and commonly used. Our analysis is, therefore, of immediate practical relevance to existing and new parallel software implementations of particle methods and places them on solid theoretical grounds.
[ { "created": "Thu, 4 Jan 2024 10:22:26 GMT", "version": "v1" } ]
2024-01-05
[ [ "Pahlke", "Johannes", "" ], [ "Sbalzarini", "Ivo F.", "" ] ]
We provide a mathematically proven parallelization scheme for particle methods on distributed-memory computer systems. Particle methods are a versatile and widely used class of algorithms for computer simulations and numerical predictions in various applications, ranging from continuum fluid dynamics and granular flows, using methods such as Smoothed Particle Hydrodynamics (SPH) and Discrete Element Methods (DEM) to Molecular Dynamics (MD) simulations in molecular modeling. Particle methods naturally lend themselves to implementation on parallel-computing hardware. So far, however, a mathematical proof of correctness and equivalence to sequential implementations was only available for shared-memory parallelism. Here, we leverage a formal definition of the algorithmic class of particle methods to provide a proven parallelization scheme for distributed-memory computers. We prove that these parallelized particle methods on distributed memory computers are formally equivalent to their sequential counterpart for a well-defined class of particle methods. Notably, the here analyzed parallelization scheme is well-known and commonly used. Our analysis is, therefore, of immediate practical relevance to existing and new parallel software implementations of particle methods and places them on solid theoretical grounds.
1008.1043
Sergiy Vorobyov A.
Zengmao Chen, Cheng-Xiang Wang, Xuemin Hong, John Thompson, Sergiy A. Vorobyov, Xiaohu Ge, Hailin Xiao, and Feng Zhao
Aggregate Interference Modeling in Cognitive Radio Networks with Power and Contention Control
24 pages, 8 figures, submitted to IEEE Trans. Communications in July 2010
Z. Chen, C.-X. Wang, S.A. Vorobyov, and et al, "Aggregate interference modeling in cognitive radio networks with power and contention control," IEEE Trans. Communications, vol. 60, no. 2, pp. 456-468, Feb. 2012
10.1109/TCOMM.2011.012012.100426
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present an interference model for cognitive radio (CR) networks employing power control, contention control or hybrid power/contention control schemes. For the first case, a power control scheme is proposed to govern the transmission power of a CR node. For the second one, a contention control scheme at the media access control (MAC) layer, based on carrier sense multiple access with collision avoidance (CSMA/CA), is proposed to coordinate the operation of CR nodes with transmission requests. The probability density functions of the interference received at a primary receiver from a CR network are first derived numerically for these two cases. For the hybrid case, where power and contention controls are jointly adopted by a CR node to govern its transmission, the interference is analyzed and compared with that of the first two schemes by simulations. Then, the interference distributions under the first two control schemes are fitted by log-normal distributions with greatly reduced complexity. Moreover, the effect of a hidden primary receiver on the interference experienced at the receiver is investigated. It is demonstrated that both power and contention controls are effective approaches to alleviate the interference caused by CR networks. Some in-depth analysis of the impact of key parameters on the interference of CR networks is given via numerical studies as well.
[ { "created": "Thu, 5 Aug 2010 19:20:21 GMT", "version": "v1" } ]
2016-11-17
[ [ "Chen", "Zengmao", "" ], [ "Wang", "Cheng-Xiang", "" ], [ "Hong", "Xuemin", "" ], [ "Thompson", "John", "" ], [ "Vorobyov", "Sergiy A.", "" ], [ "Ge", "Xiaohu", "" ], [ "Xiao", "Hailin", "" ], [ "Zhao", "Feng", "" ] ]
In this paper, we present an interference model for cognitive radio (CR) networks employing power control, contention control or hybrid power/contention control schemes. For the first case, a power control scheme is proposed to govern the transmission power of a CR node. For the second one, a contention control scheme at the media access control (MAC) layer, based on carrier sense multiple access with collision avoidance (CSMA/CA), is proposed to coordinate the operation of CR nodes with transmission requests. The probability density functions of the interference received at a primary receiver from a CR network are first derived numerically for these two cases. For the hybrid case, where power and contention controls are jointly adopted by a CR node to govern its transmission, the interference is analyzed and compared with that of the first two schemes by simulations. Then, the interference distributions under the first two control schemes are fitted by log-normal distributions with greatly reduced complexity. Moreover, the effect of a hidden primary receiver on the interference experienced at the receiver is investigated. It is demonstrated that both power and contention controls are effective approaches to alleviate the interference caused by CR networks. Some in-depth analysis of the impact of key parameters on the interference of CR networks is given via numerical studies as well.
2210.01987
Artem Vysogorets
Dhrupad Bhardwaj, Julia Kempe, Artem Vysogorets, Angela M. Teng, and Evaristus C. Ezekwem
ImpressLearn: Continual Learning via Combined Task Impressions
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
This work proposes a new method to sequentially train deep neural networks on multiple tasks without suffering catastrophic forgetting, while endowing it with the capability to quickly adapt to unseen tasks. Starting from existing work on network masking (Wortsman et al., 2020), we show that simply learning a linear combination of a small number of task-specific supermasks (impressions) on a randomly initialized backbone network is sufficient to both retain accuracy on previously learned tasks, as well as achieve high accuracy on unseen tasks. In contrast to previous methods, we do not need to generate dedicated masks or contexts for each new task, instead leveraging transfer learning to keep per-task parameter overhead small. Our work illustrates the power of linearly combining individual impressions, each of which fares poorly in isolation, to achieve performance comparable to a dedicated mask. Moreover, even repeated impressions from the same task (homogeneous masks), when combined, can approach the performance of heterogeneous combinations if sufficiently many impressions are used. Our approach scales more efficiently than existing methods, often requiring orders of magnitude fewer parameters, and can function without modification even when task identity is missing. In addition, in the setting where task labels are not given at inference, our algorithm gives an often favorable alternative to the one-shot procedure used by Wortsman et al., 2020. We evaluate our method on a number of well-known image classification datasets and network architectures.
[ { "created": "Wed, 5 Oct 2022 02:28:25 GMT", "version": "v1" }, { "created": "Tue, 31 Jan 2023 19:52:37 GMT", "version": "v2" } ]
2023-02-02
[ [ "Bhardwaj", "Dhrupad", "" ], [ "Kempe", "Julia", "" ], [ "Vysogorets", "Artem", "" ], [ "Teng", "Angela M.", "" ], [ "Ezekwem", "Evaristus C.", "" ] ]
This work proposes a new method to sequentially train deep neural networks on multiple tasks without suffering catastrophic forgetting, while endowing it with the capability to quickly adapt to unseen tasks. Starting from existing work on network masking (Wortsman et al., 2020), we show that simply learning a linear combination of a small number of task-specific supermasks (impressions) on a randomly initialized backbone network is sufficient to both retain accuracy on previously learned tasks, as well as achieve high accuracy on unseen tasks. In contrast to previous methods, we do not need to generate dedicated masks or contexts for each new task, instead leveraging transfer learning to keep per-task parameter overhead small. Our work illustrates the power of linearly combining individual impressions, each of which fares poorly in isolation, to achieve performance comparable to a dedicated mask. Moreover, even repeated impressions from the same task (homogeneous masks), when combined, can approach the performance of heterogeneous combinations if sufficiently many impressions are used. Our approach scales more efficiently than existing methods, often requiring orders of magnitude fewer parameters, and can function without modification even when task identity is missing. In addition, in the setting where task labels are not given at inference, our algorithm gives an often favorable alternative to the one-shot procedure used by Wortsman et al., 2020. We evaluate our method on a number of well-known image classification datasets and network architectures.
1811.11660
Michiel de Bondt
Michiel de Bondt
A short and elegant proof of a theorem of J.-E. Pin
11 pages, major update with new proof
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a short proof of a theorem of J.-E. Pin (theorem 1.1 below), which can be found in his thesis. The part of the proof which is my own (not Pin's) is a complete replacement of the same part in an earlier version of this paper.
[ { "created": "Wed, 28 Nov 2018 16:36:15 GMT", "version": "v1" }, { "created": "Wed, 14 Sep 2022 11:52:28 GMT", "version": "v2" }, { "created": "Thu, 15 Sep 2022 11:44:46 GMT", "version": "v3" } ]
2022-09-16
[ [ "de Bondt", "Michiel", "" ] ]
We give a short proof of a theorem of J.-E. Pin (theorem 1.1 below), which can be found in his thesis. The part of the proof which is my own (not Pin's) is a complete replacement of the same part in an earlier version of this paper.
2407.05246
Yuxuan Yan
Yuxuan Yan, Na Lu, Ruofan Yan
Deep Online Probability Aggregation Clustering
19 pages,2 figures, conference
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Combining machine clustering with deep models has shown remarkable superiority in deep clustering. It modifies the data processing pipeline into two alternating phases: feature clustering and model training. However, such an alternating schedule may lead to instability and computational burden issues. We propose a centerless clustering algorithm called Probability Aggregation Clustering (PAC) to proactively adapt deep learning technologies, enabling easy deployment in online deep clustering. PAC circumvents the cluster center and aligns the probability space and distribution space by formulating clustering as an optimization problem with a novel objective function. Based on the computation mechanism of the PAC, we propose a general online probability aggregation module to perform stable and flexible feature clustering over mini-batch data and further construct a deep visual clustering framework deep PAC (DPAC). Extensive experiments demonstrate that PAC has superior clustering robustness and performance and DPAC remarkably outperforms the state-of-the-art deep clustering methods.
[ { "created": "Sun, 7 Jul 2024 03:31:00 GMT", "version": "v1" }, { "created": "Sat, 13 Jul 2024 06:58:10 GMT", "version": "v2" } ]
2024-07-16
[ [ "Yan", "Yuxuan", "" ], [ "Lu", "Na", "" ], [ "Yan", "Ruofan", "" ] ]
Combining machine clustering with deep models has shown remarkable superiority in deep clustering. It modifies the data processing pipeline into two alternating phases: feature clustering and model training. However, such an alternating schedule may lead to instability and computational burden issues. We propose a centerless clustering algorithm called Probability Aggregation Clustering (PAC) to proactively adapt deep learning technologies, enabling easy deployment in online deep clustering. PAC circumvents the cluster center and aligns the probability space and distribution space by formulating clustering as an optimization problem with a novel objective function. Based on the computation mechanism of the PAC, we propose a general online probability aggregation module to perform stable and flexible feature clustering over mini-batch data and further construct a deep visual clustering framework deep PAC (DPAC). Extensive experiments demonstrate that PAC has superior clustering robustness and performance and DPAC remarkably outperforms the state-of-the-art deep clustering methods.
1811.12240
Geoffrey Goodell
Geoff Goodell and Tomaso Aste
Can Cryptocurrencies Preserve Privacy and Comply with Regulations?
20 pages, 10 figures, 3 tables
null
10.3389/fbloc.2019.00004
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryptocurrencies offer an alternative to traditional methods of electronic value exchange, promising anonymous, cash-like electronic transfers, but in practice they fall short for several key reasons. We consider the false choice between total surveillance, as represented by banking as currently implemented by institutions, and impenetrable lawlessness, as represented by privacy-enhancing cryptocurrencies as currently deployed. We identify a range of alternatives between those two extremes, and we consider two potential compromise approaches that offer both the auditability required for regulators and the anonymity required for users.
[ { "created": "Thu, 29 Nov 2018 15:21:07 GMT", "version": "v1" }, { "created": "Mon, 18 Mar 2019 16:34:47 GMT", "version": "v2" }, { "created": "Tue, 7 May 2019 13:56:05 GMT", "version": "v3" } ]
2019-06-05
[ [ "Goodell", "Geoff", "" ], [ "Aste", "Tomaso", "" ] ]
Cryptocurrencies offer an alternative to traditional methods of electronic value exchange, promising anonymous, cash-like electronic transfers, but in practice they fall short for several key reasons. We consider the false choice between total surveillance, as represented by banking as currently implemented by institutions, and impenetrable lawlessness, as represented by privacy-enhancing cryptocurrencies as currently deployed. We identify a range of alternatives between those two extremes, and we consider two potential compromise approaches that offer both the auditability required for regulators and the anonymity required for users.
1711.02026
Arman Shojaeifard
Arman Shojaeifard, Kai-Kit Wong, Wei Yu, Gan Zheng, Jie Tang
Full-Duplex Cloud Radio Access Network: Stochastic Design and Analysis
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Full-duplex (FD) has emerged as a disruptive communications paradigm for enhancing the achievable spectral efficiency (SE), thanks to the recent major breakthroughs in self-interference (SI) mitigation. The FD versus half-duplex (HD) SE gain, in cellular networks, is however largely limited by the mutual-interference (MI) between the downlink (DL) and the uplink (UL). A potential remedy for tackling the MI bottleneck is through cooperative communications. This paper provides a stochastic design and analysis of FD enabled cloud radio access network (C-RAN) under the Poisson point process (PPP)-based abstraction model of multi-antenna radio units (RUs) and user equipments (UEs). We consider different disjoint and user-centric approaches towards the formation of finite clusters in the C-RAN. Contrary to most existing studies, we explicitly take into consideration non-isotropic fading channel conditions and finite-capacity fronthaul links. Accordingly, upper-bound expressions for the C-RAN DL and UL SEs, involving the statistics of all intended and interfering signals, are derived. The performance of the FD C-RAN is investigated through the proposed theoretical framework and Monte-Carlo (MC) simulations. The results indicate that significant FD versus HD C-RAN SE gains can be achieved, particularly in the presence of sufficient-capacity fronthaul links and advanced interference cancellation capabilities.
[ { "created": "Mon, 6 Nov 2017 17:32:13 GMT", "version": "v1" } ]
2017-11-07
[ [ "Shojaeifard", "Arman", "" ], [ "Wong", "Kai-Kit", "" ], [ "Yu", "Wei", "" ], [ "Zheng", "Gan", "" ], [ "Tang", "Jie", "" ] ]
Full-duplex (FD) has emerged as a disruptive communications paradigm for enhancing the achievable spectral efficiency (SE), thanks to the recent major breakthroughs in self-interference (SI) mitigation. The FD versus half-duplex (HD) SE gain, in cellular networks, is however largely limited by the mutual-interference (MI) between the downlink (DL) and the uplink (UL). A potential remedy for tackling the MI bottleneck is through cooperative communications. This paper provides a stochastic design and analysis of FD enabled cloud radio access network (C-RAN) under the Poisson point process (PPP)-based abstraction model of multi-antenna radio units (RUs) and user equipments (UEs). We consider different disjoint and user-centric approaches towards the formation of finite clusters in the C-RAN. Contrary to most existing studies, we explicitly take into consideration non-isotropic fading channel conditions and finite-capacity fronthaul links. Accordingly, upper-bound expressions for the C-RAN DL and UL SEs, involving the statistics of all intended and interfering signals, are derived. The performance of the FD C-RAN is investigated through the proposed theoretical framework and Monte-Carlo (MC) simulations. The results indicate that significant FD versus HD C-RAN SE gains can be achieved, particularly in the presence of sufficient-capacity fronthaul links and advanced interference cancellation capabilities.
2209.04053
Thomas Steinke
Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Thomas Steinke
Algorithms with More Granular Differential Privacy Guarantees
null
null
null
null
cs.CR cs.DS cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential privacy is often applied with a privacy parameter that is larger than the theory suggests is ideal; various informal justifications for tolerating large privacy parameters have been proposed. In this work, we consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis. In this framework, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person (i.e., all the attributes).
[ { "created": "Thu, 8 Sep 2022 22:43:50 GMT", "version": "v1" } ]
2022-09-12
[ [ "Ghazi", "Badih", "" ], [ "Kumar", "Ravi", "" ], [ "Manurangsi", "Pasin", "" ], [ "Steinke", "Thomas", "" ] ]
Differential privacy is often applied with a privacy parameter that is larger than the theory suggests is ideal; various informal justifications for tolerating large privacy parameters have been proposed. In this work, we consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis. In this framework, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person (i.e., all the attributes).
2105.07856
Edward Simmons
Edward Simmons
Correlations Between Learning Environments and Dropout Intention
null
null
10.13140/RG.2.2.28550.50245
null
cs.CY math.ST stat.TH
http://creativecommons.org/licenses/by/4.0/
This research compares learning environments to students' dropout intentions. Using statistics, I examined the data and the correlations between two articles to see how the two studies compared side by side. Learning environments and dropout intentions can both have varying effects on students. Both can determine whether a student does well or poorly in school, especially in math.
[ { "created": "Fri, 7 May 2021 10:08:47 GMT", "version": "v1" } ]
2021-06-01
[ [ "Simmons", "Edward", "" ] ]
This research compares learning environments to students' dropout intentions. Using statistics, I examined the data and the correlations between two articles to see how the two studies compared side by side. Learning environments and dropout intentions can both have varying effects on students. Both can determine whether a student does well or poorly in school, especially in math.
2107.12679
Mingbo Zhao
Wenlong Cheng and Mingbo Zhao and Zhiling Ye and Shuhang Gu
MFAGAN: A Compression Framework for Memory-Efficient On-Device Super-Resolution GAN
null
null
null
null
cs.AR cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Generative adversarial networks (GANs) have promoted remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory consumption of GAN-based SR (usually of the generators) causes performance degradation and higher energy consumption, hindering the deployment of GAN-based SR into resource-constrained mobile devices. In this paper, we propose a novel compression framework \textbf{M}ulti-scale \textbf{F}eature \textbf{A}ggregation Net based \textbf{GAN} (MFAGAN) for reducing the memory access cost of the generator. First, to overcome the memory explosion of dense connections, we utilize a memory-efficient multi-scale feature aggregation net as the generator. Second, for faster and more stable training, our method introduces the PatchGAN discriminator. Third, to balance the student discriminator and the compressed generator, we distill both the generator and the discriminator. Finally, we perform a hardware-aware neural architecture search (NAS) to find a specialized SubGenerator for the target mobile phone. Benefiting from these improvements, the proposed MFAGAN achieves up to \textbf{8.3}$\times$ memory saving and \textbf{42.9}$\times$ computation reduction, with only minor visual quality degradation, compared with ESRGAN. Empirical studies also show $\sim$\textbf{70} milliseconds latency on the Qualcomm Snapdragon 865 chipset.
[ { "created": "Tue, 27 Jul 2021 09:04:30 GMT", "version": "v1" } ]
2021-07-28
[ [ "Cheng", "Wenlong", "" ], [ "Zhao", "Mingbo", "" ], [ "Ye", "Zhiling", "" ], [ "Gu", "Shuhang", "" ] ]
Generative adversarial networks (GANs) have promoted remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory consumption of GAN-based SR (usually of the generators) causes performance degradation and higher energy consumption, hindering the deployment of GAN-based SR into resource-constrained mobile devices. In this paper, we propose a novel compression framework \textbf{M}ulti-scale \textbf{F}eature \textbf{A}ggregation Net based \textbf{GAN} (MFAGAN) for reducing the memory access cost of the generator. First, to overcome the memory explosion of dense connections, we utilize a memory-efficient multi-scale feature aggregation net as the generator. Second, for faster and more stable training, our method introduces the PatchGAN discriminator. Third, to balance the student discriminator and the compressed generator, we distill both the generator and the discriminator. Finally, we perform a hardware-aware neural architecture search (NAS) to find a specialized SubGenerator for the target mobile phone. Benefiting from these improvements, the proposed MFAGAN achieves up to \textbf{8.3}$\times$ memory saving and \textbf{42.9}$\times$ computation reduction, with only minor visual quality degradation, compared with ESRGAN. Empirical studies also show $\sim$\textbf{70} milliseconds latency on the Qualcomm Snapdragon 865 chipset.
2102.07886
Johannes Sedlmeir
Johannes Sedlmeir and Hans Ulrich Buhl and Gilbert Fridgen and Robert Keller
Recent Developments in Blockchain Technology and their Impact on Energy Consumption
This is a translated version of a German article published in Informatik Spektrum
null
10.1007/s00287-020-01321-z
null
cs.CR cs.DC
http://creativecommons.org/licenses/by/4.0/
The enormous power consumption of Bitcoin has led to undifferentiated discussions in science and practice about the sustainability of blockchain and distributed ledger technology in general. However, blockchain technology is far from homogeneous - not only with regard to its applications, which now go far beyond cryptocurrencies and have reached businesses and the public sector, but also with regard to its technical characteristics and, in particular, its power consumption. This paper summarizes the status quo of the power consumption of various implementations of blockchain technology, with special emphasis on the recent 'Bitcoin Halving' and so-called 'zk-rollups'. We argue that although Bitcoin and other proof-of-work blockchains do indeed consume a lot of power, alternative blockchain solutions with significantly lower power consumption are already available today, and new promising concepts are being tested that could further reduce in particular the power consumption of large blockchain networks in the near future. From this we conclude that although the criticism of Bitcoin's power consumption is legitimate, it should not be used to derive an energy problem of blockchain technology in general. In many cases in which processes can be digitised or improved with the help of more energy-efficient blockchain variants, one can even expect net energy savings.
[ { "created": "Mon, 15 Feb 2021 22:55:30 GMT", "version": "v1" } ]
2021-02-17
[ [ "Sedlmeir", "Johannes", "" ], [ "Buhl", "Hans Ulrich", "" ], [ "Fridgen", "Gilbert", "" ], [ "Keller", "Robert", "" ] ]
The enormous power consumption of Bitcoin has led to undifferentiated discussions in science and practice about the sustainability of blockchain and distributed ledger technology in general. However, blockchain technology is far from homogeneous - not only with regard to its applications, which now go far beyond cryptocurrencies and have reached businesses and the public sector, but also with regard to its technical characteristics and, in particular, its power consumption. This paper summarizes the status quo of the power consumption of various implementations of blockchain technology, with special emphasis on the recent 'Bitcoin Halving' and so-called 'zk-rollups'. We argue that although Bitcoin and other proof-of-work blockchains do indeed consume a lot of power, alternative blockchain solutions with significantly lower power consumption are already available today, and new promising concepts are being tested that could further reduce in particular the power consumption of large blockchain networks in the near future. From this we conclude that although the criticism of Bitcoin's power consumption is legitimate, it should not be used to derive an energy problem of blockchain technology in general. In many cases in which processes can be digitised or improved with the help of more energy-efficient blockchain variants, one can even expect net energy savings.
1603.09012
Laleh Jalali
Laleh Jalali and Ramesh Jain
A framework for event co-occurrence detection in event streams
null
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper shows that characterizing co-occurrence between events is an important but non-trivial and neglected aspect of discovering potential causal relationships in multimedia event streams. First, an introduction to the notion of event co-occurrence and its relation to co-occurrence pattern detection is given. Then, a finite state automaton extended with a time model and event parameterization is introduced to convert a high-level co-occurrence pattern definition into its corresponding pattern-matching automaton. Finally, a processing algorithm is applied to count the occurrence frequency of a collection of patterns with only one pass through the input event streams. The method proposed in this paper can be used for detecting co-occurrences both between events of one event stream (auto co-occurrence) and between events from multiple event streams (cross co-occurrence). Some fundamental results concerning the characterization of event co-occurrence are presented in the form of a visual co-occurrence matrix. Reusable causality rules can be extracted easily from the co-occurrence matrix and fed into various analysis tools, such as recommendation systems and complex event processing systems, for further analysis.
[ { "created": "Wed, 30 Mar 2016 01:16:37 GMT", "version": "v1" } ]
2016-03-31
[ [ "Jalali", "Laleh", "" ], [ "Jain", "Ramesh", "" ] ]
This paper shows that characterizing co-occurrence between events is an important but non-trivial and neglected aspect of discovering potential causal relationships in multimedia event streams. First, an introduction to the notion of event co-occurrence and its relation to co-occurrence pattern detection is given. Then, a finite state automaton extended with a time model and event parameterization is introduced to convert a high-level co-occurrence pattern definition into its corresponding pattern-matching automaton. Finally, a processing algorithm is applied to count the occurrence frequency of a collection of patterns with only one pass through the input event streams. The method proposed in this paper can be used for detecting co-occurrences both between events of one event stream (auto co-occurrence) and between events from multiple event streams (cross co-occurrence). Some fundamental results concerning the characterization of event co-occurrence are presented in the form of a visual co-occurrence matrix. Reusable causality rules can be extracted easily from the co-occurrence matrix and fed into various analysis tools, such as recommendation systems and complex event processing systems, for further analysis.
2408.05476
Jonas Oppenlaender
Jonas Oppenlaender, Hannah Johnston, Johanna Silvennoinen, Helena Barranha
Artworks Reimagined: Exploring Human-AI Co-Creation through Body Prompting
16 pages, 5 figures, 2 tables
null
null
null
cs.HC cs.AI cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image generation using generative artificial intelligence is a popular activity. However, it is almost exclusively performed in the privacy of an individual's home via typing on a keyboard. In this article, we explore body prompting as input for image generation. Body prompting extends interaction with generative AI beyond textual inputs to reconnect the creative act of image generation with the physical act of creating artworks. We implement this concept in an interactive art installation, Artworks Reimagined, designed to transform artworks via body prompting. We deployed the installation at an event with hundreds of visitors in a public and private setting. Our results from a sample of visitors (N=79) show that body prompting was well-received and provides an engaging and fun experience. We identify three distinct patterns of embodied interaction with the generative AI and present insights into participants' experience of body prompting and AI co-creation. We provide valuable recommendations for practitioners seeking to design interactive generative AI experiences in museums, galleries, and other public cultural spaces.
[ { "created": "Sat, 10 Aug 2024 08:05:59 GMT", "version": "v1" } ]
2024-08-13
[ [ "Oppenlaender", "Jonas", "" ], [ "Johnston", "Hannah", "" ], [ "Silvennoinen", "Johanna", "" ], [ "Barranha", "Helena", "" ] ]
Image generation using generative artificial intelligence is a popular activity. However, it is almost exclusively performed in the privacy of an individual's home via typing on a keyboard. In this article, we explore body prompting as input for image generation. Body prompting extends interaction with generative AI beyond textual inputs to reconnect the creative act of image generation with the physical act of creating artworks. We implement this concept in an interactive art installation, Artworks Reimagined, designed to transform artworks via body prompting. We deployed the installation at an event with hundreds of visitors in a public and private setting. Our results from a sample of visitors (N=79) show that body prompting was well-received and provides an engaging and fun experience. We identify three distinct patterns of embodied interaction with the generative AI and present insights into participants' experience of body prompting and AI co-creation. We provide valuable recommendations for practitioners seeking to design interactive generative AI experiences in museums, galleries, and other public cultural spaces.
2201.12590
Christopher Bl\"ocker
Christopher Bl\"ocker, Juan Carlos Nieves, Martin Rosvall
Map Equation Centrality: Community-aware Centrality based on the Map Equation
null
Appl Netw Sci 7, 56 (2022)
10.1007/s41109-022-00477-9
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To measure node importance, network scientists employ centrality scores that typically take a microscopic or macroscopic perspective, relying on node features or global network structure. However, traditional centrality measures such as degree centrality, betweenness centrality, or PageRank neglect the community structure found in real-world networks. To study node importance based on network flows from a mesoscopic perspective, we analytically derive a community-aware information-theoretic centrality score based on network flow and the coding principles behind the map equation: map equation centrality. Map equation centrality measures how much further we can compress the network's modular description by not coding for random walker transitions to the respective node, using an adapted coding scheme and determining node importance from a network flow-based point of view. The information-theoretic centrality measure can be determined from a node's local network context alone because changes to the coding scheme only affect other nodes in the same module. Map equation centrality is agnostic to the chosen network flow model and allows researchers to select the model that best reflects the dynamics of the process under study. Applied to synthetic networks, we highlight how our approach enables a more fine-grained differentiation between nodes than node-local or network-global measures. Predicting influential nodes for two different dynamical processes on real-world networks with traditional and other community-aware centrality measures, we find that activating nodes based on map equation centrality scores tends to create the largest cascades in a linear threshold model.
[ { "created": "Sat, 29 Jan 2022 13:47:27 GMT", "version": "v1" }, { "created": "Wed, 17 Aug 2022 07:26:00 GMT", "version": "v2" } ]
2022-08-18
[ [ "Blöcker", "Christopher", "" ], [ "Nieves", "Juan Carlos", "" ], [ "Rosvall", "Martin", "" ] ]
To measure node importance, network scientists employ centrality scores that typically take a microscopic or macroscopic perspective, relying on node features or global network structure. However, traditional centrality measures such as degree centrality, betweenness centrality, or PageRank neglect the community structure found in real-world networks. To study node importance based on network flows from a mesoscopic perspective, we analytically derive a community-aware information-theoretic centrality score based on network flow and the coding principles behind the map equation: map equation centrality. Map equation centrality measures how much further we can compress the network's modular description by not coding for random walker transitions to the respective node, using an adapted coding scheme and determining node importance from a network flow-based point of view. The information-theoretic centrality measure can be determined from a node's local network context alone because changes to the coding scheme only affect other nodes in the same module. Map equation centrality is agnostic to the chosen network flow model and allows researchers to select the model that best reflects the dynamics of the process under study. Applied to synthetic networks, we highlight how our approach enables a more fine-grained differentiation between nodes than node-local or network-global measures. Predicting influential nodes for two different dynamical processes on real-world networks with traditional and other community-aware centrality measures, we find that activating nodes based on map equation centrality scores tends to create the largest cascades in a linear threshold model.
1603.05739
Elliot Schumacher
Elliot Schumacher, Maxine Eskenazi
A Readability Analysis of Campaign Speeches from the 2016 US Presidential Campaign
null
null
null
CMU-LTI-16-001
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Readability is defined as the reading level of the speech, from grade 1 to grade 12. It results from the use of the REAP readability analysis (vocabulary - Collins-Thompson and Callan, 2004; syntax - Heilman et al., 2006, 2007), which uses the lexical contents and grammatical structure of the sentences in a document to predict the reading level. After analysis, results were grouped into the average readability of each candidate, the evolution of each candidate's speeches' readability over time, and the standard deviation, or how much each candidate varied their speech from one venue to another. For comparison, one speech from each of four past presidents and the Gettysburg Address were also analyzed.
[ { "created": "Fri, 18 Mar 2016 00:55:52 GMT", "version": "v1" } ]
2016-03-21
[ [ "Schumacher", "Elliot", "" ], [ "Eskenazi", "Maxine", "" ] ]
Readability is defined as the reading level of the speech, from grade 1 to grade 12. It results from the use of the REAP readability analysis (vocabulary - Collins-Thompson and Callan, 2004; syntax - Heilman et al., 2006, 2007), which uses the lexical contents and grammatical structure of the sentences in a document to predict the reading level. After analysis, results were grouped into the average readability of each candidate, the evolution of each candidate's speeches' readability over time, and the standard deviation, or how much each candidate varied their speech from one venue to another. For comparison, one speech from each of four past presidents and the Gettysburg Address were also analyzed.
2305.15055
Mayank Singh
Mayank Kumar Singh, Naoya Takahashi, Onoe Naoyuki
Iteratively Improving Speech Recognition and Voice Conversion
null
null
null
null
cs.SD cs.AI eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many existing works on voice conversion (VC) tasks use automatic speech recognition (ASR) models for ensuring linguistic consistency between source and converted samples. However, for low-data resource domains, training a high-quality ASR remains a challenging task. In this work, we propose a novel iterative way of improving both the ASR and VC models. We first train an ASR model, which is used to ensure content preservation while training a VC model. In the next iteration, the VC model is used as a data augmentation method to further fine-tune the ASR model and generalize it to diverse speakers. By iteratively leveraging the improved ASR model to train the VC model and vice versa, we experimentally show improvement in both models. Our proposed framework outperforms the ASR and one-shot VC baseline models on English singing and Hindi speech domains in subjective and objective evaluations in low-data resource settings.
[ { "created": "Wed, 24 May 2023 11:45:42 GMT", "version": "v1" } ]
2023-05-25
[ [ "Singh", "Mayank Kumar", "" ], [ "Takahashi", "Naoya", "" ], [ "Naoyuki", "Onoe", "" ] ]
Many existing works on voice conversion (VC) tasks use automatic speech recognition (ASR) models for ensuring linguistic consistency between source and converted samples. However, for low-data resource domains, training a high-quality ASR remains a challenging task. In this work, we propose a novel iterative way of improving both the ASR and VC models. We first train an ASR model, which is used to ensure content preservation while training a VC model. In the next iteration, the VC model is used as a data augmentation method to further fine-tune the ASR model and generalize it to diverse speakers. By iteratively leveraging the improved ASR model to train the VC model and vice versa, we experimentally show improvement in both models. Our proposed framework outperforms the ASR and one-shot VC baseline models on English singing and Hindi speech domains in subjective and objective evaluations in low-data resource settings.
2403.14292
Saad Noufel
Saad Noufel, Nadir Maaroufi, Mehdi Najib, Mohamed Bakhouya
HySim: An Efficient Hybrid Similarity Measure for Patch Matching in Image Inpainting
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Inpainting, the task of filling missing image regions, is crucial in various applications, such as medical imaging and remote sensing. The efficiency of trending data-driven approaches to image inpainting often requires extensive data preprocessing. In this sense, there is still a need for model-driven approaches when applications are constrained by data availability and quality, especially those related to time series forecasting using image inpainting techniques. This paper proposes an improved model-driven approach relying on patch-based techniques. Our approach deviates from the standard Sum of Squared Differences (SSD) similarity measure by introducing a Hybrid Similarity (HySim), which combines the strengths of the Chebyshev and Minkowski distances. This hybridization enhances patch selection, leading to high-quality inpainting results with reduced mismatch errors. Experimental results demonstrated the effectiveness of our approach against other model-driven techniques, such as diffusion- or patch-based approaches, showcasing its ability to achieve visually pleasing restorations.
[ { "created": "Thu, 21 Mar 2024 10:59:44 GMT", "version": "v1" } ]
2024-03-22
[ [ "Noufel", "Saad", "" ], [ "Maaroufi", "Nadir", "" ], [ "Najib", "Mehdi", "" ], [ "Bakhouya", "Mohamed", "" ] ]
Inpainting, the task of filling missing image regions, is crucial in various applications, such as medical imaging and remote sensing. The efficiency of trending data-driven approaches to image inpainting often requires extensive data preprocessing. In this sense, there is still a need for model-driven approaches when applications are constrained by data availability and quality, especially those related to time series forecasting using image inpainting techniques. This paper proposes an improved model-driven approach relying on patch-based techniques. Our approach deviates from the standard Sum of Squared Differences (SSD) similarity measure by introducing a Hybrid Similarity (HySim), which combines the strengths of the Chebyshev and Minkowski distances. This hybridization enhances patch selection, leading to high-quality inpainting results with reduced mismatch errors. Experimental results demonstrated the effectiveness of our approach against other model-driven techniques, such as diffusion- or patch-based approaches, showcasing its ability to achieve visually pleasing restorations.
1502.01877
Helio M. de Oliveira
H.M. de Oliveira and T.H. Falk
On Wavelet Decomposition over Finite Fields
4 pages, 1 figure. conference: XIX Simposio Brasileiro de Telecomunicacoes, 2001, Fortaleza, CE, Brazil
null
null
null
cs.IT math.IT math.NT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces some foundations of wavelets over Galois fields. Standard orthogonal finite-field wavelets (FF-wavelets), including FF-Haar and FF-Daubechies, are derived. Non-orthogonal FF-wavelets, such as B-splines over GF(p), are also considered. A few examples of multiresolution analysis over finite fields are presented, showing how to perform Laplacian pyramid filtering of finite-block-length sequences. An application of FF-wavelets to the design of spread-spectrum sequences is presented.
[ { "created": "Fri, 6 Feb 2015 13:09:24 GMT", "version": "v1" } ]
2020-06-01
[ [ "de Oliveira", "H. M.", "" ], [ "Falk", "T. H.", "" ] ]
This paper introduces some foundations of wavelets over Galois fields. Standard orthogonal finite-field wavelets (FF-wavelets), including FF-Haar and FF-Daubechies, are derived. Non-orthogonal FF-wavelets, such as B-splines over GF(p), are also considered. A few examples of multiresolution analysis over finite fields are presented, showing how to perform Laplacian pyramid filtering of finite-block-length sequences. An application of FF-wavelets to the design of spread-spectrum sequences is presented.
2304.11354
YuanFu Yang
Yuan-Fu Yang, Iuan-Kai Fang, Min Sun, Su-Chu Hsu
Medium. Permeation: SARS-COV-2 Painting Creation by Generative Model
Keywords: SARS-CoV-2; Generative Art; Graph Neural Network. arXiv admin note: text overlap with arXiv:1706.07068 by other authors
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
Airborne particles are the medium for SARS-CoV-2 to invade the human body. Light also reflects through suspended particles in the air, allowing people to see a colorful world. Impressionism is the most prominent art school that explores the spectrum of color created through the reflection of light. We find similarities of color structure and color stacking between the Impressionist paintings and the illustrations of the novel coronavirus by artists around the world. Through computerized data analysis of the main tones, the color layout, and the color stacking in the paintings of the Impressionists, we train computers to draw the novel coronavirus in an Impressionist style using a Generative Adversarial Network to create our artwork "Medium. Permeation". This artwork is composed of 196 randomly generated viral pictures arranged in a 14 by 14 matrix to form a large-scale painting. In addition, we have developed an extended work, Gradual Change, which is presented as video art. We use a Graph Neural Network to present the 196 paintings of the novel coronavirus to the audience one by one in a gradual manner. In front of the LED TV screen, the audience will find 196 virus paintings whose colors change continuously. This large video painting symbolizes that 196 countries worldwide have been invaded by the epidemic, and mutant viruses continuously emerge in every nation. The speed of vaccine development cannot keep up with the speed of virus mutation. This is also the first generative artwork in the world based on the common features and metaphorical symbiosis between Impressionist art and the novel coronavirus. This work warns us of the unprecedented challenges posed by SARS-CoV-2, implying that the world should not ignore an invisible enemy that uses the air as its medium.
[ { "created": "Sat, 22 Apr 2023 09:27:47 GMT", "version": "v1" } ]
2023-04-25
[ [ "Yang", "Yuan-Fu", "" ], [ "Fang", "Iuan-Kai", "" ], [ "Sun", "Min", "" ], [ "Hsu", "Su-Chu", "" ] ]
Airborne particles are the medium for SARS-CoV-2 to invade the human body. Light also reflects through suspended particles in the air, allowing people to see a colorful world. Impressionism is the most prominent art school that explores the spectrum of color created through the reflection of light. We find similarities of color structure and color stacking between the Impressionist paintings and the illustrations of the novel coronavirus by artists around the world. Through computerized data analysis of the main tones, the color layout, and the color stacking in the paintings of the Impressionists, we train computers to draw the novel coronavirus in an Impressionist style using a Generative Adversarial Network to create our artwork "Medium. Permeation". This artwork is composed of 196 randomly generated viral pictures arranged in a 14 by 14 matrix to form a large-scale painting. In addition, we have developed an extended work, Gradual Change, which is presented as video art. We use a Graph Neural Network to present the 196 paintings of the novel coronavirus to the audience one by one in a gradual manner. In front of the LED TV screen, the audience will find 196 virus paintings whose colors change continuously. This large video painting symbolizes that 196 countries worldwide have been invaded by the epidemic, and mutant viruses continuously emerge in every nation. The speed of vaccine development cannot keep up with the speed of virus mutation. This is also the first generative artwork in the world based on the common features and metaphorical symbiosis between Impressionist art and the novel coronavirus. This work warns us of the unprecedented challenges posed by SARS-CoV-2, implying that the world should not ignore an invisible enemy that uses the air as its medium.
1806.04584
Chujie Wang
Chujie Wang, Zhifeng Zhao, Qi Sun, Honggang Zhang
Deep Learning-based Intelligent Dual Connectivity for Mobility Management in Dense Network
5 pages, 9 figures, conference
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ultra-dense network deployment has been proposed as a key technique for achieving the capacity goals of the fifth-generation (5G) mobile communication system. However, the deployment of smaller cells inevitably leads to more frequent handovers, thus making mobility management more challenging and reducing the capacity gains offered by the dense network deployment. In order to fully reap these gains for mobile users in such a network environment, we propose an intelligent dual connectivity mechanism for mobility management through deep learning-based mobility prediction. We first use the LSTM (Long Short-Term Memory) algorithm, a deep learning algorithm, to learn every user equipment's (UE's) mobility pattern from its historical trajectories and predict its movement trends in the future. Based on the corresponding prediction results, the network judges whether a handover is required for the UE. For the handover case, a dual connection is established for the related UE. Thus, the UE can receive the radio signal from two base stations during the handover process. Simulation results verify that the proposed intelligent dual connectivity mechanism can significantly improve the quality of service of mobile users in the handover process while guaranteeing the network energy efficiency.
[ { "created": "Wed, 30 May 2018 07:59:12 GMT", "version": "v1" } ]
2018-06-13
[ [ "Wang", "Chujie", "" ], [ "Zhao", "Zhifeng", "" ], [ "Sun", "Qi", "" ], [ "Zhang", "Honggang", "" ] ]
Ultra-dense network deployment has been proposed as a key technique for achieving the capacity goals of the fifth-generation (5G) mobile communication system. However, the deployment of smaller cells inevitably leads to more frequent handovers, thus making mobility management more challenging and reducing the capacity gains offered by the dense network deployment. In order to fully reap these gains for mobile users in such a network environment, we propose an intelligent dual connectivity mechanism for mobility management through deep learning-based mobility prediction. We first use the LSTM (Long Short-Term Memory) algorithm, a deep learning algorithm, to learn every user equipment's (UE's) mobility pattern from its historical trajectories and predict its movement trends in the future. Based on the corresponding prediction results, the network judges whether a handover is required for the UE. For the handover case, a dual connection is established for the related UE. Thus, the UE can receive the radio signal from two base stations during the handover process. Simulation results verify that the proposed intelligent dual connectivity mechanism can significantly improve the quality of service of mobile users in the handover process while guaranteeing the network energy efficiency.
2209.01386
Chao Zhang
Chao Zhang, Zijian Tang, Taoming Guo, Jiaxin Lei, Jiaxin Xiao, Anhe Wang, Shuo Bai, Milin Zhang
SaleNet: A low-power end-to-end CNN accelerator for sustained attention level evaluation using EEG
5 pages, 4 figures, to be published in IEEE International Symposium on Circuits and Systems (ISCAS) 2022
null
null
null
cs.AR cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes SaleNet - an end-to-end convolutional neural network (CNN) for sustained attention level evaluation using prefrontal electroencephalogram (EEG). A bias-driven pruning method is proposed together with group convolution, global average pooling (GAP), near-zero pruning, weight clustering, and quantization for model compression, achieving a total compression ratio of 183.11x. The compressed SaleNet achieves a state-of-the-art subject-independent sustained attention level classification accuracy of 84.2% on the 6-subject EEG database recorded in this work. SaleNet is implemented on an Artix-7 FPGA with a competitive power consumption of 0.11 W and an energy efficiency of 8.19 GOps/W.
[ { "created": "Sat, 3 Sep 2022 09:49:37 GMT", "version": "v1" } ]
2022-09-07
[ [ "Zhang", "Chao", "" ], [ "Tang", "Zijian", "" ], [ "Guo", "Taoming", "" ], [ "Lei", "Jiaxin", "" ], [ "Xiao", "Jiaxin", "" ], [ "Wang", "Anhe", "" ], [ "Bai", "Shuo", "" ], [ "Zhang", "Milin", "" ] ]
This paper proposes SaleNet - an end-to-end convolutional neural network (CNN) for sustained attention level evaluation using prefrontal electroencephalogram (EEG). A bias-driven pruning method is proposed together with group convolution, global average pooling (GAP), near-zero pruning, weight clustering, and quantization for model compression, achieving a total compression ratio of 183.11x. The compressed SaleNet achieves a state-of-the-art subject-independent sustained attention level classification accuracy of 84.2% on the 6-subject EEG database recorded in this work. SaleNet is implemented on an Artix-7 FPGA with a competitive power consumption of 0.11 W and an energy efficiency of 8.19 GOps/W.
1906.00189
Tongliang Liu
Xiaobo Xia and Tongliang Liu and Nannan Wang and Bo Han and Chen Gong and Gang Niu and Masashi Sugiyama
Are Anchor Points Really Indispensable in Label-Noise Learning?
Accepted by NeurIPS 2019
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In label-noise learning, the \textit{noise transition matrix}, which denotes the probabilities that clean labels flip into noisy labels, plays a central role in building \textit{statistically consistent classifiers}. Existing theories have shown that the transition matrix can be learned by exploiting \textit{anchor points} (i.e., data points that belong to a specific class almost surely). However, when there are no anchor points, the transition matrix will be poorly learned, and current consistent classifiers will significantly degenerate. In this paper, without employing anchor points, we propose a \textit{transition-revision} ($T$-Revision) method to effectively learn transition matrices, leading to better classifiers. Specifically, to learn a transition matrix, we first initialize it by exploiting data points that are similar to anchor points, i.e., those with high \textit{noisy class posterior probabilities}. Then, we modify the initialized matrix by adding a \textit{slack variable}, which can be learned and validated together with the classifier using noisy data. Empirical results on benchmark-simulated and real-world label-noise datasets demonstrate that, without using exact anchor points, the proposed method is superior to state-of-the-art label-noise learning methods.
[ { "created": "Sat, 1 Jun 2019 09:14:54 GMT", "version": "v1" }, { "created": "Tue, 17 Dec 2019 02:23:29 GMT", "version": "v2" } ]
2019-12-18
[ [ "Xia", "Xiaobo", "" ], [ "Liu", "Tongliang", "" ], [ "Wang", "Nannan", "" ], [ "Han", "Bo", "" ], [ "Gong", "Chen", "" ], [ "Niu", "Gang", "" ], [ "Sugiyama", "Masashi", "" ] ]
In label-noise learning, the \textit{noise transition matrix}, which denotes the probabilities that clean labels flip into noisy labels, plays a central role in building \textit{statistically consistent classifiers}. Existing theories have shown that the transition matrix can be learned by exploiting \textit{anchor points} (i.e., data points that belong to a specific class almost surely). However, when there are no anchor points, the transition matrix will be poorly learned, and current consistent classifiers will significantly degenerate. In this paper, without employing anchor points, we propose a \textit{transition-revision} ($T$-Revision) method to effectively learn transition matrices, leading to better classifiers. Specifically, to learn a transition matrix, we first initialize it by exploiting data points that are similar to anchor points, i.e., those with high \textit{noisy class posterior probabilities}. Then, we modify the initialized matrix by adding a \textit{slack variable}, which can be learned and validated together with the classifier using noisy data. Empirical results on benchmark-simulated and real-world label-noise datasets demonstrate that, without using exact anchor points, the proposed method is superior to state-of-the-art label-noise learning methods.
2010.13816
Maarten Sap
Xinyao Ma, Maarten Sap, Hannah Rashkin, Yejin Choi
PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction
EMNLP 2020
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often portrayed as passive and powerless ("She daydreams about being a doctor") while a man is portrayed as more proactive and powerful ("He pursues his dream of being a doctor"). We formulate *Controllable Debiasing*, a new revision task that aims to rewrite a given text to correct the implicit and potentially undesirable bias in character portrayals. We then introduce PowerTransformer as an approach that debiases text through the lens of connotation frames (Sap et al., 2017), which encode pragmatic knowledge of implied power dynamics with respect to verb predicates. One key challenge of our task is the lack of parallel corpora. To address this challenge, we adopt an unsupervised approach using auxiliary supervision with related tasks such as paraphrasing and self-supervision based on a reconstruction loss, building on pretrained language models. Through comprehensive experiments based on automatic and human evaluations, we demonstrate that our approach outperforms ablations and existing methods from related tasks. Furthermore, we demonstrate the use of PowerTransformer as a step toward mitigating the well-documented gender bias in character portrayal in movie scripts.
[ { "created": "Mon, 26 Oct 2020 18:05:48 GMT", "version": "v1" } ]
2020-10-28
[ [ "Ma", "Xinyao", "" ], [ "Sap", "Maarten", "" ], [ "Rashkin", "Hannah", "" ], [ "Choi", "Yejin", "" ] ]
Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often portrayed as passive and powerless ("She daydreams about being a doctor") while a man is portrayed as more proactive and powerful ("He pursues his dream of being a doctor"). We formulate *Controllable Debiasing*, a new revision task that aims to rewrite a given text to correct the implicit and potentially undesirable bias in character portrayals. We then introduce PowerTransformer as an approach that debiases text through the lens of connotation frames (Sap et al., 2017), which encode pragmatic knowledge of implied power dynamics with respect to verb predicates. One key challenge of our task is the lack of parallel corpora. To address this challenge, we adopt an unsupervised approach using auxiliary supervision with related tasks such as paraphrasing and self-supervision based on a reconstruction loss, building on pretrained language models. Through comprehensive experiments based on automatic and human evaluations, we demonstrate that our approach outperforms ablations and existing methods from related tasks. Furthermore, we demonstrate the use of PowerTransformer as a step toward mitigating the well-documented gender bias in character portrayal in movie scripts.
1703.05446
Ke Gong
Ke Gong, Xiaodan Liang, Dongyu Zhang, Xiaohui Shen, Liang Lin
Look into Person: Self-supervised Structure-sensitive Learning and A New Benchmark for Human Parsing
Accepted to appear in CVPR 2017
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human parsing has recently attracted considerable research interest due to its broad application potential. However, existing datasets contain a limited number of images and annotations and lack variety in human appearance and coverage of challenging cases in unconstrained environments. In this paper, we introduce a new benchmark, "Look into Person (LIP)", that makes a significant advance in terms of scalability, diversity, and difficulty, a contribution that we feel is crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels, captured from a wider range of viewpoints, occlusions, and background complexities. Given these rich annotations, we perform detailed analyses of the leading human parsing approaches, gaining insights into the successes and failures of these methods. Furthermore, in contrast to existing efforts on improving feature discriminative capability, we solve human parsing by exploring a novel self-supervised structure-sensitive learning approach, which imposes human pose structures on parsing results without resorting to extra supervision (i.e., no need to specifically label human joints for model training). Our self-supervised learning framework can be injected into any advanced neural network to help incorporate rich high-level knowledge regarding human joints from a global perspective and improve the parsing results. Extensive evaluations on our LIP and the public PASCAL-Person-Part datasets demonstrate the superiority of our method.
[ { "created": "Thu, 16 Mar 2017 01:14:36 GMT", "version": "v1" }, { "created": "Fri, 28 Jul 2017 01:41:39 GMT", "version": "v2" } ]
2017-07-31
[ [ "Gong", "Ke", "" ], [ "Liang", "Xiaodan", "" ], [ "Zhang", "Dongyu", "" ], [ "Shen", "Xiaohui", "" ], [ "Lin", "Liang", "" ] ]
Human parsing has recently attracted considerable research interest due to its broad application potential. However, existing datasets contain a limited number of images and annotations and lack variety in human appearance and coverage of challenging cases in unconstrained environments. In this paper, we introduce a new benchmark, "Look into Person (LIP)", that makes a significant advance in terms of scalability, diversity, and difficulty, a contribution that we feel is crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels, captured from a wider range of viewpoints, occlusions, and background complexities. Given these rich annotations, we perform detailed analyses of the leading human parsing approaches, gaining insights into the successes and failures of these methods. Furthermore, in contrast to existing efforts on improving feature discriminative capability, we solve human parsing by exploring a novel self-supervised structure-sensitive learning approach, which imposes human pose structures on parsing results without resorting to extra supervision (i.e., no need to specifically label human joints for model training). Our self-supervised learning framework can be injected into any advanced neural network to help incorporate rich high-level knowledge regarding human joints from a global perspective and improve the parsing results. Extensive evaluations on our LIP and the public PASCAL-Person-Part datasets demonstrate the superiority of our method.
1110.2849
Karthick Jayaraman
Karthick Jayaraman, Vijay Ganesh, Mahesh Tripunitara, Martin C Rinard, Steve J. Chapin
ARBAC Policy for a Large Multi-National Bank
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Administrative role-based access control (ARBAC) is the first comprehensive administrative model proposed for role-based access control (RBAC). ARBAC has several features for designing highly expressive policies, but current work has not highlighted the utility of these expressive policies. In this report, we present a case study of designing an ARBAC policy for a bank comprising 18 branches. Using this case study, we provide an assessment of the features of ARBAC that are likely to be used in realistic policies.
[ { "created": "Thu, 13 Oct 2011 07:13:11 GMT", "version": "v1" } ]
2011-10-14
[ [ "Jayaraman", "Karthick", "" ], [ "Ganesh", "Vijay", "" ], [ "Tripunitara", "Mahesh", "" ], [ "Rinard", "Martin C", "" ], [ "Chapin", "Steve J.", "" ] ]
Administrative role-based access control (ARBAC) is the first comprehensive administrative model proposed for role-based access control (RBAC). ARBAC has several features for designing highly expressive policies, but current work has not highlighted the utility of these expressive policies. In this report, we present a case study of designing an ARBAC policy for a bank comprising 18 branches. Using this case study, we provide an assessment of the features of ARBAC that are likely to be used in realistic policies.
2301.03094
Jonas Witt
Jonas Witt, Stef Rasing, Sebastijan Duman\v{c}i\'c, Tias Guns and Claus-Christian Carbon
A Divide-Align-Conquer Strategy for Program Synthesis
11 pages, 9 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major bottleneck in search-based program synthesis is the exponentially growing search space which makes learning large programs intractable. Humans mitigate this problem by leveraging the compositional nature of the real world: In structured domains, a logical specification can often be decomposed into smaller, complementary solution programs. We show that compositional segmentation can be applied in the programming by examples setting to divide the search for large programs across multiple smaller program synthesis problems. For each example, we search for a decomposition into smaller units which maximizes the reconstruction accuracy in the output under a latent task program. A structural alignment of the constituent parts in the input and output leads to pairwise correspondences used to guide the program synthesis search. In order to align the input/output structures, we make use of the Structure-Mapping Theory (SMT), a formal model of human analogical reasoning which originated in the cognitive sciences. We show that decomposition-driven program synthesis with structural alignment outperforms Inductive Logic Programming (ILP) baselines on string transformation tasks even with minimal knowledge priors. Unlike existing methods, the predictive accuracy of our agent monotonically increases for additional examples and achieves an average time complexity of $\mathcal{O}(m)$ in the number $m$ of partial programs for highly structured domains such as strings. We extend this method to the complex setting of visual reasoning in the Abstraction and Reasoning Corpus (ARC) for which ILP methods were previously infeasible.
[ { "created": "Sun, 8 Jan 2023 19:10:55 GMT", "version": "v1" } ]
2023-01-10
[ [ "Witt", "Jonas", "" ], [ "Rasing", "Stef", "" ], [ "Dumančić", "Sebastijan", "" ], [ "Guns", "Tias", "" ], [ "Carbon", "Claus-Christian", "" ] ]
A major bottleneck in search-based program synthesis is the exponentially growing search space which makes learning large programs intractable. Humans mitigate this problem by leveraging the compositional nature of the real world: In structured domains, a logical specification can often be decomposed into smaller, complementary solution programs. We show that compositional segmentation can be applied in the programming by examples setting to divide the search for large programs across multiple smaller program synthesis problems. For each example, we search for a decomposition into smaller units which maximizes the reconstruction accuracy in the output under a latent task program. A structural alignment of the constituent parts in the input and output leads to pairwise correspondences used to guide the program synthesis search. In order to align the input/output structures, we make use of the Structure-Mapping Theory (SMT), a formal model of human analogical reasoning which originated in the cognitive sciences. We show that decomposition-driven program synthesis with structural alignment outperforms Inductive Logic Programming (ILP) baselines on string transformation tasks even with minimal knowledge priors. Unlike existing methods, the predictive accuracy of our agent monotonically increases for additional examples and achieves an average time complexity of $\mathcal{O}(m)$ in the number $m$ of partial programs for highly structured domains such as strings. We extend this method to the complex setting of visual reasoning in the Abstraction and Reasoning Corpus (ARC) for which ILP methods were previously infeasible.
cs/0212044
Sandor P. Fekete
Sandor P. Fekete, Henk Meijer, Andre Rohe, and Walter Tietze
Solving a "Hard" Problem to Approximate an "Easy" One: Heuristics for Maximum Matchings and Maximum Traveling Salesman Problems
20 pages, 14 figures, Latex, to appear in Journal of Experimental Algorithms, 2002
Journal of Experimental Algorithms, 7 (2002), article 11.
null
null
cs.DS
null
We consider geometric instances of the Maximum Weighted Matching Problem (MWMP) and the Maximum Traveling Salesman Problem (MTSP) with up to 3,000,000 vertices. Making use of a geometric duality relationship between MWMP, MTSP, and the Fermat-Weber-Problem (FWP), we develop a heuristic approach that yields in near-linear time solutions as well as upper bounds. Using various computational tools, we get solutions within considerably less than 1% of the optimum. An interesting feature of our approach is that, even though an FWP is hard to compute in theory and Edmonds' algorithm for maximum weighted matching yields a polynomial solution for the MWMP, the practical behavior is just the opposite, and we can solve the FWP with high accuracy in order to find a good heuristic solution for the MWMP.
[ { "created": "Mon, 16 Dec 2002 09:39:16 GMT", "version": "v1" } ]
2007-05-23
[ [ "Fekete", "Sandor P.", "" ], [ "Meijer", "Henk", "" ], [ "Rohe", "Andre", "" ], [ "Tietze", "Walter", "" ] ]
We consider geometric instances of the Maximum Weighted Matching Problem (MWMP) and the Maximum Traveling Salesman Problem (MTSP) with up to 3,000,000 vertices. Making use of a geometric duality relationship between MWMP, MTSP, and the Fermat-Weber-Problem (FWP), we develop a heuristic approach that yields in near-linear time solutions as well as upper bounds. Using various computational tools, we get solutions within considerably less than 1% of the optimum. An interesting feature of our approach is that, even though an FWP is hard to compute in theory and Edmonds' algorithm for maximum weighted matching yields a polynomial solution for the MWMP, the practical behavior is just the opposite, and we can solve the FWP with high accuracy in order to find a good heuristic solution for the MWMP.
1508.07504
Joseph Cheriyan
Joe Cheriyan, Zhihan Gao
Approximating (Unweighted) Tree Augmentation via Lift-and-Project, Part I: Stemless TAP
24 pages, 11 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Part I, we study a special case of the unweighted Tree Augmentation Problem (TAP) via the Lasserre (Sum of Squares) system. In the special case, we forbid so-called stems; these are a particular type of subtree configuration. For stemless TAP, we prove that the integrality ratio of an SDP relaxation (the Lasserre tightening of an LP relaxation) is $\leq \frac{3}{2}+\epsilon$, where $\epsilon>0$ can be any small constant. We obtain this result by designing a polynomial-time algorithm for stemless TAP that achieves an approximation guarantee of ($\frac32+\epsilon$) relative to the SDP relaxation. The algorithm is combinatorial and does not solve the SDP relaxation, but our analysis relies on the SDP relaxation. We generalize the combinatorial analysis of integral solutions from the previous literature to fractional solutions by identifying some properties of fractional solutions of the Lasserre system via the decomposition result of Karlin, Mathieu and Nguyen (IPCO 2011). Also, we present an example of stemless TAP such that the approximation guarantee of $\frac32$ is tight for the algorithm. In Part II of this paper, we extend the methods of Part I to prove the same results relative to the same SDP relaxation for TAP.
[ { "created": "Sat, 29 Aug 2015 20:48:09 GMT", "version": "v1" } ]
2015-09-01
[ [ "Cheriyan", "Joe", "" ], [ "Gao", "Zhihan", "" ] ]
In Part I, we study a special case of the unweighted Tree Augmentation Problem (TAP) via the Lasserre (Sum of Squares) system. In the special case, we forbid so-called stems; these are a particular type of subtree configuration. For stemless TAP, we prove that the integrality ratio of an SDP relaxation (the Lasserre tightening of an LP relaxation) is $\leq \frac{3}{2}+\epsilon$, where $\epsilon>0$ can be any small constant. We obtain this result by designing a polynomial-time algorithm for stemless TAP that achieves an approximation guarantee of ($\frac32+\epsilon$) relative to the SDP relaxation. The algorithm is combinatorial and does not solve the SDP relaxation, but our analysis relies on the SDP relaxation. We generalize the combinatorial analysis of integral solutions from the previous literature to fractional solutions by identifying some properties of fractional solutions of the Lasserre system via the decomposition result of Karlin, Mathieu and Nguyen (IPCO 2011). Also, we present an example of stemless TAP such that the approximation guarantee of $\frac32$ is tight for the algorithm. In Part II of this paper, we extend the methods of Part I to prove the same results relative to the same SDP relaxation for TAP.
1908.00485
Zhun Zhong
Zhun Zhong, Liang Zheng, Zhiming Luo, Shaozi Li and Yi Yang
Learning to Adapt Invariance in Memory for Person Re-identification
Extension of conference version: arXiv:1904.01990
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work considers the problem of unsupervised domain adaptation in person re-identification (re-ID), which aims to transfer knowledge from a source domain to a target domain. Existing methods primarily aim to reduce the inter-domain shift between the domains, but usually overlook the relations among target samples. This paper investigates the intra-domain variations of the target domain and proposes a novel adaptation framework w.r.t. three types of underlying invariance, i.e., Exemplar-Invariance, Camera-Invariance, and Neighborhood-Invariance. Specifically, an exemplar memory is introduced to store features of samples, which can effectively and efficiently enforce the invariance constraints over the global dataset. We further present the Graph-based Positive Prediction (GPP) method to explore reliable neighbors for the target domain, which is built upon the memory and is trained on the source samples. Experiments demonstrate that 1) the three invariance properties are indispensable for effective domain adaptation, 2) the memory plays a key role in implementing invariance learning and improves performance with limited extra computation cost, 3) GPP facilitates invariance learning and thus significantly improves the results, and 4) our approach produces new state-of-the-art adaptation accuracy on three large-scale re-ID benchmarks.
[ { "created": "Thu, 1 Aug 2019 16:20:16 GMT", "version": "v1" } ]
2019-08-02
[ [ "Zhong", "Zhun", "" ], [ "Zheng", "Liang", "" ], [ "Luo", "Zhiming", "" ], [ "Li", "Shaozi", "" ], [ "Yang", "Yi", "" ] ]
This work considers the problem of unsupervised domain adaptation in person re-identification (re-ID), which aims to transfer knowledge from a source domain to a target domain. Existing methods primarily aim to reduce the inter-domain shift between the domains, but usually overlook the relations among target samples. This paper investigates the intra-domain variations of the target domain and proposes a novel adaptation framework w.r.t. three types of underlying invariance, i.e., Exemplar-Invariance, Camera-Invariance, and Neighborhood-Invariance. Specifically, an exemplar memory is introduced to store features of samples, which can effectively and efficiently enforce the invariance constraints over the global dataset. We further present the Graph-based Positive Prediction (GPP) method to explore reliable neighbors for the target domain, which is built upon the memory and is trained on the source samples. Experiments demonstrate that 1) the three invariance properties are indispensable for effective domain adaptation, 2) the memory plays a key role in implementing invariance learning and improves performance with limited extra computation cost, 3) GPP facilitates invariance learning and thus significantly improves the results, and 4) our approach produces new state-of-the-art adaptation accuracy on three large-scale re-ID benchmarks.
2408.01962
Robert Wolfe
Robert Wolfe, Tanushree Mitra
The Implications of Open Generative Models in Human-Centered Data Science Work: A Case Study with Fact-Checking Organizations
Accepted at Artificial Intelligence, Ethics, and Society 2024
null
null
null
cs.HC cs.AI cs.CL cs.CY cs.ET
http://creativecommons.org/licenses/by-nc-sa/4.0/
Calls to use open generative language models in academic research have highlighted the need for reproducibility and transparency in scientific research. However, the impact of generative AI extends well beyond academia, as corporations and public interest organizations have begun integrating these models into their data science pipelines. We expand this lens to include the impact of open models on organizations, focusing specifically on fact-checking organizations, which use AI to observe and analyze large volumes of circulating misinformation, yet must also ensure the reproducibility and impartiality of their work. We wanted to understand where fact-checking organizations use open models in their data science pipelines; what motivates their use of open models or proprietary models; and how their use of open or proprietary models can inform research on the societal impact of generative AI. To answer these questions, we conducted an interview study with N=24 professionals at 20 fact-checking organizations on six continents. Based on these interviews, we offer a five-component conceptual model of where fact-checking organizations employ generative AI to support or automate parts of their data science pipeline, including Data Ingestion, Data Analysis, Data Retrieval, Data Delivery, and Data Sharing. We then provide taxonomies of fact-checking organizations' motivations for using open models and the limitations that prevent them from further adopting open models, finding that they prefer open models for Organizational Autonomy, Data Privacy and Ownership, Application Specificity, and Capability Transparency. However, they nonetheless use proprietary models due to perceived advantages in Performance, Usability, and Safety, as well as Opportunity Costs related to participation in emerging generative AI ecosystems. Our work provides a novel perspective on open models in data-driven organizations.
[ { "created": "Sun, 4 Aug 2024 08:41:48 GMT", "version": "v1" } ]
2024-08-06
[ [ "Wolfe", "Robert", "" ], [ "Mitra", "Tanushree", "" ] ]
Calls to use open generative language models in academic research have highlighted the need for reproducibility and transparency in scientific research. However, the impact of generative AI extends well beyond academia, as corporations and public interest organizations have begun integrating these models into their data science pipelines. We expand this lens to include the impact of open models on organizations, focusing specifically on fact-checking organizations, which use AI to observe and analyze large volumes of circulating misinformation, yet must also ensure the reproducibility and impartiality of their work. We wanted to understand where fact-checking organizations use open models in their data science pipelines; what motivates their use of open models or proprietary models; and how their use of open or proprietary models can inform research on the societal impact of generative AI. To answer these questions, we conducted an interview study with N=24 professionals at 20 fact-checking organizations on six continents. Based on these interviews, we offer a five-component conceptual model of where fact-checking organizations employ generative AI to support or automate parts of their data science pipeline, including Data Ingestion, Data Analysis, Data Retrieval, Data Delivery, and Data Sharing. We then provide taxonomies of fact-checking organizations' motivations for using open models and the limitations that prevent them from further adopting open models, finding that they prefer open models for Organizational Autonomy, Data Privacy and Ownership, Application Specificity, and Capability Transparency. However, they nonetheless use proprietary models due to perceived advantages in Performance, Usability, and Safety, as well as Opportunity Costs related to participation in emerging generative AI ecosystems. Our work provides a novel perspective on open models in data-driven organizations.
2311.03725
Arti Kumbhar
Arti Kumbhar, Amruta Chougule, Priya Lokhande, Saloni Navaghane, Aditi Burud, Saee Nimbalkar
DeepInspect: An AI-Powered Defect Detection for Manufacturing Industries
Research Paper for Defect Detection for Manufacturing Industries Using Deep Learning Techniques: 5 pages, 8 figures
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Utilizing Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs), our system introduces an innovative approach to defect detection in manufacturing. This technology excels in precisely identifying faults by extracting intricate details from product photographs, utilizing RNNs to detect evolving errors and generating synthetic defect data to bolster the model's robustness and adaptability across various defect scenarios. The project leverages a deep learning framework to automate real-time flaw detection in the manufacturing process. It harnesses extensive datasets of annotated images to discern complex defect patterns. This integrated system seamlessly fits into production workflows, thereby boosting efficiency and elevating product quality. As a result, it reduces waste and operational costs, ultimately enhancing market competitiveness.
[ { "created": "Tue, 7 Nov 2023 04:59:43 GMT", "version": "v1" }, { "created": "Wed, 8 Nov 2023 07:45:58 GMT", "version": "v2" } ]
2023-11-09
[ [ "Kumbhar", "Arti", "" ], [ "Chougule", "Amruta", "" ], [ "Lokhande", "Priya", "" ], [ "Navaghane", "Saloni", "" ], [ "Burud", "Aditi", "" ], [ "Nimbalkar", "Saee", "" ] ]
Utilizing Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs), our system introduces an innovative approach to defect detection in manufacturing. This technology excels in precisely identifying faults by extracting intricate details from product photographs, utilizing RNNs to detect evolving errors and generating synthetic defect data to bolster the model's robustness and adaptability across various defect scenarios. The project leverages a deep learning framework to automate real-time flaw detection in the manufacturing process. It harnesses extensive datasets of annotated images to discern complex defect patterns. This integrated system seamlessly fits into production workflows, thereby boosting efficiency and elevating product quality. As a result, it reduces waste and operational costs, ultimately enhancing market competitiveness.
2103.11297
Ryan Rossi
Camille Harris, Ryan A. Rossi, Sana Malik, Jane Hoffswell, Fan Du, Tak Yeon Lee, Eunyee Koh, Handong Zhao
Insight-centric Visualization Recommendation
null
null
null
null
cs.HC cs.AI cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visualization recommendation systems simplify exploratory data analysis (EDA) and make understanding data more accessible to users of all skill levels by automatically generating visualizations for users to explore. However, most existing visualization recommendation systems focus on ranking all visualizations into a single list or set of groups based on particular attributes or encodings. This global ranking makes it difficult and time-consuming for users to find the most interesting or relevant insights. To address these limitations, we introduce a novel class of visualization recommendation systems that automatically rank and recommend both groups of related insights as well as the most important insights within each group. Our proposed approach combines results from many different learning-based methods to discover insights automatically. A key advantage is that this approach generalizes to a wide variety of attribute types such as categorical, numerical, and temporal, as well as complex non-trivial combinations of these different attribute types. To evaluate the effectiveness of our approach, we implemented a new insight-centric visualization recommendation system, SpotLight, which generates and ranks annotated visualizations to explain each insight. We conducted a user study with 12 participants and two datasets which showed that users are able to quickly understand and find relevant insights in unfamiliar data.
[ { "created": "Sun, 21 Mar 2021 03:30:22 GMT", "version": "v1" } ]
2021-03-23
[ [ "Harris", "Camille", "" ], [ "Rossi", "Ryan A.", "" ], [ "Malik", "Sana", "" ], [ "Hoffswell", "Jane", "" ], [ "Du", "Fan", "" ], [ "Lee", "Tak Yeon", "" ], [ "Koh", "Eunyee", "" ], [ "Zhao", "Handong", "" ] ]
Visualization recommendation systems simplify exploratory data analysis (EDA) and make understanding data more accessible to users of all skill levels by automatically generating visualizations for users to explore. However, most existing visualization recommendation systems focus on ranking all visualizations into a single list or set of groups based on particular attributes or encodings. This global ranking makes it difficult and time-consuming for users to find the most interesting or relevant insights. To address these limitations, we introduce a novel class of visualization recommendation systems that automatically rank and recommend both groups of related insights as well as the most important insights within each group. Our proposed approach combines results from many different learning-based methods to discover insights automatically. A key advantage is that this approach generalizes to a wide variety of attribute types such as categorical, numerical, and temporal, as well as complex non-trivial combinations of these different attribute types. To evaluate the effectiveness of our approach, we implemented a new insight-centric visualization recommendation system, SpotLight, which generates and ranks annotated visualizations to explain each insight. We conducted a user study with 12 participants and two datasets which showed that users are able to quickly understand and find relevant insights in unfamiliar data.
2310.15624
Yan Lu
Yan Lu, Xinzhu Ma, Lei Yang, Tianzhu Zhang, Yating Liu, Qi Chu, Tong He, Yonghui Li, Wanli Ouyang
GUPNet++: Geometry Uncertainty Propagation Network for Monocular 3D Object Detection
18 pages, 9 figures
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geometry plays a significant role in monocular 3D object detection. It can be used to estimate object depth via the perspective projection between an object's physical size and its 2D projection in the image plane, which introduces mathematical priors into deep models. However, this projection process also introduces error amplification, where the error of the estimated height is amplified and reflected into the projected depth. This leads to unreliable depth inferences and also impairs training stability. To tackle this problem, we propose a novel Geometry Uncertainty Propagation Network (GUPNet++) that models geometry projection in a probabilistic manner. This ensures depth predictions are well-bounded and associated with a reasonable uncertainty. The significance of introducing such geometric uncertainty is two-fold: (1) it models the uncertainty propagation relationship of the geometry projection during training, improving the stability and efficiency of end-to-end model learning; (2) it can be used to derive a highly reliable confidence that indicates the quality of the 3D detection result, enabling more reliable detection inference. Experiments show that the proposed approach not only obtains state-of-the-art (SOTA) performance in image-based monocular 3D detection but also demonstrates superior efficacy with a simplified framework.
[ { "created": "Tue, 24 Oct 2023 08:45:15 GMT", "version": "v1" } ]
2023-10-25
[ [ "Lu", "Yan", "" ], [ "Ma", "Xinzhu", "" ], [ "Yang", "Lei", "" ], [ "Zhang", "Tianzhu", "" ], [ "Liu", "Yating", "" ], [ "Chu", "Qi", "" ], [ "He", "Tong", "" ], [ "Li", "Yonghui", "" ], [ "Ouyang", "Wanli", "" ] ]
Geometry plays a significant role in monocular 3D object detection. It can be used to estimate object depth via the perspective projection between an object's physical size and its 2D projection in the image plane, which introduces mathematical priors into deep models. However, this projection process also introduces error amplification, where the error of the estimated height is amplified and reflected into the projected depth. This leads to unreliable depth inferences and also impairs training stability. To tackle this problem, we propose a novel Geometry Uncertainty Propagation Network (GUPNet++) that models geometry projection in a probabilistic manner. This ensures depth predictions are well-bounded and associated with a reasonable uncertainty. The significance of introducing such geometric uncertainty is two-fold: (1) it models the uncertainty propagation relationship of the geometry projection during training, improving the stability and efficiency of end-to-end model learning; (2) it can be used to derive a highly reliable confidence that indicates the quality of the 3D detection result, enabling more reliable detection inference. Experiments show that the proposed approach not only obtains state-of-the-art (SOTA) performance in image-based monocular 3D detection but also demonstrates superior efficacy with a simplified framework.
2105.10016
Xinyu Liu
Xinyu Liu, Qi Zhou, Joy Arulraj, and Alessandro Orso
Testing DBMS Performance with Mutations
null
null
null
null
cs.DB cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Because database systems are a critical component of modern data-intensive applications, it is important to ensure that they operate correctly. To this end, developers extensively test these systems to eliminate bugs that negatively affect functionality. In addition to functional bugs, however, there is another important class of bugs: performance bugs. These bugs negatively affect the response time of a database system and can therefore affect the overall performance of the system. Despite their impact on end-user experience, performance bugs have received considerably less attention than functional bugs. In this paper, we present AMOEBA, a system for automatically detecting performance bugs in database systems. The core idea behind AMOEBA is to construct query pairs that are semantically equivalent to each other and then compare their response time on the same database system. If the queries exhibit a significant difference in their runtime performance, then the root cause is likely a performance bug in the system. We propose a novel set of structure and predicate mutation rules for constructing query pairs that are likely to uncover performance bugs. We introduce feedback mechanisms for improving the efficacy and computational efficiency of the tool. We evaluate AMOEBA on two widely-used DBMSs, namely PostgreSQL and CockroachDB. AMOEBA has discovered 20 previously-unknown performance bugs, among which developers have already confirmed 14 and fixed 4.
[ { "created": "Thu, 20 May 2021 20:18:43 GMT", "version": "v1" }, { "created": "Thu, 2 Sep 2021 01:49:11 GMT", "version": "v2" } ]
2021-09-03
[ [ "Liu", "Xinyu", "" ], [ "Zhou", "Qi", "" ], [ "Arulraj", "Joy", "" ], [ "Orso", "Alessandro", "" ] ]
Because database systems are a critical component of modern data-intensive applications, it is important to ensure that they operate correctly. To this end, developers extensively test these systems to eliminate bugs that negatively affect functionality. In addition to functional bugs, however, there is another important class of bugs: performance bugs. These bugs negatively affect the response time of a database system and can therefore affect the overall performance of the system. Despite their impact on end-user experience, performance bugs have received considerably less attention than functional bugs. In this paper, we present AMOEBA, a system for automatically detecting performance bugs in database systems. The core idea behind AMOEBA is to construct query pairs that are semantically equivalent to each other and then compare their response time on the same database system. If the queries exhibit a significant difference in their runtime performance, then the root cause is likely a performance bug in the system. We propose a novel set of structure and predicate mutation rules for constructing query pairs that are likely to uncover performance bugs. We introduce feedback mechanisms for improving the efficacy and computational efficiency of the tool. We evaluate AMOEBA on two widely-used DBMSs, namely PostgreSQL and CockroachDB. AMOEBA has discovered 20 previously-unknown performance bugs, among which developers have already confirmed 14 and fixed 4.
2005.10801
Farhad Pakdaman
Farhad Pakdaman, Mohammad Ali Adelimanesh, Moncef Gabbouj, Mahmoud Reza Hashemi
Complexity Analysis Of Next-Generation VVC Encoding and Decoding
IEEE ICIP 2020
Proceedings of International Conference on Image Processing (ICIP), (2020) 3134-3138
10.1109/ICIP40778.2020.9190983
null
cs.MM cs.CC eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While the next-generation video compression standard, Versatile Video Coding (VVC), provides superior compression efficiency, its computational complexity increases dramatically. This paper thoroughly analyzes this complexity for both the encoder and decoder of VVC Test Model 6, by quantifying the complexity break-down for each coding tool and measuring the complexity and memory requirements of VVC encoding/decoding. These extensive analyses are performed for six video sequences of 720p, 1080p, and 2160p, under Low-Delay (LD), Random-Access (RA), and All-Intra (AI) conditions (a total of 320 encodings/decodings). Results indicate that the VVC encoder and decoder are 5x and 1.5x more complex compared to HEVC in LD, and 31x and 1.8x in AI, respectively. Detailed analysis of coding tools reveals that in LD on average, motion estimation tools with 53%, transformation and quantization with 22%, and entropy coding with 7% dominate the encoding complexity. In decoding, loop filters with 30%, motion compensation with 20%, and entropy decoding with 16% are the most complex modules. Moreover, the required memory bandwidths for VVC encoding/decoding are measured through memory profiling, and are 30x and 3x those of HEVC. The reported results and insights are a guide for future research and implementations of energy-efficient VVC encoders/decoders.
[ { "created": "Thu, 21 May 2020 17:30:42 GMT", "version": "v1" } ]
2020-10-08
[ [ "Pakdaman", "Farhad", "" ], [ "Adelimanesh", "Mohammad Ali", "" ], [ "Gabbouj", "Moncef", "" ], [ "Hashemi", "Mahmoud Reza", "" ] ]
While the next-generation video compression standard, Versatile Video Coding (VVC), provides superior compression efficiency, its computational complexity increases dramatically. This paper thoroughly analyzes this complexity for both the encoder and decoder of VVC Test Model 6, by quantifying the complexity break-down for each coding tool and measuring the complexity and memory requirements of VVC encoding/decoding. These extensive analyses are performed for six video sequences of 720p, 1080p, and 2160p, under Low-Delay (LD), Random-Access (RA), and All-Intra (AI) conditions (a total of 320 encodings/decodings). Results indicate that the VVC encoder and decoder are 5x and 1.5x more complex compared to HEVC in LD, and 31x and 1.8x in AI, respectively. Detailed analysis of coding tools reveals that in LD on average, motion estimation tools with 53%, transformation and quantization with 22%, and entropy coding with 7% dominate the encoding complexity. In decoding, loop filters with 30%, motion compensation with 20%, and entropy decoding with 16% are the most complex modules. Moreover, the required memory bandwidths for VVC encoding/decoding are measured through memory profiling, and are 30x and 3x those of HEVC. The reported results and insights are a guide for future research and implementations of energy-efficient VVC encoders/decoders.
2206.10025
Petra Wolf
Jonas Lingg, Mateus de Oliveira Oliveira, Petra Wolf
Learning from Positive and Negative Examples: New Proof for Binary Alphabets
null
null
null
null
cs.FL cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most fundamental problems in computational learning theory is the problem of learning a finite automaton $A$ consistent with a finite set $P$ of positive examples and with a finite set $N$ of negative examples. By consistency, we mean that $A$ accepts all strings in $P$ and rejects all strings in $N$. It is well known that this problem is NP-complete. In the literature, it is stated that this NP-hardness holds even in the case of a binary alphabet. As a standard reference for this theorem, the work of Gold from 1978 is either cited or adapted. But as a crucial detail, the work of Gold actually considered Mealy machines and not deterministic finite state automata (DFAs) as they are considered nowadays. As Mealy automata are equipped with an output function, they can be more compact than DFAs which accept the same language. We show that the adaptations of Gold's construction for Mealy machines stated in the literature have some issues, and we give a new construction for DFAs with a binary alphabet ourselves.
[ { "created": "Mon, 20 Jun 2022 22:20:48 GMT", "version": "v1" } ]
2022-06-22
[ [ "Lingg", "Jonas", "" ], [ "Oliveira", "Mateus de Oliveira", "" ], [ "Wolf", "Petra", "" ] ]
One of the most fundamental problems in computational learning theory is the problem of learning a finite automaton $A$ consistent with a finite set $P$ of positive examples and with a finite set $N$ of negative examples. By consistency, we mean that $A$ accepts all strings in $P$ and rejects all strings in $N$. It is well known that this problem is NP-complete. In the literature, it is stated that this NP-hardness holds even in the case of a binary alphabet. As a standard reference for this theorem, the work of Gold from 1978 is either cited or adapted. But as a crucial detail, the work of Gold actually considered Mealy machines and not deterministic finite state automata (DFAs) as they are considered nowadays. As Mealy automata are equipped with an output function, they can be more compact than DFAs which accept the same language. We show that the adaptations of Gold's construction for Mealy machines stated in the literature have some issues, and we give a new construction for DFAs with a binary alphabet ourselves.
2102.12684
Wil Thomason
Claire Liang (1), Wil Thomason (2), E. Andy Ricci (1), and Soham Sankaran (1, 3) ((1) Cornell University Department of Computer Science, (2) Rice University Department of Computer Science, (3) Pashi Corp.)
Ensuring Progress for Multiple Mobile Robots via Space Partitioning, Motion Rules, and Adaptively Centralized Conflict Resolution
9 pages, 4 figures. Submitted to IROS 2021
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In environments where multiple robots must coordinate in a shared space, decentralized approaches allow for decoupled planning at the cost of global guarantees, while centralized approaches make the opposite trade-off. These solutions make a range of assumptions - commonly, that all the robots share the same planning strategies. In this work, we present a framework that ensures progress for all robots without assumptions on any robot's planning strategy by (1) generating a partition of the environment into "flow", "open", and "passage" regions and (2) imposing a set of rules for robot motion in these regions. These rules for robot motion prevent deadlock through an adaptively centralized protocol for resolving spatial conflicts between robots. Our proposed framework ensures progress for all robots without a grid-like discretization of the environment or strong requirements on robot communication, coordination, or cooperation. Each robot can freely choose how to plan and coordinate for itself, without being vulnerable to other robots or groups of robots blocking them from their goals, as long as they follow the rules when necessary. We describe our space partition and motion rules, prove that the motion rules suffice to guarantee progress in partitioned environments, and demonstrate several cases in simulated polygonal environments. This work strikes a balance between each robot's planning independence and a guarantee that each robot can always reach any goal in finite time.
[ { "created": "Thu, 25 Feb 2021 04:51:09 GMT", "version": "v1" }, { "created": "Mon, 7 Mar 2022 18:45:50 GMT", "version": "v2" } ]
2022-03-08
[ [ "Liang", "Claire", "" ], [ "Thomason", "Wil", "" ], [ "Ricci", "E. Andy", "" ], [ "Sankaran", "Soham", "" ] ]
In environments where multiple robots must coordinate in a shared space, decentralized approaches allow for decoupled planning at the cost of global guarantees, while centralized approaches make the opposite trade-off. These solutions make a range of assumptions - commonly, that all the robots share the same planning strategies. In this work, we present a framework that ensures progress for all robots without assumptions on any robot's planning strategy by (1) generating a partition of the environment into "flow", "open", and "passage" regions and (2) imposing a set of rules for robot motion in these regions. These rules for robot motion prevent deadlock through an adaptively centralized protocol for resolving spatial conflicts between robots. Our proposed framework ensures progress for all robots without a grid-like discretization of the environment or strong requirements on robot communication, coordination, or cooperation. Each robot can freely choose how to plan and coordinate for itself, without being vulnerable to other robots or groups of robots blocking them from their goals, as long as they follow the rules when necessary. We describe our space partition and motion rules, prove that the motion rules suffice to guarantee progress in partitioned environments, and demonstrate several cases in simulated polygonal environments. This work strikes a balance between each robot's planning independence and a guarantee that each robot can always reach any goal in finite time.
1802.05666
Jonathan Uesato
Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
null
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate 'adversarial risk' as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as 'obscurity to an adversary,' and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.
[ { "created": "Thu, 15 Feb 2018 17:13:18 GMT", "version": "v1" }, { "created": "Tue, 12 Jun 2018 14:20:27 GMT", "version": "v2" } ]
2018-06-13
[ [ "Uesato", "Jonathan", "" ], [ "O'Donoghue", "Brendan", "" ], [ "Oord", "Aaron van den", "" ], [ "Kohli", "Pushmeet", "" ] ]
This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate 'adversarial risk' as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as 'obscurity to an adversary,' and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.
2210.15804
Gayathri Manikutty
Sreejith Sasidharan, Pranav Prabha, Devasena Pasupuleti, Anand M Das, Chaitanya Kapoor, Gayathri Manikutty, Praveen Pankajakshan, Bhavani Rao
Handwashing Action Detection System for an Autonomous Social Robot
null
null
10.1109/TENCON55691.2022.9977684
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Young children are at an increased risk of contracting contagious diseases such as COVID-19 due to improper hand hygiene. An autonomous social agent that observes children while handwashing and encourages good handwashing practices could provide an opportunity for handwashing behavior to become a habit. In this article, we present a human action recognition system, which is part of the vision system of a social robot platform, to assist children in developing a correct handwashing technique. A modified convolutional neural network (CNN) architecture with a Channel Spatial Attention Bilinear Pooling (CSAB) frame, with a VGG-16 architecture as the backbone, is trained and validated on an augmented dataset. The modified architecture generalizes well with an accuracy of 90% for the WHO-prescribed handwashing steps even in an unseen environment. Our findings indicate that the approach can recognize even subtle hand movements in the video and can be used for gesture detection and classification in social robotics.
[ { "created": "Thu, 27 Oct 2022 23:46:56 GMT", "version": "v1" } ]
2023-06-21
[ [ "Sasidharan", "Sreejith", "" ], [ "Prabha", "Pranav", "" ], [ "Pasupuleti", "Devasena", "" ], [ "Das", "Anand M", "" ], [ "Kapoor", "Chaitanya", "" ], [ "Manikutty", "Gayathri", "" ], [ "Pankajakshan", "Praveen", "" ], [ "Rao", "Bhavani", "" ] ]
Young children are at an increased risk of contracting contagious diseases such as COVID-19 due to improper hand hygiene. An autonomous social agent that observes children while handwashing and encourages good handwashing practices could provide an opportunity for handwashing behavior to become a habit. In this article, we present a human action recognition system, which is part of the vision system of a social robot platform, to assist children in developing a correct handwashing technique. A modified convolutional neural network (CNN) architecture with a Channel Spatial Attention Bilinear Pooling (CSAB) frame, with a VGG-16 architecture as the backbone, is trained and validated on an augmented dataset. The modified architecture generalizes well with an accuracy of 90% for the WHO-prescribed handwashing steps even in an unseen environment. Our findings indicate that the approach can recognize even subtle hand movements in the video and can be used for gesture detection and classification in social robotics.
2312.05594
Dong In Kim
Nguyen Van Huynh, Jiacheng Wang, Hongyang Du, Dinh Thai Hoang, Dusit Niyato, Diep N. Nguyen, Dong In Kim, and Khaled B. Letaief
Generative AI for Physical Layer Communications: A Survey
null
null
null
null
cs.NI cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent evolution of generative artificial intelligence (GAI) leads to the emergence of groundbreaking applications such as ChatGPT, which not only enhances the efficiency of digital content production, such as text, audio, video, or even network traffic data, but also enriches its diversity. Beyond digital content creation, GAI's capability in analyzing complex data distributions offers great potential for wireless communications, particularly amidst a rapid expansion of new physical layer communication technologies. For example, the diffusion model can learn input signal distributions and use them to improve the channel estimation accuracy, while the variational autoencoder can model channel distribution and infer latent variables for blind channel equalization. Therefore, this paper presents a comprehensive investigation of GAI's applications for communications at the physical layer, ranging from traditional issues, including signal classification, channel estimation, and equalization, to emerging topics, such as intelligent reflecting surfaces and joint source channel coding. We also compare GAI-enabled physical layer communications with those supported by traditional AI, highlighting GAI's inherent capabilities and unique contributions in these areas. Finally, the paper discusses open issues and proposes several future research directions, laying a foundation for further exploration and advancement of GAI in physical layer communications.
[ { "created": "Sat, 9 Dec 2023 15:20:56 GMT", "version": "v1" } ]
2023-12-12
[ [ "Van Huynh", "Nguyen", "" ], [ "Wang", "Jiacheng", "" ], [ "Du", "Hongyang", "" ], [ "Hoang", "Dinh Thai", "" ], [ "Niyato", "Dusit", "" ], [ "Nguyen", "Diep N.", "" ], [ "Kim", "Dong In", "" ], [ "Letaief", "Khaled B.", "" ] ]
The recent evolution of generative artificial intelligence (GAI) leads to the emergence of groundbreaking applications such as ChatGPT, which not only enhances the efficiency of digital content production, such as text, audio, video, or even network traffic data, but also enriches its diversity. Beyond digital content creation, GAI's capability in analyzing complex data distributions offers great potential for wireless communications, particularly amidst a rapid expansion of new physical layer communication technologies. For example, the diffusion model can learn input signal distributions and use them to improve the channel estimation accuracy, while the variational autoencoder can model channel distribution and infer latent variables for blind channel equalization. Therefore, this paper presents a comprehensive investigation of GAI's applications for communications at the physical layer, ranging from traditional issues, including signal classification, channel estimation, and equalization, to emerging topics, such as intelligent reflecting surfaces and joint source channel coding. We also compare GAI-enabled physical layer communications with those supported by traditional AI, highlighting GAI's inherent capabilities and unique contributions in these areas. Finally, the paper discusses open issues and proposes several future research directions, laying a foundation for further exploration and advancement of GAI in physical layer communications.
2307.00771
Ning Lin
Ning Lin, Shaocong Wang, Yi Li, Bo Wang, Shuhui Shi, Yangu He, Woyu Zhang, Yifei Yu, Yue Zhang, Xiaojuan Qi, Xiaoming Chen, Hao Jiang, Xumeng Zhang, Peng Lin, Xiaoxin Xu, Qi Liu, Zhongrui Wang, Dashan Shang and Ming Liu
Resistive memory-based zero-shot liquid state machine for multimodal event data learning
null
null
null
null
cs.ET
http://creativecommons.org/licenses/by-nc-sa/4.0/
The human brain is a complex spiking neural network (SNN) that learns multimodal signals in a zero-shot manner by generalizing existing knowledge. Remarkably, the brain achieves this with minimal power consumption, using event-based signals that propagate within its structure. However, mimicking the human brain in neuromorphic hardware presents both hardware and software challenges. Hardware limitations, such as the slowdown of Moore's law and the von Neumann bottleneck, hinder the efficiency of digital computers. On the software side, SNNs are known for their difficult training, especially when learning multimodal signals. To overcome these challenges, we propose a hardware-software co-design that combines a fixed and random liquid state machine (LSM) SNN encoder with trainable artificial neural network (ANN) projections. The LSM is physically implemented using analogue resistive memory, leveraging the inherent stochasticity of resistive switching to generate random weights. This highly efficient and nanoscale in-memory computing approach effectively addresses the von Neumann bottleneck and the slowdown of Moore's law. The ANN projections are implemented digitally, allowing for easy optimization using contrastive loss, which helps to overcome the difficulties associated with SNN training. We experimentally implement this co-design on a 40nm 256Kb in-memory computing macro. We first demonstrate LSM-based event encoding through supervised classification and linear probing on the N-MNIST and N-TIDIGITS datasets.
[ { "created": "Mon, 3 Jul 2023 06:21:05 GMT", "version": "v1" } ]
2023-07-04
[ [ "Lin", "Ning", "" ], [ "Wang", "Shaocong", "" ], [ "Li", "Yi", "" ], [ "Wang", "Bo", "" ], [ "Shi", "Shuhui", "" ], [ "He", "Yangu", "" ], [ "Zhang", "Woyu", "" ], [ "Yu", "Yifei", "" ], [ "Zhang", "Yue", "" ], [ "Qi", "Xiaojuan", "" ], [ "Chen", "Xiaoming", "" ], [ "Jiang", "Hao", "" ], [ "Zhang", "Xumeng", "" ], [ "Lin", "Peng", "" ], [ "Xu", "Xiaoxin", "" ], [ "Liu", "Qi", "" ], [ "Wang", "Zhongrui", "" ], [ "Shang", "Dashan", "" ], [ "Liu", "Ming", "" ] ]
The human brain is a complex spiking neural network (SNN) that learns multimodal signals in a zero-shot manner by generalizing existing knowledge. Remarkably, the brain achieves this with minimal power consumption, using event-based signals that propagate within its structure. However, mimicking the human brain in neuromorphic hardware presents both hardware and software challenges. Hardware limitations, such as the slowdown of Moore's law and the von Neumann bottleneck, hinder the efficiency of digital computers. On the software side, SNNs are known for their difficult training, especially when learning multimodal signals. To overcome these challenges, we propose a hardware-software co-design that combines a fixed and random liquid state machine (LSM) SNN encoder with trainable artificial neural network (ANN) projections. The LSM is physically implemented using analogue resistive memory, leveraging the inherent stochasticity of resistive switching to generate random weights. This highly efficient and nanoscale in-memory computing approach effectively addresses the von Neumann bottleneck and the slowdown of Moore's law. The ANN projections are implemented digitally, allowing for easy optimization using contrastive loss, which helps to overcome the difficulties associated with SNN training. We experimentally implement this co-design on a 40nm 256Kb in-memory computing macro. We first demonstrate LSM-based event encoding through supervised classification and linear probing on the N-MNIST and N-TIDIGITS datasets.
0812.1394
Sahbi Sidhom
Sahbi Sidhom (LORIA, Sii)
Conceptual approach through an annotation process for the representation and the information contents enhancement in economic intelligence (EI)
null
Journal of Global Management Research (2008) 15 pages
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the era of the information society, the impact of information systems on the economy of the material and the immaterial is clearly perceptible. With regard to the information resources of an organization, annotation serves to enrich informational content, to track the intellectual activities performed on a document, and to set the added value of information for the benefit of solving a decision-making problem in the context of economic intelligence. Our contribution is distinguished by the representation of an annotation process and its inherent concepts, leading the decision-maker to an anticipated decision: the provision of relevant and annotated information. Providing such information in the system is made possible by taking into account the diversity of resources, annotated both formally and informally by the EI actors. A key research framework consists of integrating into the decision-making process the annotator's activity, the software agent (or the reasoning mechanisms), and the enhancement of information resources.
[ { "created": "Sun, 7 Dec 2008 20:07:37 GMT", "version": "v1" } ]
2008-12-10
[ [ "Sidhom", "Sahbi", "", "LORIA, Sii" ] ]
In the era of the information society, the impact of information systems on the economy of the material and the immaterial is clearly perceptible. With regard to the information resources of an organization, annotation serves to enrich informational content, to track the intellectual activities performed on a document, and to set the added value of information for the benefit of solving a decision-making problem in the context of economic intelligence. Our contribution is distinguished by the representation of an annotation process and its inherent concepts, leading the decision-maker to an anticipated decision: the provision of relevant and annotated information. Providing such information in the system is made possible by taking into account the diversity of resources, annotated both formally and informally by the EI actors. A key research framework consists of integrating into the decision-making process the annotator's activity, the software agent (or the reasoning mechanisms), and the enhancement of information resources.
1703.10772
Irshad Bhat
Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Manish Shrivastava and Dipti Misra Sharma
Joining Hands: Exploiting Monolingual Treebanks for Parsing of Code-mixing Data
5 pages, EACL 2017 short paper
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose efficient and less resource-intensive strategies for parsing code-mixed data. These strategies are not constrained by in-domain annotations; rather, they leverage pre-existing monolingual annotated resources for training. We show that these methods can produce significantly better results than an informed baseline. In addition, we present a data set of 450 Hindi and English code-mixed tweets of Hindi multilingual speakers for evaluation. The data set is manually annotated with Universal Dependencies.
[ { "created": "Fri, 31 Mar 2017 07:10:30 GMT", "version": "v1" } ]
2017-04-03
[ [ "Bhat", "Irshad Ahmad", "" ], [ "Bhat", "Riyaz Ahmad", "" ], [ "Shrivastava", "Manish", "" ], [ "Sharma", "Dipti Misra", "" ] ]
In this paper, we propose efficient and less resource-intensive strategies for parsing code-mixed data. These strategies are not constrained by in-domain annotations; rather, they leverage pre-existing monolingual annotated resources for training. We show that these methods can produce significantly better results than an informed baseline. In addition, we present a data set of 450 Hindi and English code-mixed tweets of Hindi multilingual speakers for evaluation. The data set is manually annotated with Universal Dependencies.
2404.15194
Michal Nazarczuk
Michal Nazarczuk, Jan Kristof Behrens, Karla Stepanova, Matej Hoffmann, Krystian Mikolajczyk
Closed Loop Interactive Embodied Reasoning for Robot Manipulation
null
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embodied reasoning systems integrate robotic hardware and cognitive processes to perform complex tasks, typically in response to a natural language query about a specific physical environment. This usually involves changing the belief about the scene or physically interacting with and changing the scene (e.g. 'Sort the objects from lightest to heaviest'). In order to facilitate the development of such systems, we introduce a new simulation environment that makes use of the MuJoCo physics engine and the high-quality renderer Blender to provide realistic visual observations that are also accurate to the physical state of the scene. Together with the simulator, we propose a new benchmark composed of 10 classes of multi-step reasoning scenarios that require simultaneous visual and physical measurements. Finally, we develop a new modular Closed Loop Interactive Reasoning (CLIER) approach that takes into account the measurements of non-visual object properties, changes in the scene caused by external disturbances, as well as uncertain outcomes of robotic actions. We extensively evaluate our reasoning approach in simulation and in real-world manipulation tasks, with success rates above 76% and 64%, respectively.
[ { "created": "Tue, 23 Apr 2024 16:33:28 GMT", "version": "v1" } ]
2024-04-24
[ [ "Nazarczuk", "Michal", "" ], [ "Behrens", "Jan Kristof", "" ], [ "Stepanova", "Karla", "" ], [ "Hoffmann", "Matej", "" ], [ "Mikolajczyk", "Krystian", "" ] ]
Embodied reasoning systems integrate robotic hardware and cognitive processes to perform complex tasks, typically in response to a natural language query about a specific physical environment. This usually involves changing the belief about the scene or physically interacting with and changing the scene (e.g. 'Sort the objects from lightest to heaviest'). In order to facilitate the development of such systems, we introduce a new simulation environment that makes use of the MuJoCo physics engine and the high-quality renderer Blender to provide realistic visual observations that are also accurate to the physical state of the scene. Together with the simulator, we propose a new benchmark composed of 10 classes of multi-step reasoning scenarios that require simultaneous visual and physical measurements. Finally, we develop a new modular Closed Loop Interactive Reasoning (CLIER) approach that takes into account the measurements of non-visual object properties, changes in the scene caused by external disturbances, as well as uncertain outcomes of robotic actions. We extensively evaluate our reasoning approach in simulation and in real-world manipulation tasks, with success rates above 76% and 64%, respectively.
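The closed-loop idea behind this record — re-measure the scene after every action instead of trusting planned outcomes — can be sketched as a generic control skeleton. All callables and the toy environment below are hypothetical illustrations, not the CLIER implementation.

```python
def closed_loop(observe, goal_check, plan, execute, max_steps=10):
    """Minimal closed-loop skeleton (all callables are hypothetical):
    re-observe after every action so external disturbances and uncertain
    action outcomes feed back into the next planning step."""
    state = observe()
    for _ in range(max_steps):
        if goal_check(state):
            return True          # goal verified from measurements, not the plan
        action = plan(state)
        execute(action)          # outcome may differ from the intended effect
        state = observe()        # measure again instead of assuming success
    return False

# Toy environment: reach position 5; each executed action moves by +1.
env = {"pos": 0}
done = closed_loop(
    observe=lambda: env["pos"],
    goal_check=lambda s: s >= 5,
    plan=lambda s: +1,
    execute=lambda a: env.update(pos=env["pos"] + a),
)
print(done)  # True
```

The design point is that `goal_check` is applied only to fresh observations, so a disturbed or failed action simply produces a different next state and triggers replanning rather than an incorrect success report.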
2303.06644
Tian Wen
Yan Lei, Tiantian Wen, Huan Xie, Lingfeng Fu, Chunyan Liu, Lei Xu, Hongxia Sun
Mitigating the Effect of Class Imbalance in Fault Localization Using Context-aware Generative Adversarial Network
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fault localization (FL) analyzes the execution information of a test suite to pinpoint the root cause of a failure. The class imbalance of a test suite, i.e., the imbalanced class proportion between passing test cases (i.e., majority class) and failing ones (i.e., minority class), adversely affects FL effectiveness. To mitigate the effect of class imbalance in FL, we propose CGAN4FL: a data augmentation approach using a Context-aware Generative Adversarial Network for Fault Localization. Specifically, CGAN4FL uses program dependencies to construct a failure-inducing context showing how a failure is caused. Then, CGAN4FL leverages a generative adversarial network to analyze the failure-inducing context and synthesize the minority class of test cases (i.e., failing test cases). Finally, CGAN4FL augments the synthesized data into the original test cases to acquire a class-balanced dataset for FL. Our experiments show that CGAN4FL significantly improves FL effectiveness, e.g., improving MLP-FL by 200.00%, 25.49%, and 17.81% under the Top-1, Top-5, and Top-10 metrics, respectively.
[ { "created": "Sun, 12 Mar 2023 12:26:52 GMT", "version": "v1" } ]
2023-03-14
[ [ "Lei", "Yan", "" ], [ "Wen", "Tiantian", "" ], [ "Xie", "Huan", "" ], [ "Fu", "Lingfeng", "" ], [ "Liu", "Chunyan", "" ], [ "Xu", "Lei", "" ], [ "Sun", "Hongxia", "" ] ]
Fault localization (FL) analyzes the execution information of a test suite to pinpoint the root cause of a failure. The class imbalance of a test suite, i.e., the imbalanced class proportion between passing test cases (i.e., majority class) and failing ones (i.e., minority class), adversely affects FL effectiveness. To mitigate the effect of class imbalance in FL, we propose CGAN4FL: a data augmentation approach using a Context-aware Generative Adversarial Network for Fault Localization. Specifically, CGAN4FL uses program dependencies to construct a failure-inducing context showing how a failure is caused. Then, CGAN4FL leverages a generative adversarial network to analyze the failure-inducing context and synthesize the minority class of test cases (i.e., failing test cases). Finally, CGAN4FL augments the synthesized data into the original test cases to acquire a class-balanced dataset for FL. Our experiments show that CGAN4FL significantly improves FL effectiveness, e.g., improving MLP-FL by 200.00%, 25.49%, and 17.81% under the Top-1, Top-5, and Top-10 metrics, respectively.
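The class-balancing pipeline this record describes — synthesize failing (minority-class) test cases, then merge them into the original suite — can be sketched as follows. A Gaussian fit to the failing coverage vectors stands in for the context-aware GAN of the abstract; the dimensions and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment_failing(passing, failing):
    """Balance a coverage-vector dataset by synthesizing failing test cases.
    A multivariate-Gaussian fit stands in here for the context-aware GAN
    generator; the pipeline shape (synthesize minority class, then augment)
    is the same."""
    n_extra = len(passing) - len(failing)
    if n_extra <= 0:
        return passing, failing          # already balanced
    mu = failing.mean(axis=0)
    cov = np.cov(failing, rowvar=False) + 1e-6 * np.eye(failing.shape[1])
    synth = rng.multivariate_normal(mu, cov, size=n_extra)
    return passing, np.vstack([failing, synth])

# Hypothetical suite: 90 passing vs. 10 failing coverage vectors of 8 elements.
passing = rng.random((90, 8))
failing = rng.random((10, 8))
p, f = augment_failing(passing, failing)
print(len(p), len(f))  # 90 90
```

Any downstream FL model (e.g., an MLP ranker) would then be trained on the balanced `(p, f)` set instead of the skewed original.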
1105.1824
Haris Aziz
Haris Aziz and Paul Harrenstein and Evangelia Pyrga
Individual-based stability in hedonic games depending on the best or worst players
16 pages
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider coalition formation games in which each player has preferences over the other players and their preferences over coalitions are based on the best player ($\mathcal{B}$-/B-hedonic games) or the worst player ($\mathcal{W}$-/W-hedonic games) in the coalition. We show that for $\mathcal{B}$-hedonic games, an individually stable partition is guaranteed to exist and can be computed efficiently. Similarly, there exists a polynomial-time algorithm which returns a Nash stable partition (if one exists) for $\mathcal{B}$-hedonic games with strict preferences. Both $\mathcal{W}$- and W-hedonic games are equivalent if individual rationality is assumed. It is also shown that for B- or $\mathcal{W}$-hedonic games, checking whether a Nash stable partition or an individually stable partition exists is NP-complete, in some cases even for strict preferences. We identify a key source of intractability in compact coalition formation games in which preferences over players are extended to preferences over coalitions.
[ { "created": "Mon, 9 May 2011 23:51:47 GMT", "version": "v1" }, { "created": "Sat, 3 Dec 2011 08:20:21 GMT", "version": "v2" } ]
2011-12-06
[ [ "Aziz", "Haris", "" ], [ "Harrenstein", "Paul", "" ], [ "Pyrga", "Evangelia", "" ] ]
We consider coalition formation games in which each player has preferences over the other players and their preferences over coalitions are based on the best player ($\mathcal{B}$-/B-hedonic games) or the worst player ($\mathcal{W}$-/W-hedonic games) in the coalition. We show that for $\mathcal{B}$-hedonic games, an individually stable partition is guaranteed to exist and can be computed efficiently. Similarly, there exists a polynomial-time algorithm which returns a Nash stable partition (if one exists) for $\mathcal{B}$-hedonic games with strict preferences. Both $\mathcal{W}$- and W-hedonic games are equivalent if individual rationality is assumed. It is also shown that for B- or $\mathcal{W}$-hedonic games, checking whether a Nash stable partition or an individually stable partition exists is NP-complete, in some cases even for strict preferences. We identify a key source of intractability in compact coalition formation games in which preferences over players are extended to preferences over coalitions.
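Individual stability under best-player (B-hedonic) preferences can be made concrete with a small checker: a partition is individually stable if no player can strictly improve by joining another (possibly empty) coalition whose members all weakly welcome the move. This is an illustrative brute-force check on a toy instance, not the paper's polynomial-time construction; the preference encoding is an assumption.

```python
def best_rank(prefs, player, coalition):
    """Rank (0 = most preferred) of the best other member of `coalition`
    under B-hedonic preferences; being alone ranks below every listed player."""
    others = [q for q in coalition if q != player]
    if not others:
        return len(prefs[player])
    return min(prefs[player].index(q) for q in others)

def is_individually_stable(prefs, partition):
    """Brute-force individual-stability check: search for a player who
    strictly improves by moving to a coalition that weakly welcomes them."""
    for home in partition:
        for p in home:
            cur = best_rank(prefs, p, home)
            for target in partition + [set()]:   # set() = deviate to a singleton
                if target is home:
                    continue
                new = target | {p}
                if best_rank(prefs, p, new) < cur and all(
                    best_rank(prefs, q, new) <= best_rank(prefs, q, target)
                    for q in target
                ):
                    return False   # a beneficial, welcomed deviation exists
    return True

# Three players who each like the other two: the grand coalition is stable,
# but leaving 'c' alone is not, since 'c' prefers company and is welcomed.
prefs = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(is_individually_stable(prefs, [{"a", "b", "c"}]))    # True
print(is_individually_stable(prefs, [{"a", "b"}, {"c"}]))  # False
```

Note how the B-hedonic assumption collapses each player's coalition preference to a single rank lookup, which is what makes compact representations like this one possible.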
cs/0408065
Somdeb Lahiri
Somdeb Lahiri
The Core of Directed Network Problems with Quotas
6 pages, 0 figures, source file: MS Word; definitions of the feasible allocations have been strengthened; examples provided; network obtained by the procedure can be decentralized
null
null
null
cs.GT
null
This paper proves the existence of non-empty cores for directed network problems with quotas and for those combinatorial allocation problems which permit only exclusive allocations.
[ { "created": "Sat, 28 Aug 2004 10:12:17 GMT", "version": "v1" }, { "created": "Thu, 2 Sep 2004 11:18:12 GMT", "version": "v2" }, { "created": "Mon, 6 Sep 2004 11:05:37 GMT", "version": "v3" }, { "created": "Tue, 7 Sep 2004 09:37:08 GMT", "version": "v4" }, { "created": "Wed, 8 Sep 2004 12:15:49 GMT", "version": "v5" }, { "created": "Sat, 11 Sep 2004 10:06:31 GMT", "version": "v6" } ]
2007-05-23
[ [ "Lahiri", "Somdeb", "" ] ]
This paper proves the existence of non-empty cores for directed network problems with quotas and for those combinatorial allocation problems which permit only exclusive allocations.
1409.5715
Stefan Schulte
Stefan Schulte, Christian Janiesch, Srikumar Venugopal, Ingo Weber, Philipp Hoenisch
Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud
Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.005
Future Generation Computer Systems, Volume 46, 36-50 (2015)
10.1016/j.future.2014.09.005
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only can individual applications be hosted on virtual cloud infrastructures, but so can complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
[ { "created": "Fri, 19 Sep 2014 16:36:49 GMT", "version": "v1" }, { "created": "Mon, 22 Sep 2014 10:56:55 GMT", "version": "v2" } ]
2017-08-21
[ [ "Schulte", "Stefan", "" ], [ "Janiesch", "Christian", "" ], [ "Venugopal", "Srikumar", "" ], [ "Weber", "Ingo", "" ], [ "Hoenisch", "Philipp", "" ] ]
With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only can individual applications be hosted on virtual cloud infrastructures, but so can complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.