Dataset schema (column: type, observed length range):
- id: string, length 9–10
- submitter: string, length 1–64
- authors: string, length 4–20.7k
- title: string, length 4–246
- comments: string, length 1–523
- journal-ref: string, length 4–404
- doi: string, length 11–153
- report-no: string, length 2–254
- categories: string, length 5–98
- license: string, 9 distinct values
- orig_abstract: string, length 14–3.35k
- versions: list, length 1–60
- update_date: string, length 10–10
- authors_parsed: list, length 1–1.35k
- abstract: string, length 11–3.34k
2002.09869
Alon Cohen
Alon Cohen, Haim Kaplan, Yishay Mansour and Aviv Rosenberg
Near-optimal Regret Bounds for Stochastic Shortest Path
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic shortest path (SSP) is a well-known problem in planning and control, in which an agent has to reach a goal state in minimum total expected cost. In the learning formulation of the problem, the agent is unaware of the environment dynamics (i.e., the transition function) and has to repeatedly play for a given number of episodes while reasoning about the problem's optimal solution. Unlike other well-studied models in reinforcement learning (RL), the length of an episode is not predetermined (or bounded) and is influenced by the agent's actions. Recently, Tarbouriech et al. (2019) studied this problem in the context of regret minimization and provided an algorithm whose regret bound is inversely proportional to the square root of the minimum instantaneous cost. In this work we remove this dependence on the minimum cost---we give an algorithm that guarantees a regret bound of $\widetilde{O}(B_\star |S| \sqrt{|A| K})$, where $B_\star$ is an upper bound on the expected cost of the optimal policy, $S$ is the set of states, $A$ is the set of actions and $K$ is the number of episodes. We additionally show that any learning algorithm must have at least $\Omega(B_\star \sqrt{|S| |A| K})$ regret in the worst case.
[ { "created": "Sun, 23 Feb 2020 09:10:14 GMT", "version": "v1" } ]
2020-02-25
[ [ "Cohen", "Alon", "" ], [ "Kaplan", "Haim", "" ], [ "Mansour", "Yishay", "" ], [ "Rosenberg", "Aviv", "" ] ]
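One way to read the two bounds quoted in this abstract: dividing the upper bound by the lower bound shows that the remaining gap is exactly a factor of $\sqrt{|S|}$.

```latex
\frac{B_\star \, |S| \sqrt{|A| K}}{B_\star \sqrt{|S|\,|A|\,K}}
  = \frac{|S|}{\sqrt{|S|}}
  = \sqrt{|S|}
```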
1809.02847
Eric Wallace
Eric Wallace, Shi Feng, Jordan Boyd-Graber
Interpreting Neural Networks With Nearest Neighbors
EMNLP 2018 BlackboxNLP
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local model interpretation methods explain individual predictions by assigning an importance value to each input feature. This value is often determined by measuring the change in confidence when a feature is removed. However, the confidence of neural networks is not a robust measure of model uncertainty. This issue makes reliably judging the importance of the input features difficult. We address this by changing the test-time behavior of neural networks using Deep k-Nearest Neighbors. Without harming text classification accuracy, this algorithm provides a more robust uncertainty metric which we use to generate feature importance values. The resulting interpretations better align with human perception than baseline methods. Finally, we use our interpretation method to analyze model predictions on dataset annotation artifacts.
[ { "created": "Sat, 8 Sep 2018 18:03:56 GMT", "version": "v1" }, { "created": "Wed, 7 Nov 2018 13:05:39 GMT", "version": "v2" } ]
2018-11-08
[ [ "Wallace", "Eric", "" ], [ "Feng", "Shi", "" ], [ "Boyd-Graber", "Jordan", "" ] ]
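As an illustration of the leave-one-out importance scheme this abstract describes, here is a minimal sketch using a kNN-based confidence over plain feature vectors. The bag-of-words-style representation and the `knn_confidence` helper are stand-ins for the paper's Deep k-Nearest Neighbors over hidden layers, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_confidence(x, train_X, train_y, label, k=5):
    """Fraction of the k nearest training points that share `label`."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_X)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    return np.mean(train_y[idx[0]] == label)

def feature_importances(x, train_X, train_y, label, k=5):
    """Importance of feature i = drop in kNN confidence when i is removed."""
    base = knn_confidence(x, train_X, train_y, label, k)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        x_wo = x.copy()
        x_wo[i] = 0.0                      # "remove" feature i by zeroing it
        scores[i] = base - knn_confidence(x_wo, train_X, train_y, label, k)
    return scores
```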
2012.02990
Shubham Vatsal
Dhruval Jain, Arun D Prabhu, Shubham Vatsal, Gopi Ramena, Naresh Purre
Codeswitched Sentence Creation using Dependency Parsing
null
null
10.1109/ICSC50631.2021.00030
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Codeswitching has become one of the most common occurrences among multilingual speakers of the world, especially in countries like India, which has around 23 official languages and roughly 300 million bilingual speakers. The scarcity of Codeswitched data becomes a bottleneck in the exploration of this domain with respect to various Natural Language Processing (NLP) tasks. We thus present a novel algorithm which harnesses the syntactic structure of English grammar to develop grammatically sensible Codeswitched versions of English-Hindi, English-Marathi and English-Kannada data. Apart from maintaining grammatical sanity to a great extent, our methodology also guarantees abundant generation of data from a minuscule snapshot of the given data. We use multiple datasets to showcase the capabilities of our algorithm, assess the quality of the generated Codeswitched data using qualitative metrics, and provide baseline results for a couple of NLP tasks.
[ { "created": "Sat, 5 Dec 2020 10:00:06 GMT", "version": "v1" } ]
2022-01-03
[ [ "Jain", "Dhruval", "" ], [ "Prabhu", "Arun D", "" ], [ "Vatsal", "Shubham", "" ], [ "Ramena", "Gopi", "" ], [ "Purre", "Naresh", "" ] ]
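A toy illustration of dependency-guided code-switching, under heavy assumptions: spaCy stands in for the paper's parser, a three-word dictionary stands in for real English-Hindi translation, and the set of dependency labels chosen for switching is arbitrary. This shows the flavor of the approach, not the authors' algorithm.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
hindi = {"book": "kitaab", "table": "mez", "red": "laal"}  # placeholder lexicon

def codeswitch(sentence):
    out = []
    for tok in nlp(sentence):
        # switch tokens occupying selected syntactic slots, if translatable
        if tok.dep_ in {"dobj", "pobj", "amod"} and tok.lower_ in hindi:
            out.append(hindi[tok.lower_])
        else:
            out.append(tok.text)
    return " ".join(out)

print(codeswitch("Put the red book on the table"))
```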
1908.09822
Zhiding Yu
Yang Zou, Zhiding Yu, Xiaofeng Liu, B. V. K. Vijaya Kumar, Jinsong Wang
Confidence Regularized Self-Training
Accepted to ICCV 2019 (Oral)
null
null
null
cs.CV cs.LG cs.MM cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent advances in domain adaptation show that deep self-training presents a powerful means for unsupervised domain adaptation. These methods often involve an iterative process of predicting on the target domain and then taking the confident predictions as pseudo-labels for retraining. However, since pseudo-labels can be noisy, self-training can put overconfident label belief on wrong classes, leading to deviated solutions with propagated errors. To address the problem, we propose a confidence regularized self-training (CRST) framework. Our method treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization. We propose two types of confidence regularization: label regularization (LR) and model regularization (MR). CRST-LR generates soft pseudo-labels while CRST-MR encourages smoothness of the network output. Extensive experiments on image classification and semantic segmentation show that CRSTs outperform their non-regularized counterpart with state-of-the-art performance. The code and models of this work are available at https://github.com/yzou2/CRST.
[ { "created": "Mon, 26 Aug 2019 17:56:13 GMT", "version": "v1" }, { "created": "Tue, 27 Aug 2019 05:26:12 GMT", "version": "v2" }, { "created": "Wed, 15 Jul 2020 10:57:38 GMT", "version": "v3" } ]
2020-07-16
[ [ "Zou", "Yang", "" ], [ "Yu", "Zhiding", "" ], [ "Liu", "Xiaofeng", "" ], [ "Kumar", "B. V. K. Vijaya", "" ], [ "Wang", "Jinsong", "" ] ]
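A minimal sketch of the model-regularization flavor described above: cross-entropy on confident pseudo-labels plus a term that rewards output entropy, which discourages overconfident predictions. The confidence threshold and loss weight are placeholder values, not the paper's tuned settings.

```python
import torch
import torch.nn.functional as F

def crst_mr_loss(logits, alpha=0.1, threshold=0.9):
    probs = F.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf > threshold                        # keep confident pseudo-labels only
    ce = F.cross_entropy(logits[mask], pseudo[mask]) if mask.any() else logits.sum() * 0.0
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    return ce - alpha * entropy                    # maximizing entropy smooths outputs
```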
2308.12964
Yunji Kim
Yunji Kim, Jiyoung Lee, Jin-Hwa Kim, Jung-Woo Ha, Jun-Yan Zhu
Dense Text-to-Image Generation with Attention Modulation
Accepted by ICCV2023. Code and data are available at https://github.com/naver-ai/DenseDiffusion
null
null
null
cs.CV cs.GR cs.LG
http://creativecommons.org/licenses/by/4.0/
Existing text-to-image diffusion models struggle to synthesize realistic images given dense captions, where each text prompt provides a detailed description for a specific image region. To address this, we propose DenseDiffusion, a training-free method that adapts a pre-trained text-to-image model to handle such dense captions while offering control over the scene layout. We first analyze the relationship between generated images' layouts and the pre-trained model's intermediate attention maps. Next, we develop an attention modulation method that guides objects to appear in specific regions according to layout guidance. Without requiring additional fine-tuning or datasets, we improve image generation performance given dense captions regarding both automatic and human evaluation scores. In addition, we achieve similar-quality visual results with models specifically trained with layout conditions.
[ { "created": "Thu, 24 Aug 2023 17:59:01 GMT", "version": "v1" } ]
2023-08-25
[ [ "Kim", "Yunji", "" ], [ "Lee", "Jiyoung", "" ], [ "Kim", "Jin-Hwa", "" ], [ "Ha", "Jung-Woo", "" ], [ "Zhu", "Jun-Yan", "" ] ]
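The mechanism is easiest to see on the attention logits. Below is an illustrative modulation (an assumption in spirit of the abstract, not the paper's exact formula): query pixels inside a layout region get a positive bias toward the tokens of that region's text segment.

```python
import torch

def modulate(attn_logits, region_mask, token_mask, w=1.0):
    """
    attn_logits: (n_pixels, n_tokens) raw cross-attention scores
    region_mask: (n_pixels,) bool, True for pixels inside the layout region
    token_mask:  (n_tokens,) bool, True for tokens of that region's phrase
    """
    boost = region_mask.float().unsqueeze(1) * token_mask.float().unsqueeze(0)
    return attn_logits + w * boost        # positive bias only inside the region
```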
1406.2285
Navya Chodisetti
Navya Chodisetti
A Piggybank Protocol for Quantum Cryptography
6 pages, 2 figures, 1 table
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a quantum mechanical version of the piggybank cryptography protocol. The basic idea of piggybank cryptography is to use two communications: one carrying the encrypted message, and the other describing the encryption transformation, which the receiver must decipher first. In the quantum mechanical version of the protocol, the information about the encrypting unitary transformation is sent separately, but deciphering it alone is not enough to break the system. The proposed quantum protocol consists of two stages.
[ { "created": "Mon, 9 Jun 2014 19:07:01 GMT", "version": "v1" } ]
2014-06-10
[ [ "Chodisetti", "Navya", "" ] ]
2308.10496
Oliver Niggemann
Jan-Philipp Roche and Oliver Niggemann and Jens Friebe
Using Autoencoders and AutoDiff to Reconstruct Missing Variables in a Set of Time Series
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing black box modeling approaches in machine learning are tied to a fixed combination of input and output features. In this paper, a new approach to reconstructing missing variables in a set of time series is presented. An autoencoder is trained as usual with every feature on both sides, and the neural network parameters are fixed after this training. Then, the sought variables are declared missing at the autoencoder input and optimized via automatic differentiation, with the loss computed on the available features. With this method, different input and output feature combinations of the trained model can be realized by declaring the sought variables missing and reconstructing them; the combination can be changed without training the autoencoder again. The approach is evaluated on a strongly nonlinear electrical component. It works well when one of four variables is missing, and generally even for multiple missing variables.
[ { "created": "Mon, 21 Aug 2023 06:35:08 GMT", "version": "v1" } ]
2023-08-22
[ [ "Roche", "Jan-Philipp", "" ], [ "Niggemann", "Oliver", "" ], [ "Friebe", "Jens", "" ] ]
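A minimal PyTorch sketch of the described procedure, assuming a trained autoencoder `ae` that maps a full feature vector to its reconstruction: freeze the network, make the missing channels optimizable, and minimize the reconstruction error on the observed channels only.

```python
import torch

def reconstruct_missing(ae, x_obs, obs_idx, mis_idx, steps=500, lr=1e-2):
    ae.eval()
    for p in ae.parameters():
        p.requires_grad_(False)                    # network weights stay fixed
    x_mis = torch.zeros(x_obs.shape[0], len(mis_idx), requires_grad=True)
    opt = torch.optim.Adam([x_mis], lr=lr)
    n = len(obs_idx) + len(mis_idx)
    for _ in range(steps):
        full = torch.zeros(x_obs.shape[0], n)
        full[:, obs_idx] = x_obs
        full[:, mis_idx] = x_mis                   # autodiff flows through here
        loss = ((ae(full)[:, obs_idx] - x_obs) ** 2).mean()  # observed features only
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_mis.detach()
```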
1408.0023
Michael Winterrose
Michael L. Winterrose and Kevin M. Carter
Strategic Evolution of Adversaries Against Temporal Platform Diversity Active Cyber Defenses
null
Proceedings of the Agent-Directed Simulation Symposium at the Spring Simulation Multiconference (2014): 68-76
null
null
cs.CR cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial dynamics are a critical facet within the cyber security domain, in which there exists a co-evolution between attackers and defenders in any given threat scenario. While defenders leverage capabilities to minimize the potential impact of an attack, the adversary is simultaneously developing countermeasures to the observed defenses. In this study, we develop a set of tools to model the adaptive strategy formulation of an intelligent actor against an active cyber defensive system. We encode strategies as binary chromosomes representing finite state machines that evolve according to Holland's genetic algorithm. We study the strategic considerations including overall actor reward balanced against the complexity of the determined strategies. We present a series of simulation results demonstrating the ability to automatically search a large strategy space for optimal resultant fitness against a variety of counter-strategies.
[ { "created": "Thu, 31 Jul 2014 20:32:09 GMT", "version": "v1" } ]
2014-08-19
[ [ "Winterrose", "Michael L.", "" ], [ "Carter", "Kevin M.", "" ] ]
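The evolutionary loop itself is standard; a compact sketch follows. Decoding a chromosome into a finite state machine and scoring it against a defender is abstracted into the black-box `fitness` function, which is an assumption here.

```python
import random

def evolve(fitness, n_bits=32, pop_size=50, gens=100, p_mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = [bit ^ (random.random() < p_mut)   # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```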
1605.05404
Matthias Petri
Simon Gog and Alistair Moffat and Matthias Petri
CSA++: Fast Pattern Search for Large Alphabets
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Indexed pattern search in text has been studied for many decades. For small alphabets, the FM-Index provides unmatched performance, in terms of both space required and search speed. For large alphabets -- for example, when the tokens are words -- the situation is more complex, and FM-Index representations are compact, but potentially slow. In this paper we apply recent innovations from the field of inverted indexing and document retrieval to compressed pattern search, including for alphabets into the millions. Commencing with the practical compressed suffix array structure developed by Sadakane, we show that the Elias-Fano code-based approach to document indexing can be adapted to provide new tradeoff options in indexed pattern search, and offers significantly faster pattern processing compared to previous implementations, as well as reduced space requirements. We report a detailed experimental evaluation that demonstrates the relative advantages of the new approach, using the standard Pizza&Chili methodology and files, as well as applied use-cases derived from large-scale data compression, and from natural language processing. For large alphabets, the new structure gives rise to space requirements that are close to those of the most highly-compressed FM-Index variants, in conjunction with unparalleled search throughput rates.
[ { "created": "Wed, 18 May 2016 00:20:00 GMT", "version": "v1" } ]
2016-05-19
[ [ "Gog", "Simon", "" ], [ "Moffat", "Alistair", "" ], [ "Petri", "Matthias", "" ] ]
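For orientation, a minimal Elias-Fano encoder for a sorted integer sequence. It shows the split into fixed-width low bits and unary-coded high-bit gaps that the paper builds on, not the paper's engineered data structure.

```python
import math

def elias_fano_encode(values, u):
    """Encode a sorted list of integers in [0, u)."""
    n = len(values)
    L = max(0, int(math.floor(math.log2(u / n))))      # low-bit width
    low = [v & ((1 << L) - 1) for v in values]         # fixed-width low parts
    high, prev = [], 0
    for v in values:
        h = v >> L
        high.extend([0] * (h - prev) + [1])            # unary-coded gap between high parts
        prev = h
    return L, low, high                                # ~2 + log2(u/n) bits per value

# elias_fano_encode([3, 4, 7, 13, 14, 15, 21, 43], u=64)
```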
2201.03486
Shahrzad Kiani Dehkordi
Shahrzad Kiani and Stark C. Draper
Successive Approximation Coding for Distributed Matrix Multiplication
null
null
null
null
cs.DC cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coded distributed computing was recently introduced to mitigate the effect of stragglers on distributed computing. This paper combines ideas of approximate computing with coded computing to further accelerate computation. We propose successive approximation coding (SAC) techniques that realize a tradeoff between accuracy and speed, allowing the distributed computing system to produce approximations that increase in accuracy over time. If a sufficient number of compute nodes finish their tasks, SAC exactly recovers the desired computation. We theoretically provide design guidelines for our SAC techniques, and numerically show that SAC achieves a better accuracy-speed tradeoff in comparison with previous methods.
[ { "created": "Mon, 10 Jan 2022 17:32:16 GMT", "version": "v1" } ]
2022-01-11
[ [ "Kiani", "Shahrzad", "" ], [ "Draper", "Stark C.", "" ] ]
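The accuracy-speed tradeoff can be illustrated with a simple (non-coded) successive-approximation split, an illustration rather than the paper's coding scheme: assign the rank-1 SVD terms of A to workers, so any prefix of finished workers yields an approximation of Ax that sharpens over time and becomes exact once all return.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 40))
x = rng.standard_normal(40)

U, s, Vt = np.linalg.svd(A, full_matrices=False)       # A = sum_k s[k] * u_k v_k^T
tasks = [s[k] * U[:, k] * (Vt[k] @ x) for k in range(len(s))]  # worker k's result

approx, exact = np.zeros(50), A @ x
for k, term in enumerate(tasks):                       # workers returning over time
    approx += term
    err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    # err shrinks with each returned term and reaches ~0 when all workers finish
```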
2405.13781
Martin Engilberge
Yingxue Yu, Vidit Vidit, Andrey Davydov, Martin Engilberge, Pascal Fua
Addressing the Elephant in the Room: Robust Animal Re-Identification with Unsupervised Part-Based Feature Alignment
Accepted to CVPR workshop CV4Animals 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Animal Re-ID is crucial for wildlife conservation, yet it faces unique challenges compared to person Re-ID. First, the scarcity and lack of diversity of datasets lead to background-biased models. Second, animal Re-ID depends on subtle, species-specific cues, further complicated by variations in pose, background, and lighting. This study addresses background biases by proposing a method to systematically remove backgrounds in both the training and evaluation phases. Unlike prior works that depend on pose annotations, our approach utilizes an unsupervised technique for feature alignment across body parts and pose variations, enhancing practicality. Our method achieves superior results on three key animal Re-ID datasets: ATRW, YakReID-103, and ELPephants.
[ { "created": "Wed, 22 May 2024 16:08:06 GMT", "version": "v1" } ]
2024-05-24
[ [ "Yu", "Yingxue", "" ], [ "Vidit", "Vidit", "" ], [ "Davydov", "Andrey", "" ], [ "Engilberge", "Martin", "" ], [ "Fua", "Pascal", "" ] ]
2003.08680
Rui Xiang
Rui Xiang, Rongjie Lai, Hongkai Zhao
Efficient and Robust Shape Correspondence via Sparsity-Enforced Quadratic Assignment
8 pages, 6 figures. Compared to the version to be published in the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Figure 1 has been changed to a more illustrative example and run time table 1 has been updated by our recently optimized code
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we introduce a novel local pairwise descriptor and then develop a simple, effective iterative method to solve the resulting quadratic assignment through sparsity control for shape correspondence between two approximately isometric surfaces. Our pairwise descriptor is based on the stiffness and mass matrices of a finite element approximation of the Laplace-Beltrami differential operator, which is local in space, sparse to represent, and extremely easy to compute while containing global information. It allows us to deal with open surfaces, partial matching, and topological perturbations robustly. To solve the resulting quadratic assignment problem efficiently, the two key ideas of our iterative algorithm are: 1) select pairs with good (approximate) correspondence as anchor points, and 2) solve a regularized quadratic assignment problem only in the neighborhood of the selected anchor points through sparsity control. These two ingredients improve and increase the number of anchor points quickly while significantly reducing the computation cost in each quadratic assignment iteration. With enough high-quality anchor points, one may use various pointwise global features with reference to these anchor points to further improve the dense shape correspondence. We use various experiments to show the efficiency, quality, and versatility of our method on large data sets, patches, and point clouds (without global meshes).
[ { "created": "Thu, 19 Mar 2020 10:56:16 GMT", "version": "v1" }, { "created": "Fri, 20 Mar 2020 21:23:38 GMT", "version": "v2" } ]
2020-03-24
[ [ "Xiang", "Rui", "" ], [ "Lai", "Rongjie", "" ], [ "Zhao", "Hongkai", "" ] ]
2307.14439
Ryan Kortvelesy
Ryan Kortvelesy
Fixed Integral Neural Networks
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is often useful to perform integration over learned functions represented by neural networks. However, this integration is usually performed numerically, as analytical integration over learned functions (especially neural networks) is generally viewed as intractable. In this work, we present a method for representing the analytical integral of a learned function $f$. This allows the exact integral of a neural network to be computed, and enables constrained neural networks to be parametrised by applying constraints directly to the integral. Crucially, we also introduce a method to constrain $f$ to be positive, a necessary condition for many applications (e.g. probability distributions, distance metrics, etc). Finally, we introduce several applications where our fixed-integral neural network (FINN) can be utilised.
[ { "created": "Wed, 26 Jul 2023 18:16:43 GMT", "version": "v1" }, { "created": "Mon, 18 Sep 2023 14:03:34 GMT", "version": "v2" }, { "created": "Tue, 10 Oct 2023 13:41:57 GMT", "version": "v3" }, { "created": "Sun, 24 Dec 2023 02:49:37 GMT", "version": "v4" } ]
2023-12-27
[ [ "Kortvelesy", "Ryan", "" ] ]
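One natural way to realize such a representation (an assumed sketch, not necessarily the paper's parametrization) is to learn the antiderivative F directly: f is then obtained by automatic differentiation, and any definite integral of f is exact by the fundamental theorem of calculus. Positivity of f corresponds to F being monotone increasing, which could be enforced, e.g., with positivity-constrained weights.

```python
import torch
import torch.nn as nn

class IntegralNet(nn.Module):
    """Learn the antiderivative F; f = dF/dx comes from autodiff."""
    def __init__(self, hidden=64):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def f(self, x):
        x = x.detach().requires_grad_(True)
        return torch.autograd.grad(self.F(x).sum(), x, create_graph=True)[0]

    def integral(self, a, b):
        return self.F(b) - self.F(a)       # exact by the fundamental theorem
```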
1001.2331
Sriram Vishwanath
Sriram Vishwanath
Information Theoretic Bounds for Low-Rank Matrix Completion
null
null
null
null
cs.IT cs.CC math.IT math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the low-rank matrix completion problem from an information theoretic perspective. The completion problem is rephrased as a communication problem of an (uncoded) low-rank matrix source over an erasure channel. The paper then uses achievability and converse arguments to present order-wise optimal bounds for the completion problem.
[ { "created": "Thu, 14 Jan 2010 20:54:22 GMT", "version": "v1" } ]
2016-09-08
[ [ "Vishwanath", "Sriram", "" ] ]
2211.06034
Ke Liao
Ke Liao, Wei Wang, Armagan Elibol, Lingzhong Meng, Xu Zhao, and Nak Young Chong
Does Deep Learning REALLY Outperform Non-deep Machine Learning for Clinical Prediction on Physiological Time Series?
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning has been widely used in healthcare applications to approximate complex models for clinical diagnosis, prognosis, and treatment. While deep learning has an outstanding ability to extract information from time series, its true capabilities on sparse, irregularly sampled, multivariate, and imbalanced physiological data are not yet fully explored. In this paper, we systematically examine the performance of machine learning models for clinical prediction tasks based on the EHR, especially physiological time series. We choose the PhysioNet 2019 challenge public dataset to predict sepsis outcomes in the ICU. Ten baseline machine learning models are compared, including 3 deep learning methods and 7 non-deep learning methods commonly used in the clinical prediction domain. Nine evaluation metrics with specific clinical implications are used to assess the performance of the models. In addition, we sub-sample training dataset sizes and use learning-curve fitting to investigate the impact of the training dataset size on the performance of the machine learning models. We also propose a general pre-processing method for physiological time-series data and use Dice Loss to deal with the dataset imbalance problem. The results show that deep learning indeed outperforms non-deep learning, but only under certain conditions: first, when evaluated with particular metrics (AUROC, AUPRC, Sensitivity, and FNR), but not others; second, when the training dataset is large enough (on the order of thousands of samples).
[ { "created": "Fri, 11 Nov 2022 07:09:49 GMT", "version": "v1" } ]
2022-11-14
[ [ "Liao", "Ke", "" ], [ "Wang", "Wei", "" ], [ "Elibol", "Armagan", "" ], [ "Meng", "Lingzhong", "" ], [ "Zhao", "Xu", "" ], [ "Chong", "Nak Young", "" ] ]
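Since the abstract leans on Dice Loss for the imbalance problem, here is the standard soft Dice loss for binary outputs; the smoothing constant is a common convention, not a value taken from the paper.

```python
import torch

def dice_loss(probs, targets, eps=1.0):
    """probs, targets: tensors of shape (batch,) with values in [0, 1]."""
    inter = (probs * targets).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + targets.sum() + eps)
```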
1711.10388
Rushil Anirudh
Rushil Anirudh, Hyojin Kim, Jayaraman J. Thiagarajan, K. Aditya Mohan, Kyle Champley, Timo Bremer
Lose The Views: Limited Angle CT Reconstruction via Implicit Sinogram Completion
Spotlight presentation at CVPR 2018
null
null
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computed Tomography (CT) reconstruction is a fundamental component of a wide variety of applications ranging from security to healthcare. The classical techniques require measuring projections, called sinograms, from a full 180$^\circ$ view of the object. This is impractical in a limited angle scenario, when the viewing angle is less than 180$^\circ$, which can occur due to different factors including restrictions on scanning time, limited flexibility of scanner rotation, etc. The resulting sinograms cause existing techniques to produce highly artifact-laden reconstructions. In this paper, we propose to address this problem through implicit sinogram completion, on a challenging real world dataset containing scans of common checked-in luggage. We propose a system, consisting of 1D and 2D convolutional neural networks, that operates on a limited angle sinogram to directly produce the best estimate of a reconstruction. Next, we use the x-ray transform on this reconstruction to obtain a "completed" sinogram, as if it came from a full 180$^\circ$ measurement. We feed this to standard analytical and iterative reconstruction techniques to obtain the final reconstruction. We show with extensive experimentation that this combined strategy outperforms many competitive baselines. We also propose a measure of confidence for the reconstruction that enables a practitioner to gauge the reliability of a prediction made by our network. We show that this measure is a strong indicator of quality as measured by the PSNR, while not requiring ground truth at test time. Finally, using a segmentation experiment, we show that our reconstruction preserves the 3D structure of objects effectively.
[ { "created": "Tue, 28 Nov 2017 16:37:14 GMT", "version": "v1" }, { "created": "Wed, 14 Mar 2018 16:23:30 GMT", "version": "v2" }, { "created": "Wed, 11 Jul 2018 17:20:08 GMT", "version": "v3" } ]
2018-07-12
[ [ "Anirudh", "Rushil", "" ], [ "Kim", "Hyojin", "" ], [ "Thiagarajan", "Jayaraman J.", "" ], [ "Mohan", "K. Aditya", "" ], [ "Champley", "Kyle", "" ], [ "Bremer", "Timo", "" ] ]
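The limited-angle failure mode is easy to reproduce with scikit-image's Radon transform utilities; the 120-degree range below is an arbitrary choice for illustration, not the paper's setup.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))
theta_limited = np.linspace(0.0, 120.0, 120, endpoint=False)  # 60 degrees missing

sino = radon(image, theta=theta_limited)          # limited-angle sinogram
recon = iradon(sino, theta=theta_limited)         # artifact-laden FBP reconstruction
```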
0710.4802
EDA Publishing Association
M. Scholive, V. Beroulle, C. Robach, M. L. Flottes (LIRMM), B. Rouzeyre (LIRMM)
Mutation Sampling Technique for the Generation of Structural Test Data
Submitted on behalf of EDAA (http://www.edaa.com/)
Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)
null
null
cs.OH
null
Our goal is to produce validation data that can be used as an efficient (pre-)test set for structural stuck-at faults. In this paper, we detail an original test-oriented mutation sampling technique used for generating such data, and we present a first evaluation of these validation data with regard to structural testing.
[ { "created": "Thu, 25 Oct 2007 11:56:47 GMT", "version": "v1" } ]
2011-11-09
[ [ "Scholive", "M.", "", "LIRMM" ], [ "Beroulle", "V.", "", "LIRMM" ], [ "Robach", "C.", "", "LIRMM" ], [ "Flottes", "M. L.", "", "LIRMM" ], [ "Rouzeyre", "B.", "", "LIRMM" ] ]
2112.05705
Patrick Xia
Patrick Xia, Richard Shin
Pruning Pretrained Encoders with a Multitask Objective
ENLSP NeurIPS 2021
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sizes of pretrained language models make them challenging and expensive to use when there are multiple desired downstream tasks. In this work, we adopt recent strategies for model pruning during finetuning to explore the question of whether it is possible to prune a single encoder so that it can be used for multiple tasks. We allocate a fixed parameter budget and compare pruning a single model with a multitask objective against the best ensemble of single-task models. We find that under two pruning strategies (element-wise and rank pruning), the approach with the multitask objective outperforms training models separately when averaged across all tasks, and it is competitive on each individual one. Additional analysis finds that using a multitask objective during pruning can also be an effective method for reducing model sizes for low-resource tasks.
[ { "created": "Fri, 10 Dec 2021 17:57:33 GMT", "version": "v1" } ]
2021-12-13
[ [ "Xia", "Patrick", "" ], [ "Shin", "Richard", "" ] ]
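For concreteness, a sketch of the element-wise (magnitude) pruning step under a fixed parameter budget; the 30% budget below is a placeholder, and the multitask part of the paper's recipe amounts to finetuning with a sum of task losses around such masking.

```python
import torch

def magnitude_prune(model, keep_ratio=0.3):
    """Keep the largest-magnitude weights across the model; zero the rest."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    k = int(keep_ratio * weights.numel())
    threshold = weights.kthvalue(weights.numel() - k + 1).values
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p.mul_((p.abs() >= threshold).float())   # zero out small weights
```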
2401.13371
Patrick Kolpaczki
Patrick Kolpaczki, Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier
SVARM-IQ: Efficient Approximation of Any-order Shapley Interactions through Stratification
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Addressing the limitations of individual attribution scores via the Shapley value (SV), the field of explainable AI (XAI) has recently explored intricate interactions of features or data points. In particular, extensions of the SV, such as the Shapley Interaction Index (SII), have been proposed as a measure to still benefit from the axiomatic basis of the SV. However, similar to the SV, their exact computation remains computationally prohibitive. Hence, we propose with SVARM-IQ a sampling-based approach to efficiently approximate Shapley-based interaction indices of any order. SVARM-IQ can be applied to a broad class of interaction indices, including the SII, by leveraging a novel stratified representation. We provide non-asymptotic theoretical guarantees on its approximation quality and empirically demonstrate that SVARM-IQ achieves state-of-the-art estimation results in practical XAI scenarios on different model classes and application domains.
[ { "created": "Wed, 24 Jan 2024 11:01:15 GMT", "version": "v1" }, { "created": "Thu, 25 Jan 2024 08:36:35 GMT", "version": "v2" }, { "created": "Fri, 1 Mar 2024 14:44:23 GMT", "version": "v3" } ]
2024-03-04
[ [ "Kolpaczki", "Patrick", "" ], [ "Muschalik", "Maximilian", "" ], [ "Fumagalli", "Fabian", "" ], [ "Hammer", "Barbara", "" ], [ "Hüllermeier", "Eyke", "" ] ]
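For context, the classic permutation-sampling estimator that stratified approaches like SVARM-IQ improve on; `value` is any black-box set function, and this baseline estimates plain Shapley values rather than higher-order interaction indices.

```python
import random

def shapley_permutation(value, players, n_samples=1000):
    """Monte Carlo Shapley values for a black-box set function `value`."""
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = random.sample(players, len(players))   # a random permutation
        coalition, prev = set(), value(set())
        for p in order:
            coalition.add(p)
            cur = value(coalition)
            phi[p] += (cur - prev) / n_samples         # marginal contribution of p
            prev = cur
    return phi
```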
1704.01937
Joshua Brakensiek
Joshua Brakensiek and Venkatesan Guruswami
Promise Constraint Satisfaction: Algebraic Structure and a Symmetric Boolean Dichotomy
39 pages; various revisions including removal of appendices and updates/corrections to some proofs
null
null
null
cs.CC cs.DM cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A classic result due to Schaefer (1978) classifies all constraint satisfaction problems (CSPs) over the Boolean domain as being either in $\mathsf{P}$ or $\mathsf{NP}$-hard. This paper considers a promise-problem variant of CSPs called PCSPs. A PCSP over a finite set of pairs of constraints $\Gamma$ consists of a pair $(\Psi_P, \Psi_Q)$ of CSPs with the same set of variables such that for every $(P, Q) \in \Gamma$, $P(x_{i_1}, ..., x_{i_k})$ is a clause of $\Psi_P$ if and only if $Q(x_{i_1}, ..., x_{i_k})$ is a clause of $\Psi_Q$. The promise problem $\operatorname{PCSP}(\Gamma)$ is to distinguish, given $(\Psi_P, \Psi_Q)$, between the cases $\Psi_P$ is satisfiable and $\Psi_Q$ is unsatisfiable. Many natural problems including approximate graph and hypergraph coloring can be placed in this framework. This paper is motivated by the pursuit of understanding the computational complexity of Boolean promise CSPs. As our main result, we show that $\operatorname{PCSP}(\Gamma)$ exhibits a dichotomy (it is either polynomial time solvable or $\mathsf{NP}$-hard) when the relations in $\Gamma$ are symmetric and allow for negations of variables. We achieve our dichotomy theorem by extending the weak polymorphism framework of Austrin, Guruswami, and Håstad [FOCS '14] which itself is a generalization of the algebraic approach to study CSPs. In both the algorithm and hardness portions of our proof, we incorporate new ideas and techniques not utilized in the CSP case. Furthermore, we show that the computational complexity of any promise CSP (over arbitrary finite domains) is captured entirely by its weak polymorphisms, a feature known as Galois correspondence, as well as give necessary and sufficient conditions for the structure of this set of weak polymorphisms. Such insights call us to question the existence of a general dichotomy for Boolean PCSPs.
[ { "created": "Thu, 6 Apr 2017 17:07:10 GMT", "version": "v1" }, { "created": "Thu, 6 May 2021 15:34:08 GMT", "version": "v2" } ]
2021-05-07
[ [ "Brakensiek", "Joshua", "" ], [ "Guruswami", "Venkatesan", "" ] ]
2405.03613
Jinwei Han
Jinwei Han, Yingguo Gao, Zhiwen Lin, Ke Yan, Shouhong Ding, Yuan Gao, Gui-Song Xia
Dual Relation Mining Network for Zero-Shot Learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Zero-shot learning (ZSL) aims to recognize novel classes by transferring shared semantic knowledge (e.g., attributes) from seen classes to unseen classes. Recently, attention-based methods, which align visual features and attributes via a spatial attention mechanism, have exhibited significant progress. However, these methods only explore the visual-semantic relationship in the spatial dimension, which can lead to classification ambiguity when different attributes share similar attention regions, and the semantic relationship between attributes is rarely discussed. To alleviate the above problems, we propose a Dual Relation Mining Network (DRMN) to enable more effective visual-semantic interactions and to learn the semantic relationship among attributes for knowledge transfer. Specifically, we introduce a Dual Attention Block (DAB) for visual-semantic relationship mining, which enriches visual information by multi-level feature fusion and conducts spatial attention for visual-to-semantic embedding. Moreover, an attribute-guided channel attention is utilized to decouple entangled semantic features. For semantic relationship modeling, we utilize a Semantic Interaction Transformer (SIT) to enhance the generalization of attribute representations among images. Additionally, a global classification branch is introduced as a complement to human-defined semantic attributes, and we then combine the results with attribute-based classification. Extensive experiments demonstrate that the proposed DRMN leads to new state-of-the-art performances on three standard ZSL benchmarks, i.e., CUB, SUN, and AwA2.
[ { "created": "Mon, 6 May 2024 16:31:19 GMT", "version": "v1" } ]
2024-05-07
[ [ "Han", "Jinwei", "" ], [ "Gao", "Yingguo", "" ], [ "Lin", "Zhiwen", "" ], [ "Yan", "Ke", "" ], [ "Ding", "Shouhong", "" ], [ "Gao", "Yuan", "" ], [ "Xia", "Gui-Song", "" ] ]
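A generic attribute-guided spatial attention block of the kind the abstract describes (shapes and names are assumptions, not DRMN's definition): each attribute embedding attends over spatial positions and pools a per-attribute visual vector.

```python
import torch
import torch.nn.functional as F

def attribute_attention(feat, attr_emb):
    """feat: (B, D, H, W) visual features; attr_emb: (A, D) attribute embeddings."""
    B, D, H, W = feat.shape
    scores = torch.einsum("ad,bdhw->bahw", attr_emb, feat)         # (B, A, H, W)
    attn = F.softmax(scores.flatten(2), dim=-1).view(B, -1, H, W)  # spatial softmax
    return torch.einsum("bahw,bdhw->bad", attn, feat)              # (B, A, D)
```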
1802.09189
Xingyu Fu
XingYu Fu, ZiYi Yang, XiuWen Duan
Language Distribution Prediction based on Batch Markov Monte Carlo Simulation with Migration
25 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Language spreading is a complex mechanism that involves issues like culture, economics, migration, and population. In this paper, we propose a set of methods to model the dynamics of the spreading system. To model the randomness of language spreading, we propose the Batch Markov Monte Carlo Simulation with Migration (BMMCSM) algorithm, in which each agent is treated as a language stack. The agent learns languages and migrates based on the proposed Batch Markov Property, according to the transition matrix T and migration matrix M. Since population plays a crucial role in language spreading, we also introduce a Mortality and Fertility Mechanism, which controls the birth and death of the simulated agents, into the BMMCSM algorithm. The simulation results of BMMCSM show that the numerical and geographic distribution of languages varies across time, and the change of distribution fits the world's cultural and economic development trend. Next, when we construct the matrix T, some of its entries can be directly calculated from historical statistics while others are unknown. Thus, the key to the success of BMMCSM lies in the accurate estimation of the transition matrix T, by estimating its unknown entries under the supervision of the known ones. To achieve this, we first construct a 20 by 20 by 5 factor tensor X to characterize each entry of T. Then we train a Random Forest Regressor on the known entries of T and use the trained regressor to predict the unknown entries. We choose Random Forest (RF) because, compared to a single decision tree, it mitigates overfitting, and the Shapiro test also suggests that the residuals of the RF follow a normal distribution.
[ { "created": "Mon, 26 Feb 2018 07:52:30 GMT", "version": "v1" } ]
2018-02-27
[ [ "Fu", "XingYu", "" ], [ "Yang", "ZiYi", "" ], [ "Duan", "XiuWen", "" ] ]
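The Markov core of such a simulation is simple to sketch; the three-language set and the transition matrix below are placeholders, and migration, mortality, and fertility are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
languages = ["en", "hi", "zh"]                     # placeholder language set
T = np.array([[0.90, 0.05, 0.05],                  # T[i, j] = P(move to j | in i)
              [0.10, 0.85, 0.05],
              [0.08, 0.02, 0.90]])

agents = rng.integers(0, len(languages), size=1000)
for step in range(50):                             # one batch transition per step
    agents = np.array([rng.choice(len(languages), p=T[a]) for a in agents])

print(np.bincount(agents, minlength=3) / len(agents))  # final language shares
```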
2401.14983
Dmitry O. Litvintsev
Dmitry Litvintsev (1), Chitrapu Krishnaveni (2), Svenja Meyer (3), Paul Millar (3), Tigran Mkrtchyan (1), Lea Morschel (3), Albert Rossi (1), and Marina Sahakyan (3) ((1) Fermi National Accelerator Laboratory, (2) Linkoping University, (3) Deutsches Elektronen-Synchrotron DESY)
Quota management in dCache or making a perfectly normal file system normal
26th Intl Conf Computing High Energy & Nuclear Phys (CHEP 2023)
null
null
FERMILAB-CONF-23-530-CSAID
cs.DB hep-ex
http://creativecommons.org/licenses/by/4.0/
dCache (https://dcache.org) is a highly scalable storage system providing location-independent access to data. The data are stored across multiple data servers as complete files presented to the end user via a single-rooted namespace. From its inception, dCache has been designed as a caching disk buffer in front of a tertiary tape storage system, with the assumption that the latter has virtually unlimited capacity. dCache can also be configured as a disk-only storage system with no tape backend. Owing to the idea that a tape resource is infinite, or limited in practice only by budget considerations, the system has never restricted how much data can be stored on tape. Likewise, in the disk-only configuration, the capacity of the system is limited only by the aggregate disk capacity of the data servers. In a multi-user environment, however, this has become problematic. This presentation describes the design and implementation of a user- and group-based quota system that allows tape and disk space allocations to be managed as part of the dCache namespace.
[ { "created": "Fri, 26 Jan 2024 16:16:51 GMT", "version": "v1" } ]
2024-01-29
[ [ "Litvintsev", "Dmitry", "" ], [ "Krishnaveni", "Chitrapu", "" ], [ "Meyer", "Svenja", "" ], [ "Millar", "Paul", "" ], [ "Mkrtchyan", "Tigran", "" ], [ "Morschel", "Lea", "" ], [ "Rossi", "Albert", "" ], [ "Sahakyan", "Marina", "" ] ]
dCache (https://dcache.org) is a highly scalable storage system providing location-independent access to data. The data are stored across multiple data servers as complete files presented to the end user via a single-rooted namespace. From its inception, dCache has been designed as a caching disk buffer in front of a tertiary tape storage system, with the assumption that the latter has virtually unlimited capacity. dCache can also be configured as a disk-only storage system with no tape backend. Owing to the idea that a tape resource is infinite, or limited in practice only by budget considerations, the system has never restricted how much data can be stored on tape. Likewise, in the disk-only configuration, the capacity of the system is limited only by the aggregate disk capacity of the data servers. In a multi-user environment, however, this has become problematic. This presentation describes the design and implementation of a user- and group-based quota system that allows tape and disk space allocations to be managed as part of the dCache namespace.
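A minimal sketch of the kind of user- and group-based quota accounting described above; the class names and the hard-limit admission policy are illustrative assumptions, not dCache's actual interfaces.

```python
# Toy user/group quota accounting: a write is admitted only if both the
# user's and the group's quota allow it. Purely an illustrative sketch.
from dataclasses import dataclass, field

@dataclass
class Quota:
    limit_bytes: int          # maximum space the owner may consume
    used_bytes: int = 0       # space currently accounted to the owner

    def can_allocate(self, size: int) -> bool:
        return self.used_bytes + size <= self.limit_bytes

@dataclass
class QuotaManager:
    user_quotas: dict = field(default_factory=dict)   # uid -> Quota
    group_quotas: dict = field(default_factory=dict)  # gid -> Quota

    def charge(self, uid: int, gid: int, size: int) -> bool:
        """Admit a write of `size` bytes only if both quotas allow it."""
        uq, gq = self.user_quotas.get(uid), self.group_quotas.get(gid)
        for q in (uq, gq):
            if q is not None and not q.can_allocate(size):
                return False
        for q in (uq, gq):
            if q is not None:
                q.used_bytes += size
        return True

mgr = QuotaManager({1000: Quota(10**9)}, {100: Quota(10**10)})
print(mgr.charge(1000, 100, 5 * 10**8))  # True: within both limits
print(mgr.charge(1000, 100, 6 * 10**8))  # False: user quota exceeded
```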
1907.12014
Takahiro Hirofuchi
Takahiro Hirofuchi and Ryousei Takano
The Preliminary Evaluation of a Hypervisor-based Virtualization Mechanism for Intel Optane DC Persistent Memory Module
null
null
null
null
cs.OS cs.AR cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-volatile memory (NVM) technologies, being accessible in the same manner as DRAM, are considered indispensable for expanding main memory capacities. Intel Optane DCPMM is a long-awaited product that drastically increases main memory capacities. However, a substantial performance gap exists between DRAM and DCPMM. In our experiments, the read/write latencies of DCPMM were 400% and 407% higher than those of DRAM, respectively. The read/write bandwidths were 37% and 8% of those of DRAM. This performance gap in main memory presents a new challenge to researchers; we need new system software technology supporting the emerging hybrid memory architecture. In this paper, we present RAMinate, a hypervisor-based virtualization mechanism for hybrid memory systems and a key technology for addressing the performance gap in main memory systems. It provides great flexibility in memory management and maximizes the performance of virtual machines (VMs) by dynamically optimizing memory mappings. Through experiments, we confirmed that even when only 1% of a VM's RAM is DRAM, the performance degradation of the VM is drastically alleviated by memory mapping optimization. The elapsed time to build the Linux kernel in the VM was 557 seconds, only a 13% increase over the 100% DRAM case (i.e., 495 seconds). When the optimization mechanism was disabled, the elapsed time increased to 624 seconds (i.e., a 26% increase over the 100% DRAM case).
[ { "created": "Sun, 28 Jul 2019 05:06:24 GMT", "version": "v1" } ]
2019-07-30
[ [ "Hirofuchi", "Takahiro", "" ], [ "Takano", "Ryousei", "" ] ]
Non-volatile memory (NVM) technologies, being accessible in the same manner as DRAM, are considered indispensable for expanding main memory capacities. Intel Optane DCPMM is a long-awaited product that drastically increases main memory capacities. However, a substantial performance gap exists between DRAM and DCPMM. In our experiments, the read/write latencies of DCPMM were 400% and 407% higher than those of DRAM, respectively. The read/write bandwidths were 37% and 8% of those of DRAM. This performance gap in main memory presents a new challenge to researchers; we need new system software technology supporting the emerging hybrid memory architecture. In this paper, we present RAMinate, a hypervisor-based virtualization mechanism for hybrid memory systems and a key technology for addressing the performance gap in main memory systems. It provides great flexibility in memory management and maximizes the performance of virtual machines (VMs) by dynamically optimizing memory mappings. Through experiments, we confirmed that even when only 1% of a VM's RAM is DRAM, the performance degradation of the VM is drastically alleviated by memory mapping optimization. The elapsed time to build the Linux kernel in the VM was 557 seconds, only a 13% increase over the 100% DRAM case (i.e., 495 seconds). When the optimization mechanism was disabled, the elapsed time increased to 624 seconds (i.e., a 26% increase over the 100% DRAM case).
cs/0608044
Jay Kumar Sundararajan
Jay Kumar Sundararajan, Muriel Medard, MinJi Kim, Atilla Eryilmaz, Devavrat Shah, Ralf Koetter
Network Coding in a Multicast Switch
9 pages, submitted to IEEE INFOCOM 2007
null
10.1109/INFCOM.2007.137
null
cs.NI cs.IT math.IT
null
We consider the problem of serving multicast flows in a crossbar switch. We show that linear network coding across packets of a flow can sustain traffic patterns that cannot be served if network coding were not allowed. Thus, network coding leads to a larger rate region in a multicast crossbar switch. We demonstrate a traffic pattern which requires a switch speedup if coding is not allowed, whereas, with coding the speedup requirement is eliminated completely. In addition to throughput benefits, coding simplifies the characterization of the rate region. We give a graph-theoretic characterization of the rate region with fanout splitting and intra-flow coding, in terms of the stable set polytope of the 'enhanced conflict graph' of the traffic pattern. Such a formulation is not known in the case of fanout splitting without coding. We show that computing the offline schedule (i.e. using prior knowledge of the flow arrival rates) can be reduced to certain graph coloring problems. Finally, we propose online algorithms (i.e. using only the current queue occupancy information) for multicast scheduling based on our graph-theoretic formulation. In particular, we show that a maximum weighted stable set algorithm stabilizes the queues for all rates within the rate region.
[ { "created": "Tue, 8 Aug 2006 19:59:33 GMT", "version": "v1" } ]
2016-11-17
[ [ "Sundararajan", "Jay Kumar", "" ], [ "Medard", "Muriel", "" ], [ "Kim", "MinJi", "" ], [ "Eryilmaz", "Atilla", "" ], [ "Shah", "Devavrat", "" ], [ "Koetter", "Ralf", "" ] ]
We consider the problem of serving multicast flows in a crossbar switch. We show that linear network coding across packets of a flow can sustain traffic patterns that cannot be served if network coding were not allowed. Thus, network coding leads to a larger rate region in a multicast crossbar switch. We demonstrate a traffic pattern which requires a switch speedup if coding is not allowed, whereas, with coding the speedup requirement is eliminated completely. In addition to throughput benefits, coding simplifies the characterization of the rate region. We give a graph-theoretic characterization of the rate region with fanout splitting and intra-flow coding, in terms of the stable set polytope of the 'enhanced conflict graph' of the traffic pattern. Such a formulation is not known in the case of fanout splitting without coding. We show that computing the offline schedule (i.e. using prior knowledge of the flow arrival rates) can be reduced to certain graph coloring problems. Finally, we propose online algorithms (i.e. using only the current queue occupancy information) for multicast scheduling based on our graph-theoretic formulation. In particular, we show that a maximum weighted stable set algorithm stabilizes the queues for all rates within the rate region.
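The online scheduling rule mentioned above (serve a maximum weighted stable set of the conflict graph, with queue occupancies as weights) can be illustrated by brute force on a toy instance; the four-flow conflict graph and queue lengths below are assumptions for the example.

```python
# Brute-force illustration of maximum weighted stable set scheduling on a
# tiny conflict graph. Fine for n = 4; the paper's point is the rule
# itself, which stabilizes all rates within the rate region.
from itertools import combinations

n = 4                               # flows 0..3
edges = {(0, 1), (1, 2), (2, 3)}    # conflicting flow pairs (shared ports)
queues = [5, 3, 4, 2]               # current queue occupancies = weights

def is_stable(s):
    """True if no two flows in s conflict (an independent set)."""
    return all((a, b) not in edges and (b, a) not in edges
               for a, b in combinations(s, 2))

best = max((s for r in range(n + 1) for s in combinations(range(n), r)
            if is_stable(s)),
           key=lambda s: sum(queues[i] for i in s))
print("serve flows:", best)  # (0, 2), total weight 9
```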
2311.04578
Tuan Thanh Nguyen
Tuan Thanh Nguyen, Kui Cai, and Paul H. Siegel
A New Version of q-ary Varshamov-Tenengolts Codes with More Efficient Encoders: The Differential VT Codes and The Differential Shifted VT Codes
arXiv admin note: substantial text overlap with arXiv:2212.10721
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of correcting deletions and insertions has recently received significantly increased attention due to DNA-based data storage technology, which suffers from deletions and insertions with extremely high probability. In this work, we study the problem of constructing non-binary burst-deletion/insertion correcting codes. In particular, for the quaternary alphabet, our designed codes are suited to correcting a burst of deletions/insertions in DNA storage. Non-binary codes correcting a single deletion or insertion were introduced by Tenengolts [1984], and the results were extended to correct a fixed-length burst of deletions or insertions by Schoeny et al. [2017]. Recently, Wang et al. [2021] proposed constructions of non-binary codes of length n, correcting a burst of length at most two for q-ary alphabets with redundancy log n+O(log q log log n) bits, for arbitrary even q. The common idea in those constructions is to convert non-binary sequences into binary sequences, and the error decoding algorithms for the q-ary sequences are mainly based on successfully recovering the corresponding binary sequences. In this work, we look at a natural solution in which the error detection and correction algorithms are performed directly over q-ary sequences, and for certain cases, our codes provide a more efficient encoder with lower redundancy than the best-known encoders in the literature.
[ { "created": "Wed, 8 Nov 2023 10:19:10 GMT", "version": "v1" } ]
2023-11-09
[ [ "Nguyen", "Tuan Thanh", "" ], [ "Cai", "Kui", "" ], [ "Siegel", "Paul H.", "" ] ]
The problem of correcting deletions and insertions has recently received significantly increased attention due to DNA-based data storage technology, which suffers from deletions and insertions with extremely high probability. In this work, we study the problem of constructing non-binary burst-deletion/insertion correcting codes. In particular, for the quaternary alphabet, our designed codes are suited to correcting a burst of deletions/insertions in DNA storage. Non-binary codes correcting a single deletion or insertion were introduced by Tenengolts [1984], and the results were extended to correct a fixed-length burst of deletions or insertions by Schoeny et al. [2017]. Recently, Wang et al. [2021] proposed constructions of non-binary codes of length n, correcting a burst of length at most two for q-ary alphabets with redundancy log n+O(log q log log n) bits, for arbitrary even q. The common idea in those constructions is to convert non-binary sequences into binary sequences, and the error decoding algorithms for the q-ary sequences are mainly based on successfully recovering the corresponding binary sequences. In this work, we look at a natural solution in which the error detection and correction algorithms are performed directly over q-ary sequences, and for certain cases, our codes provide a more efficient encoder with lower redundancy than the best-known encoders in the literature.
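For context, the following sketch shows the classic binary Varshamov-Tenengolts (VT) syndrome and single-deletion decoding that this line of work builds on; the paper's q-ary differential constructions are more involved and are not reproduced here.

```python
# Classic binary VT code: VT_a(n) is the set of binary words x of length n
# with sum(i * x_i) = a (mod n+1) over 1-indexed positions; it corrects a
# single deletion. The specific word below is an illustrative assumption.

def vt_syndrome(x):
    """sum of i * x_i over 1-indexed positions, mod (len(x) + 1)."""
    return sum(i * b for i, b in enumerate(x, start=1)) % (len(x) + 1)

x = [1, 0, 1, 1, 0, 1]
a = vt_syndrome(x)       # here a == 0

y = x[:2] + x[3:]        # channel deletes the bit at position 3

# Decode by trying every single re-insertion; exactly one codeword of
# VT_a(n) is consistent with y, and it is the transmitted word.
candidates = [y[:i] + [b] + y[i:] for i in range(len(y) + 1) for b in (0, 1)]
recovered = [c for c in candidates if vt_syndrome(c) == a]
assert x in recovered and all(c == x for c in recovered)
print("recovered:", recovered[0])
```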
2003.04339
Zhishuai Guo
Zhishuai Guo, Yan Yan and Tianbao Yang
Revisiting SGD with Increasingly Weighted Averaging: Optimization and Generalization Perspectives
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic gradient descent (SGD) has been widely studied in the literature from different angles, and is commonly employed for solving many big data machine learning problems. However, the averaging technique, which combines all iterative solutions into a single solution, is still under-explored. While some increasingly weighted averaging schemes have been considered in the literature, existing works are mostly restricted to strongly convex objective functions and the convergence of optimization error. It remains unclear how these averaging schemes affect the convergence of {\it both optimization error and generalization error} (two equally important components of testing error) for {\bf non-strongly convex objectives, including non-convex problems}. In this paper, we {\it fill the gap} by comprehensively analyzing the increasingly weighted averaging on convex, strongly convex and non-convex objective functions in terms of both optimization error and generalization error. In particular, we analyze a family of increasingly weighted averaging, where the weight for the solution at iteration $t$ is proportional to $t^{\alpha}$ ($\alpha > 0$). We show how $\alpha$ affects the optimization error and the generalization error, and exhibit the trade-off caused by $\alpha$. Experiments have demonstrated this trade-off and the effectiveness of polynomially increased weighted averaging compared with other averaging schemes for a wide range of problems including deep learning.
[ { "created": "Mon, 9 Mar 2020 18:14:00 GMT", "version": "v1" }, { "created": "Thu, 30 Apr 2020 03:07:15 GMT", "version": "v2" }, { "created": "Wed, 27 May 2020 01:21:16 GMT", "version": "v3" } ]
2020-05-28
[ [ "Guo", "Zhishuai", "" ], [ "Yan", "Yan", "" ], [ "Yang", "Tianbao", "" ] ]
Stochastic gradient descent (SGD) has been widely studied in the literature from different angles, and is commonly employed for solving many big data machine learning problems. However, the averaging technique, which combines all iterative solutions into a single solution, is still under-explored. While some increasingly weighted averaging schemes have been considered in the literature, existing works are mostly restricted to strongly convex objective functions and the convergence of optimization error. It remains unclear how these averaging schemes affect the convergence of {\it both optimization error and generalization error} (two equally important components of testing error) for {\bf non-strongly convex objectives, including non-convex problems}. In this paper, we {\it fill the gap} by comprehensively analyzing the increasingly weighted averaging on convex, strongly convex and non-convex objective functions in terms of both optimization error and generalization error. In particular, we analyze a family of increasingly weighted averaging, where the weight for the solution at iteration $t$ is proportional to $t^{\alpha}$ ($\alpha > 0$). We show how $\alpha$ affects the optimization error and the generalization error, and exhibit the trade-off caused by $\alpha$. Experiments have demonstrated this trade-off and the effectiveness of polynomially increased weighted averaging compared with other averaging schemes for a wide range of problems including deep learning.
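The averaging scheme analyzed above is easy to state in code: keep a running average in which the iterate at step t receives weight proportional to t^alpha. In the sketch below, the least-squares objective, step size, and alpha value are illustrative assumptions.

```python
# SGD with polynomially increasing averaging weights (weight ~ t**alpha)
# on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha, eta, T = 500, 10, 2.0, 0.05, 2000
A = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
b = A @ w_star + 0.1 * rng.normal(size=n)

w = np.zeros(d)
w_avg, weight_sum = np.zeros(d), 0.0
for t in range(1, T + 1):
    i = rng.integers(n)                       # sample one data point
    grad = (A[i] @ w - b[i]) * A[i]           # stochastic gradient
    w -= eta / np.sqrt(t) * grad              # SGD step
    c = t ** alpha                            # weight proportional to t^alpha
    weight_sum += c
    w_avg += (c / weight_sum) * (w - w_avg)   # online weighted average

print("last iterate err :", np.linalg.norm(w - w_star))
print("t^alpha avg err  :", np.linalg.norm(w_avg - w_star))
```

The incremental update keeps the average exact without storing past iterates: after adding weight c, the new average is the old one pulled toward the current iterate by c divided by the total weight so far.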
2312.11062
Frank Mtumbuka
Frank Mtumbuka and Steven Schockaert
Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM). However, a key challenge arises from the fact that relation extraction cannot straightforwardly be reduced to sequence or token classification. Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings. Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way. In particular, we experiment with appending a prompt with a [MASK] token, whose contextualised representation is treated as a relation embedding. While, on its own, this strategy significantly underperforms the aforementioned approach, we find that the resulting relation embeddings are highly complementary to what is captured by embeddings of the head and tail entity. By jointly considering both types of representations, we end up with a simple model that outperforms the state-of-the-art across several relation extraction benchmarks.
[ { "created": "Mon, 18 Dec 2023 09:58:19 GMT", "version": "v1" } ]
2023-12-19
[ [ "Mtumbuka", "Frank", "" ], [ "Schockaert", "Steven", "" ] ]
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM). However, a key challenge arises from the fact that relation extraction cannot straightforwardly be reduced to sequence or token classification. Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings. Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way. In particular, we experiment with appending a prompt with a [MASK] token, whose contextualised representation is treated as a relation embedding. While, on its own, this strategy significantly underperforms the aforementioned approach, we find that the resulting relation embeddings are highly complementary to what is captured by embeddings of the head and tail entity. By jointly considering both types of representations, we end up with a simple model that outperforms the state-of-the-art across several relation extraction benchmarks.
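A rough sketch of the two encoding strategies being contrasted, using a generic masked language model via the Hugging Face transformers library; the prompt wording, the bert-base-uncased checkpoint, and the final concatenation are assumptions for illustration, not the authors' exact architecture.

```python
# Entity embeddings vs. a [MASK]-prompt relation embedding, then both
# combined, as the abstract suggests. Illustrative sketch only.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Marie Curie was born in Warsaw."
prompt = sentence + " The relation between Marie Curie and Warsaw is [MASK]."
enc = tok(prompt, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]   # (seq_len, 768)

ids = enc["input_ids"][0].tolist()

# Relation embedding: the contextual vector at the [MASK] position.
mask_vec = hidden[ids.index(tok.mask_token_id)]

def span_vec(text):
    """Entity embedding: mean vector of the span's first occurrence."""
    span = tok(text, add_special_tokens=False)["input_ids"]
    for i in range(len(ids) - len(span) + 1):
        if ids[i:i + len(span)] == span:
            return hidden[i:i + len(span)].mean(dim=0)
    raise ValueError("span not found")

head, tail = span_vec("Marie Curie"), span_vec("Warsaw")
joint = torch.cat([head, tail, mask_vec])  # input to a relation classifier
print(joint.shape)  # torch.Size([2304])
```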
2203.15480
Yuwen Deng
Yuwen Deng, Donghai Guan, Yanyu Chen, Weiwei Yuan, Jiemin Ji, Mingqiang Wei
SAR-ShipNet: SAR-Ship Detection Neural Network via Bidirectional Coordinate Attention and Multi-resolution Feature Fusion
This paper was accepted by the International Conference on Acoustics, Speech, and Signal Processing(ICASSP) 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies a practically meaningful ship detection problem for synthetic aperture radar (SAR) images using neural networks. We broadly extract different types of SAR image features and raise the intriguing question of whether these extracted features are beneficial to (1) suppressing data variations (e.g., complex land-sea backgrounds, scattered noise) in real-world SAR images, and (2) enhancing the features of ships, which are small objects with varying aspect (length-width) ratios, thereby improving ship detection. To answer this question, we propose a SAR ship detection neural network (called SAR-ShipNet for short), developing Bidirectional Coordinate Attention (BCA) and Multi-resolution Feature Fusion (MRF) on top of CenterNet. Moreover, considering the varying length-width ratios of arbitrary ships, we adopt an elliptical Gaussian probability distribution in CenterNet to improve the performance of the base detector. Experimental results on the public SAR-Ship dataset show that our SAR-ShipNet achieves competitive advantages in both speed and accuracy.
[ { "created": "Tue, 29 Mar 2022 12:27:04 GMT", "version": "v1" } ]
2022-03-30
[ [ "Deng", "Yuwen", "" ], [ "Guan", "Donghai", "" ], [ "Chen", "Yanyu", "" ], [ "Yuan", "Weiwei", "" ], [ "Ji", "Jiemin", "" ], [ "Wei", "Mingqiang", "" ] ]
This paper studies a practically meaningful ship detection problem for synthetic aperture radar (SAR) images using neural networks. We broadly extract different types of SAR image features and raise the intriguing question of whether these extracted features are beneficial to (1) suppressing data variations (e.g., complex land-sea backgrounds, scattered noise) in real-world SAR images, and (2) enhancing the features of ships, which are small objects with varying aspect (length-width) ratios, thereby improving ship detection. To answer this question, we propose a SAR ship detection neural network (called SAR-ShipNet for short), developing Bidirectional Coordinate Attention (BCA) and Multi-resolution Feature Fusion (MRF) on top of CenterNet. Moreover, considering the varying length-width ratios of arbitrary ships, we adopt an elliptical Gaussian probability distribution in CenterNet to improve the performance of the base detector. Experimental results on the public SAR-Ship dataset show that our SAR-ShipNet achieves competitive advantages in both speed and accuracy.
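The elliptical Gaussian target mentioned above differs from CenterNet's circular Gaussian by using a separate spread per axis; a minimal sketch follows, with the map size and the sigma-from-box-side rule as illustrative assumptions.

```python
# Elliptical Gaussian target heatmap: the spread differs along the two
# axes to reflect a ship's length-width ratio.
import numpy as np

def elliptical_gaussian(shape, center, sigma_x, sigma_y):
    """Heatmap peaking at `center`, with axis-wise standard deviations."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    return np.exp(-(((xs - cx) ** 2) / (2 * sigma_x ** 2)
                    + ((ys - cy) ** 2) / (2 * sigma_y ** 2)))

# A 40x12 (length x width) ship centered at (64, 32) on a 128x128 map;
# tying each sigma to the matching box side is an assumed heuristic.
box_w, box_h = 40, 12
heat = elliptical_gaussian((128, 128), (64, 32),
                           sigma_x=box_w / 6, sigma_y=box_h / 6)
print(heat.shape, heat.max(), round(heat[32, 64], 3))  # peak 1.0 at center
```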
2011.05354
Maxime Mulamba Ke Tchomba
Maxime Mulamba, Jayanta Mandi, Michelangelo Diligenti, Michele Lombardi, Victor Bucarey, Tias Guns
Contrastive Losses and Solution Caching for Predict-and-Optimize
Accepted at IJCAI2021
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many decision-making processes involve solving a combinatorial optimization problem with uncertain input that can be estimated from historic data. Recently, problems in this class have been successfully addressed via end-to-end learning approaches, which rely on solving one optimization problem for each training instance at every epoch. In this context, we provide two distinct contributions. First, we use a Noise Contrastive approach to motivate a family of surrogate loss functions, based on viewing non-optimal solutions as negative examples. Second, we address a major bottleneck of all predict-and-optimize approaches, i.e. the need to frequently recompute optimal solutions at training time. This is done via a solver-agnostic solution caching scheme, and by replacing optimization calls with a lookup in the solution cache. The method is formally based on an inner approximation of the feasible space and, combined with a cache lookup strategy, provides a controllable trade-off between training time and accuracy of the loss approximation. We empirically show that even a very slow growth rate is enough to match the quality of state-of-the-art methods, at a fraction of the computational cost.
[ { "created": "Tue, 10 Nov 2020 19:09:12 GMT", "version": "v1" }, { "created": "Tue, 6 Jul 2021 10:39:33 GMT", "version": "v2" } ]
2021-07-07
[ [ "Mulamba", "Maxime", "" ], [ "Mandi", "Jayanta", "" ], [ "Diligenti", "Michelangelo", "" ], [ "Lombardi", "Michele", "" ], [ "Bucarey", "Victor", "" ], [ "Guns", "Tias", "" ] ]
Many decision-making processes involve solving a combinatorial optimization problem with uncertain input that can be estimated from historic data. Recently, problems in this class have been successfully addressed via end-to-end learning approaches, which rely on solving one optimization problem for each training instance at every epoch. In this context, we provide two distinct contributions. First, we use a Noise Contrastive approach to motivate a family of surrogate loss functions, based on viewing non-optimal solutions as negative examples. Second, we address a major bottleneck of all predict-and-optimize approaches, i.e. the need to frequently recompute optimal solutions at training time. This is done via a solver-agnostic solution caching scheme, and by replacing optimization calls with a lookup in the solution cache. The method is formally based on an inner approximation of the feasible space and, combined with a cache lookup strategy, provides a controllable trade-off between training time and accuracy of the loss approximation. We empirically show that even a very slow growth rate is enough to match the quality of state-of-the-art methods, at a fraction of the computational cost.
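The two ideas above (a contrastive surrogate loss against cached negative solutions, and a cache that replaces most solver calls) can be sketched on a toy problem; the tiny explicit feasible set and the hinge-style loss are assumptions for the example.

```python
# Noise-contrastive surrogate loss with a solution cache: push the true
# optimum to have lower predicted cost than cached non-optimal solutions.
import numpy as np

rng = np.random.default_rng(0)
feasible = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])  # toy feasible set

def solve(cost):
    """Exact oracle: feasible point of minimum cost (the expensive call)."""
    return feasible[np.argmin(feasible @ cost)]

cache = [row for row in feasible[:2]]   # seed the cache with a few solutions

def contrastive_loss(pred_cost, true_sol):
    """Hinge-style loss against cached negatives (non-optimal solutions)."""
    margins = [pred_cost @ true_sol - pred_cost @ s
               for s in cache if not np.array_equal(s, true_sol)]
    return sum(max(0.0, m) for m in margins)

true_cost = np.array([1.0, 3.0, 2.0])
y_star = solve(true_cost)               # solver call; its result is cached
cache.append(y_star)

pred = np.array([2.0, 1.0, 1.5])        # some model's predicted costs
print("loss:", contrastive_loss(pred, y_star))  # 1.0 on this toy instance
```

During training, most instances would skip `solve` entirely and take the best element of the cache instead, growing the cache only occasionally, which is the controllable time/accuracy trade-off the abstract describes.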
1709.04421
Gerg\"o Barany
Gerg\"o Barany
Liveness-Driven Random Program Generation
Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854)
null
null
LOPSTR/2017/6
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Randomly generated programs are popular for testing compilers and program analysis tools, with hundreds of bugs in real-world C compilers found by random testing. However, existing random program generators may generate large amounts of dead code (computations whose result is never used). This leaves relatively little code to exercise a target compiler's more complex optimizations. To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code. The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix.
[ { "created": "Wed, 13 Sep 2017 17:06:10 GMT", "version": "v1" } ]
2017-09-14
[ [ "Barany", "Gergö", "" ] ]
Randomly generated programs are popular for testing compilers and program analysis tools, with hundreds of bugs in real-world C compilers found by random testing. However, existing random program generators may generate large amounts of dead code (computations whose result is never used). This leaves relatively little code to exercise a target compiler's more complex optimizations. To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code. The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix.
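A minimal sketch of liveness-driven generation: build statements backwards from the return, only ever assigning to a variable that is still live. The expression grammar and program shape below are illustrative assumptions, far simpler than the Frama-C plugin described above.

```python
# Liveness-driven random program generation: no assignment is emitted
# unless its target will be read later, so no dead code is produced.
import random

random.seed(1)
VARS = ["a", "b", "c"]

def gen_expr(depth=0):
    """Small random arithmetic expression over VARS and constants."""
    if depth > 2 or random.random() < 0.4:
        return random.choice(VARS + [str(random.randint(1, 9))])
    op = random.choice(["+", "-", "*"])
    return f"({gen_expr(depth + 1)} {op} {gen_expr(depth + 1)})"

def gen_program(n_stmts=4):
    ret = random.choice(VARS)
    stmts = [f"return {ret};"]
    live = {ret}                                # liveness flows backwards
    for _ in range(n_stmts):
        target = random.choice(sorted(live))    # only assign live variables
        stmt = f"{target} = {gen_expr()};"
        stmts.insert(0, stmt)
        live.discard(target)                    # killed by the assignment
        live |= {v for v in VARS if v in stmt.split('=', 1)[1]}  # gen set
        if not live:
            break  # anything above here would be dead code; stop
    init = "".join(f"{v} = 1; " for v in sorted(live))
    return init + "\n" + "\n".join(stmts)

print(gen_program())
```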
1804.02245
Christian Torrero
Christian Torrero, Carlo Caprini and Daniele Miorandi
A Wikipedia-based approach to profiling activities on social media
8 pages, 5 figures - Work presented at the WikiWorkshop2018 during the Web Conference 2018 hosted in Lyon (France) from April 23 till April 27
null
null
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online user profiling is a very active research field, attracting great interest from both scientists and practitioners. In this paper, in particular, we look at approaches that mine users' social media activities to create rich user profiles. We consider the case in which profiling is meant to characterize a user's interests along a set of predefined dimensions (which we refer to as categories). A conventional way to do so is to use semantic analysis techniques to (i) extract relevant entities from users' online conversations and (ii) map those entities to the predefined categories of interest. While entity extraction is a well-understood topic, the mapping step lacks a standardized reference approach. In this paper we propose using graph navigation techniques on the Wikipedia tree to achieve such a mapping. A prototypical implementation is presented and some preliminary results are reported.
[ { "created": "Fri, 6 Apr 2018 13:06:13 GMT", "version": "v1" }, { "created": "Mon, 9 Apr 2018 08:59:23 GMT", "version": "v2" }, { "created": "Wed, 26 Sep 2018 16:34:40 GMT", "version": "v3" } ]
2018-09-27
[ [ "Torrero", "Christian", "" ], [ "Caprini", "Carlo", "" ], [ "Miorandi", "Daniele", "" ] ]
Online user profiling is a very active research field, attracting great interest from both scientists and practitioners. In this paper, in particular, we look at approaches that mine users' social media activities to create rich user profiles. We consider the case in which profiling is meant to characterize a user's interests along a set of predefined dimensions (which we refer to as categories). A conventional way to do so is to use semantic analysis techniques to (i) extract relevant entities from users' online conversations and (ii) map those entities to the predefined categories of interest. While entity extraction is a well-understood topic, the mapping step lacks a standardized reference approach. In this paper we propose using graph navigation techniques on the Wikipedia tree to achieve such a mapping. A prototypical implementation is presented and some preliminary results are reported.
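The mapping step described above can be sketched as a breadth-first walk up the category graph until a predefined profiling category is hit; the tiny hand-written graph stands in for the real Wikipedia tree and is purely an assumption for the example.

```python
# BFS from an entity's category up to the nearest predefined profiling
# category, on a toy stand-in for the Wikipedia category graph.
from collections import deque

parents = {                       # child category -> parent categories
    "FC Barcelona": ["Football clubs"],
    "Football clubs": ["Football", "Sports clubs"],
    "Football": ["Sports"],
    "Sports clubs": ["Sports"],
    "Guitars": ["Musical instruments"],
    "Musical instruments": ["Music"],
}
TARGETS = {"Sports", "Music", "Politics"}  # predefined profiling dimensions

def map_to_category(start):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node in TARGETS:
            return node
        for p in parents.get(node, []):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return None                   # entity maps to no profiling dimension

print(map_to_category("FC Barcelona"))  # Sports
print(map_to_category("Guitars"))       # Music
```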
2105.01400
Iftach Haitner
Itay Berman and Iftach Haitner and Aris Tentes
Coin Flipping of \emph{Any} Constant Bias Implies One-Way Functions
This is the final draft of this paper. The full version was published in the Journal of the ACM 2018. An extended abstract of this work appeared in the proceedings of STOC 2014
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that the existence of a coin-flipping protocol safe against \emph{any} non-trivial constant bias (\eg $.499$) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias $\frac{\sqrt2 -1}2 - o(1) \approx .207$. Unlike the result of Haitner and Omri, our result also holds for \emph{weak} coin-flipping protocols.
[ { "created": "Tue, 4 May 2021 10:26:22 GMT", "version": "v1" } ]
2021-05-05
[ [ "Berman", "Itay", "" ], [ "Haitner", "Iftach", "" ], [ "Tentes", "Aris", "" ] ]
We show that the existence of a coin-flipping protocol safe against \emph{any} non-trivial constant bias (\eg $.499$) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias $\frac{\sqrt2 -1}2 - o(1) \approx .207$. Unlike the result of Haitner and Omri, our result also holds for \emph{weak} coin-flipping protocols.
2302.13279
Xingchao Yang
Xingchao Yang, Takafumi Taketomi, Yoshihiro Kanamori
Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition
Eurographics 2023
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Facial makeup enriches the beauty of not only real humans but also virtual characters; therefore, makeup for 3D facial models is highly in demand in productions. However, painting directly on 3D faces and capturing real-world makeup are costly, and extracting makeup from 2D images often struggles with shading effects and occlusions. This paper presents the first method for extracting makeup for 3D facial models from a single makeup portrait. Our method consists of the following three steps. First, we exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse materials such as geometry and diffuse/specular albedos that are represented in the UV space. Second, we refine the coarse materials, which may have missing pixels due to occlusions. We apply inpainting and optimization. Finally, we extract the bare skin, makeup, and an alpha matte from the diffuse albedo. Our method offers various applications for not only 3D facial models but also 2D portrait images. The extracted makeup is well-aligned in the UV space, from which we build a large-scale makeup dataset and a parametric makeup model for 3D faces. Our disentangled materials also yield robust makeup transfer and illumination-aware makeup interpolation/removal without a reference image.
[ { "created": "Sun, 26 Feb 2023 09:48:57 GMT", "version": "v1" } ]
2023-02-28
[ [ "Yang", "Xingchao", "" ], [ "Taketomi", "Takafumi", "" ], [ "Kanamori", "Yoshihiro", "" ] ]
Facial makeup enriches the beauty of not only real humans but also virtual characters; therefore, makeup for 3D facial models is highly in demand in productions. However, painting directly on 3D faces and capturing real-world makeup are costly, and extracting makeup from 2D images often struggles with shading effects and occlusions. This paper presents the first method for extracting makeup for 3D facial models from a single makeup portrait. Our method consists of the following three steps. First, we exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse materials such as geometry and diffuse/specular albedos that are represented in the UV space. Second, we refine the coarse materials, which may have missing pixels due to occlusions. We apply inpainting and optimization. Finally, we extract the bare skin, makeup, and an alpha matte from the diffuse albedo. Our method offers various applications for not only 3D facial models but also 2D portrait images. The extracted makeup is well-aligned in the UV space, from which we build a large-scale makeup dataset and a parametric makeup model for 3D faces. Our disentangled materials also yield robust makeup transfer and illumination-aware makeup interpolation/removal without a reference image.
2211.06569
Xiaoqing Tan
Xiaoqing Tan, Zhengling Qi, Christopher W. Seymour, Lu Tang
RISE: Robust Individualized Decision Learning with Sensitive Variables
Accepted at NeurIPS 2022
null
null
null
cs.LG stat.AP stat.ME stat.ML
http://creativecommons.org/licenses/by/4.0/
This paper introduces RISE, a robust individualized decision learning framework with sensitive variables, where sensitive variables are collectible data and important to the intervention decision, but their inclusion in decision making is prohibited due to reasons such as delayed availability or fairness concerns. A naive baseline is to ignore these sensitive variables in learning decision rules, leading to significant uncertainty and bias. To address this, we propose a decision learning framework to incorporate sensitive variables during offline training but not include them in the input of the learned decision rule during model deployment. Specifically, from a causal perspective, the proposed framework intends to improve the worst-case outcomes of individuals caused by sensitive variables that are unavailable at the time of decision. Unlike most existing literature that uses mean-optimal objectives, we propose a robust learning framework by finding a newly defined quantile- or infimum-optimal decision rule. The reliable performance of the proposed method is demonstrated through synthetic experiments and three real-world applications.
[ { "created": "Sat, 12 Nov 2022 04:31:38 GMT", "version": "v1" } ]
2022-11-15
[ [ "Tan", "Xiaoqing", "" ], [ "Qi", "Zhengling", "" ], [ "Seymour", "Christopher W.", "" ], [ "Tang", "Lu", "" ] ]
This paper introduces RISE, a robust individualized decision learning framework with sensitive variables, where sensitive variables are collectible data and important to the intervention decision, but their inclusion in decision making is prohibited due to reasons such as delayed availability or fairness concerns. A naive baseline is to ignore these sensitive variables in learning decision rules, leading to significant uncertainty and bias. To address this, we propose a decision learning framework to incorporate sensitive variables during offline training but not include them in the input of the learned decision rule during model deployment. Specifically, from a causal perspective, the proposed framework intends to improve the worst-case outcomes of individuals caused by sensitive variables that are unavailable at the time of decision. Unlike most existing literature that uses mean-optimal objectives, we propose a robust learning framework by finding a newly defined quantile- or infimum-optimal decision rule. The reliable performance of the proposed method is demonstrated through synthetic experiments and three real-world applications.
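The quantile-optimal idea above can be illustrated on a toy outcome table: for each action, score it by a lower quantile of its outcomes across the sensitive variable rather than by its mean. The table and quantile level below are illustrative assumptions.

```python
# Mean-optimal vs quantile-optimal decision rules when a sensitive
# variable s is available offline but hidden at deployment time.
import numpy as np

# outcomes[a, s]: estimated reward of action a for an individual whose
# sensitive variable takes value s (learned offline from training data).
outcomes = np.array([[5.0, 5.0, 0.5],    # action 0: great unless s == 2
                     [3.0, 3.2, 3.1]])   # action 1: uniformly decent

tau = 0.1  # protect the worst 10% of sensitive-variable cases

mean_rule = outcomes.mean(axis=1).argmax()
quantile_rule = np.quantile(outcomes, tau, axis=1).argmax()

print("mean-optimal action    :", mean_rule)      # 0 (high average)
print("quantile-optimal action:", quantile_rule)  # 1 (robust to s == 2)
```

The point of the toy table: the mean-optimal rule picks the action that badly hurts the s == 2 subgroup, while the quantile rule trades a little average reward for a much better worst case, which is the robustness the framework targets.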
1811.07999
Steven Kommrusch
Steve Kommrusch and Louis-No\"el Pouchet
Synthetic Lung Nodule 3D Image Generation Using Autoencoders
19 pages, 12 figures, full paper for work initially presented at IJCAI 2018
null
null
CS-18-101
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the challenges of using machine learning techniques with medical data is the frequent dearth of source image data on which to train. A representative example is automated lung cancer diagnosis, where nodule images need to be classified as suspicious or benign. In this work we propose an automatic synthetic lung nodule image generator. Our 3D shape generator is designed to augment the variety of 3D images. Our proposed system takes root in autoencoder techniques, and we provide extensive experimental characterization that demonstrates its ability to produce quality synthetic images.
[ { "created": "Mon, 19 Nov 2018 21:51:38 GMT", "version": "v1" }, { "created": "Mon, 24 Dec 2018 05:58:20 GMT", "version": "v2" }, { "created": "Mon, 9 Sep 2019 05:58:21 GMT", "version": "v3" } ]
2019-09-10
[ [ "Kommrusch", "Steve", "" ], [ "Pouchet", "Louis-Noël", "" ] ]
One of the challenges of using machine learning techniques with medical data is the frequent dearth of source image data on which to train. A representative example is automated lung cancer diagnosis, where nodule images need to be classified as suspicious or benign. In this work we propose an automatic synthetic lung nodule image generator. Our 3D shape generator is designed to augment the variety of 3D images. Our proposed system takes root in autoencoder techniques, and we provide extensive experimental characterization that demonstrates its ability to produce quality synthetic images.
2010.14432
Edon Kelmendi
Shaull Almagor, Toghrul Karimov, Edon Kelmendi, J\"oel Ouaknine, James Worrell
Deciding $\omega$-Regular Properties on Linear Recurrence Sequences
null
null
null
null
cs.LO cs.FL cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of deciding $\omega$-regular properties on infinite traces produced by linear loops. Here we think of a given loop as producing a single infinite trace that encodes information about the signs of program variables at each time step. Formally, our main result is a procedure that inputs a prefix-independent $\omega$-regular property and a sequence of numbers satisfying a linear recurrence, and determines whether the sign description of the sequence (obtained by replacing each positive entry with "$+$", each negative entry with "$-$", and each zero entry with "$0$") satisfies the given property. Our procedure requires that the recurrence be simple, \ie, that the update matrix of the underlying loop be diagonalisable. This assumption is instrumental in proving our key technical lemma: namely that the sign description of a simple linear recurrence sequence is almost periodic in the sense of Muchnik, Sem\"enov, and Ushakov. To complement this lemma, we give an example of a linear recurrence sequence whose sign description fails to be almost periodic. Generalising from sign descriptions, we also consider the verification of properties involving semi-algebraic predicates on program variables.
[ { "created": "Tue, 27 Oct 2020 16:49:14 GMT", "version": "v1" } ]
2020-10-28
[ [ "Almagor", "Shaull", "" ], [ "Karimov", "Toghrul", "" ], [ "Kelmendi", "Edon", "" ], [ "Ouaknine", "Jöel", "" ], [ "Worrell", "James", "" ] ]
We consider the problem of deciding $\omega$-regular properties on infinite traces produced by linear loops. Here we think of a given loop as producing a single infinite trace that encodes information about the signs of program variables at each time step. Formally, our main result is a procedure that inputs a prefix-independent $\omega$-regular property and a sequence of numbers satisfying a linear recurrence, and determines whether the sign description of the sequence (obtained by replacing each positive entry with "$+$", each negative entry with "$-$", and each zero entry with "$0$") satisfies the given property. Our procedure requires that the recurrence be simple, \ie, that the update matrix of the underlying loop be diagonalisable. This assumption is instrumental in proving our key technical lemma: namely that the sign description of a simple linear recurrence sequence is almost periodic in the sense of Muchnik, Sem\"enov, and Ushakov. To complement this lemma, we give an example of a linear recurrence sequence whose sign description fails to be almost periodic. Generalising from sign descriptions, we also consider the verification of properties involving semi-algebraic predicates on program variables.
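For concreteness, the object the procedure consumes, the sign description of a linear recurrence sequence, is easy to compute for any given prefix; the recurrence and prefix length below are illustrative assumptions (the decision procedure itself is far beyond a few lines).

```python
# Sign description of a linear recurrence sequence: replace each term by
# "+", "-", or "0".

def sign_description(coeffs, init, n):
    """First n signs of u_k = coeffs[0]*u_{k-1} + ... + coeffs[d-1]*u_{k-d}."""
    u = list(init)
    while len(u) < n:
        u.append(sum(c * x for c, x in zip(coeffs, u[-1:-len(coeffs)-1:-1])))
    return "".join("+" if x > 0 else "-" if x < 0 else "0" for x in u[:n])

# u_k = u_{k-1} - u_{k-2} is a *simple* recurrence (its companion matrix
# has distinct roots, hence is diagonalisable); the sequence has period 6,
# so its sign word is periodic too.
print(sign_description([1, -1], [0, 1], 18))  # 0++0--0++0--0++0--
```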
2101.00828
Le Fang
Le Fang, Tao Zeng, Chaochun Liu, Liefeng Bo, Wen Dong, Changyou Chen
Transformer-based Conditional Variational Autoencoder for Controllable Story Generation
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate large-scale latent variable models (LVMs) for neural story generation -- an under-explored application for open-domain long text -- with objectives in two threads: generation effectiveness and controllability. LVMs, especially the variational autoencoder (VAE), have achieved both effective and controllable generation by exploiting flexible distributional latent representations. Recently, Transformers and their variants have achieved remarkable effectiveness without explicit latent representation learning, and thus lack satisfactory controllability in generation. In this paper, we advocate reviving latent variable modeling, essentially the power of representation learning, in the era of Transformers to enhance controllability without hurting state-of-the-art generation effectiveness. Specifically, we integrate latent representation vectors with a Transformer-based pre-trained architecture to build a conditional variational autoencoder (CVAE). Model components such as the encoder, decoder, and variational posterior are all built on top of pre-trained language models -- specifically GPT2 in this paper. Experiments demonstrate the state-of-the-art conditional generation ability of our model, as well as its excellent representation learning capability and controllability.
[ { "created": "Mon, 4 Jan 2021 08:31:11 GMT", "version": "v1" }, { "created": "Thu, 8 Jul 2021 17:18:13 GMT", "version": "v2" } ]
2021-07-09
[ [ "Fang", "Le", "" ], [ "Zeng", "Tao", "" ], [ "Liu", "Chaochun", "" ], [ "Bo", "Liefeng", "" ], [ "Dong", "Wen", "" ], [ "Chen", "Changyou", "" ] ]
We investigate large-scale latent variable models (LVMs) for neural story generation -- an under-explored application for open-domain long text -- with objectives in two threads: generation effectiveness and controllability. LVMs, especially the variational autoencoder (VAE), have achieved both effective and controllable generation by exploiting flexible distributional latent representations. Recently, Transformers and their variants have achieved remarkable effectiveness without explicit latent representation learning, and thus lack satisfactory controllability in generation. In this paper, we advocate reviving latent variable modeling, essentially the power of representation learning, in the era of Transformers to enhance controllability without hurting state-of-the-art generation effectiveness. Specifically, we integrate latent representation vectors with a Transformer-based pre-trained architecture to build a conditional variational autoencoder (CVAE). Model components such as the encoder, decoder, and variational posterior are all built on top of pre-trained language models -- specifically GPT2 in this paper. Experiments demonstrate the state-of-the-art conditional generation ability of our model, as well as its excellent representation learning capability and controllability.
2308.16599
Felix Wagner
Felix Wagner and Florian Nachtigall and Lukas Franken and Nikola Milojevic-Dupont and Rafael H.M. Pereira and Nicolas Koch and Jakob Runge and Marta Gonzalez and Felix Creutzig
Using machine learning to understand causal relationships between urban form and travel CO2 emissions across continents
32 pages, 24 figures, 6 tables
null
null
null
cs.LG physics.soc-ph
http://creativecommons.org/licenses/by-sa/4.0/
Climate change mitigation in urban mobility requires policies reconfiguring urban form to increase accessibility and facilitate low-carbon modes of transport. However, current policy research has insufficiently assessed urban form effects on car travel at three levels: (1) Causality -- Can causality be established beyond theoretical and correlation-based analyses? (2) Generalizability -- Do relationships hold across different cities and world regions? (3) Context specificity -- How do relationships vary across neighborhoods of a city? Here, we address all three gaps via causal graph discovery and explainable machine learning to detect urban form effects on intra-city car travel, based on mobility data of six cities across three continents. We find significant causal effects of urban form on trip emissions and inter-feature effects, which had been neglected in previous work. Our results demonstrate that destination accessibility matters most overall, while low density and low connectivity also sharply increase CO$_2$ emissions. These general trends are similar across cities but we find idiosyncratic effects that can lead to substantially different recommendations. In more monocentric cities, we identify spatial corridors -- about 10--50 km from the city center -- where subcenter-oriented development is more relevant than increased access to the main center. Our work demonstrates a novel application of machine learning that enables new research addressing the needs of causality, generalizability, and contextual specificity for scaling evidence-based urban climate solutions.
[ { "created": "Thu, 31 Aug 2023 09:57:52 GMT", "version": "v1" }, { "created": "Fri, 15 Dec 2023 13:37:42 GMT", "version": "v2" } ]
2023-12-18
[ [ "Wagner", "Felix", "" ], [ "Nachtigall", "Florian", "" ], [ "Franken", "Lukas", "" ], [ "Milojevic-Dupont", "Nikola", "" ], [ "Pereira", "Rafael H. M.", "" ], [ "Koch", "Nicolas", "" ], [ "Runge", "Jakob", "" ], [ "Gonzalez", "Marta", "" ], [ "Creutzig", "Felix", "" ] ]
Climate change mitigation in urban mobility requires policies reconfiguring urban form to increase accessibility and facilitate low-carbon modes of transport. However, current policy research has insufficiently assessed urban form effects on car travel at three levels: (1) Causality -- Can causality be established beyond theoretical and correlation-based analyses? (2) Generalizability -- Do relationships hold across different cities and world regions? (3) Context specificity -- How do relationships vary across neighborhoods of a city? Here, we address all three gaps via causal graph discovery and explainable machine learning to detect urban form effects on intra-city car travel, based on mobility data of six cities across three continents. We find significant causal effects of urban form on trip emissions and inter-feature effects, which had been neglected in previous work. Our results demonstrate that destination accessibility matters most overall, while low density and low connectivity also sharply increase CO$_2$ emissions. These general trends are similar across cities but we find idiosyncratic effects that can lead to substantially different recommendations. In more monocentric cities, we identify spatial corridors -- about 10--50 km from the city center -- where subcenter-oriented development is more relevant than increased access to the main center. Our work demonstrates a novel application of machine learning that enables new research addressing the needs of causality, generalizability, and contextual specificity for scaling evidence-based urban climate solutions.
1312.0686
Yong Wang
Yong Wang
A Process Algebra for Games
24 pages, 16 figures
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using formal tools from computer science to describe games is an interesting problem. We give games, specifically two-person games, an axiomatic foundation based on the process algebra ACP (Algebra of Communicating Processes). A fresh operator called the opponent's alternative composition operator (OA) is introduced into ACP to describe game trees and game strategies; the resulting algebra is called GameACP, and a sound and complete axiomatic system for it is naturally established. To model the outcomes of games (the joint actions of the player and the opponent), that is, the execution of GameACP processes, another operator called the playing operator (PO) is added to GameACP. We also establish a sound and complete axiomatic system for PO. To handle the non-determinacy newly introduced by GameACP, we extend the truly concurrent process algebra APTC to games, yielding GameAPTC. Finally, we give a correctness theorem relating the outcomes of games to the deductions of GameACP and GameAPTC processes.
[ { "created": "Tue, 3 Dec 2013 02:59:47 GMT", "version": "v1" }, { "created": "Thu, 3 Apr 2014 03:35:00 GMT", "version": "v2" }, { "created": "Sun, 4 May 2014 04:21:19 GMT", "version": "v3" }, { "created": "Mon, 13 Jul 2015 07:57:27 GMT", "version": "v4" }, { "created": "Wed, 8 May 2019 13:22:50 GMT", "version": "v5" } ]
2019-05-09
[ [ "Wang", "Yong", "" ] ]
Using formal tools from computer science to describe games is an interesting problem. We give games, specifically two-person games, an axiomatic foundation based on the process algebra ACP (Algebra of Communicating Processes). A fresh operator called the opponent's alternative composition operator (OA) is introduced into ACP to describe game trees and game strategies; the resulting algebra is called GameACP, and a sound and complete axiomatic system for it is naturally established. To model the outcomes of games (the joint actions of the player and the opponent), that is, the execution of GameACP processes, another operator called the playing operator (PO) is added to GameACP. We also establish a sound and complete axiomatic system for PO. To handle the non-determinacy newly introduced by GameACP, we extend the truly concurrent process algebra APTC to games, yielding GameAPTC. Finally, we give a correctness theorem relating the outcomes of games to the deductions of GameACP and GameAPTC processes.
1509.06947
Gilles Puy
Gilles Puy and Mike Davies and R\'emi Gribonval
Recipes for stable linear embeddings from Hilbert spaces to R^m
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of constructing a linear map from a Hilbert space $\mathcal{H}$ (possibly infinite dimensional) to $\mathbb{R}^m$ that satisfies a restricted isometry property (RIP) on an arbitrary signal model $\mathcal{S} \subset \mathcal{H}$. We present a generic framework that handles a large class of low-dimensional subsets but also unstructured and structured linear maps. We provide a simple recipe to prove that a random linear map satisfies a general RIP on $\mathcal{S}$ with high probability. We also describe a generic technique to construct linear maps that satisfy the RIP. Finally, we detail how to use our results in several examples, which allow us to recover and extend many known compressive sampling results.
[ { "created": "Wed, 23 Sep 2015 12:49:16 GMT", "version": "v1" }, { "created": "Tue, 17 Jan 2017 12:41:29 GMT", "version": "v2" } ]
2017-01-18
[ [ "Puy", "Gilles", "" ], [ "Davies", "Mike", "" ], [ "Gribonval", "Rémi", "" ] ]
We consider the problem of constructing a linear map from a Hilbert space $\mathcal{H}$ (possibly infinite dimensional) to $\mathbb{R}^m$ that satisfies a restricted isometry property (RIP) on an arbitrary signal model $\mathcal{S} \subset \mathcal{H}$. We present a generic framework that handles a large class of low-dimensional subsets but also unstructured and structured linear maps. We provide a simple recipe to prove that a random linear map satisfies a general RIP on $\mathcal{S}$ with high probability. We also describe a generic technique to construct linear maps that satisfy the RIP. Finally, we detail how to use our results in several examples, which allow us to recover and extend many known compressive sampling results.
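An empirical illustration of the flavor of statement proved above: a normalized Gaussian random map nearly preserves the norms of s-sparse vectors; the dimensions, sparsity, and trial count below are assumptions for the example.

```python
# Empirical check that a random Gaussian map from R^n to R^m satisfies an
# RIP-like property on the model set of s-sparse vectors.
import numpy as np

rng = np.random.default_rng(0)
n, m, s, trials = 1000, 100, 5, 2000

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # normalized random linear map

ratios = []
for _ in range(trials):
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x[support] = rng.normal(size=s)          # random s-sparse signal
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

# With high probability every ratio lies in [1 - delta, 1 + delta].
print("min/max ratio:", min(ratios), max(ratios))
```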
2210.07415
Negar Mokhberian
Negar Mokhberian, Frederic R. Hopp, Bahareh Harandizadeh, Fred Morstatter, Kristina Lerman
Noise Audits Improve Moral Foundation Classification
null
null
null
null
cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
Morality plays an important role in culture, identity, and emotion. Recent advances in natural language processing have shown that it is possible to classify moral values expressed in text at scale. Morality classification relies on human annotators to label the moral expressions in text, which provides training data to achieve state-of-the-art performance. However, these annotations are inherently subjective and some of the instances are hard to classify, resulting in noisy annotations due to error or lack of agreement. The presence of noise in training data harms the classifier's ability to accurately recognize moral foundations from text. We propose two metrics to audit the noise of annotations. The first metric is entropy of instance labels, which is a proxy measure of annotator disagreement about how the instance should be labeled. The second metric is the silhouette coefficient of a label assigned by an annotator to an instance. This metric leverages the idea that instances with the same label should have similar latent representations, and deviations from collective judgments are indicative of errors. Our experiments on three widely used moral foundations datasets show that removing noisy annotations based on the proposed metrics improves classification performance.
[ { "created": "Thu, 13 Oct 2022 23:37:47 GMT", "version": "v1" } ]
2022-10-17
[ [ "Mokhberian", "Negar", "" ], [ "Hopp", "Frederic R.", "" ], [ "Harandizadeh", "Bahareh", "" ], [ "Morstatter", "Fred", "" ], [ "Lerman", "Kristina", "" ] ]
Morality plays an important role in culture, identity, and emotion. Recent advances in natural language processing have shown that it is possible to classify moral values expressed in text at scale. Morality classification relies on human annotators to label the moral expressions in text, which provides training data to achieve state-of-the-art performance. However, these annotations are inherently subjective and some of the instances are hard to classify, resulting in noisy annotations due to error or lack of agreement. The presence of noise in training data harms the classifier's ability to accurately recognize moral foundations from text. We propose two metrics to audit the noise of annotations. The first metric is entropy of instance labels, which is a proxy measure of annotator disagreement about how the instance should be labeled. The second metric is the silhouette coefficient of a label assigned by an annotator to an instance. This metric leverages the idea that instances with the same label should have similar latent representations, and deviations from collective judgments are indicative of errors. Our experiments on three widely used moral foundations datasets show that removing noisy annotations based on the proposed metrics improves classification performance.
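Both audit metrics above are straightforward to compute; the sketch below uses toy labels and embeddings (illustrative assumptions) and sklearn's silhouette_samples for the second metric.

```python
# Noise audits: (1) per-instance label entropy over annotators,
# (2) per-annotation silhouette scores over latent representations.
import numpy as np
from sklearn.metrics import silhouette_samples

def label_entropy(labels):
    """Entropy of one instance's annotator labels (high = disagreement)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print(label_entropy(["care", "care", "care"]))      # 0.0  -> clean
print(label_entropy(["care", "harm", "fairness"]))  # ~1.58 -> audit it

# Silhouette audit: an annotation whose embedding sits far from its own
# label's cluster gets a low (even negative) score and is a candidate
# for removal before training.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0], [1.0, 0.1]])
y = np.array([0, 0, 1, 1, 0])   # last point labeled 0 but sits in cluster 1
print(silhouette_samples(X, y).round(2))  # last value is negative
```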
1706.00498
Karan Maheshwari
Karan Maheshwari, Nalini N
Facial Recognition Enabled Smart Door Using Microsoft Face API
4 pages, 5 figures, published with International Journal of Engineering Trends and Applications
IJETA V4(3): Page(1-4) May - Jun 2017. ISSN: 2393-9516 http://www.ijetajournal.org. Published by Eighth Sense Research Group
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Privacy and security are two universal rights, and to ensure that we are secure in our daily lives, a great deal of research is underway in the field of home security; IoT is the turning point for the industry, where we connect everyday objects to share data for our betterment. Facial recognition is a well-established process in which a face is detected and identified in an image. We aim to create a smart door that secures the gateway on the basis of who we are. Our proof-of-concept smart door uses a live HD camera on the front of the setup, attached to a display monitor that shows who is standing in front of the door; the system can also give voice output, processing text on the Raspberry Pi's ARM processor and showing the answers on the screen. We use a set of electromagnets, controlled by the microcontroller, to act as a lock, so a person can open the smart door via facial recognition and at the same time interact with it. The facial recognition is performed by the Microsoft Face API, but our desktop application, developed in the Microsoft Visual Studio IDE, reduces computation time by detecting the face within the photo and passing only that to the Microsoft Face API, which is hosted on Microsoft Azure.
[ { "created": "Mon, 29 May 2017 16:29:09 GMT", "version": "v1" } ]
2017-06-05
[ [ "Maheshwari", "Karan", "" ], [ "N", "Nalini", "" ] ]
Privacy and security are two universal rights and, to ensure that we are secure in our daily life, a lot of research is going on in the field of home security; IoT is the turning point for the industry, where we connect everyday objects to share data for our betterment. Facial recognition is a well-established process in which a face is detected and identified within an image. We aim to create a smart door which secures the gateway on the basis of who we are. In our proof of concept of a smart door, a live HD camera on the front side of the setup is attached to a display monitor that shows who is standing in front of the door; the system can also give voice outputs, by processing text on the Raspberry Pi's ARM processor, and show the answers as output on the screen. A set of electromagnets controlled by the microcontroller acts as the lock, so a person can open the smart door with the help of facial recognition and at the same time interact with it. The facial recognition is done by the Microsoft Face API, but our state-of-the-art desktop application, built in the Microsoft Visual Studio IDE, reduces the computational time by detecting the face in the photo and passing it as input to the Microsoft Face API, which is hosted on Microsoft Azure.
2310.00108
Myeongseob Ko
Myeongseob Ko, Ming Jin, Chenguang Wang, and Ruoxi Jia
Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study
International Conference on Computer Vision (ICCV) 2023
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Membership inference attacks (MIAs) aim to infer whether a data point has been used to train a machine learning model. These attacks can be employed to identify potential privacy vulnerabilities and detect unauthorized use of personal data. While MIAs have been traditionally studied for simple classification models, recent advancements in multi-modal pre-training, such as CLIP, have demonstrated remarkable zero-shot performance across a range of computer vision tasks. However, the sheer scale of data and models presents significant computational challenges for performing the attacks. This paper takes a first step towards developing practical MIAs against large-scale multi-modal models. We introduce a simple baseline strategy by thresholding the cosine similarity between text and image features of a target point and propose further enhancing the baseline by aggregating cosine similarity across transformations of the target. We also present a new weakly supervised attack method that leverages ground-truth non-members (e.g., obtained by using the publication date of a target model and the timestamps of the open data) to further enhance the attack. Our evaluation shows that CLIP models are susceptible to our attack strategies, with our simple baseline achieving over $75\%$ membership identification accuracy. Furthermore, our enhanced attacks outperform the baseline across multiple models and datasets, with the weakly supervised attack demonstrating an average-case performance improvement of $17\%$ and being at least $7$X more effective at low false-positive rates. These findings highlight the importance of protecting the privacy of multi-modal foundational models, which were previously assumed to be less susceptible to MIAs due to less overfitting. Our code is available at https://github.com/ruoxi-jia-group/CLIP-MIA.
[ { "created": "Fri, 29 Sep 2023 19:38:40 GMT", "version": "v1" } ]
2023-10-03
[ [ "Ko", "Myeongseob", "" ], [ "Jin", "Ming", "" ], [ "Wang", "Chenguang", "" ], [ "Jia", "Ruoxi", "" ] ]
Membership inference attacks (MIAs) aim to infer whether a data point has been used to train a machine learning model. These attacks can be employed to identify potential privacy vulnerabilities and detect unauthorized use of personal data. While MIAs have been traditionally studied for simple classification models, recent advancements in multi-modal pre-training, such as CLIP, have demonstrated remarkable zero-shot performance across a range of computer vision tasks. However, the sheer scale of data and models presents significant computational challenges for performing the attacks. This paper takes a first step towards developing practical MIAs against large-scale multi-modal models. We introduce a simple baseline strategy by thresholding the cosine similarity between text and image features of a target point and propose further enhancing the baseline by aggregating cosine similarity across transformations of the target. We also present a new weakly supervised attack method that leverages ground-truth non-members (e.g., obtained by using the publication date of a target model and the timestamps of the open data) to further enhance the attack. Our evaluation shows that CLIP models are susceptible to our attack strategies, with our simple baseline achieving over $75\%$ membership identification accuracy. Furthermore, our enhanced attacks outperform the baseline across multiple models and datasets, with the weakly supervised attack demonstrating an average-case performance improvement of $17\%$ and being at least $7$X more effective at low false-positive rates. These findings highlight the importance of protecting the privacy of multi-modal foundational models, which were previously assumed to be less susceptible to MIAs due to less overfitting. Our code is available at https://github.com/ruoxi-jia-group/CLIP-MIA.
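Editorial aside: the baseline attack described above reduces to a similarity threshold. A minimal sketch (Python/numpy only; the feature inputs, threshold tau, and transformation set are assumptions, not the paper's released code):

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def baseline_mia(image_feat, text_feat, tau=0.3):
    # Predict 'member' when the image-text similarity under the target
    # CLIP model exceeds a calibrated threshold tau.
    return cosine(image_feat, text_feat) > tau

def augmented_mia(image_feats, text_feat, tau=0.3):
    # Enhanced variant: aggregate similarity over transformations of the
    # target image (e.g., crops and flips), as the abstract suggests.
    return float(np.mean([cosine(f, text_feat) for f in image_feats])) > tau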
1710.11248
Justin Fu
Justin Fu, Katie Luo, Sergey Levine
Learning Robust Rewards with Adversarial Inverse Reinforcement Learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose adversarial inverse reinforcement learning (AIRL), a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation. We demonstrate that AIRL is able to recover reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. Our experiments show that AIRL greatly outperforms prior methods in these transfer settings.
[ { "created": "Mon, 30 Oct 2017 21:22:28 GMT", "version": "v1" }, { "created": "Mon, 13 Aug 2018 18:33:24 GMT", "version": "v2" } ]
2018-08-15
[ [ "Fu", "Justin", "" ], [ "Luo", "Katie", "" ], [ "Levine", "Sergey", "" ] ]
Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose adversarial inverse reinforcement learning (AIRL), a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation. We demonstrate that AIRL is able to recover reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. Our experiments show that AIRL greatly outperforms prior methods in these transfer settings.
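Editorial aside: for reference, the adversarial reward learning formulation at the core of AIRL uses (as we recall it from the paper) a discriminator of the form $D_{\theta,\varphi}(s,a,s') = \frac{\exp f_{\theta,\varphi}(s,a,s')}{\exp f_{\theta,\varphi}(s,a,s') + \pi(a \mid s)}$ with $f_{\theta,\varphi}(s,a,s') = g_\theta(s,a) + \gamma h_\varphi(s') - h_\varphi(s)$; in the state-only variant $g_\theta$ depends on $s$ alone, and it is this disentangled reward term that transfers across changes in dynamics.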
2206.03483
Martin Gauch
Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, Daniel Klotz, Sepp Hochreiter, Sebastian Lehner
Few-Shot Learning by Dimensionality Reduction in Gradient Space
Accepted at Conference on Lifelong Learning Agents (CoLLAs) 2022. Code: https://github.com/ml-jku/subgd Blog post: https://ml-jku.github.io/subgd
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:1043-1064 (2022)
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce SubGD, a novel few-shot learning method which is based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace. In experimental and theoretical analyses, we show that models confined to a suitable predefined subspace generalize well for few-shot learning. A suitable subspace fulfills three criteria across the given tasks: it (a) allows to reduce the training error by gradient flow, (b) leads to models that generalize well, and (c) can be identified by stochastic gradient descent. SubGD identifies these subspaces from an eigendecomposition of the auto-correlation matrix of update directions across different tasks. Demonstrably, we can identify low-dimensional suitable subspaces for few-shot learning of dynamical systems, which have varying properties described by one or few parameters of the analytical system description. Such systems are ubiquitous among real-world applications in science and engineering. We experimentally corroborate the advantages of SubGD on three distinct dynamical systems problem settings, significantly outperforming popular few-shot learning methods both in terms of sample efficiency and performance.
[ { "created": "Tue, 7 Jun 2022 17:58:35 GMT", "version": "v1" } ]
2023-01-30
[ [ "Gauch", "Martin", "" ], [ "Beck", "Maximilian", "" ], [ "Adler", "Thomas", "" ], [ "Kotsur", "Dmytro", "" ], [ "Fiel", "Stefan", "" ], [ "Eghbal-zadeh", "Hamid", "" ], [ "Brandstetter", "Johannes", "" ], [ "Kofler", "Johannes", "" ], [ "Holzleitner", "Markus", "" ], [ "Zellinger", "Werner", "" ], [ "Klotz", "Daniel", "" ], [ "Hochreiter", "Sepp", "" ], [ "Lehner", "Sebastian", "" ] ]
We introduce SubGD, a novel few-shot learning method which is based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace. In experimental and theoretical analyses, we show that models confined to a suitable predefined subspace generalize well for few-shot learning. A suitable subspace fulfills three criteria across the given tasks: it (a) allows to reduce the training error by gradient flow, (b) leads to models that generalize well, and (c) can be identified by stochastic gradient descent. SubGD identifies these subspaces from an eigendecomposition of the auto-correlation matrix of update directions across different tasks. Demonstrably, we can identify low-dimensional suitable subspaces for few-shot learning of dynamical systems, which have varying properties described by one or few parameters of the analytical system description. Such systems are ubiquitous among real-world applications in science and engineering. We experimentally corroborate the advantages of SubGD on three distinct dynamical systems problem settings, significantly outperforming popular few-shot learning methods both in terms of sample efficiency and performance.
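Editorial aside: the subspace identification step lends itself to a compact sketch (Python/numpy; the data layout and choice of k are illustrative assumptions, and the paper's full method additionally weights directions by their eigenvalues as a preconditioner, which this sketch omits):

import numpy as np

updates = np.random.randn(50, 1000)     # 50 task update directions, 1000 params
C = updates.T @ updates / len(updates)  # auto-correlation matrix of updates

eigvals, eigvecs = np.linalg.eigh(C)    # eigh returns ascending eigenvalues
k = 10
U = eigvecs[:, -k:]                     # top-k eigenvectors span the subspace

def project_gradient(g):
    # Confine a fine-tuning gradient step to the identified subspace.
    return U @ (U.T @ g)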
2104.10865
Shin Yoo Dr
Jinhan Kim, Juyoung Jeon, Shin Hong, Shin Yoo
Predictive Mutation Analysis via Natural Language Channel in Source Code
null
null
10.1145/3510417
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutation analysis can provide valuable insights into both the System Under Test (SUT) and its test suite. However, it is not scalable due to the cost of building and testing a large number of mutants. Predictive Mutation Testing (PMT) has been proposed to reduce the cost of mutation testing, but it can only provide statistical inference about whether a mutant will be killed or not by the entire test suite. We propose Seshat, a Predictive Mutation Analysis (PMA) technique that can accurately predict the entire kill matrix, not just the mutation score of the given test suite. Seshat exploits the natural language channel in code, and learns the relationship between the syntactic and semantic concepts of each test case and the mutants it can kill, from a given kill matrix. The learnt model can later be used to predict the kill matrices for subsequent versions of the program, even after both the source and test code have changed significantly. Empirical evaluation using the programs in Defects4J shows that Seshat can predict kill matrices with an average F-score of 0.83 for versions that are up to years apart. This is an improvement in F-score of 0.14 and 0.45 points over the state-of-the-art predictive mutation testing technique and a simple coverage-based heuristic, respectively. Seshat also performs as well as PMT for the prediction of mutation scores only. Once Seshat trains its model using a concrete mutation analysis, the subsequent predictions made by Seshat are on average 39 times faster than actual test-based analysis.
[ { "created": "Thu, 22 Apr 2021 05:09:43 GMT", "version": "v1" }, { "created": "Tue, 4 Jan 2022 02:33:16 GMT", "version": "v2" } ]
2022-09-15
[ [ "Kim", "Jinhan", "" ], [ "Jeon", "Juyoung", "" ], [ "Hong", "Shin", "" ], [ "Yoo", "Shin", "" ] ]
Mutation analysis can provide valuable insights into both the System Under Test (SUT) and its test suite. However, it is not scalable due to the cost of building and testing a large number of mutants. Predictive Mutation Testing (PMT) has been proposed to reduce the cost of mutation testing, but it can only provide statistical inference about whether a mutant will be killed or not by the entire test suite. We propose Seshat, a Predictive Mutation Analysis (PMA) technique that can accurately predict the entire kill matrix, not just the mutation score of the given test suite. Seshat exploits the natural language channel in code, and learns the relationship between the syntactic and semantic concepts of each test case and the mutants it can kill, from a given kill matrix. The learnt model can later be used to predict the kill matrices for subsequent versions of the program, even after both the source and test code have changed significantly. Empirical evaluation using the programs in Defects4J shows that Seshat can predict kill matrices with an average F-score of 0.83 for versions that are up to years apart. This is an improvement in F-score of 0.14 and 0.45 points over the state-of-the-art predictive mutation testing technique and a simple coverage-based heuristic, respectively. Seshat also performs as well as PMT for the prediction of mutation scores only. Once Seshat trains its model using a concrete mutation analysis, the subsequent predictions made by Seshat are on average 39 times faster than actual test-based analysis.
2308.01218
Manuel Valle Torre
Manuel Valle Torre, Catharine Oertel, Marcus Specht
The Sequence Matters in Learning -- A Systematic Literature Review
This version is for personal use and not for redistribution. The final version was published as part of the proceedings of the 14th Learning Analytics and Knowledge Conference (LAK '24). March 18--22, 2024, Kyoto, Japan
null
10.1145/3636555.3636880
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Describing and analysing learner behaviour using sequential data and analysis is becoming more and more popular in Learning Analytics. Nevertheless, we found a variety of definitions of learning sequences, as well as choices regarding data aggregation and the methods implemented for analysis. Furthermore, sequences are used to study different educational settings and serve as a base for various interventions. In this literature review, the authors aim to generate an overview of these aspects to describe the current state of using sequence analysis in educational support and learning analytics. The 74 included articles were selected based on the criteria that they conduct empirical research on an educational environment using sequences of learning actions as the main focus of their analysis. The results enable us to highlight different learning tasks where sequences are analysed, identify data mapping strategies for different types of sequence actions, differentiate techniques based on purpose and scope, and identify educational interventions based on the outcomes of sequence analysis.
[ { "created": "Wed, 2 Aug 2023 15:22:01 GMT", "version": "v1" }, { "created": "Tue, 12 Dec 2023 18:04:35 GMT", "version": "v2" } ]
2023-12-13
[ [ "Torre", "Manuel Valle", "" ], [ "Oertel", "Catharine", "" ], [ "Specht", "Marcus", "" ] ]
Describing and analysing learner behaviour using sequential data and analysis is becoming more and more popular in Learning Analytics. Nevertheless, we found a variety of definitions of learning sequences, as well as choices regarding data aggregation and the methods implemented for analysis. Furthermore, sequences are used to study different educational settings and serve as a base for various interventions. In this literature review, the authors aim to generate an overview of these aspects to describe the current state of using sequence analysis in educational support and learning analytics. The 74 included articles were selected based on the criteria that they conduct empirical research on an educational environment using sequences of learning actions as the main focus of their analysis. The results enable us to highlight different learning tasks where sequences are analysed, identify data mapping strategies for different types of sequence actions, differentiate techniques based on purpose and scope, and identify educational interventions based on the outcomes of sequence analysis.
2211.16199
Haitz S\'aez de Oc\'ariz Borde
Haitz S\'aez de Oc\'ariz Borde, Anees Kazi, Federico Barbero, Pietro Li\`o
Latent Graph Inference using Product Manifolds
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Graph Neural Networks usually rely on the assumption that the graph topology is available to the network as well as optimal for the downstream task. Latent graph inference allows models to dynamically learn the intrinsic graph structure of problems where the connectivity patterns of data may not be directly accessible. In this work, we generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning. The original dDGM architecture used the Euclidean plane to encode latent features based on which the latent graphs were generated. By incorporating Riemannian geometry into the model and generating more complex embedding spaces, we can improve the performance of the latent graph inference system. In particular, we propose a computationally tractable approach to produce product manifolds of constant curvature model spaces that can encode latent features of varying structure. The latent representations mapped onto the inferred product manifold are used to compute richer similarity measures that are leveraged by the latent graph learning model to obtain optimized latent graphs. Moreover, the curvature of the product manifold is learned during training alongside the rest of the network parameters and based on the downstream task, rather than it being a static embedding space. Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
[ { "created": "Sat, 26 Nov 2022 22:13:06 GMT", "version": "v1" }, { "created": "Fri, 21 Apr 2023 21:50:38 GMT", "version": "v2" }, { "created": "Tue, 27 Jun 2023 17:17:50 GMT", "version": "v3" } ]
2023-06-28
[ [ "Borde", "Haitz Sáez de Ocáriz", "" ], [ "Kazi", "Anees", "" ], [ "Barbero", "Federico", "" ], [ "Liò", "Pietro", "" ] ]
Graph Neural Networks usually rely on the assumption that the graph topology is available to the network as well as optimal for the downstream task. Latent graph inference allows models to dynamically learn the intrinsic graph structure of problems where the connectivity patterns of data may not be directly accessible. In this work, we generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning. The original dDGM architecture used the Euclidean plane to encode latent features based on which the latent graphs were generated. By incorporating Riemannian geometry into the model and generating more complex embedding spaces, we can improve the performance of the latent graph inference system. In particular, we propose a computationally tractable approach to produce product manifolds of constant curvature model spaces that can encode latent features of varying structure. The latent representations mapped onto the inferred product manifold are used to compute richer similarity measures that are leveraged by the latent graph learning model to obtain optimized latent graphs. Moreover, the curvature of the product manifold is learned during training alongside the rest of the network parameters and based on the downstream task, rather than it being a static embedding space. Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
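Editorial aside: to make the embedding-space construction concrete, for a product manifold $P = M_1 \times \dots \times M_k$ of constant-curvature factors (our notation, summarizing the standard construction the paper builds on), the squared distance decomposes as $d_P(x,y)^2 = \sum_{i=1}^{k} d_{M_i}(x_i, y_i)^2$, so each factor contributes its own geodesic distance (spherical, Euclidean, or hyperbolic depending on the sign of its learned curvature) to the similarity measure used for latent graph construction.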
2403.02372
Babak Salimi
Alireza Pirhadi, Mohammad Hossein Moslemi, Alexander Cloninger, Mostafa Milani, Babak Salimi
OTClean: Data Cleaning for Conditional Independence Violations using Optimal Transport
null
null
null
null
cs.LG cs.AI cs.DB
http://creativecommons.org/licenses/by/4.0/
Ensuring Conditional Independence (CI) constraints is pivotal for the development of fair and trustworthy machine learning models. In this paper, we introduce OTClean, a framework that harnesses optimal transport theory for data repair under CI constraints. Optimal transport theory provides a rigorous framework for measuring the discrepancy between probability distributions, thereby ensuring control over data utility. We formulate the data repair problem concerning CIs as a Quadratically Constrained Linear Program (QCLP) and propose an alternating method for its solution. However, this approach faces scalability issues due to the computational cost associated with computing optimal transport distances, such as the Wasserstein distance. To overcome these scalability challenges, we reframe our problem as a regularized optimization problem, enabling us to develop an iterative algorithm inspired by Sinkhorn's matrix scaling algorithm, which efficiently addresses high-dimensional and large-scale data. Through extensive experiments, we demonstrate the efficacy and efficiency of our proposed methods, showcasing their practical utility in real-world data cleaning and preprocessing tasks. Furthermore, we provide comparisons with traditional approaches, highlighting the superiority of our techniques in terms of preserving data utility while ensuring adherence to the desired CI constraints.
[ { "created": "Mon, 4 Mar 2024 18:23:55 GMT", "version": "v1" } ]
2024-03-06
[ [ "Pirhadi", "Alireza", "" ], [ "Moslemi", "Mohammad Hossein", "" ], [ "Cloninger", "Alexander", "" ], [ "Milani", "Mostafa", "" ], [ "Salimi", "Babak", "" ] ]
Ensuring Conditional Independence (CI) constraints is pivotal for the development of fair and trustworthy machine learning models. In this paper, we introduce OTClean, a framework that harnesses optimal transport theory for data repair under CI constraints. Optimal transport theory provides a rigorous framework for measuring the discrepancy between probability distributions, thereby ensuring control over data utility. We formulate the data repair problem concerning CIs as a Quadratically Constrained Linear Program (QCLP) and propose an alternating method for its solution. However, this approach faces scalability issues due to the computational cost associated with computing optimal transport distances, such as the Wasserstein distance. To overcome these scalability challenges, we reframe our problem as a regularized optimization problem, enabling us to develop an iterative algorithm inspired by Sinkhorn's matrix scaling algorithm, which efficiently addresses high-dimensional and large-scale data. Through extensive experiments, we demonstrate the efficacy and efficiency of our proposed methods, showcasing their practical utility in real-world data cleaning and preprocessing tasks. Furthermore, we provide comparisons with traditional approaches, highlighting the superiority of our techniques in terms of preserving data utility while ensuring adherence to the desired CI constraints.
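Editorial aside: the abstract names Sinkhorn's matrix scaling as the algorithmic template. For readers unfamiliar with it, here is a minimal sketch of the classic iteration for entropy-regularized optimal transport (toy histograms and cost matrix; this is the textbook algorithm, not the paper's repair procedure itself):

import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=200):
    # Entropy-regularized OT plan between histograms a and b with cost C.
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                 # alternate row/column scalings
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]    # plan = diag(u) K diag(v)

a = np.ones(4) / 4
b = np.ones(4) / 4
C = np.abs(np.subtract.outer(np.arange(4), np.arange(4))).astype(float)
P = sinkhorn(a, b, C)                     # rows sum to a, columns sum to b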
2209.15147
Zhibin Zou
Zhibin Zou
Optimizing towards the best insertion-based error-tolerating joints
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an optimization-based design process that can generate the best insertion-based joints with respect to different errors, including manipulation error, manufacturing error, and sensing error. We separate the analysis into two stages, the insertion and the after-insertion stability. Each sub-process is discretized into different modes of contacts. The transitions among the contact modes form a directed graph and the connectivity of the graph is achieved and maintained through the manipulation of the socket edge-angle and peg contact-point locations. The analysis starts in 2D with the assumption of point-edge contacts. During the optimization, the edges of the socket are rotated and the points on the peg are moved along the edges to ensure the successful insertion and the stability after insertion. We show in simulation that our proposed method can generate insertion-based joints that are tolerant to the given errors, and we present a few simple 3D projections to show that the analysis is still effective beyond 2D cases.
[ { "created": "Fri, 30 Sep 2022 00:21:33 GMT", "version": "v1" }, { "created": "Fri, 11 Nov 2022 14:44:12 GMT", "version": "v2" } ]
2022-11-14
[ [ "Zou", "Zhibin", "" ] ]
We present an optimization-based design process that can generate the best insertion-based joints with respect to different errors, including manipulation error, manufacturing error, and sensing error. We separate the analysis into two stages, the insertion and the after-insertion stability. Each sub-process is discretized into different modes of contacts. The transitions among the contact modes form a directed graph and the connectivity of the graph is achieved and maintained through the manipulation of the socket edge-angle and peg contact-point locations. The analysis starts in 2D with the assumption of point-edge contacts. During the optimization, the edges of the socket are rotated and the points on the peg are moved along the edges to ensure the successful insertion and the stability after insertion. We show in simulation that our proposed method can generate insertion-based joints that are tolerant to the given errors, and we present a few simple 3D projections to show that the analysis is still effective beyond 2D cases.
1204.6364
Rushdi Shams Mr
Rushdi Shams, Adel Elsayed, Quazi Mah-Zereen Akter
A Corpus-based Evaluation of a Domain-specific Text to Knowledge Mapping Prototype
Journal of Computers, Academy Publishers 2010
null
null
null
cs.CL
http://creativecommons.org/licenses/by/3.0/
The aim of this paper is to evaluate a Text to Knowledge Mapping (TKM) Prototype. The prototype is domain-specific, the purpose of which is to map instructional text onto a knowledge domain. The context of the knowledge domain is the DC electrical circuit. During development, the prototype has been tested with a limited data set from the domain. The prototype reached a stage where it needs to be evaluated with a representative linguistic data set called a corpus. A corpus is a collection of text drawn from typical sources which can be used as a test data set to evaluate NLP systems. As there is no available corpus for the domain, we developed and annotated a representative corpus. The evaluation of the prototype considers two of its major components: lexical components and the knowledge model. Evaluation on lexical components enriches the lexical resources of the prototype, like vocabulary and grammar structures. This enables the prototype to parse a reasonable number of sentences in the corpus. While dealing with the lexicon was straightforward, the identification and extraction of appropriate semantic relations was much more involved. It was necessary, therefore, to manually develop a conceptual structure for the domain to formulate a domain-specific framework of semantic relations. The framework of semantic relations that resulted from this study consisted of 55 relations, out of which 42 have inverse relations. We also conducted rhetorical analysis on the corpus to prove its representativeness in conveying semantics. Finally, we conducted a topical and discourse analysis on the corpus to analyze the coverage of discourse by the prototype.
[ { "created": "Sat, 28 Apr 2012 03:52:21 GMT", "version": "v1" } ]
2012-05-01
[ [ "Shams", "Rushdi", "" ], [ "Elsayed", "Adel", "" ], [ "Akter", "Quazi Mah-Zereen", "" ] ]
The aim of this paper is to evaluate a Text to Knowledge Mapping (TKM) Prototype. The prototype is domain-specific, the purpose of which is to map instructional text onto a knowledge domain. The context of the knowledge domain is the DC electrical circuit. During development, the prototype has been tested with a limited data set from the domain. The prototype reached a stage where it needs to be evaluated with a representative linguistic data set called a corpus. A corpus is a collection of text drawn from typical sources which can be used as a test data set to evaluate NLP systems. As there is no available corpus for the domain, we developed and annotated a representative corpus. The evaluation of the prototype considers two of its major components: lexical components and the knowledge model. Evaluation on lexical components enriches the lexical resources of the prototype, like vocabulary and grammar structures. This enables the prototype to parse a reasonable number of sentences in the corpus. While dealing with the lexicon was straightforward, the identification and extraction of appropriate semantic relations was much more involved. It was necessary, therefore, to manually develop a conceptual structure for the domain to formulate a domain-specific framework of semantic relations. The framework of semantic relations that resulted from this study consisted of 55 relations, out of which 42 have inverse relations. We also conducted rhetorical analysis on the corpus to prove its representativeness in conveying semantics. Finally, we conducted a topical and discourse analysis on the corpus to analyze the coverage of discourse by the prototype.
2401.12866
Jeremias D\"otterl
Ralf Bruns, Jeremias D\"otterl, J\"urgen Dunkel, Sascha Ossowski
Evaluating Collaborative and Autonomous Agents in Data-Stream-Supported Coordination of Mobile Crowdsourcing
null
Sensors 2023, 23(2), 614
10.3390/s23020614
null
cs.AI cs.LG cs.MA
http://creativecommons.org/licenses/by/4.0/
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
[ { "created": "Tue, 23 Jan 2024 16:00:45 GMT", "version": "v1" } ]
2024-01-24
[ [ "Bruns", "Ralf", "" ], [ "Dötterl", "Jeremias", "" ], [ "Dunkel", "Jürgen", "" ], [ "Ossowski", "Sascha", "" ] ]
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
1209.4405
You Qingshan
Qingshan You and Qun Wan and Yipeng Liu
Strongly Convex Programming for Principal Component Pursuit
10 pages
null
null
null
cs.IT math.IT math.NA
http://creativecommons.org/licenses/by-nc-sa/3.0/
In this paper, we address strongly convex programming for principal component pursuit with reduced linear measurements, which decomposes a superposition of a low-rank matrix and a sparse matrix from a small set of linear measurements. We first provide sufficient conditions under which the strongly convex models lead to exact low-rank and sparse matrix recovery; second, we give suggestions on how to choose suitable parameters in practical algorithms.
[ { "created": "Thu, 20 Sep 2012 01:37:26 GMT", "version": "v1" } ]
2012-09-21
[ [ "You", "Qingshan", "" ], [ "Wan", "Qun", "" ], [ "Liu", "Yipeng", "" ] ]
In this paper, we address strongly convex programming for principal component pursuit with reduced linear measurements, which decomposes a superposition of a low-rank matrix and a sparse matrix from a small set of linear measurements. We first provide sufficient conditions under which the strongly convex models lead to exact low-rank and sparse matrix recovery; second, we give suggestions on how to choose suitable parameters in practical algorithms.
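Editorial aside: for orientation, one common form of the strongly convex program in question (our reconstruction; the paper's exact constants may differ) augments principal component pursuit with small Tikhonov terms: $\min_{L,S} \|L\|_* + \lambda \|S\|_1 + \frac{\tau}{2}\|L\|_F^2 + \frac{\tau}{2}\|S\|_F^2$ subject to $\mathcal{P}_Q(L+S) = \mathcal{P}_Q(M)$, where $M$ is the observed matrix, $\mathcal{P}_Q$ projects onto the subspace defined by the reduced linear measurements, and $\tau > 0$ is what makes the objective strongly convex.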
1608.06787
Natasha Alechina
Natasha Alechina, Mehdi Dastani, and Brian Logan
Expressibility of norms in temporal logic
3 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this short note we address the issue of expressing norms (such as obligations and prohibitions) in temporal logic. In particular, we address the argument from [Governatori 2015] that norms cannot be expressed in Linear Time Temporal Logic (LTL).
[ { "created": "Wed, 24 Aug 2016 12:01:36 GMT", "version": "v1" } ]
2016-08-25
[ [ "Alechina", "Natasha", "" ], [ "Dastani", "Mehdi", "" ], [ "Logan", "Brian", "" ] ]
In this short note we address the issue of expressing norms (such as obligations and prohibitions) in temporal logic. In particular, we address the argument from [Governatori 2015] that norms cannot be expressed in Linear Time Temporal Logic (LTL).
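Editorial aside: as an illustration of the encodings at issue (standard LTL idioms, not formulas quoted from the note), a conditional obligation "whenever there is a violation, a sanction must eventually follow" can be written $\mathbf{G}(\mathit{violation} \rightarrow \mathbf{F}\,\mathit{sanction})$, a prohibition of $p$ as $\mathbf{G}\,\neg p$, and an achievement obligation to bring about $p$ before a deadline $d$ as $\neg d \,\mathbf{U}\, p$; the disagreement with [Governatori 2015] concerns whether such patterns capture the full normative semantics, e.g. compensatory obligations.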
2009.06389
Sahib Singh
Tom Farrand, Fatemehsadat Mireshghallah, Sahib Singh, Andrew Trask
Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy
5 pages, 5 figures
null
null
null
cs.LG cs.AI cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deployment of deep learning in different fields and industries is growing day by day due to its performance, which relies on the availability of data and compute. Data is often crowd-sourced and contains sensitive information about its contributors, which leaks into models that are trained on it. To achieve rigorous privacy guarantees, differentially private training mechanisms are used. However, it has recently been shown that differential privacy can exacerbate existing biases in the data and have disparate impacts on the accuracy of different subgroups of data. In this paper, we aim to study these effects within differentially private deep learning. Specifically, we aim to study how different levels of imbalance in the data affect the accuracy and the fairness of the decisions made by the model, given different levels of privacy. We demonstrate that even small imbalances and loose privacy guarantees can cause disparate impacts.
[ { "created": "Thu, 10 Sep 2020 18:35:49 GMT", "version": "v1" }, { "created": "Fri, 25 Sep 2020 16:00:29 GMT", "version": "v2" }, { "created": "Sat, 3 Oct 2020 11:55:05 GMT", "version": "v3" } ]
2020-10-06
[ [ "Farrand", "Tom", "" ], [ "Mireshghallah", "Fatemehsadat", "" ], [ "Singh", "Sahib", "" ], [ "Trask", "Andrew", "" ] ]
Deployment of deep learning in different fields and industries is growing day by day due to its performance, which relies on the availability of data and compute. Data is often crowd-sourced and contains sensitive information about its contributors, which leaks into models that are trained on it. To achieve rigorous privacy guarantees, differentially private training mechanisms are used. However, it has recently been shown that differential privacy can exacerbate existing biases in the data and have disparate impacts on the accuracy of different subgroups of data. In this paper, we aim to study these effects within differentially private deep learning. Specifically, we aim to study how different levels of imbalance in the data affect the accuracy and the fairness of the decisions made by the model, given different levels of privacy. We demonstrate that even small imbalances and loose privacy guarantees can cause disparate impacts.
2107.02153
Jungsoo Park
Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, Hannaneh Hajishirzi
FaVIQ: FAct Verification from Information-seeking Questions
ACL 2022(long). Data & Code available at https://faviq.github.io
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Despite significant interest in developing general purpose fact checking models, it is challenging to construct a large-scale fact verification dataset with realistic real-world claims. Existing claims are either authored by crowdworkers, thereby introducing subtle biases that are difficult to control for, or manually verified by professional fact checkers, causing them to be expensive and limited in scale. In this paper, we construct a large-scale challenging fact verification dataset called FAVIQ, consisting of 188k claims derived from an existing corpus of ambiguous information-seeking questions. The ambiguities in the questions enable automatically constructing true and false claims that reflect user confusions (e.g., the year of the movie being filmed vs. being released). Claims in FAVIQ are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification. Our experiments show that the state-of-the-art models are far from solving our new task. Moreover, training on our data helps in professional fact-checking, outperforming models trained on the widely used dataset FEVER or in-domain data by up to 17% absolute. Altogether, our data will serve as a challenging benchmark for natural language understanding and support future progress in professional fact checking.
[ { "created": "Mon, 5 Jul 2021 17:31:44 GMT", "version": "v1" }, { "created": "Tue, 15 Mar 2022 07:38:56 GMT", "version": "v2" } ]
2022-03-16
[ [ "Park", "Jungsoo", "" ], [ "Min", "Sewon", "" ], [ "Kang", "Jaewoo", "" ], [ "Zettlemoyer", "Luke", "" ], [ "Hajishirzi", "Hannaneh", "" ] ]
Despite significant interest in developing general purpose fact checking models, it is challenging to construct a large-scale fact verification dataset with realistic real-world claims. Existing claims are either authored by crowdworkers, thereby introducing subtle biases that are difficult to control for, or manually verified by professional fact checkers, causing them to be expensive and limited in scale. In this paper, we construct a large-scale challenging fact verification dataset called FAVIQ, consisting of 188k claims derived from an existing corpus of ambiguous information-seeking questions. The ambiguities in the questions enable automatically constructing true and false claims that reflect user confusions (e.g., the year of the movie being filmed vs. being released). Claims in FAVIQ are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification. Our experiments show that the state-of-the-art models are far from solving our new task. Moreover, training on our data helps in professional fact-checking, outperforming models trained on the widely used dataset FEVER or in-domain data by up to 17% absolute. Altogether, our data will serve as a challenging benchmark for natural language understanding and support future progress in professional fact checking.
2203.11938
Vladislav Golyanik
Navami Kairanda and Edith Tretschk and Mohamed Elgharib and Christian Theobalt and Vladislav Golyanik
{\phi}-SfT: Shape-from-Template with a Physics-Based Deformation Model
11 pages, 8 figures and one table; Computer Vision and Pattern Recognition (CVPR) 2022
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera while assuming a 3D state known in advance (a template). This is an important yet challenging problem due to the under-constrained nature of the monocular setting. Existing SfT techniques predominantly use geometric and simplified deformation models, which often limits their reconstruction abilities. In contrast to previous works, this paper proposes a new SfT approach explaining 2D observations through physical simulations accounting for forces and material properties. Our differentiable physics simulator regularises the surface evolution and optimises the material elastic properties such as bending coefficients, stretching stiffness and density. We use a differentiable renderer to minimise the dense reprojection error between the estimated 3D states and the input images and recover the deformation parameters using an adaptive gradient-based optimisation. For the evaluation, we record with an RGB-D camera challenging real surfaces exposed to physical forces with various material properties and textures. Our approach significantly reduces the 3D reconstruction error compared to multiple competing methods. For the source code and data, see https://4dqv.mpi-inf.mpg.de/phi-SfT/.
[ { "created": "Tue, 22 Mar 2022 17:59:57 GMT", "version": "v1" } ]
2022-03-23
[ [ "Kairanda", "Navami", "" ], [ "Tretschk", "Edith", "" ], [ "Elgharib", "Mohamed", "" ], [ "Theobalt", "Christian", "" ], [ "Golyanik", "Vladislav", "" ] ]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera while assuming a 3D state known in advance (a template). This is an important yet challenging problem due to the under-constrained nature of the monocular setting. Existing SfT techniques predominantly use geometric and simplified deformation models, which often limits their reconstruction abilities. In contrast to previous works, this paper proposes a new SfT approach explaining 2D observations through physical simulations accounting for forces and material properties. Our differentiable physics simulator regularises the surface evolution and optimises the material elastic properties such as bending coefficients, stretching stiffness and density. We use a differentiable renderer to minimise the dense reprojection error between the estimated 3D states and the input images and recover the deformation parameters using an adaptive gradient-based optimisation. For the evaluation, we record with an RGB-D camera challenging real surfaces exposed to physical forces with various material properties and textures. Our approach significantly reduces the 3D reconstruction error compared to multiple competing methods. For the source code and data, see https://4dqv.mpi-inf.mpg.de/phi-SfT/.
2104.05999
Pieter Boom
Pieter D. Boom, Ashley Seepujak, Odysseas Kosmas, Lee Margetts and Andrey Jivkov
Parallelized Discrete Exterior Calculus for Three-Dimensional Elliptic Problems
null
null
10.1016/j.cpc.2022.108456
null
cs.MS physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A formulation of elliptic boundary value problems is used to develop the first discrete exterior calculus (DEC) library for massively parallel computations with 3D domains. This can be used for steady-state analysis of any physical process driven by the gradient of a scalar quantity, e.g. temperature, concentration, pressure or electric potential, and is easily extendable to transient analysis. In addition to offering this library to the community, we demonstrate one important benefit from the DEC formulation: effortless introduction of strong heterogeneities and discontinuities. These are typical for real materials, but challenging for widely used domain discretization schemes, such as finite elements. Specifically, we demonstrate the efficiency of the method for calculating the evolution of thermal conductivity of a solid with a growing crack population. Future development of the library will deal with transient problems, and more importantly with processes driven by gradients of vector quantities.
[ { "created": "Tue, 13 Apr 2021 08:06:48 GMT", "version": "v1" } ]
2022-07-20
[ [ "Boom", "Pieter D.", "" ], [ "Seepujak", "Ashley", "" ], [ "Kosmas", "Odysseas", "" ], [ "Margetts", "Lee", "" ], [ "Jivkov", "Andrey", "" ] ]
A formulation of elliptic boundary value problems is used to develop the first discrete exterior calculus (DEC) library for massively parallel computations with 3D domains. This can be used for steady-state analysis of any physical process driven by the gradient of a scalar quantity, e.g. temperature, concentration, pressure or electric potential, and is easily extendable to transient analysis. In addition to offering this library to the community, we demonstrate one important benefit from the DEC formulation: effortless introduction of strong heterogeneities and discontinuities. These are typical for real materials, but challenging for widely used domain discretization schemes, such as finite elements. Specifically, we demonstrate the efficiency of the method for calculating the evolution of thermal conductivity of a solid with a growing crack population. Future development of the library will deal with transient problems, and more importantly with processes driven by gradients of vector quantities.
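Editorial aside: for context, the DEC statement of such an elliptic problem takes a compact algebraic form (standard DEC notation, not necessarily the library's API): with $u$ a 0-form on vertices, $d_0$ the signed vertex-edge incidence matrix, and $\star_0, \star_1$ diagonal Hodge stars (the latter carrying the material conductivity), the steady diffusion equation $-\nabla \cdot (k \nabla u) = f$ discretizes to the sparse symmetric system $d_0^{\mathsf{T}} \star_1 d_0\, u = \star_0 f$; heterogeneities and discontinuities such as cracks can then be introduced simply by rescaling or zeroing the corresponding diagonal entries of $\star_1$.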
1108.4478
Mehdi Karimi
Mehdi Karimi and Amir H. Banihashemi
An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an efficient algorithm for finding the dominant trapping sets of a low-density parity-check (LDPC) code. The algorithm can be used to estimate the error floor of LDPC codes or to be part of the apparatus to design LDPC codes with low error floors. For regular codes, the algorithm is initiated with a set of short cycles as the input. For irregular codes, in addition to short cycles, variable nodes with low degree and cycles with low approximate cycle extrinsic message degree (ACE) are also used as the initial inputs. The initial inputs are then expanded recursively to dominant trapping sets of increasing size. At the core of the algorithm lies the analysis of the graphical structure of dominant trapping sets and the relationship of such structures to short cycles, low-degree variable nodes and cycles with low ACE. The algorithm is universal in the sense that it can be used for an arbitrary graph and that it can be tailored to find other graphical objects, such as absorbing sets and Zyablov-Pinsker (ZP) trapping sets, known to dominate the performance of LDPC codes in the error floor region over different channels and for different iterative decoding algorithms. Simulation results on several LDPC codes demonstrate the accuracy and efficiency of the proposed algorithm. In particular, the algorithm is significantly faster than the existing search algorithms for dominant trapping sets.
[ { "created": "Tue, 23 Aug 2011 02:17:45 GMT", "version": "v1" }, { "created": "Fri, 13 Apr 2012 18:58:38 GMT", "version": "v2" } ]
2012-04-16
[ [ "Karimi", "Mehdi", "" ], [ "Banihashemi", "Amir H.", "" ] ]
This paper presents an efficient algorithm for finding the dominant trapping sets of a low-density parity-check (LDPC) code. The algorithm can be used to estimate the error floor of LDPC codes or to be part of the apparatus to design LDPC codes with low error floors. For regular codes, the algorithm is initiated with a set of short cycles as the input. For irregular codes, in addition to short cycles, variable nodes with low degree and cycles with low approximate cycle extrinsic message degree (ACE) are also used as the initial inputs. The initial inputs are then expanded recursively to dominant trapping sets of increasing size. At the core of the algorithm lies the analysis of the graphical structure of dominant trapping sets and the relationship of such structures to short cycles, low-degree variable nodes and cycles with low ACE. The algorithm is universal in the sense that it can be used for an arbitrary graph and that it can be tailored to find other graphical objects, such as absorbing sets and Zyablov-Pinsker (ZP) trapping sets, known to dominate the performance of LDPC codes in the error floor region over different channels and for different iterative decoding algorithms. Simulation results on several LDPC codes demonstrate the accuracy and efficiency of the proposed algorithm. In particular, the algorithm is significantly faster than the existing search algorithms for dominant trapping sets.
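Editorial aside: a heavily simplified sketch of the recursive expansion idea (Python; the Tanner-graph encoding, the expansion rule, and the stopping criterion are our simplifications, not the paper's algorithm):

from collections import Counter

def unsatisfied_checks(var_set, var_to_checks):
    # Checks connected an odd number of times to var_set (the unsatisfied
    # checks that characterize an (a, b) trapping set).
    touch = Counter(c for v in var_set for c in var_to_checks[v])
    return {c for c, k in touch.items() if k % 2 == 1}

def expand(initial_sets, var_to_checks, check_to_vars, max_size):
    # Grow candidates (seeded with, e.g., short cycles) one variable node
    # at a time, exploring neighbors of currently unsatisfied checks.
    found, frontier = set(), [frozenset(s) for s in initial_sets]
    while frontier:
        s = frontier.pop()
        if s in found or len(s) > max_size:
            continue
        found.add(s)
        for c in unsatisfied_checks(s, var_to_checks):
            frontier.extend(s | {v} for v in check_to_vars[c] if v not in s)
    return found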
2304.11087
Ebenezer Isaac
Ebenezer R. H. P. Isaac and Jim Reno
AI Product Security: A Primer for Developers
10 pages, 1 figure
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Not too long ago, AI security used to mean the research and practice of how AI can empower cybersecurity, that is, AI for security. Ever since Ian Goodfellow and his team popularized adversarial attacks on machine learning, security for AI has become an important concern and also part of AI security. It is imperative to understand the threats to machine learning products and avoid common pitfalls in AI product development. This article is addressed to developers, designers, managers and researchers of AI software products.
[ { "created": "Tue, 18 Apr 2023 05:22:34 GMT", "version": "v1" } ]
2023-04-24
[ [ "Isaac", "Ebenezer R. H. P.", "" ], [ "Reno", "Jim", "" ] ]
Not too long ago, AI security used to mean the research and practice of how AI can empower cybersecurity, that is, AI for security. Ever since Ian Goodfellow and his team popularized adversarial attacks on machine learning, security for AI has become an important concern and also part of AI security. It is imperative to understand the threats to machine learning products and avoid common pitfalls in AI product development. This article is addressed to developers, designers, managers and researchers of AI software products.
2212.04242
Olesya Mryglod
Olesya Mryglod
One for all and all for one: on the role of a conference in a scientist's life
9 pages, 5 figures, 2 tables
Condensed Matter Physics, 2023, vol. 26, No. 1, 13801
10.5488/CMP.26.13801
null
cs.DL physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
The quantitative description of the scientific conference MECO (Middle European Cooperation in Statistical Physics) based on bibliographic records is presented in the paper. Statistics of contributions and participants, co-authorship patterns at the levels of authors and countries, typical proportions of newcomers and permanent participants as well as other characteristics of the scientific event are discussed. The results of this case study contribute to better understanding of the ways of formalization and assessment of conferences and their role in individual academic careers. To highlight the latter, the change of perspective is used: in addition to the general analysis of the conference data, an ego-centric approach is used to emphasize the role of a particular participant for the conference and, vice versa, the role of MECO in the researcher's professional life. This paper is part of the special CMP issue dedicated to the anniversary of Bertrand Berche - a well-known physicist, an active member of the community of authors and editors of the journal, long time collaborator and dear friend of the author.
[ { "created": "Thu, 8 Dec 2022 12:54:28 GMT", "version": "v1" }, { "created": "Mon, 6 Mar 2023 11:05:44 GMT", "version": "v2" } ]
2023-03-07
[ [ "Mryglod", "Olesya", "" ] ]
The quantitative description of the scientific conference MECO (Middle European Cooperation in Statistical Physics) based on bibliographic records is presented in the paper. Statistics of contributions and participants, co-authorship patterns at the levels of authors and countries, typical proportions of newcomers and permanent participants as well as other characteristics of the scientific event are discussed. The results of this case study contribute to better understanding of the ways of formalization and assessment of conferences and their role in individual academic careers. To highlight the latter, the change of perspective is used: in addition to the general analysis of the conference data, an ego-centric approach is used to emphasize the role of a particular participant for the conference and, vice versa, the role of MECO in the researcher's professional life. This paper is part of the special CMP issue dedicated to the anniversary of Bertrand Berche - a well-known physicist, an active member of the community of authors and editors of the journal, long time collaborator and dear friend of the author.
2308.04026
Jiaju Lin
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
submit to EMNLP2023 demo track
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, and (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e., memory, planning and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com.
[ { "created": "Tue, 8 Aug 2023 03:59:28 GMT", "version": "v1" } ]
2023-08-09
[ [ "Lin", "Jiaju", "" ], [ "Zhao", "Haoran", "" ], [ "Zhang", "Aochi", "" ], [ "Wu", "Yiting", "" ], [ "Ping", "Huqiuyue", "" ], [ "Chen", "Qin", "" ] ]
With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, and (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e., memory, planning and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com.
1907.06775
Lukas Iffl\"ander
Lukas Iffl\"ander, Alexandra Dmitrienko, Christoph Hagen, Michael Jobst, Samuel Kounev
Hands Off my Database: Ransomware Detection in Databases through Dynamic Analysis of Query Sequences
null
null
null
null
cs.CR cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ransomware is an emerging threat which imposed a \$5 billion loss in 2017 and is predicted to hit \$11.5 billion in 2019. While initially targeting PC (client) platforms, ransomware recently made the leap to server-side databases - starting in January 2017 with the MongoDB Apocalypse attack, followed by other attack waves targeting a wide range of DB types such as MongoDB, MySQL, ElasticSearch, Cassandra, Hadoop, and CouchDB. While previous research has developed countermeasures against client-side ransomware (e.g., CryptoDrop and ShieldFS), the problem of server-side ransomware has received no attention so far. In our work, we aim to bridge this gap and present DIMAQS (Dynamic Identification of Malicious Query Sequences), a novel anti-ransomware solution for databases. DIMAQS performs runtime monitoring of incoming queries and pattern matching using Colored Petri Nets (CPNs) for attack detection. Our system design exhibits several novel techniques to enable efficient detection of malicious query sequences globally (i.e., without limiting detection to distinct user connections). Our proof-of-concept implementation targets MySQL servers. The evaluation shows high efficiency with no false positives, no false negatives, and a very moderate performance overhead of under 5%. We will publish our data sets and implementation, allowing the community to reproduce our tests and compare with our results.
[ { "created": "Mon, 15 Jul 2019 22:22:38 GMT", "version": "v1" } ]
2019-07-17
[ [ "Iffländer", "Lukas", "" ], [ "Dmitrienko", "Alexandra", "" ], [ "Hagen", "Christoph", "" ], [ "Jobst", "Michael", "" ], [ "Kounev", "Samuel", "" ] ]
Ransomware is an emerging threat which imposed a \$5 billion loss in 2017 and is predicted to hit \$11.5 billion in 2019. While initially targeting PC (client) platforms, ransomware recently made the leap to server-side databases - starting in January 2017 with the MongoDB Apocalypse attack, followed by other attack waves targeting a wide range of DB types such as MongoDB, MySQL, ElasticSearch, Cassandra, Hadoop, and CouchDB. While previous research has developed countermeasures against client-side ransomware (e.g., CryptoDrop and ShieldFS), the problem of server-side ransomware has received no attention so far. In our work, we aim to bridge this gap and present DIMAQS (Dynamic Identification of Malicious Query Sequences), a novel anti-ransomware solution for databases. DIMAQS performs runtime monitoring of incoming queries and pattern matching using Colored Petri Nets (CPNs) for attack detection. Our system design exhibits several novel techniques to enable efficient detection of malicious query sequences globally (i.e., without limiting detection to distinct user connections). Our proof-of-concept implementation targets MySQL servers. The evaluation shows high efficiency with no false positives, no false negatives, and a very moderate performance overhead of under 5%. We will publish our data sets and implementation, allowing the community to reproduce our tests and compare with our results.
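To make the detection idea concrete, here is a heavily simplified sketch: the paper's Colored Petri Nets are replaced by a plain three-stage state machine that tracks a ransomware-like sequence (enumerate, drop, leave a ransom note) across all connections; the patterns and queries are invented for illustration:

import re

# Ordered stages of a ransomware-like attack on a database server.
STAGES = [
    re.compile(r"^\s*SHOW\s+DATABASES", re.I),
    re.compile(r"^\s*DROP\s+DATABASE", re.I),
    re.compile(r"^\s*INSERT\s+INTO\s+\S*warning", re.I),
]

def make_detector():
    state = {"stage": 0}
    def feed(query):
        # Global matching: no per-connection state, mirroring the paper's
        # connection-agnostic detection goal.
        if state["stage"] < len(STAGES) and STAGES[state["stage"]].search(query):
            state["stage"] += 1
        return state["stage"] == len(STAGES)   # True -> raise an alert
    return feed

detect = make_detector()
for q in ["SHOW DATABASES", "DROP DATABASE shop",
          "INSERT INTO warning VALUES ('send 0.2 BTC')"]:
    if detect(q):
        print("ALERT: ransomware-like query sequence detected")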
1911.07107
He Wang
He Wang, Feixiang He, Zhexi Peng, Yongliang Yang, Tianjia Shao, Kun Zhou, David Hogg
SMART: Skeletal Motion Action Recognition aTtack
null
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial attacks have inspired great interest in computer vision by showing that classification-based solutions are prone to imperceptible attacks in many tasks. In this paper, we propose a method, SMART, to attack action recognizers which rely on 3D skeletal motions. Our method involves an innovative perceptual loss which ensures the imperceptibility of the attack. Empirical studies demonstrate that SMART is effective in both white-box and black-box scenarios. Its generalizability is evidenced on a variety of action recognizers and datasets. Its versatility is shown in different attacking strategies. Its deceitfulness is proven in extensive perceptual studies. Finally, SMART shows that adversarial attack on 3D skeletal motion, one type of time-series data, is significantly different from traditional adversarial attack problems.
[ { "created": "Sat, 16 Nov 2019 22:25:29 GMT", "version": "v1" }, { "created": "Thu, 21 Nov 2019 13:15:49 GMT", "version": "v2" }, { "created": "Tue, 10 Mar 2020 13:12:04 GMT", "version": "v3" } ]
2020-03-11
[ [ "Wang", "He", "" ], [ "He", "Feixiang", "" ], [ "Peng", "Zhexi", "" ], [ "Yang", "Yongliang", "" ], [ "Shao", "Tianjia", "" ], [ "Zhou", "Kun", "" ], [ "Hogg", "David", "" ] ]
Adversarial attacks have inspired great interest in computer vision by showing that classification-based solutions are prone to imperceptible attacks in many tasks. In this paper, we propose a method, SMART, to attack action recognizers which rely on 3D skeletal motions. Our method involves an innovative perceptual loss which ensures the imperceptibility of the attack. Empirical studies demonstrate that SMART is effective in both white-box and black-box scenarios. Its generalizability is evidenced on a variety of action recognizers and datasets. Its versatility is shown in different attacking strategies. Its deceitfulness is proven in extensive perceptual studies. Finally, SMART shows that adversarial attack on 3D skeletal motion, one type of time-series data, is significantly different from traditional adversarial attack problems.
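A minimal white-box sketch of the attack recipe on a toy recognizer: perturb the motion to raise the classification loss while penalizing the second temporal difference (acceleration) of the perturbation, a crude stand-in for the paper's perceptual loss; the linear model, dimensions, and loss weights are placeholders:

import torch

torch.manual_seed(0)
T, J = 30, 17 * 3                       # frames, joint coordinates
motion = torch.randn(T, J)              # toy skeletal motion clip
label = torch.tensor([1])
model = torch.nn.Linear(T * J, 5)       # placeholder action recognizer

delta = torch.zeros_like(motion, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-2)
for _ in range(200):
    logits = model((motion + delta).reshape(1, -1))
    cls_loss = torch.nn.functional.cross_entropy(logits, label)
    accel = delta[2:] - 2 * delta[1:-1] + delta[:-2]   # 2nd temporal difference
    perceptual = (accel ** 2).mean()                   # jerky = perceptible
    loss = -cls_loss + 10.0 * perceptual               # untargeted attack
    optimizer.zero_grad(); loss.backward(); optimizer.step()

print("predicted class after attack:",
      model((motion + delta).reshape(1, -1)).argmax().item())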
2108.03654
Mohamed Tarek
Mohamed Tarek and Tapabrata Ray
Approximation schemes for stochastic compliance-based topology optimization with many loading scenarios
arXiv admin note: substantial text overlap with arXiv:2103.04594
null
10.1007/s00158-022-03221-0
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
In this paper, approximation schemes are proposed for handling load uncertainty in compliance-based topology optimization problems, where the uncertainty is described in the form of a set of finitely many loading scenarios. Efficient approximate methods are proposed to approximately evaluate and differentiate either 1) the mean compliance, or 2) a class of scalar-valued functions of the individual load compliances, such as the weighted sum of the mean and standard deviation. The computational time complexities of the proposed algorithms are analyzed, compared to those of the exact approaches, and then experimentally verified. Finally, some mean compliance minimization problems and some risk-averse compliance minimization problems are solved for verification.
[ { "created": "Sun, 8 Aug 2021 14:39:24 GMT", "version": "v1" } ]
2022-05-03
[ [ "Tarek", "Mohamed", "" ], [ "Ray", "Tapabrata", "" ] ]
In this paper, approximation schemes are proposed for handling load uncertainty in compliance-based topology optimization problems, where the uncertainty is described in the form of a set of finitely many loading scenarios. Efficient approximate methods are proposed to approximately evaluate and differentiate either 1) the mean compliance, or 2) a class of scalar-valued functions of the individual load compliances, such as the weighted sum of the mean and standard deviation. The computational time complexities of the proposed algorithms are analyzed, compared to those of the exact approaches, and then experimentally verified. Finally, some mean compliance minimization problems and some risk-averse compliance minimization problems are solved for verification.
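For orientation, the exact quantity that the approximation schemes target is easy to state: with stiffness matrix K and load vectors f_i, the compliance of scenario i is f_i^T K^{-1} f_i, and the exact mean requires one linear solve per scenario, which is what becomes expensive with many scenarios. A small sketch with a random SPD stand-in for K:

import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 8                               # DOFs, loading scenarios
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)                # stand-in SPD stiffness matrix
F = rng.standard_normal((n, N))            # one load vector per scenario

U = np.linalg.solve(K, F)                  # N solves: the costly part
compliances = np.einsum("ij,ij->j", F, U)  # c_i = f_i^T u_i
print("mean compliance:", compliances.mean())
print("mean + std (risk-averse objective):",
      compliances.mean() + compliances.std())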
2202.09773
Lige Ding
Lige Ding, Dong Zhao, Zhaofeng Wang, Guang Wang, Chang Tan, Lei Fan and Huadong Ma
Learning to Help Emergency Vehicles Arrive Faster: A Cooperative Vehicle-Road Scheduling Approach
13 pages, 10 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ever-increasing heavy traffic congestion potentially impedes the accessibility of emergency vehicles (EVs), resulting in detrimental impacts on critical services and even the safety of people's lives. Hence, it is important to develop an efficient scheduling approach that helps EVs arrive faster. Existing vehicle-centric scheduling approaches aim to recommend optimal paths for EVs based on the current traffic status, while road-centric scheduling approaches aim to improve the traffic condition and assign a higher priority for EVs to pass an intersection. With the intuition that real-time vehicle-road information interaction and strategy coordination can bring more benefits, we propose LEVID, a LEarning-based cooperative VehIcle-roaD scheduling approach including a real-time route planning module and a collaborative traffic signal control module, which interact with each other and make decisions iteratively. The real-time route planning module adapts the artificial potential field method to address the real-time changes of traffic signals and avoid falling into a local optimum. The collaborative traffic signal control module leverages a graph attention reinforcement learning framework to extract the latent features of different intersections and abstract their interplay to learn cooperative policies. Extensive experiments based on multiple real-world datasets show that our approach outperforms the state-of-the-art baselines.
[ { "created": "Sun, 20 Feb 2022 10:25:15 GMT", "version": "v1" } ]
2022-02-22
[ [ "Ding", "Lige", "" ], [ "Zhao", "Dong", "" ], [ "Wang", "Zhaofeng", "" ], [ "Wang", "Guang", "" ], [ "Tan", "Chang", "" ], [ "Fan", "Lei", "" ], [ "Ma", "Huadong", "" ] ]
The ever-increasing heavy traffic congestion potentially impedes the accessibility of emergency vehicles (EVs), resulting in detrimental impacts on critical services and even the safety of people's lives. Hence, it is important to develop an efficient scheduling approach that helps EVs arrive faster. Existing vehicle-centric scheduling approaches aim to recommend optimal paths for EVs based on the current traffic status, while road-centric scheduling approaches aim to improve the traffic condition and assign a higher priority for EVs to pass an intersection. With the intuition that real-time vehicle-road information interaction and strategy coordination can bring more benefits, we propose LEVID, a LEarning-based cooperative VehIcle-roaD scheduling approach including a real-time route planning module and a collaborative traffic signal control module, which interact with each other and make decisions iteratively. The real-time route planning module adapts the artificial potential field method to address the real-time changes of traffic signals and avoid falling into a local optimum. The collaborative traffic signal control module leverages a graph attention reinforcement learning framework to extract the latent features of different intersections and abstract their interplay to learn cooperative policies. Extensive experiments based on multiple real-world datasets show that our approach outperforms the state-of-the-art baselines.
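A bare-bones version of the artificial potential field idea the route planner adapts: the vehicle is pulled toward its goal and pushed away from nearby obstacles (for an EV, e.g., congested segments). The gains, radius, and scenario below are illustrative, and the well-known local-minimum problem of this simple form is exactly what the paper's adaptation addresses:

import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, radius=5.0, lr=0.1):
    force = k_att * (goal - pos)                     # attractive term
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < radius:                        # repulsion only nearby
            force += k_rep * (1/d - 1/radius) / d**2 * (pos - obs) / d
    return pos + lr * force

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([4.0, 6.0])]
for _ in range(200):
    pos = apf_step(pos, goal, obstacles)
print("final position:", pos.round(2))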
1401.1882
YuLi Sun
Yuli Sun and Jinxu Tao
Image reconstruction from few views by L0-norm optimization
11 pages,5 figures, 1 table
null
10.1088/1674-1056/23/7/078703
null
cs.IT cs.CV math.IT
http://creativecommons.org/licenses/by-nc-sa/3.0/
The L1-norm of the gradient-magnitude images (GMI), which is the well-known total variation (TV) model, is widely used as a regularizer in few-view CT reconstruction. Because L1-norm TV regularization tends to penalize the image gradient uniformly, low-contrast structures are sometimes over-smoothed; we therefore propose a new algorithm based on the L0-norm of the GMI to deal with the few-view problem. To rise to the challenges introduced by the L0-norm DGT, the algorithm uses a pseudo-inverse transform of the DGT and adapts an iterative hard thresholding (IHT) algorithm, whose convergence and efficiency have been theoretically proven. Simulations indicate that the proposed algorithm can significantly improve the reconstruction quality.
[ { "created": "Thu, 9 Jan 2014 02:08:06 GMT", "version": "v1" } ]
2019-05-01
[ [ "Sun", "Yuli", "" ], [ "Tao", "Jinxu", "" ] ]
The L1-norm of the gradient-magnitude images (GMI), which is the well-known total variation (TV) model, is widely used as a regularizer in few-view CT reconstruction. Because L1-norm TV regularization tends to penalize the image gradient uniformly, low-contrast structures are sometimes over-smoothed; we therefore propose a new algorithm based on the L0-norm of the GMI to deal with the few-view problem. To rise to the challenges introduced by the L0-norm DGT, the algorithm uses a pseudo-inverse transform of the DGT and adapts an iterative hard thresholding (IHT) algorithm, whose convergence and efficiency have been theoretically proven. Simulations indicate that the proposed algorithm can significantly improve the reconstruction quality.
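The algorithmic core being adapted is iterative hard thresholding: a gradient step on the data-fit term followed by keeping only the s largest-magnitude coefficients. Below is a generic sparse-recovery sketch of that iteration (the paper applies the same idea to the gradient-domain representation of the image):

import numpy as np

def iht(A, y, s, steps=200, mu=None):
    # Approximately solves min ||y - A x||^2 subject to ||x||_0 <= s.
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2    # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + mu * (A.T @ (y - A @ x))        # gradient step
        small = np.argsort(np.abs(x))[:-s]      # all but the s largest
        x[small] = 0.0                          # hard threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = iht(A, A @ x_true, s=5)
print("recovery error:", np.linalg.norm(x_hat - x_true))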
1508.03931
Michael Goodrich
Michael T. Goodrich and Timothy Johnson and Manuel Torres
Knuthian Drawings of Series-Parallel Flowcharts
Full version
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by a classic paper by Knuth, we revisit the problem of drawing flowcharts of loop-free algorithms, that is, degree-three series-parallel digraphs. Our drawing algorithms show that it is possible to produce Knuthian drawings of degree-three series-parallel digraphs with good aspect ratios and small numbers of edge bends.
[ { "created": "Mon, 17 Aug 2015 05:59:27 GMT", "version": "v1" } ]
2015-08-18
[ [ "Goodrich", "Michael T.", "" ], [ "Johnson", "Timothy", "" ], [ "Torres", "Manuel", "" ] ]
Inspired by a classic paper by Knuth, we revisit the problem of drawing flowcharts of loop-free algorithms, that is, degree-three series-parallel digraphs. Our drawing algorithms show that it is possible to produce Knuthian drawings of degree-three series-parallel digraphs with good aspect ratios and small numbers of edge bends.
cs/0110037
Nancy Mazur
Nancy Mazur (1), Peter Ross (2), Gerda Janssens (1) and Maurice Bruynooghe (1) ((1) Dept. of Computer Science K.U.Leuven, (2) Mission Critical)
Practical Aspects for a Working Compile Time Garbage Collection System for Mercury
15 pages. A version of this paper will appear in Proceeding of the Seventeenth International Conference on Logic Programming (ICLP2001)
null
null
null
cs.PL
null
Compile-time garbage collection (CTGC) is still a very uncommon feature within compilers. In previous work we have developed a compile-time structure reuse system for Mercury, a logic programming language. This system indicates which data structures can safely be reused at run-time. As preliminary experiments were promising, we have continued this work and now have a working, well-performing, near-to-ship CTGC system built into the Melbourne Mercury Compiler (MMC). In this paper we present the design decisions leading to this system, report the results of using CTGC for a set of benchmarks, including a real-world program, and finally discuss further possible improvements. Benchmarks show substantial memory savings and a noticeable reduction in execution time.
[ { "created": "Wed, 17 Oct 2001 16:20:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Mazur", "Nancy", "" ], [ "Ross", "Peter", "" ], [ "Janssens", "Gerda", "" ], [ "Bruynooghe", "Maurice", "" ] ]
Compile-time garbage collection (CTGC) is still a very uncommon feature within compilers. In previous work we have developed a compile-time structure reuse system for Mercury, a logic programming language. This system indicates which data structures can safely be reused at run-time. As preliminary experiments were promising, we have continued this work and now have a working, well-performing, near-to-ship CTGC system built into the Melbourne Mercury Compiler (MMC). In this paper we present the design decisions leading to this system, report the results of using CTGC for a set of benchmarks, including a real-world program, and finally discuss further possible improvements. Benchmarks show substantial memory savings and a noticeable reduction in execution time.
2311.12647
Jean-Luc Reding
Hendrik Meyer zum Felde, Jean-Luc Reding, Michael Lux
D-GATE: Decentralized Geolocation and Time Enforcement for Usage Control
null
2023 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Delft, Netherlands, 2023, pp. 386-395
10.1109/EuroSPW59978.2023.00049
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the context of cloud environments, data providers entrust their data to data consumers in order to allow further computing on their own IT infrastructure. Usage control measures allow the data provider to restrict the usage of its data even on the data consumer's system. Two such restrictions are geographic location and time limitations. Current solutions that could be used to enforce such constraints can be easily manipulated. These include solutions based on the system time, organizational agreements, GPS-based techniques, or simple delay measurements to derive the distance to known reference servers. With D-GATE, we propose a reliable solution that uses trusted execution environments and relies on a decentralized mesh of reference nodes, so-called GeoClients. Here, participants periodically measure the lowest network delay to each other to geolocate themselves. For data providers, it is thus possible to technically attest usage control with time and geolocation constraints without depending on centralized reference systems.
[ { "created": "Tue, 21 Nov 2023 14:48:12 GMT", "version": "v1" } ]
2023-11-22
[ [ "Felde", "Hendrik Meyer zum", "" ], [ "Reding", "Jean-Luc", "" ], [ "Lux", "Michael", "" ] ]
In the context of cloud environments, data providers entrust their data to data consumers in order to allow further computing on their own IT infrastructure. Usage control measures allow the data provider to restrict the usage of its data even on the data consumer's system. Two such restrictions are geographic location and time limitations. Current solutions that could be used to enforce such constraints can be easily manipulated. These include solutions based on the system time, organizational agreements, GPS-based techniques, or simple delay measurements to derive the distance to known reference servers. With D-GATE, we propose a reliable solution that uses trusted execution environments and relies on a decentralized mesh of reference nodes, so-called GeoClients. Here, participants periodically measure the lowest network delay to each other to geolocate themselves. For data providers, it is thus possible to technically attest usage control with time and geolocation constraints without depending on centralized reference systems.
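The physical principle behind delay-based geolocation is compact enough to state in code: a measured round-trip time bounds the distance to the peer, since the signal cannot travel faster than light (and in fiber typically propagates at roughly two thirds of c, giving tighter practical bounds). The values below are illustrative:

C_VACUUM_KM_PER_S = 299_792.458

def max_distance_km(rtt_seconds, propagation_factor=1.0):
    # One-way distance bound derived from a round-trip time measurement.
    return C_VACUUM_KM_PER_S * propagation_factor * rtt_seconds / 2

# A 4 ms RTT to a GeoClient at a known location:
print(max_distance_km(0.004))                          # ~600 km, hard bound
print(max_distance_km(0.004, propagation_factor=2/3))  # ~400 km, fiber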
1901.00100
Zhiri Tang
Zhiri Tang, Ruohua Zhu, Peng Lin, Jin He, Hao Wang, Qijun Huang, Sheng Chang, Qiming Ma
A Hardware Friendly Unsupervised Memristive Neural Network with Weight Sharing Mechanism
10 pages, 11 figures
Neurocomputing 2019
10.1016/j.neucom.2018.12.049
null
cs.ET cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Memristive neural networks (MNNs), which use memristors as neurons or synapses, have become a hot research topic recently. However, most memristors are not compatible with mainstream integrated circuit technology, and their stability at large scale remains unsatisfactory so far. In this paper, a hardware-friendly MNN circuit is introduced, in which the memristive characteristics are implemented by a digital integrated circuit. Through this method, spike timing dependent plasticity (STDP) and unsupervised learning are realized. A weight sharing mechanism is proposed to bridge the gap between network scale and hardware resources. Experimental results show that the mechanism significantly reduces hardware resource usage while maintaining good recognition accuracy and high speed. Moreover, resource usage grows more slowly than network scale, which suggests our method's potential for realizing large-scale neuromorphic networks.
[ { "created": "Tue, 1 Jan 2019 05:56:34 GMT", "version": "v1" } ]
2019-01-03
[ [ "Tang", "Zhiri", "" ], [ "Zhu", "Ruohua", "" ], [ "Lin", "Peng", "" ], [ "He", "Jin", "" ], [ "Wang", "Hao", "" ], [ "Huang", "Qijun", "" ], [ "Chang", "Sheng", "" ], [ "Ma", "Qiming", "" ] ]
Memristive neural networks (MNNs), which use memristors as neurons or synapses, have become a hot research topic recently. However, most memristors are not compatible with mainstream integrated circuit technology, and their stability at large scale remains unsatisfactory so far. In this paper, a hardware-friendly MNN circuit is introduced, in which the memristive characteristics are implemented by a digital integrated circuit. Through this method, spike timing dependent plasticity (STDP) and unsupervised learning are realized. A weight sharing mechanism is proposed to bridge the gap between network scale and hardware resources. Experimental results show that the mechanism significantly reduces hardware resource usage while maintaining good recognition accuracy and high speed. Moreover, resource usage grows more slowly than network scale, which suggests our method's potential for realizing large-scale neuromorphic networks.
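For reference, the pair-based STDP rule that such circuits implement can be written in a few lines: potentiate when the presynaptic spike precedes the postsynaptic one, depress otherwise, with exponentially decaying magnitude; the constants below are textbook-style values, not the paper's:

import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    dt = t_post - t_pre                      # ms
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # pre before post: potentiation
    return -a_minus * np.exp(dt / tau)       # post before pre: depression

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (60.0, 90.0)]:
    w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))
print("synaptic weight after three spike pairs:", round(w, 4))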
1807.04979
Guojun Yin
Guojun Yin, Lu Sheng, Bin Liu, Nenghai Yu, Xiaogang Wang, Jing Shao, Chen Change Loy
Zoom-Net: Mining Deep Feature Interactions for Visual Relationship Recognition
22 pages, 9 figures, accepted by ECCV 2018, the source code will be released on https://github.com/gjyin91/ZoomNet
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing visual relationships <subject-predicate-object> among any pair of localized objects is pivotal for image understanding. Previous studies have shown remarkable progress in exploiting linguistic priors or external textual information to improve the performance. In this work, we investigate an orthogonal perspective based on feature interactions. We show that by encouraging deep message propagation and interactions between local object features and global predicate features, one can achieve compelling performance in recognizing complex relationships without using any linguistic priors. To this end, we present two new pooling cells to encourage feature interactions: (i) Contrastive ROI Pooling Cell, which has a unique deROI pooling that inversely pools local object features to the corresponding area of global predicate features. (ii) Pyramid ROI Pooling Cell, which broadcasts global predicate features to reinforce local object features. The two cells constitute a Spatiality-Context-Appearance Module (SCA-M), which can be further stacked consecutively to form our final Zoom-Net. We further shed light on how one could resolve ambiguous and noisy object and predicate annotations by Intra-Hierarchical trees (IH-tree). Extensive experiments conducted on Visual Genome dataset demonstrate the effectiveness of our feature-oriented approach compared to state-of-the-art methods (Acc@1 11.42% from 8.16%) that depend on explicit modeling of linguistic interactions. We further show that SCA-M can be incorporated seamlessly into existing approaches to improve the performance by a large margin. The source code will be released on https://github.com/gjyin91/ZoomNet.
[ { "created": "Fri, 13 Jul 2018 09:20:39 GMT", "version": "v1" } ]
2018-07-16
[ [ "Yin", "Guojun", "" ], [ "Sheng", "Lu", "" ], [ "Liu", "Bin", "" ], [ "Yu", "Nenghai", "" ], [ "Wang", "Xiaogang", "" ], [ "Shao", "Jing", "" ], [ "Loy", "Chen Change", "" ] ]
Recognizing visual relationships <subject-predicate-object> among any pair of localized objects is pivotal for image understanding. Previous studies have shown remarkable progress in exploiting linguistic priors or external textual information to improve the performance. In this work, we investigate an orthogonal perspective based on feature interactions. We show that by encouraging deep message propagation and interactions between local object features and global predicate features, one can achieve compelling performance in recognizing complex relationships without using any linguistic priors. To this end, we present two new pooling cells to encourage feature interactions: (i) Contrastive ROI Pooling Cell, which has a unique deROI pooling that inversely pools local object features to the corresponding area of global predicate features. (ii) Pyramid ROI Pooling Cell, which broadcasts global predicate features to reinforce local object features. The two cells constitute a Spatiality-Context-Appearance Module (SCA-M), which can be further stacked consecutively to form our final Zoom-Net. We further shed light on how one could resolve ambiguous and noisy object and predicate annotations by Intra-Hierarchical trees (IH-tree). Extensive experiments conducted on Visual Genome dataset demonstrate the effectiveness of our feature-oriented approach compared to state-of-the-art methods (Acc@1 11.42% from 8.16%) that depend on explicit modeling of linguistic interactions. We further show that SCA-M can be incorporated seamlessly into existing approaches to improve the performance by a large margin. The source code will be released on https://github.com/gjyin91/ZoomNet.
2109.12105
Xing Niu
Xing Niu, Georgiana Dinu, Prashant Mathur, Anna Currey
Faithful Target Attribute Prediction in Neural Machine Translation
Withdrawn from Findings of ACL 2021
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The training data used in NMT is rarely controlled with respect to specific attributes, such as word casing or gender, which can cause errors in translations. We argue that predicting the target word and attributes simultaneously is an effective way to ensure that translations are more faithful to the training data distribution with respect to these attributes. Experimental results on two tasks, uppercased input translation and gender prediction, show that this strategy helps mirror the training data distribution in testing. It also facilitates data augmentation on the task of uppercased input translation.
[ { "created": "Fri, 24 Sep 2021 17:55:07 GMT", "version": "v1" } ]
2021-09-27
[ [ "Niu", "Xing", "" ], [ "Dinu", "Georgiana", "" ], [ "Mathur", "Prashant", "" ], [ "Currey", "Anna", "" ] ]
The training data used in NMT is rarely controlled with respect to specific attributes, such as word casing or gender, which can cause errors in translations. We argue that predicting the target word and attributes simultaneously is an effective way to ensure that translations are more faithful to the training data distribution with respect to these attributes. Experimental results on two tasks, uppercased input translation and gender prediction, show that this strategy helps mirror the training data distribution in testing. It also facilitates data augmentation on the task of uppercased input translation.
2010.00587
Quanquan Gu
Jiafan He and Dongruo Zhou and Quanquan Gu
Nearly Minimax Optimal Reinforcement Learning for Discounted MDPs
33 pages, 1 figure, 1 table. In NeurIPS 2021
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the reinforcement learning problem for discounted Markov Decision Processes (MDPs) under the tabular setting. We propose a model-based algorithm named UCBVI-$\gamma$, which is based on the \emph{optimism in the face of uncertainty principle} and the Bernstein-type bonus. We show that UCBVI-$\gamma$ achieves an $\tilde{O}\big({\sqrt{SAT}}/{(1-\gamma)^{1.5}}\big)$ regret, where $S$ is the number of states, $A$ is the number of actions, $\gamma$ is the discount factor and $T$ is the number of steps. In addition, we construct a class of hard MDPs and show that for any algorithm, the expected regret is at least $\tilde{\Omega}\big({\sqrt{SAT}}/{(1-\gamma)^{1.5}}\big)$. Our upper bound matches the minimax lower bound up to logarithmic factors, which suggests that UCBVI-$\gamma$ is nearly minimax optimal for discounted MDPs.
[ { "created": "Thu, 1 Oct 2020 17:57:47 GMT", "version": "v1" }, { "created": "Thu, 18 Feb 2021 16:34:04 GMT", "version": "v2" }, { "created": "Mon, 3 Jan 2022 02:23:25 GMT", "version": "v3" } ]
2022-01-04
[ [ "He", "Jiafan", "" ], [ "Zhou", "Dongruo", "" ], [ "Gu", "Quanquan", "" ] ]
We study the reinforcement learning problem for discounted Markov Decision Processes (MDPs) under the tabular setting. We propose a model-based algorithm named UCBVI-$\gamma$, which is based on the \emph{optimism in the face of uncertainty principle} and the Bernstein-type bonus. We show that UCBVI-$\gamma$ achieves an $\tilde{O}\big({\sqrt{SAT}}/{(1-\gamma)^{1.5}}\big)$ regret, where $S$ is the number of states, $A$ is the number of actions, $\gamma$ is the discount factor and $T$ is the number of steps. In addition, we construct a class of hard MDPs and show that for any algorithm, the expected regret is at least $\tilde{\Omega}\big({\sqrt{SAT}}/{(1-\gamma)^{1.5}}\big)$. Our upper bound matches the minimax lower bound up to logarithmic factors, which suggests that UCBVI-$\gamma$ is nearly minimax optimal for discounted MDPs.
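To see what a Bernstein-type bonus buys over a Hoeffding-style one, note that it scales with the empirical variance of the observed values rather than only their range, plus a lower-order 1/n term. The sketch below uses the empirical-Bernstein shape with illustrative constants, not the paper's exact UCBVI-$\gamma$ bonus:

import numpy as np

def bernstein_bonus(samples, delta=0.05, value_range=1.0):
    n = len(samples)
    log_term = np.log(2.0 / delta)
    variance = np.var(samples)
    return (np.sqrt(2 * variance * log_term / n)
            + 7 * value_range * log_term / (3 * (n - 1)))

rng = np.random.default_rng(0)
low_var = rng.uniform(0.49, 0.51, size=100)    # concentrated values
high_var = rng.uniform(0.0, 1.0, size=100)     # spread-out values
print(bernstein_bonus(low_var), "<", bernstein_bonus(high_var))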
2311.15395
Jann Goschenhofer
Jann Goschenhofer, Bernd Bischl, Zsolt Kira
ConstraintMatch for Semi-constrained Clustering
null
2023 International Joint Conference on Neural Networks (IJCNN)
null
null
cs.LG cs.CV stat.ML
http://creativecommons.org/licenses/by/4.0/
Constrained clustering allows the training of classification models using pairwise constraints only, which are weak and relatively easy to mine, while still yielding full-supervision-level model performance. While they perform well even in the absence of the true underlying class labels, constrained clustering models still require large amounts of binary constraint annotations for training. In this paper, we propose a semi-supervised context whereby a large amount of \textit{unconstrained} data is available alongside a smaller set of constraints, and propose \textit{ConstraintMatch} to leverage such unconstrained data. While a great deal of progress has been made in semi-supervised learning using full labels, there are a number of challenges that prevent a naive application of the resulting methods in the constraint-based label setting. Therefore, we reason about and analyze these challenges, specifically 1) proposing a \textit{pseudo-constraining} mechanism to overcome the confirmation bias, a major weakness of pseudo-labeling, 2) developing new methods for pseudo-labeling towards the selection of \textit{informative} unconstrained samples, 3) showing that this also allows the use of pairwise loss functions for the initial and auxiliary losses which facilitates semi-constrained model training. In extensive experiments, we demonstrate the effectiveness of ConstraintMatch over relevant baselines in both the regular clustering and overclustering scenarios on five challenging benchmarks and provide analyses of its several components.
[ { "created": "Sun, 26 Nov 2023 19:31:52 GMT", "version": "v1" } ]
2023-11-28
[ [ "Goschenhofer", "Jann", "" ], [ "Bischl", "Bernd", "" ], [ "Kira", "Zsolt", "" ] ]
Constrained clustering allows the training of classification models using pairwise constraints only, which are weak and relatively easy to mine, while still yielding full-supervision-level model performance. While they perform well even in the absence of the true underlying class labels, constrained clustering models still require large amounts of binary constraint annotations for training. In this paper, we propose a semi-supervised context whereby a large amount of \textit{unconstrained} data is available alongside a smaller set of constraints, and propose \textit{ConstraintMatch} to leverage such unconstrained data. While a great deal of progress has been made in semi-supervised learning using full labels, there are a number of challenges that prevent a naive application of the resulting methods in the constraint-based label setting. Therefore, we reason about and analyze these challenges, specifically 1) proposing a \textit{pseudo-constraining} mechanism to overcome the confirmation bias, a major weakness of pseudo-labeling, 2) developing new methods for pseudo-labeling towards the selection of \textit{informative} unconstrained samples, 3) showing that this also allows the use of pairwise loss functions for the initial and auxiliary losses which facilitates semi-constrained model training. In extensive experiments, we demonstrate the effectiveness of ConstraintMatch over relevant baselines in both the regular clustering and overclustering scenarios on five challenging benchmarks and provide analyses of its several components.
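A compact sketch of the pseudo-constraining idea: convert confident model predictions on unconstrained samples into pseudo must-link / cannot-link pairs, and keep only pairs where both members clear a confidence threshold, the guard against confirmation bias noted above. The data and threshold are illustrative:

import numpy as np

def pseudo_constraints(probs, threshold=0.95):
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    confident = np.flatnonzero(confidence >= threshold)
    pairs = []
    for a, i in enumerate(confident):
        for j in confident[a + 1:]:
            pairs.append((int(i), int(j), int(labels[i] == labels[j])))
    return pairs  # (i, j, 1) = pseudo must-link, (i, j, 0) = cannot-link

probs = np.array([[0.98, 0.02], [0.97, 0.03], [0.01, 0.99], [0.60, 0.40]])
print(pseudo_constraints(probs))   # sample 3 is dropped as unconfident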
1606.03802
Pedro Ribeiro Mendes J\'unior
Pedro Ribeiro Mendes J\'unior, Terrance E. Boult, Jacques Wainer, and Anderson Rocha
Open-Set Support Vector Machines
Version accepted for publication in IEEE Transactions on Systems, Man, and Cybernetics: Systems
null
10.1109/TSMC.2021.3074496
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Often, when dealing with real-world recognition problems, we do not need, and often cannot have, knowledge of the entire set of possible classes that might appear during operational testing. In such cases, we need to think of robust classification methods able to deal with the "unknown" and properly reject samples belonging to classes never seen during training. Notwithstanding, existing classifiers to date were mostly developed for the closed-set scenario, i.e., the classification setup in which it is assumed that all test samples belong to one of the classes with which the classifier was trained. In the open-set scenario, however, a test sample can belong to none of the known classes and the classifier must properly reject it by classifying it as unknown. In this work, we extend upon the well-known Support Vector Machines (SVM) classifier and introduce the Open-Set Support Vector Machines (OSSVM), which is suitable for recognition in open-set setups. OSSVM balances the empirical risk and the risk of the unknown and ensures that the region of the feature space in which a test sample would be classified as known (one of the known classes) is always bounded, ensuring a finite risk of the unknown. In this work, we also highlight the properties of the SVM classifier related to the open-set scenario, and provide necessary and sufficient conditions for an RBF SVM to have bounded open-space risk.
[ { "created": "Mon, 13 Jun 2016 03:46:17 GMT", "version": "v1" }, { "created": "Tue, 21 Apr 2020 22:45:04 GMT", "version": "v10" }, { "created": "Mon, 21 Feb 2022 20:21:30 GMT", "version": "v11" }, { "created": "Tue, 1 Nov 2016 22:54:27 GMT", "version": "v2" }, { "created": "Mon, 16 Oct 2017 16:30:28 GMT", "version": "v3" }, { "created": "Thu, 1 Mar 2018 22:47:18 GMT", "version": "v4" }, { "created": "Tue, 26 Jun 2018 17:33:55 GMT", "version": "v5" }, { "created": "Mon, 5 Nov 2018 13:30:56 GMT", "version": "v6" }, { "created": "Wed, 14 Nov 2018 15:09:14 GMT", "version": "v7" }, { "created": "Thu, 2 May 2019 13:20:51 GMT", "version": "v8" }, { "created": "Wed, 13 Nov 2019 18:44:31 GMT", "version": "v9" } ]
2022-02-23
[ [ "Júnior", "Pedro Ribeiro Mendes", "" ], [ "Boult", "Terrance E.", "" ], [ "Wainer", "Jacques", "" ], [ "Rocha", "Anderson", "" ] ]
Often, when dealing with real-world recognition problems, we do not need, and often cannot have, knowledge of the entire set of possible classes that might appear during operational testing. In such cases, we need to think of robust classification methods able to deal with the "unknown" and properly reject samples belonging to classes never seen during training. Notwithstanding, existing classifiers to date were mostly developed for the closed-set scenario, i.e., the classification setup in which it is assumed that all test samples belong to one of the classes with which the classifier was trained. In the open-set scenario, however, a test sample can belong to none of the known classes and the classifier must properly reject it by classifying it as unknown. In this work, we extend upon the well-known Support Vector Machines (SVM) classifier and introduce the Open-Set Support Vector Machines (OSSVM), which is suitable for recognition in open-set setups. OSSVM balances the empirical risk and the risk of the unknown and ensures that the region of the feature space in which a test sample would be classified as known (one of the known classes) is always bounded, ensuring a finite risk of the unknown. In this work, we also highlight the properties of the SVM classifier related to the open-set scenario, and provide necessary and sufficient conditions for an RBF SVM to have bounded open-space risk.
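The simplest contrast to the OSSVM's built-in bounding is a post-hoc reject option on a standard RBF SVM: classify only when the decision score is far enough from the boundary, otherwise output unknown. The sketch below shows that baseline heuristic, not the OSSVM formulation itself (which bounds open-space risk by construction); data and threshold are illustrative:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)

def predict_with_rejection(x, threshold=0.5):
    score = clf.decision_function([x])[0]   # signed score, binary case
    if abs(score) < threshold:
        return "unknown"                    # too close to the boundary
    return int(clf.predict([x])[0])

print(predict_with_rejection([0.1, 0.0]))   # well inside class 0
print(predict_with_rejection([2.0, 2.0]))   # near the midpoint -> likely unknown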
2010.00378
Angelica I. Aviles-Rivero
Angelica I Aviles-Rivero, Philip Sellars, Carola-Bibiane Sch\"onlieb, Nicolas Papadakis
GraphXCOVID: Explainable Deep Graph Diffusion Pseudo-Labelling for Identifying COVID-19 on Chest X-rays
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Can one learn to diagnose COVID-19 under extreme minimal supervision? Since the outbreak of the novel COVID-19 there has been a rush to develop Artificial Intelligence techniques for expert-level disease identification on Chest X-ray data. In particular, the use of deep supervised learning has become the go-to paradigm. However, the performance of such models is heavily dependent on the availability of a large and representative labelled dataset, the creation of which is an expensive and time-consuming task that poses a particular challenge for a novel disease. Semi-supervised learning has shown the ability to match the incredible performance of supervised models whilst requiring a small fraction of the labelled examples. This makes the semi-supervised paradigm an attractive option for identifying COVID-19. In this work, we introduce a graph-based deep semi-supervised framework for classifying COVID-19 from chest X-rays. Our framework introduces an optimisation model for graph diffusion that reinforces the natural relation among the tiny labelled set and the vast unlabelled data. We then use the diffusion prediction outputs as pseudo-labels in an iterative scheme with a deep net. We demonstrate, through our experiments, that our model is able to outperform the current leading supervised model with a tiny fraction of the labelled examples. Finally, we provide attention maps to accommodate the radiologist's mental model, better fitting their perceptual and cognitive abilities. These visualisations aim to assist the radiologist in judging whether the diagnosis is correct or not and, in consequence, to accelerate the decision.
[ { "created": "Wed, 30 Sep 2020 15:38:24 GMT", "version": "v1" }, { "created": "Sun, 4 Jul 2021 18:30:43 GMT", "version": "v2" } ]
2021-07-06
[ [ "Aviles-Rivero", "Angelica I", "" ], [ "Sellars", "Philip", "" ], [ "Schönlieb", "Carola-Bibiane", "" ], [ "Papadakis", "Nicolas", "" ] ]
Can one learn to diagnose COVID-19 under extreme minimal supervision? Since the outbreak of the novel COVID-19 there has been a rush to develop Artificial Intelligence techniques for expert-level disease identification on Chest X-ray data. In particular, the use of deep supervised learning has become the go-to paradigm. However, the performance of such models is heavily dependent on the availability of a large and representative labelled dataset, the creation of which is an expensive and time-consuming task that poses a particular challenge for a novel disease. Semi-supervised learning has shown the ability to match the incredible performance of supervised models whilst requiring a small fraction of the labelled examples. This makes the semi-supervised paradigm an attractive option for identifying COVID-19. In this work, we introduce a graph-based deep semi-supervised framework for classifying COVID-19 from chest X-rays. Our framework introduces an optimisation model for graph diffusion that reinforces the natural relation among the tiny labelled set and the vast unlabelled data. We then use the diffusion prediction outputs as pseudo-labels in an iterative scheme with a deep net. We demonstrate, through our experiments, that our model is able to outperform the current leading supervised model with a tiny fraction of the labelled examples. Finally, we provide attention maps to accommodate the radiologist's mental model, better fitting their perceptual and cognitive abilities. These visualisations aim to assist the radiologist in judging whether the diagnosis is correct or not and, in consequence, to accelerate the decision.
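A minimal version of the diffusion step at the heart of such a framework, in the classic Zhou et al. form: build a normalized affinity matrix S over all samples, iterate F <- alpha * S @ F + (1 - alpha) * Y from the few known labels Y, and read pseudo-labels off the argmax of F. The toy 2-D data and hyperparameters are illustrative:

import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
Y = np.zeros((40, 2)); Y[0, 0] = 1; Y[20, 1] = 1    # one label per class

W = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # dense affinities
np.fill_diagonal(W, 0)
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))                     # symmetric normalization

F = Y.copy()
for _ in range(50):
    F = 0.9 * S @ F + 0.1 * Y                       # diffusion iteration
pseudo_labels = F.argmax(axis=1)
truth = np.array([0] * 20 + [1] * 20)
print("pseudo-label accuracy:", (pseudo_labels == truth).mean())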
1508.06967
Bechir Hamdaoui
Bechir Hamdaoui
A Simple Algorithm for Coloring m-Clique Holes
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An m-clique hole is a sequence $\phi=(\Phi_1,\Phi_2,\dots,\Phi_m)$ of $m$ distinct cliques such that $|\Phi_i| \leq m$ for all $i=1,2,\ldots,m$, and whose clique graph is a hole on $m$ vertices. That is, $\phi$ is an m-clique hole if for all $i\neq j$, $i,j=1,2,\ldots,m$, $\Phi_i \cap \Phi_{j} \neq \emptyset$ if and only if $(j-1)~\mbox{mod}~m = i~\mbox{mod}~m$ or $(j+1)~\mbox{mod}~m = i~\mbox{mod}~m$. This paper derives a necessary and sufficient condition for the m-colorability of m-clique holes, and proposes a coloring algorithm that colors m-clique holes with exactly m colors.
[ { "created": "Thu, 27 Aug 2015 18:50:53 GMT", "version": "v1" } ]
2015-08-28
[ [ "Hamdaoui", "Bechir", "" ] ]
An m-clique hole is a sequence $\phi=(\Phi_1,\Phi_2,\dots,\Phi_m)$ of $m$ distinct cliques such that $|\Phi_i| \leq m$ for all $i=1,2,\ldots,m$, and whose clique graph is a hole on $m$ vertices. That is, $\phi$ is an m-clique hole if for all $i\neq j$, $i,j=1,2,\ldots,m$, $\Phi_i \cap \Phi_{j} \neq \emptyset$ if and only if $(j-1)~\mbox{mod}~m = i~\mbox{mod}~m$ or $(j+1)~\mbox{mod}~m = i~\mbox{mod}~m$. This paper derives a necessary and sufficient condition for the m-colorability of m-clique holes, and proposes a coloring algorithm that colors m-clique holes with exactly m colors.
2401.11231
Yuhang Pi
Yuhang Pi and Zhifang Zhang
Two-Insertion/Deletion/Substitution Correcting Codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, the emergence of DNA storage systems has led to a widespread focus on codes correcting insertions, deletions, and classic substitutions. In an initial investigation, Levenshtein discovered that the VT codes are precisely capable of correcting a single insertion/deletion, and then extended the VT construction to single-insertion/deletion/substitution ($1$-ins/del/sub) correcting codes. Inspired by this, we generalize the recent findings on $1$-del $1$-sub correcting codes with redundancy $6\log_{2}n+O(1)$ to more general $2$-ins/del/sub correcting codes without increasing the redundancy. Our key technique is to apply higher-order VT syndromes to distinct objects and accomplish a systematic classification of all error patterns.
[ { "created": "Sat, 20 Jan 2024 13:46:16 GMT", "version": "v1" } ]
2024-01-23
[ [ "Pi", "Yuhang", "" ], [ "Zhang", "Zhifang", "" ] ]
In recent years, the emergence of DNA storage systems has led to a widespread focus on codes correcting insertions, deletions, and classic substitutions. In an initial investigation, Levenshtein discovered that the VT codes are precisely capable of correcting a single insertion/deletion, and then extended the VT construction to single-insertion/deletion/substitution ($1$-ins/del/sub) correcting codes. Inspired by this, we generalize the recent findings on $1$-del $1$-sub correcting codes with redundancy $6\log_{2}n+O(1)$ to more general $2$-ins/del/sub correcting codes without increasing the redundancy. Our key technique is to apply higher-order VT syndromes to distinct objects and accomplish a systematic classification of all error patterns.
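The building block referenced here is easy to state: the order-t VT syndrome of a word weights position i by i^t, and the classic single-ins/del VT code fixes the order-1 syndrome modulo n+1; higher orders supply additional checks for multiple errors. The moduli below are illustrative, not the paper's exact construction:

def vt_syndrome(x, t=1, modulus=None):
    # S_t(x) = sum_i i^t * x_i over 1-indexed positions.
    s = sum((i + 1) ** t * bit for i, bit in enumerate(x))
    return s if modulus is None else s % modulus

x = [1, 0, 1, 1, 0, 1]
n = len(x)
print([vt_syndrome(x, t, modulus=n ** t + 1) for t in (1, 2, 3)])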
1807.00284
Yong Xia
Benteng Ma, Yong Xia
Autonomous Deep Learning: A Genetic DCNN Designer for Image Classification
null
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have witnessed the breakthrough success of deep convolutional neural networks (DCNNs) in image classification and other vision applications. Although they free users from troublesome handcrafted feature extraction by providing a uniform feature extraction-classification framework, DCNNs still require a handcrafted design of their architectures. In this paper, we propose the genetic DCNN designer, an autonomous learning algorithm that can generate a DCNN architecture automatically based on the data available for a specific image classification problem. We first partition a DCNN into multiple stacked meta convolutional blocks and fully connected blocks, each containing the operations of convolution, pooling, full connection, batch normalization, activation and dropout, and thus convert the architecture into an integer vector. Then, we use refined evolutionary operations, including selection, mutation and crossover, to evolve a population of DCNN architectures. Our results on the MNIST, Fashion-MNIST, EMNIST-Digit, EMNIST-Letter, CIFAR10 and CIFAR100 datasets suggest that the proposed genetic DCNN designer is able to automatically produce DCNN architectures whose performance is comparable to, if not better than, that of state-of-the-art DCNN models.
[ { "created": "Sun, 1 Jul 2018 07:11:54 GMT", "version": "v1" } ]
2018-07-03
[ [ "Ma", "Benteng", "" ], [ "Xia", "Yong", "" ] ]
Recent years have witnessed the breakthrough success of deep convolutional neural networks (DCNNs) in image classification and other vision applications. Although they free users from troublesome handcrafted feature extraction by providing a uniform feature extraction-classification framework, DCNNs still require a handcrafted design of their architectures. In this paper, we propose the genetic DCNN designer, an autonomous learning algorithm that can generate a DCNN architecture automatically based on the data available for a specific image classification problem. We first partition a DCNN into multiple stacked meta convolutional blocks and fully connected blocks, each containing the operations of convolution, pooling, full connection, batch normalization, activation and dropout, and thus convert the architecture into an integer vector. Then, we use refined evolutionary operations, including selection, mutation and crossover, to evolve a population of DCNN architectures. Our results on the MNIST, Fashion-MNIST, EMNIST-Digit, EMNIST-Letter, CIFAR10 and CIFAR100 datasets suggest that the proposed genetic DCNN designer is able to automatically produce DCNN architectures whose performance is comparable to, if not better than, that of state-of-the-art DCNN models.
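A skeleton of the evolutionary loop over integer-encoded architectures: tournament selection, one-point crossover, and per-gene mutation. The fitness function is a placeholder; in the paper it would be the validation accuracy of the trained, decoded DCNN:

import random

random.seed(0)
GENES, LOW, HIGH, POP = 8, 0, 9, 20     # genes encode block hyperparameters

def fitness(v):
    return sum(v)                        # placeholder objective

def evolve(generations=30, p_mut=0.1):
    pop = [[random.randint(LOW, HIGH) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(generations):
        nxt = []
        for _ in range(POP):
            a = max(random.sample(pop, 3), key=fitness)   # tournament
            b = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, GENES)
            child = a[:cut] + b[cut:]                     # one-point crossover
            child = [random.randint(LOW, HIGH) if random.random() < p_mut else g
                     for g in child]                      # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

print("best integer-encoded architecture:", evolve())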
2109.02529
Andrea Piazzoni
Andrea Piazzoni, Jim Cherian, Mohamed Azhar, Jing Yew Yap, James Lee Wei Shung, Roshan Vijay
ViSTA: a Framework for Virtual Scenario-based Testing of Autonomous Vehicles
Accepted at 2021 IEEE Autonomous Driving AI Test Challenge - AITest 2021 8 pages, 4 figures
null
10.1109/AITEST52744.2021.00035
null
cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present ViSTA, a framework for Virtual Scenario-based Testing of Autonomous Vehicles (AV), developed as part of the 2021 IEEE Autonomous Driving AI Test Challenge. Scenario-based virtual testing aims to construct specific challenges posed for the AV to overcome, albeit in virtual test environments that may not necessarily resemble the real world. This approach is aimed at identifying specific issues that raise safety concerns before an actual deployment of the AV on the road. In this paper, we describe a comprehensive test case generation approach that facilitates the design of special-purpose scenarios with meaningful parameters to form test cases, both in automated and manual ways, leveraging the strengths and weaknesses of each. Furthermore, we describe how to automate the execution of test cases and analyze the performance of the AV under these test cases.
[ { "created": "Mon, 6 Sep 2021 15:12:17 GMT", "version": "v1" }, { "created": "Tue, 7 Sep 2021 04:46:01 GMT", "version": "v2" } ]
2021-11-22
[ [ "Piazzoni", "Andrea", "" ], [ "Cherian", "Jim", "" ], [ "Azhar", "Mohamed", "" ], [ "Yap", "Jing Yew", "" ], [ "Shung", "James Lee Wei", "" ], [ "Vijay", "Roshan", "" ] ]
In this paper, we present ViSTA, a framework for Virtual Scenario-based Testing of Autonomous Vehicles (AV), developed as part of the 2021 IEEE Autonomous Driving AI Test Challenge. Scenario-based virtual testing aims to construct specific challenges posed for the AV to overcome, albeit in virtual test environments that may not necessarily resemble the real world. This approach is aimed at identifying specific issues that raise safety concerns before an actual deployment of the AV on the road. In this paper, we describe a comprehensive test case generation approach that facilitates the design of special-purpose scenarios with meaningful parameters to form test cases, both in automated and manual ways, leveraging the strengths and weaknesses of each. Furthermore, we describe how to automate the execution of test cases and analyze the performance of the AV under these test cases.
2003.12154
Fatemehsadat Mireshghallah
Fatemehsadat Mireshghallah, Mohammadkazem Taram, Ali Jalali, Ahmed Taha Elthakeb, Dean Tullsen, Hadi Esmaeilzadeh
Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy
This paper is presented at the 2021 Web conference (WWW 2021)
null
null
null
cs.LG cs.CR cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When receiving machine learning services from the cloud, the provider does not need to receive all features; in fact, only a subset of the features are necessary for the target prediction task. Discerning this subset is the key problem of this work. We formulate this problem as a gradient-based perturbation maximization method that discovers this subset in the input feature space with respect to the functionality of the prediction model used by the provider. After identifying the subset, our framework, Cloak, suppresses the rest of the features using utility-preserving constant values that are discovered through a separate gradient-based optimization process. We show that Cloak does not necessarily require collaboration from the service provider beyond its normal service, and can be applied in scenarios where we only have black-box access to the service provider's model. We theoretically guarantee that Cloak's optimizations reduce the upper bound of the Mutual Information (MI) between the data and the sifted representations that are sent out. Experimental results show that Cloak reduces the mutual information between the input and the sifted representations by 85.01% with only a negligible reduction in utility (1.42%). In addition, we show that Cloak greatly diminishes adversaries' ability to learn and infer non-conducive features.
[ { "created": "Thu, 26 Mar 2020 20:45:09 GMT", "version": "v1" }, { "created": "Sat, 20 Feb 2021 05:02:25 GMT", "version": "v2" } ]
2021-02-23
[ [ "Mireshghallah", "Fatemehsadat", "" ], [ "Taram", "Mohammadkazem", "" ], [ "Jalali", "Ali", "" ], [ "Elthakeb", "Ahmed Taha", "" ], [ "Tullsen", "Dean", "" ], [ "Esmaeilzadeh", "Hadi", "" ] ]
When receiving machine learning services from the cloud, the provider does not need to receive all features; in fact, only a subset of the features are necessary for the target prediction task. Discerning this subset is the key problem of this work. We formulate this problem as a gradient-based perturbation maximization method that discovers this subset in the input feature space with respect to the functionality of the prediction model used by the provider. After identifying the subset, our framework, Cloak, suppresses the rest of the features using utility-preserving constant values that are discovered through a separate gradient-based optimization process. We show that Cloak does not necessarily require collaboration from the service provider beyond its normal service, and can be applied in scenarios where we only have black-box access to the service provider's model. We theoretically guarantee that Cloak's optimizations reduce the upper bound of the Mutual Information (MI) between the data and the sifted representations that are sent out. Experimental results show that Cloak reduces the mutual information between the input and the sifted representations by 85.01% with only a negligible reduction in utility (1.42%). In addition, we show that Cloak greatly diminishes adversaries' ability to learn and infer non-conducive features.
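A toy version of the two-stage recipe: rank input features by the magnitude of the loss gradient to find the conducive subset, then ship an input in which every other feature is replaced by a constant. The linear service model, sizes, and the choice of the mean as the constant are placeholders for the paper's learned, utility-preserving constants:

import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)           # stand-in for the provider's model
x = torch.randn(1, 10, requires_grad=True)
target = torch.tensor([1])

loss = torch.nn.functional.cross_entropy(model(x), target)
loss.backward()
saliency = x.grad.abs().squeeze()
keep = saliency.topk(3).indices          # most prediction-relevant features

sifted = torch.full_like(x.detach(), x.detach().mean())  # suppress the rest
sifted[0, keep] = x.detach()[0, keep]
print("kept feature indices:", keep.tolist())
print("prediction on sifted input:", model(sifted).argmax().item())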
1909.13096
Kanglin Yin
Kanglin Yin, Qingfeng Du
On Representing Resilience Requirements of Microservice Architecture Systems
This manuscript is draft only, not intended for publication
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Together with the spread of DevOps practices and container technologies, Microservice Architecture has become a mainstream architecture style in recent years. Resilience is a key characteristic of Microservice Architecture Systems (MSA Systems); it reflects the ability to cope with various kinds of system disturbances which cause degradations of services. However, due to the lack of a consensus definition of resilience in the software field, although a lot of work has been done on resilience for MSA Systems, developers still don't have a clear idea of how resilient an MSA System should be and what resilience mechanisms are needed. In this paper, by referring to existing systematic studies on resilience in other scientific areas, a definition of microservice resilience is provided and a Microservice Resilience Measurement Model is proposed to measure service resilience. A requirement model is also given to represent resilience requirements of MSA Systems. The requirement model uses elements in KAOS to represent notions in the measurement model and decomposes service resilience goals into system behaviors that can be executed by system components. As a proof of concept, a case study is conducted on an MSA System to illustrate how the proposed models are applied.
[ { "created": "Sat, 28 Sep 2019 13:42:14 GMT", "version": "v1" }, { "created": "Mon, 22 Jun 2020 18:01:02 GMT", "version": "v2" }, { "created": "Fri, 9 Oct 2020 16:37:49 GMT", "version": "v3" } ]
2020-10-12
[ [ "Yin", "Kanglin", "" ], [ "Du", "Qingfeng", "" ] ]
Together with the spread of DevOps practices and container technologies, Microservice Architecture has become a mainstream architecture style in recent years. Resilience is a key characteristic of Microservice Architecture Systems (MSA Systems); it reflects the ability to cope with various kinds of system disturbances which cause degradations of services. However, due to the lack of a consensus definition of resilience in the software field, although a lot of work has been done on resilience for MSA Systems, developers still don't have a clear idea of how resilient an MSA System should be and what resilience mechanisms are needed. In this paper, by referring to existing systematic studies on resilience in other scientific areas, a definition of microservice resilience is provided and a Microservice Resilience Measurement Model is proposed to measure service resilience. A requirement model is also given to represent resilience requirements of MSA Systems. The requirement model uses elements in KAOS to represent notions in the measurement model and decomposes service resilience goals into system behaviors that can be executed by system components. As a proof of concept, a case study is conducted on an MSA System to illustrate how the proposed models are applied.
2309.03216
Bikram Koirala
Bikram Koirala, Behnood Rasti, Zakaria Bnoulkacem, Andrea de Lima Ribeiro, Yuleika Madriz, Erik Herrmann, Arthur Gestels, Thomas De Kerf, Sandra Lorenz, Margret Fuchs, Koen Janssens, Gunther Steenackers, Richard Gloaguen, and Paul Scheunders
A Multisensor Hyperspectral Benchmark Dataset For Unmixing of Intimate Mixtures
Currently, this paper is under review in IEEE
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optical hyperspectral cameras capture the spectral reflectance of materials. Since many materials behave as heterogeneous intimate mixtures with which each photon interacts differently, the relationship between spectral reflectance and material composition is very complex. Quantitative validation of spectral unmixing algorithms requires high-quality ground truth fractional abundance data, which are very difficult to obtain. In this work, we generated a comprehensive laboratory ground truth dataset of intimately mixed mineral powders. For this, five clay powders (Kaolin, Roof clay, Red clay, mixed clay, and Calcium hydroxide) were mixed homogeneously to prepare 325 samples of 60 binary, 150 ternary, 100 quaternary, and 15 quinary mixtures. Thirteen different hyperspectral sensors have been used to acquire the reflectance spectra of these mixtures in the visible, near, short, mid, and long-wavelength infrared regions (350-15385 nm). Overlaps in wavelength regions due to the operational ranges of each sensor and variations in acquisition conditions resulted in a large amount of spectral variability. Ground truth composition is given by construction, but to verify that the generated samples are sufficiently homogeneous, XRD and XRF elemental analysis is performed. We believe these data will be beneficial for validating advanced methods for nonlinear unmixing and material composition estimation, including studying spectral variability and training supervised unmixing approaches. The datasets can be downloaded from the following link: https://github.com/VisionlabUA/Multisensor_datasets.
[ { "created": "Wed, 30 Aug 2023 11:48:36 GMT", "version": "v1" } ]
2023-09-08
[ [ "Koirala", "Bikram", "" ], [ "Rasti", "Behnood", "" ], [ "Bnoulkacem", "Zakaria", "" ], [ "Ribeiro", "Andrea de Lima", "" ], [ "Madriz", "Yuleika", "" ], [ "Herrmann", "Erik", "" ], [ "Gestels", "Arthur", "" ], [ "De Kerf", "Thomas", "" ], [ "Lorenz", "Sandra", "" ], [ "Fuchs", "Margret", "" ], [ "Janssens", "Koen", "" ], [ "Steenackers", "Gunther", "" ], [ "Gloaguen", "Richard", "" ], [ "Scheunders", "Paul", "" ] ]
Optical hyperspectral cameras capture the spectral reflectance of materials. Since many materials behave as heterogeneous intimate mixtures with which each photon interacts differently, the relationship between spectral reflectance and material composition is very complex. Quantitative validation of spectral unmixing algorithms requires high-quality ground truth fractional abundance data, which are very difficult to obtain. In this work, we generated a comprehensive laboratory ground truth dataset of intimately mixed mineral powders. For this, five clay powders (Kaolin, Roof clay, Red clay, mixed clay, and Calcium hydroxide) were mixed homogeneously to prepare 325 samples of 60 binary, 150 ternary, 100 quaternary, and 15 quinary mixtures. Thirteen different hyperspectral sensors have been used to acquire the reflectance spectra of these mixtures in the visible, near, short, mid, and long-wavelength infrared regions (350-15385 nm). Overlaps in wavelength regions due to the operational ranges of each sensor and variations in acquisition conditions resulted in a large amount of spectral variability. Ground truth composition is given by construction, but to verify that the generated samples are sufficiently homogeneous, XRD and XRF elemental analysis is performed. We believe these data will be beneficial for validating advanced methods for nonlinear unmixing and material composition estimation, including studying spectral variability and training supervised unmixing approaches. The datasets can be downloaded from the following link: https://github.com/VisionlabUA/Multisensor_datasets.
2403.17089
Ben Wang
Ben Wang
GOLF: Goal-Oriented Long-term liFe tasks supported by human-AI collaboration
null
Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024)
10.1145/3626772.3657655
null
cs.HC cs.AI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of ChatGPT and similar large language models (LLMs) has revolutionized the human-AI interaction and information-seeking process. Leveraging LLMs as an alternative to search engines, users can now access summarized information tailored to their queries, significantly reducing the cognitive load associated with navigating vast information resources. This shift underscores the potential of LLMs in redefining information access paradigms. Drawing on the foundation of task-focused information retrieval and LLMs' task planning ability, this research extends the scope of LLM capabilities beyond routine task automation to support users in navigating long-term and significant life tasks. It introduces the GOLF framework (Goal-Oriented Long-term liFe tasks), which focuses on enhancing LLMs' ability to assist in significant life decisions through goal orientation and long-term planning. The methodology encompasses a comprehensive simulation study to test the framework's efficacy, followed by model and human evaluations to develop a dataset benchmark for long-term life tasks, and experiments across different models and settings. By shifting the focus from short-term tasks to the broader spectrum of long-term life goals, this research underscores the transformative potential of LLMs in enhancing human decision-making processes and task management, marking a significant step forward in the evolution of human-AI collaboration.
[ { "created": "Mon, 25 Mar 2024 18:25:10 GMT", "version": "v1" }, { "created": "Wed, 17 Apr 2024 15:00:58 GMT", "version": "v2" } ]
2024-04-18
[ [ "Wang", "Ben", "" ] ]
The advent of ChatGPT and similar large language models (LLMs) has revolutionized the human-AI interaction and information-seeking process. Leveraging LLMs as an alternative to search engines, users can now access summarized information tailored to their queries, significantly reducing the cognitive load associated with navigating vast information resources. This shift underscores the potential of LLMs in redefining information access paradigms. Drawing on the foundation of task-focused information retrieval and LLMs' task planning ability, this research extends the scope of LLM capabilities beyond routine task automation to support users in navigating long-term and significant life tasks. It introduces the GOLF framework (Goal-Oriented Long-term liFe tasks), which focuses on enhancing LLMs' ability to assist in significant life decisions through goal orientation and long-term planning. The methodology encompasses a comprehensive simulation study to test the framework's efficacy, followed by model and human evaluations to develop a dataset benchmark for long-term life tasks, and experiments across different models and settings. By shifting the focus from short-term tasks to the broader spectrum of long-term life goals, this research underscores the transformative potential of LLMs in enhancing human decision-making processes and task management, marking a significant step forward in the evolution of human-AI collaboration.
2205.03809
Kanishka Wijewardena
Kanishka P. Wijewardena, Steven A. Grosz, Kai Cao, Anil K. Jain
Fingerprint Template Invertibility: Minutiae vs. Deep Templates
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much of the success of fingerprint recognition is attributed to minutiae-based fingerprint representation. It was believed that minutiae templates could not be inverted to obtain a high fidelity fingerprint image, but this assumption has been shown to be false. The success of deep learning has resulted in alternative fingerprint representations (embeddings), in the hope that they might offer better recognition accuracy as well as non-invertibility of deep network-based templates. We evaluate whether deep fingerprint templates suffer from the same reconstruction attacks as the minutiae templates. We show that while a deep template can be inverted to produce a fingerprint image that could be matched to its source image, deep templates are more resistant to reconstruction attacks than minutiae templates. In particular, reconstructed fingerprint images from minutiae templates yield a TAR of about 100.0% (98.3%) @ FAR of 0.01% for type-I (type-II) attacks using a state-of-the-art commercial fingerprint matcher, when tested on NIST SD4. The corresponding attack performance for reconstructed fingerprint images from deep templates using the same commercial matcher yields a TAR of less than 1% for both type-I and type-II attacks; however, when the reconstructed images are matched using the same deep network, they achieve a TAR of 85.95% (68.10%) for type-I (type-II) attacks. Furthermore, what is missing from previous fingerprint template inversion studies is an evaluation of the black-box attack performance, which we perform using 3 different state-of-the-art fingerprint matchers. We conclude that fingerprint images generated by inverting minutiae templates are highly susceptible to both white-box and black-box attack evaluations, while fingerprint images generated by deep templates are resistant to black-box evaluations and comparatively less susceptible to white-box evaluations.
[ { "created": "Sun, 8 May 2022 07:50:41 GMT", "version": "v1" } ]
2022-05-10
[ [ "Wijewardena", "Kanishka P.", "" ], [ "Grosz", "Steven A.", "" ], [ "Cao", "Kai", "" ], [ "Jain", "Anil K.", "" ] ]
Much of the success of fingerprint recognition is attributed to minutiae-based fingerprint representation. It was believed that minutiae templates could not be inverted to obtain a high fidelity fingerprint image, but this assumption has been shown to be false. The success of deep learning has resulted in alternative fingerprint representations (embeddings), in the hope that they might offer better recognition accuracy as well as non-invertibility of deep network-based templates. We evaluate whether deep fingerprint templates suffer from the same reconstruction attacks as the minutiae templates. We show that while a deep template can be inverted to produce a fingerprint image that could be matched to its source image, deep templates are more resistant to reconstruction attacks than minutiae templates. In particular, reconstructed fingerprint images from minutiae templates yield a TAR of about 100.0% (98.3%) @ FAR of 0.01% for type-I (type-II) attacks using a state-of-the-art commercial fingerprint matcher, when tested on NIST SD4. The corresponding attack performance for reconstructed fingerprint images from deep templates using the same commercial matcher yields a TAR of less than 1% for both type-I and type-II attacks; however, when the reconstructed images are matched using the same deep network, they achieve a TAR of 85.95% (68.10%) for type-I (type-II) attacks. Furthermore, what is missing from previous fingerprint template inversion studies is an evaluation of the black-box attack performance, which we perform using 3 different state-of-the-art fingerprint matchers. We conclude that fingerprint images generated by inverting minutiae templates are highly susceptible to both white-box and black-box attack evaluations, while fingerprint images generated by deep templates are resistant to black-box evaluations and comparatively less susceptible to white-box evaluations.
1603.04322
Fariba Karimi Dr.
Fariba Karimi, Claudia Wagner, Florian Lemmerich, Mohsen Jadidi, Markus Strohmaier
Inferring Gender from Names on the Web: A Comparative Evaluation of Gender Detection Methods
2 pages, WWW conference
null
10.1145/2872518.2889385
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational social scientists often harness the Web as a "societal observatory" where data about human social behavior is collected. This data enables novel investigations of psychological, anthropological and sociological research questions. However, in the absence of demographic information, such as gender, many relevant research questions cannot be addressed. To tackle this problem, researchers often rely on automated methods to infer gender from name information provided on the web. However, little is known about the accuracy of existing gender-detection methods and how biased they are against certain sub-populations. In this paper, we address this question by systematically comparing several gender detection methods on a random sample of scientists for whom we know their full name, their gender, and the country of their workplace. We further suggest a novel method that employs web-based image retrieval and gender recognition in facial images in order to augment name-based approaches. Our findings show that the performance of name-based gender detection approaches can be biased towards countries of origin, and such biases can be reduced by combining name-based and image-based gender detection methods.
[ { "created": "Mon, 14 Mar 2016 16:14:46 GMT", "version": "v1" } ]
2016-03-15
[ [ "Karimi", "Fariba", "" ], [ "Wagner", "Claudia", "" ], [ "Lemmerich", "Florian", "" ], [ "Jadidi", "Mohsen", "" ], [ "Strohmaier", "Markus", "" ] ]
Computational social scientists often harness the Web as a "societal observatory" where data about human social behavior is collected. This data enables novel investigations of psychological, anthropological and sociological research questions. However, in the absence of demographic information, such as gender, many relevant research questions cannot be addressed. To tackle this problem, researchers often rely on automated methods to infer gender from name information provided on the web. However, little is known about the accuracy of existing gender-detection methods and how biased they are against certain sub-populations. In this paper, we address this question by systematically comparing several gender detection methods on a random sample of scientists for whom we know their full name, their gender, and the country of their workplace. We further suggest a novel method that employs web-based image retrieval and gender recognition in facial images in order to augment name-based approaches. Our findings show that the performance of name-based gender detection approaches can be biased towards countries of origin, and such biases can be reduced by combining name-based and image-based gender detection methods.
2306.13840
Brando Miranda
Alycia Lee, Brando Miranda, Sudharsan Sundar, Sanmi Koyejo
Beyond Scale: the Diversity Coefficient as a Data Quality Metric Demonstrates LLMs are Pre-trained on Formally Diverse Data
null
Proceedings of the 40th International Conference on Machine Learning DMLR 2023
null
null
cs.CL cs.AI cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current trends in pre-training capable Large Language Models (LLMs) mostly focus on scaling model and dataset size. However, the quality of pre-training data is an important factor for training powerful LLMs, yet it is a nebulous concept that has not been fully characterized. Therefore, we use the recently proposed Task2Vec diversity coefficient to ground and understand formal aspects of data quality, to go beyond scale alone. Specifically, we measure the diversity coefficient of publicly available pre-training datasets to demonstrate that their formal diversity is high when compared to theoretical lower and upper bounds. In addition, to build confidence in the diversity coefficient, we conduct interpretability experiments and find that the coefficient aligns with intuitive properties of diversity, e.g., it increases as the number of latent concepts increases. We conclude that the diversity coefficient is reliable, show that it is high for publicly available LLM datasets, and conjecture that it can be used to build useful diverse datasets for LLMs.
[ { "created": "Sat, 24 Jun 2023 02:25:56 GMT", "version": "v1" }, { "created": "Tue, 26 Sep 2023 23:29:05 GMT", "version": "v2" } ]
2023-09-28
[ [ "Lee", "Alycia", "" ], [ "Miranda", "Brando", "" ], [ "Sundar", "Sudharsan", "" ], [ "Koyejo", "Sanmi", "" ] ]
Current trends in pre-training capable Large Language Models (LLMs) mostly focus on scaling model and dataset size. However, the quality of pre-training data is an important factor for training powerful LLMs, yet it is a nebulous concept that has not been fully characterized. Therefore, we use the recently proposed Task2Vec diversity coefficient to ground and understand formal aspects of data quality, to go beyond scale alone. Specifically, we measure the diversity coefficient of publicly available pre-training datasets to demonstrate that their formal diversity is high when compared to theoretical lower and upper bounds. In addition, to build confidence in the diversity coefficient, we conduct interpretability experiments and find that the coefficient aligns with intuitive properties of diversity, e.g., it increases as the number of latent concepts increases. We conclude that the diversity coefficient is reliable, show that it is high for publicly available LLM datasets, and conjecture that it can be used to build useful diverse datasets for LLMs.
2104.09792
Gilad Kutiel
Iftah Gamzu, Hila Gonen, Gilad Kutiel, Ran Levy, Eugene Agichtein
Identifying Helpful Sentences in Product Reviews
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
In recent years, online shopping has gained momentum and become an important venue for customers wishing to save time and simplify their shopping process. A key advantage of shopping online is the ability to read what other customers are saying about products of interest. In this work, we aim to maintain this advantage in situations where extreme brevity is needed, for example, when shopping by voice. We suggest a novel task of extracting a single representative helpful sentence from a set of reviews for a given product. The selected sentence should meet two conditions: first, it should be helpful for a purchase decision, and second, the opinion it expresses should be supported by multiple reviewers. This task is closely related to the task of Multi-Document Summarization in the product reviews domain but differs in its objective and its level of conciseness. We collect a dataset in English of sentence helpfulness scores via crowd-sourcing and demonstrate its reliability despite the inherent subjectivity involved. Next, we describe a complete model that extracts representative helpful sentences with positive and negative sentiment towards the product and demonstrate that it outperforms several baselines.
[ { "created": "Tue, 20 Apr 2021 07:09:22 GMT", "version": "v1" }, { "created": "Wed, 5 May 2021 06:36:15 GMT", "version": "v2" }, { "created": "Sun, 11 Jul 2021 08:37:56 GMT", "version": "v3" } ]
2021-07-13
[ [ "Gamzu", "Iftah", "" ], [ "Gonen", "Hila", "" ], [ "Kutiel", "Gilad", "" ], [ "Levy", "Ran", "" ], [ "Agichtein", "Eugene", "" ] ]
In recent years, online shopping has gained momentum and become an important venue for customers wishing to save time and simplify their shopping process. A key advantage of shopping online is the ability to read what other customers are saying about products of interest. In this work, we aim to maintain this advantage in situations where extreme brevity is needed, for example, when shopping by voice. We suggest a novel task of extracting a single representative helpful sentence from a set of reviews for a given product. The selected sentence should meet two conditions: first, it should be helpful for a purchase decision, and second, the opinion it expresses should be supported by multiple reviewers. This task is closely related to the task of Multi-Document Summarization in the product reviews domain but differs in its objective and its level of conciseness. We collect a dataset in English of sentence helpfulness scores via crowd-sourcing and demonstrate its reliability despite the inherent subjectivity involved. Next, we describe a complete model that extracts representative helpful sentences with positive and negative sentiment towards the product and demonstrate that it outperforms several baselines.
1802.06512
Ekaterina Bayguzina
Ekaterina Bayguzina and Bruno Clerckx
Asymmetric Modulation Design for Wireless Information and Power Transfer with Nonlinear Energy Harvesting
Submitted for publication. This version incorporates the conference version "Modulation Design for Wireless Information and Power Transfer with Nonlinear Energy Harvester Modeling" (available as v1)
null
10.1109/TWC.2019.2937024
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Far-field wireless power transfer (WPT) and simultaneous wireless information and power transfer (SWIPT) have become increasingly important in radio frequency (RF) and communication communities recently. The problem of modulation design for SWIPT has however been scarcely addressed. In this paper, a modulation scheme based on asymmetric phase-shift keying (PSK) is considered, which improves the SWIPT rate-energy tradeoff region significantly. The nonlinear rectifier model, which accurately models the energy harvester, is adopted for evaluating the output direct current (DC) power at the receiver. The harvested DC power is maximized under an average power constraint at the transmitter and a constraint on the rate of information transmitted via a multi-carrier signal over a flat fading channel. As a consequence of the rectifier nonlinearity, this work highlights that asymmetric PSK modulation provides benefits over conventional symmetric PSK modulation in SWIPT and opens the door to systematic modulation design tailored for SWIPT.
[ { "created": "Mon, 19 Feb 2018 03:40:02 GMT", "version": "v1" }, { "created": "Sun, 15 Sep 2019 19:39:02 GMT", "version": "v2" } ]
2019-10-25
[ [ "Bayguzina", "Ekaterina", "" ], [ "Clerckx", "Bruno", "" ] ]
Far-field wireless power transfer (WPT) and simultaneous wireless information and power transfer (SWIPT) have become increasingly important in radio frequency (RF) and communication communities recently. The problem of modulation design for SWIPT has however been scarcely addressed. In this paper, a modulation scheme based on asymmetric phase-shift keying (PSK) is considered, which improves the SWIPT rate-energy tradeoff region significantly. The nonlinear rectifier model, which accurately models the energy harvester, is adopted for evaluating the output direct current (DC) power at the receiver. The harvested DC power is maximized under an average power constraint at the transmitter and a constraint on the rate of information transmitted via a multi-carrier signal over a flat fading channel. As a consequence of the rectifier nonlinearity, this work highlights that asymmetric PSK modulation provides benefits over conventional symmetric PSK modulation in SWIPT and opens the door to systematic modulation design tailored for SWIPT.
2108.01289
Jisoo Mok
Jisoo Mok, Byunggook Na, Hyeokjun Choe, Sungroh Yoon
AdvRush: Searching for Adversarially Robust Neural Architectures
ICCV 2021
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep neural networks continue to awe the world with their remarkable performance. Their predictions, however, are prone to be corrupted by adversarial examples that are imperceptible to humans. Current efforts to improve the robustness of neural networks against adversarial examples are focused on developing robust training methods, which update the weights of a neural network in a more robust direction. In this work, we take a step beyond training of the weight parameters and consider the problem of designing an adversarially robust neural architecture with high intrinsic robustness. We propose AdvRush, a novel adversarial robustness-aware neural architecture search algorithm, based upon a finding that independent of the training method, the intrinsic robustness of a neural network can be represented with the smoothness of its input loss landscape. Through a regularizer that favors a candidate architecture with a smoother input loss landscape, AdvRush successfully discovers an adversarially robust neural architecture. Along with a comprehensive theoretical motivation for AdvRush, we conduct an extensive amount of experiments to demonstrate the efficacy of AdvRush on various benchmark datasets. Notably, on CIFAR-10, AdvRush achieves 55.91% robust accuracy under FGSM attack after standard training and 50.04% robust accuracy under AutoAttack after 7-step PGD adversarial training.
[ { "created": "Tue, 3 Aug 2021 04:27:33 GMT", "version": "v1" }, { "created": "Tue, 10 Aug 2021 06:34:49 GMT", "version": "v2" } ]
2021-08-11
[ [ "Mok", "Jisoo", "" ], [ "Na", "Byunggook", "" ], [ "Choe", "Hyeokjun", "" ], [ "Yoon", "Sungroh", "" ] ]
Deep neural networks continue to awe the world with their remarkable performance. Their predictions, however, are prone to be corrupted by adversarial examples that are imperceptible to humans. Current efforts to improve the robustness of neural networks against adversarial examples are focused on developing robust training methods, which update the weights of a neural network in a more robust direction. In this work, we take a step beyond training of the weight parameters and consider the problem of designing an adversarially robust neural architecture with high intrinsic robustness. We propose AdvRush, a novel adversarial robustness-aware neural architecture search algorithm, based upon a finding that independent of the training method, the intrinsic robustness of a neural network can be represented with the smoothness of its input loss landscape. Through a regularizer that favors a candidate architecture with a smoother input loss landscape, AdvRush successfully discovers an adversarially robust neural architecture. Along with a comprehensive theoretical motivation for AdvRush, we conduct an extensive amount of experiments to demonstrate the efficacy of AdvRush on various benchmark datasets. Notably, on CIFAR-10, AdvRush achieves 55.91% robust accuracy under FGSM attack after standard training and 50.04% robust accuracy under AutoAttack after 7-step PGD adversarial training.
2211.13554
Fernando Alonso-Fernandez
Fernando Alonso-Fernandez, Julian Fierrez, Daniel Ramos, Joaquin Gonzalez-Rodriguez
Quality-Based Conditional Processing in Multi-Biometrics: Application to Sensor Interoperability
Published at IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans
null
10.1109/TSMCA.2010.2047498
null
cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As biometric technology is increasingly deployed, it will be common to replace parts of operational systems with newer designs. The cost and inconvenience of reacquiring enrolled users when a new vendor solution is incorporated makes this approach difficult, and many applications will need to deal with information from different sources regularly. These interoperability problems can dramatically affect the performance of biometric systems and thus need to be overcome. Here, we describe and evaluate the ATVS-UAM fusion approach submitted to the quality-based evaluation of the 2007 BioSecure Multimodal Evaluation Campaign, whose aim was to compare fusion algorithms when biometric signals were generated using several biometric devices in mismatched conditions. Quality measures from the raw biometric data are available to allow system adjustment to changing quality conditions due to device changes. This system adjustment is referred to as quality-based conditional processing. The proposed fusion approach is based on linear logistic regression, in which fused scores tend to be log-likelihood ratios. This allows the easy and efficient combination of matching scores from different devices, assuming low dependence among modalities. In our system, quality information is used to switch between different system modules depending on the data source (the sensor in our case) and to reject channels with low-quality data during the fusion. We compare our fusion approach to a set of rule-based fusion schemes over normalized scores. Results show that the proposed approach outperforms all the rule-based fusion schemes. We also show that with the quality-based channel rejection scheme, an overall improvement of 25% in the equal error rate is obtained.
[ { "created": "Thu, 24 Nov 2022 12:11:22 GMT", "version": "v1" } ]
2022-11-28
[ [ "Alonso-Fernandez", "Fernando", "" ], [ "Fierrez", "Julian", "" ], [ "Ramos", "Daniel", "" ], [ "Gonzalez-Rodriguez", "Joaquin", "" ] ]
As biometric technology is increasingly deployed, it will be common to replace parts of operational systems with newer designs. The cost and inconvenience of reacquiring enrolled users when a new vendor solution is incorporated makes this approach difficult, and many applications will need to deal with information from different sources regularly. These interoperability problems can dramatically affect the performance of biometric systems and thus need to be overcome. Here, we describe and evaluate the ATVS-UAM fusion approach submitted to the quality-based evaluation of the 2007 BioSecure Multimodal Evaluation Campaign, whose aim was to compare fusion algorithms when biometric signals were generated using several biometric devices in mismatched conditions. Quality measures from the raw biometric data are available to allow system adjustment to changing quality conditions due to device changes. This system adjustment is referred to as quality-based conditional processing. The proposed fusion approach is based on linear logistic regression, in which fused scores tend to be log-likelihood ratios. This allows the easy and efficient combination of matching scores from different devices, assuming low dependence among modalities. In our system, quality information is used to switch between different system modules depending on the data source (the sensor in our case) and to reject channels with low-quality data during the fusion. We compare our fusion approach to a set of rule-based fusion schemes over normalized scores. Results show that the proposed approach outperforms all the rule-based fusion schemes. We also show that with the quality-based channel rejection scheme, an overall improvement of 25% in the equal error rate is obtained.
2105.06284
Min Lin
Kong Huaicong, Lin Min, Wang Zining, Ouyang Jian, Cheng Julian
Ergodic Capacity of High Throughput Satellite Systems With Mixed FSO-RF Transmission
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a high-throughput satellite system, where the feeder link uses free-space optical (FSO) and the user link uses radio frequency (RF) communication. In particular, we first propose a transmit diversity scheme using Alamouti space-time block coding to mitigate the atmospheric turbulence in the feeder link. Then, based on the concept of average virtual signal-to-interference-plus-noise ratio and one-bit feedback, we propose a beamforming algorithm for the user link to maximize the ergodic capacity (EC). Moreover, by assuming that the FSO links follow the Malaga distribution whereas the RF links undergo shadowed-Rician fading, we derive a closed-form EC expression for the considered system. Finally, numerical simulations validate the accuracy of our theoretical analysis and show that the proposed schemes can achieve higher capacity compared with the reference schemes.
[ { "created": "Thu, 13 May 2021 13:29:24 GMT", "version": "v1" } ]
2021-05-14
[ [ "Huaicong", "Kong", "" ], [ "Min", "Lin", "" ], [ "Zining", "Wang", "" ], [ "Jian", "Ouyang", "" ], [ "Julian", "Cheng", "" ] ]
We study a high-throughput satellite system, where the feeder link uses free-space optical (FSO) and the user link uses radio frequency (RF) communication. In particular, we first propose a transmit diversity scheme using Alamouti space-time block coding to mitigate the atmospheric turbulence in the feeder link. Then, based on the concept of average virtual signal-to-interference-plus-noise ratio and one-bit feedback, we propose a beamforming algorithm for the user link to maximize the ergodic capacity (EC). Moreover, by assuming that the FSO links follow the Malaga distribution whereas the RF links undergo shadowed-Rician fading, we derive a closed-form EC expression for the considered system. Finally, numerical simulations validate the accuracy of our theoretical analysis and show that the proposed schemes can achieve higher capacity compared with the reference schemes.
2404.01136
Dongxu Chang
Dongxu Chang, Qingqing Peng, Zhiming Ma, Guanghui Wang, Dawei Yin
Density Evolution Analysis of Generalized Low-density Parity-check Codes under a Posteriori Probability Decoder
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, the performance of generalized low-density parity-check (GLDPC) codes under the a posteriori probability (APP) decoder is analyzed. We explore the concentration, symmetry, and monotonicity properties of GLDPC codes under the APP decoder, extending the applicability of density evolution to GLDPC codes. On binary memoryless symmetric channels, using the BEC and BI-AWGN channels as two examples, we demonstrate that with an appropriate proportion of generalized constraint (GC) nodes, GLDPC codes can reduce the original gap to capacity compared to their LDPC counterparts. Additionally, on the BI-AWGN channel, we apply and improve the Gaussian approximation algorithm in the density evolution of GLDPC codes. By adopting Gaussian mixture distributions to approximate the message distributions from variable nodes and Gaussian distributions for those from constraint nodes, the precision of the channel parameter threshold can be significantly enhanced while maintaining a low computational complexity similar to that of Gaussian approximations. Furthermore, we identify a class of subcodes that can greatly simplify the performance analysis and practical decoding of GLDPC codes, which we refer to as message-invariant subcodes. Using the aforementioned techniques, our simulation experiments provide empirical evidence that GLDPC codes, when decoded with the APP decoder and equipped with the right fraction of GC nodes, can achieve substantial performance improvements compared to low-density parity-check (LDPC) codes.
[ { "created": "Mon, 1 Apr 2024 14:30:04 GMT", "version": "v1" }, { "created": "Tue, 6 Aug 2024 13:40:19 GMT", "version": "v2" } ]
2024-08-07
[ [ "Chang", "Dongxu", "" ], [ "Peng", "Qingqing", "" ], [ "Ma", "Zhiming", "" ], [ "Wang", "Guanghui", "" ], [ "Yin", "Dawei", "" ] ]
In this study, the performance of generalized low-density parity-check (GLDPC) codes under the a posteriori probability (APP) decoder is analyzed. We explore the concentration, symmetry, and monotonicity properties of GLDPC codes under the APP decoder, extending the applicability of density evolution to GLDPC codes. On binary memoryless symmetric channels, using the BEC and BI-AWGN channels as two examples, we demonstrate that with an appropriate proportion of generalized constraint (GC) nodes, GLDPC codes can reduce the original gap to capacity compared to their LDPC counterparts. Additionally, on the BI-AWGN channel, we apply and improve the Gaussian approximation algorithm in the density evolution of GLDPC codes. By adopting Gaussian mixture distributions to approximate the message distributions from variable nodes and Gaussian distributions for those from constraint nodes, the precision of the channel parameter threshold can be significantly enhanced while maintaining a low computational complexity similar to that of Gaussian approximations. Furthermore, we identify a class of subcodes that can greatly simplify the performance analysis and practical decoding of GLDPC codes, which we refer to as message-invariant subcodes. Using the aforementioned techniques, our simulation experiments provide empirical evidence that GLDPC codes, when decoded with the APP decoder and equipped with the right fraction of GC nodes, can achieve substantial performance improvements compared to low-density parity-check (LDPC) codes.
0710.4693
EDA Publishing Association
Ananta K. Majhi, Mohamed Azimane, Guido Gronthoud, Maurice Lousberg, Stefan Eichenberger, Fred Bowen
Memory Testing Under Different Stress Conditions: An Industrial Evaluation
Submitted on behalf of EDAA (http://www.edaa.com/)
In Design, Automation and Test in Europe - DATE'05, Munich, Germany (2005)
null
null
cs.AR
null
This paper presents the effectiveness of various stress conditions (mainly voltage and frequency) in detecting resistive shorts and open defects in deep sub-micron embedded memories in an industrial environment. Simulation studies on very-low-voltage, high-voltage and at-speed testing show the need for stress conditions for high-quality products, i.e., a low defect-per-million (DPM) level, which is driving the semiconductor market today. The above test conditions have been validated to screen out bad devices on real silicon (a test-chip) built on CMOS 0.18 um technology. The IFA (inductive fault analysis) based simulation technique leads to an efficient fault coverage and DPM estimator, which helps customers upfront to make decisions on test algorithm implementations under different stress conditions in order to reduce the number of test escapes.
[ { "created": "Thu, 25 Oct 2007 09:14:05 GMT", "version": "v1" } ]
2011-11-09
[ [ "Majhi", "Ananta K.", "" ], [ "Azimane", "Mohamed", "" ], [ "Gronthoud", "Guido", "" ], [ "Lousberg", "Maurice", "" ], [ "Eichenberger", "Stefan", "" ], [ "Bowen", "Fred", "" ] ]
This paper presents the effectiveness of various stress conditions (mainly voltage and frequency) in detecting resistive shorts and open defects in deep sub-micron embedded memories in an industrial environment. Simulation studies on very-low-voltage, high-voltage and at-speed testing show the need for stress conditions for high-quality products, i.e., a low defect-per-million (DPM) level, which is driving the semiconductor market today. The above test conditions have been validated to screen out bad devices on real silicon (a test-chip) built on CMOS 0.18 um technology. The IFA (inductive fault analysis) based simulation technique leads to an efficient fault coverage and DPM estimator, which helps customers upfront to make decisions on test algorithm implementations under different stress conditions in order to reduce the number of test escapes.
1802.01194
Joshua Garland
Joshua Garland, Andrew M. Berdahl, Jie Sun and Erik Bollt
Anatomy of Leadership in Collective Behaviour
13 pages, 3 figures
null
10.1063/1.5024395
null
cs.MA physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the mechanics behind the coordinated movement of mobile animal groups (collective motion) provides key insights into their biology and ecology, while also yielding algorithms for bio-inspired technologies and autonomous systems. It is becoming increasingly clear that many mobile animal groups are composed of heterogeneous individuals with differential levels and types of influence over group behaviors. The ability to infer this differential influence, or leadership, is critical to understanding group functioning in these collective animal systems. Due to the broad interpretation of leadership, many different measures and mathematical tools are used to describe and infer "leadership", e.g., position, causality, influence, information flow. But a key question remains: which, if any, of these concepts actually describes leadership? We argue that instead of asserting a single definition or notion of leadership, the complex interaction rules and dynamics typical of a group imply that leadership itself is not merely a binary classification (leader or follower), but rather a complex combination of many different components. In this paper we develop an anatomy of leadership, identify several principal components, and provide a general mathematical framework for discussing leadership. With the intricacies of this taxonomy in mind, we present a set of leadership-oriented toy models that should be used as a proving ground for leadership inference methods going forward. We believe this multifaceted approach to leadership will enable a broader understanding of leadership and its inference from data in mobile animal groups and beyond.
[ { "created": "Sun, 4 Feb 2018 21:21:48 GMT", "version": "v1" }, { "created": "Thu, 26 Apr 2018 20:02:04 GMT", "version": "v2" } ]
2018-08-15
[ [ "Garland", "Joshua", "" ], [ "Berdahl", "Andrew M.", "" ], [ "Sun", "Jie", "" ], [ "Bollt", "Erik", "" ] ]
Understanding the mechanics behind the coordinated movement of mobile animal groups (collective motion) provides key insights into their biology and ecology, while also yielding algorithms for bio-inspired technologies and autonomous systems. It is becoming increasingly clear that many mobile animal groups are composed of heterogeneous individuals with differential levels and types of influence over group behaviors. The ability to infer this differential influence, or leadership, is critical to understanding group functioning in these collective animal systems. Due to the broad interpretation of leadership, many different measures and mathematical tools are used to describe and infer "leadership", e.g., position, causality, influence, information flow. But a key question remains: which, if any, of these concepts actually describes leadership? We argue that instead of asserting a single definition or notion of leadership, the complex interaction rules and dynamics typical of a group imply that leadership itself is not merely a binary classification (leader or follower), but rather a complex combination of many different components. In this paper we develop an anatomy of leadership, identify several principal components, and provide a general mathematical framework for discussing leadership. With the intricacies of this taxonomy in mind, we present a set of leadership-oriented toy models that should be used as a proving ground for leadership inference methods going forward. We believe this multifaceted approach to leadership will enable a broader understanding of leadership and its inference from data in mobile animal groups and beyond.
2206.04465
Arunkumar A
Arunkumar A and Umesh S
Joint Encoder-Decoder Self-Supervised Pre-training for ASR
Submitted to Interspeech 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-supervised learning (SSL) has shown tremendous success in various speech-related downstream tasks, including Automatic Speech Recognition (ASR). The output embeddings of the SSL model are treated as powerful short-time representations of the speech signal. However, in the ASR task, the main objective is to get the correct sequence of acoustic units, characters, or byte-pair encodings (BPEs). Usually, an encoder-decoder architecture works exceptionally well for a sequence-to-sequence task like ASR. Therefore, in this paper, we propose a new paradigm that exploits the power of a decoder during self-supervised learning. We use the Hidden Unit BERT (HuBERT) SSL framework to compute the conventional masked prediction loss for the encoder. In addition, we introduce a decoder in the SSL framework and propose a target preparation strategy for the decoder. Finally, we use a multitask SSL setup wherein we jointly optimize both the encoder and decoder losses. We hypothesize that the presence of a decoder in the SSL model helps it learn an acoustic unit-based language model, which might improve the performance of an ASR downstream task. We compare our proposed SSL model with HuBERT and show up to 25% relative improvement in ASR performance by fine-tuning on various LibriSpeech subsets.
[ { "created": "Thu, 9 Jun 2022 12:45:29 GMT", "version": "v1" } ]
2022-06-10
[ [ "A", "Arunkumar", "" ], [ "S", "Umesh", "" ] ]
Self-supervised learning (SSL) has shown tremendous success in various speech-related downstream tasks, including Automatic Speech Recognition (ASR). The output embeddings of the SSL model are treated as powerful short-time representations of the speech signal. However, in the ASR task, the main objective is to get the correct sequence of acoustic units, characters, or byte-pair encodings (BPEs). Usually, an encoder-decoder architecture works exceptionally well for a sequence-to-sequence task like ASR. Therefore, in this paper, we propose a new paradigm that exploits the power of a decoder during self-supervised learning. We use the Hidden Unit BERT (HuBERT) SSL framework to compute the conventional masked prediction loss for the encoder. In addition, we introduce a decoder in the SSL framework and propose a target preparation strategy for the decoder. Finally, we use a multitask SSL setup wherein we jointly optimize both the encoder and decoder losses. We hypothesize that the presence of a decoder in the SSL model helps it learn an acoustic unit-based language model, which might improve the performance of an ASR downstream task. We compare our proposed SSL model with HuBERT and show up to 25% relative improvement in ASR performance by fine-tuning on various LibriSpeech subsets.
2212.05023
Julian Suk
Julian Suk, Pim de Haan, Phillip Lippe, Christoph Brune, Jelmer M. Wolterink
Mesh Neural Networks for SE(3)-Equivariant Hemodynamics Estimation on the Artery Wall
Published in "Computers in Biology and Medicine"
null
10.1016/j.compbiomed.2024.108328
null
cs.LG cs.CV math.GR physics.flu-dyn
http://creativecommons.org/licenses/by/4.0/
Computational fluid dynamics (CFD) is a valuable asset for patient-specific cardiovascular-disease diagnosis and prognosis, but its high computational demands hamper its adoption in practice. Machine-learning methods that estimate blood flow in individual patients could accelerate or replace CFD simulation to overcome these limitations. In this work, we consider the estimation of vector-valued quantities on the wall of three-dimensional geometric artery models. We employ group-equivariant graph convolution in an end-to-end SE(3)-equivariant neural network that operates directly on triangular surface meshes and makes efficient use of training data. We run experiments on a large dataset of synthetic coronary arteries and find that our method estimates directional wall shear stress (WSS) with an approximation error of 7.6% and a normalised mean absolute error (NMAE) of 0.4% while being up to two orders of magnitude faster than CFD. Furthermore, we show that our method is powerful enough to accurately predict transient, vector-valued WSS over the cardiac cycle while conditioned on a range of different inflow boundary conditions. These results demonstrate the potential of our proposed method as a plugin replacement for CFD in the personalised prediction of hemodynamic vector and scalar fields.
[ { "created": "Fri, 9 Dec 2022 18:16:06 GMT", "version": "v1" }, { "created": "Fri, 14 Jun 2024 18:34:21 GMT", "version": "v2" } ]
2024-06-18
[ [ "Suk", "Julian", "" ], [ "de Haan", "Pim", "" ], [ "Lippe", "Phillip", "" ], [ "Brune", "Christoph", "" ], [ "Wolterink", "Jelmer M.", "" ] ]
Computational fluid dynamics (CFD) is a valuable asset for patient-specific cardiovascular-disease diagnosis and prognosis, but its high computational demands hamper its adoption in practice. Machine-learning methods that estimate blood flow in individual patients could accelerate or replace CFD simulation to overcome these limitations. In this work, we consider the estimation of vector-valued quantities on the wall of three-dimensional geometric artery models. We employ group-equivariant graph convolution in an end-to-end SE(3)-equivariant neural network that operates directly on triangular surface meshes and makes efficient use of training data. We run experiments on a large dataset of synthetic coronary arteries and find that our method estimates directional wall shear stress (WSS) with an approximation error of 7.6% and a normalised mean absolute error (NMAE) of 0.4% while being up to two orders of magnitude faster than CFD. Furthermore, we show that our method is powerful enough to accurately predict transient, vector-valued WSS over the cardiac cycle while conditioned on a range of different inflow boundary conditions. These results demonstrate the potential of our proposed method as a plugin replacement for CFD in the personalised prediction of hemodynamic vector and scalar fields.
2111.00452
Amirmohammad Radmehr
Amirmohammad Radmehr, Milad Asgari, Mehdi Tale Masouleh
Experimental Study on the Imitation of the Human Head-and-Eye Pose Using the 3-DOF Agile Eye Parallel Robot with ROS and Mediapipe Framework
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a method to mimic a human face and eyes is proposed which can be regarded as a combination of computer vision techniques and neural network concepts. From a mechanical standpoint, a 3-DOF spherical parallel robot is used which imitates the human head movement. Regarding eye movement, a 2-DOF mechanism is attached to the end-effector of the 3-DOF spherical parallel mechanism. In order to have robust and reliable results for the imitation, meaningful information should be extracted from the face mesh for obtaining the pose of a face, i.e., the roll, yaw, and pitch angles. To this end, two methods are proposed, each with its own pros and cons. The first method resorts to the so-called Mediapipe library, a machine learning solution for high-fidelity body pose tracking introduced by Google. In the second method, a linear regression model is trained on a gathered dataset of face pictures in different poses. In addition, a 3-DOF Agile Eye parallel robot is utilized to show the ability of this robot to be used as a system similar to a human head for performing a 3-DOF rotational motion pattern. Furthermore, a 3D-printed face and a 2-DOF eye mechanism are fabricated to display the whole system in a more stylish way. Experimental tests, which are done based on a ROS platform, demonstrate the effectiveness of the proposed methods for tracking human head and eye movement.
[ { "created": "Sun, 31 Oct 2021 10:11:45 GMT", "version": "v1" }, { "created": "Thu, 4 Nov 2021 11:51:26 GMT", "version": "v2" } ]
2021-11-05
[ [ "Radmehr", "Amirmohammad", "" ], [ "Asgari", "Milad", "" ], [ "Masouleh", "Mehdi Tale", "" ] ]
In this paper, a method to mimic a human face and eyes is proposed which can be regarded as a combination of computer vision techniques and neural network concepts. From a mechanical standpoint, a 3-DOF spherical parallel robot is used which imitates the human head movement. Regarding eye movement, a 2-DOF mechanism is attached to the end-effector of the 3-DOF spherical parallel mechanism. In order to have robust and reliable results for the imitation, meaningful information should be extracted from the face mesh for obtaining the pose of a face, i.e., the roll, yaw, and pitch angles. To this end, two methods are proposed, each with its own pros and cons. The first method resorts to the so-called Mediapipe library, a machine learning solution for high-fidelity body pose tracking introduced by Google. In the second method, a linear regression model is trained on a gathered dataset of face pictures in different poses. In addition, a 3-DOF Agile Eye parallel robot is utilized to show the ability of this robot to be used as a system similar to a human head for performing a 3-DOF rotational motion pattern. Furthermore, a 3D-printed face and a 2-DOF eye mechanism are fabricated to display the whole system in a more stylish way. Experimental tests, which are done based on a ROS platform, demonstrate the effectiveness of the proposed methods for tracking human head and eye movement.