Dataset schema (field, type, min, max):

id              stringlengths   9     10
submitter       stringlengths   1     64
authors         stringlengths   4     20.7k
title           stringlengths   4     246
comments        stringlengths   1     523
journal-ref     stringlengths   4     404
doi             stringlengths   11    153
report-no       stringlengths   2     254
categories      stringlengths   5     98
license         stringclasses   9 values
orig_abstract   stringlengths   14    3.35k
versions        listlengths     1     60
update_date     stringlengths   10    10
authors_parsed  listlengths     1     1.35k
abstract        stringlengths   11    3.34k
2303.01272
Sondre S{\o}rb{\o}
Sondre S{\o}rb{\o} and Massimiliano Ruocco
Navigating the Metric Maze: A Taxonomy of Evaluation Metrics for Anomaly Detection in Time Series
29 pages, 28 figures and tables
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of time series anomaly detection is constantly advancing, with several methods available, making it a challenge to determine the most appropriate method for a specific domain. The evaluation of these methods is facilitated by the use of metrics, which vary widely in their properties. Despite the existence of new evaluation metrics, there is limited agreement on which metrics are best suited for specific scenarios and domains, and the most commonly used metrics have faced criticism in the literature. This paper provides a comprehensive overview of the metrics used for the evaluation of time series anomaly detection methods, and also defines a taxonomy of these metrics based on how they are calculated. By defining a set of properties for evaluation metrics and a set of specific case studies and experiments, twenty metrics are analyzed and discussed in detail, highlighting the unique suitability of each for specific tasks. Through extensive experimentation and analysis, this paper argues that the choice of evaluation metric must be made with care, taking into account the specific requirements of the task at hand.
[ { "created": "Thu, 2 Mar 2023 13:58:06 GMT", "version": "v1" } ]
2023-03-03
[ [ "Sørbø", "Sondre", "" ], [ "Ruocco", "Massimiliano", "" ] ]
The field of time series anomaly detection is constantly advancing, with several methods available, making it a challenge to determine the most appropriate method for a specific domain. The evaluation of these methods is facilitated by the use of metrics, which vary widely in their properties. Despite the existence of new evaluation metrics, there is limited agreement on which metrics are best suited for specific scenarios and domains, and the most commonly used metrics have faced criticism in the literature. This paper provides a comprehensive overview of the metrics used for the evaluation of time series anomaly detection methods, and also defines a taxonomy of these metrics based on how they are calculated. By defining a set of properties for evaluation metrics and a set of specific case studies and experiments, twenty metrics are analyzed and discussed in detail, highlighting the unique suitability of each for specific tasks. Through extensive experimentation and analysis, this paper argues that the choice of evaluation metric must be made with care, taking into account the specific requirements of the task at hand.
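A minimal illustration of why metric choice matters (an editor's sketch, not taken from the paper): the same detector output can score very differently under point-wise F1 and under one simple event-wise F1 in which an anomalous segment counts as detected if any of its points is flagged. The toy labels below are assumptions for illustration.

```python
import numpy as np

def pointwise_f1(y_true, y_pred):
    """F1 computed over individual time steps."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def events(y):
    """Contiguous runs of 1s as (start, end) index pairs."""
    runs, start = [], None
    for i, v in enumerate(y):
        if v and start is None:
            start = i
        if not v and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(y)))
    return runs

def eventwise_f1(y_true, y_pred):
    """One event-wise variant: a true event is detected if any of its
    points is flagged; a predicted event is correct if it touches a
    true event."""
    true_ev, pred_ev = events(y_true), events(y_pred)
    rec = sum(y_pred[s:e].any() for s, e in true_ev) / len(true_ev) if true_ev else 0.0
    prec = sum(y_true[s:e].any() for s, e in pred_ev) / len(pred_ev) if pred_ev else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

y_true = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0])
y_pred = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # one point of one event
print(pointwise_f1(y_true, y_pred))   # ~0.29: most anomalous points are missed
print(eventwise_f1(y_true, y_pred))   # ~0.67: one of two events fully "counts"
```

The gap between the two numbers is exactly the kind of property the paper's taxonomy makes explicit.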
2202.10545
Yohan Beugin
Fran\c{c}ois Homps, Yohan Beugin, Romain Vuillemot
ReViVD: Exploration and Filtering of Trajectories in an Immersive Environment using 3D Shapes
Accepted at IEEE Conference on Virtual Reality and 3D User Interfaces (VR) 2020
2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
10.1109/VR46266.2020.00096
null
cs.HC cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present ReViVD, a tool for exploring and filtering large trajectory-based datasets using virtual reality. ReViVD's novelty lies in using simple 3D shapes -- such as cuboids, spheres and cylinders -- as queries for users to select and filter groups of trajectories. Building on this simple paradigm, more complex queries can be created by combining previously made selection groups through a system of user-created Boolean operations. We demonstrate the use of ReViVD in different application domains, from GPS position tracking to simulated data (e.g., turbulent particle flows and traffic simulation). Our results show the ease of use and expressiveness of the 3D geometric shapes in a broad range of exploratory tasks. ReViVD was found to be particularly useful for progressively refining selections to isolate outlying behaviors. It also acts as a powerful communication tool for conveying the structure of normally abstract datasets to an audience.
[ { "created": "Mon, 21 Feb 2022 21:58:41 GMT", "version": "v1" } ]
2022-02-23
[ [ "Homps", "François", "" ], [ "Beugin", "Yohan", "" ], [ "Vuillemot", "Romain", "" ] ]
We present ReViVD, a tool for exploring and filtering large trajectory-based datasets using virtual reality. ReViVD's novelty lies in using simple 3D shapes -- such as cuboids, spheres and cylinders -- as queries for users to select and filter groups of trajectories. Building on this simple paradigm, more complex queries can be created by combining previously made selection groups through a system of user-created Boolean operations. We demonstrate the use of ReViVD in different application domains, from GPS position tracking to simulated data (e.g., turbulent particle flows and traffic simulation). Our results show the ease of use and expressiveness of the 3D geometric shapes in a broad range of exploratory tasks. ReViVD was found to be particularly useful for progressively refining selections to isolate outlying behaviors. It also acts as a powerful communication tool for conveying the structure of normally abstract datasets to an audience.
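The selection paradigm is easy to mirror outside VR. Below is a hedged sketch (not ReViVD's code, which is an interactive VR tool): trajectories are selected by simple 3D shape predicates, and the resulting selection groups are combined with Boolean set operations. The shapes, sizes, and data are illustrative assumptions.

```python
import numpy as np

# Toy trajectories: each is an (N, 3) array of 3D points.
rng = np.random.default_rng(0)
trajectories = [rng.uniform(-10, 10, size=(50, 3)) for _ in range(100)]

def in_sphere(traj, center, radius):
    """True if any point of the trajectory lies inside the sphere."""
    return bool((np.linalg.norm(traj - center, axis=1) <= radius).any())

def in_cuboid(traj, lo, hi):
    """True if any point lies inside the axis-aligned cuboid [lo, hi]."""
    return bool(((traj >= lo) & (traj <= hi)).all(axis=1).any())

# Selection groups: sets of trajectory indices matching each shape query.
A = {i for i, t in enumerate(trajectories)
     if in_sphere(t, np.array([0.0, 0.0, 0.0]), 3.0)}
B = {i for i, t in enumerate(trajectories)
     if in_cuboid(t, np.array([-5.0, -5.0, -5.0]), np.array([5.0, 0.0, 5.0]))}

# Boolean composition of previously made selections, as in the paper's paradigm.
print(len(A & B), "trajectories pass through both shapes")
print(len(A - B), "pass through the sphere but not the cuboid")
```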
2408.07724
Jacob Miller
Kiran Smelser, Jacob Miller, Stephen Kobourov
"Normalized Stress" is Not Normalized: How to Interpret Stress Correctly
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Stress is among the most commonly employed quality metrics and optimization criteria for dimension reduction projections of high-dimensional data. Complex, high-dimensional data is ubiquitous across many scientific disciplines, including machine learning, biology, and the social sciences. One of the primary methods of visualizing these datasets is with two-dimensional scatter plots that visually capture some properties of the data. Because visually determining the accuracy of these plots is challenging, researchers often use quality metrics to measure projection accuracy or faithfulness to the full data. One of the most commonly employed metrics, normalized stress, is sensitive to uniform scaling of the projection, even though such scaling does not meaningfully change the projection. We investigate the effect of scaling on stress and other distance-based quality metrics analytically and empirically by showing just how much the values change and how this affects dimension reduction technique evaluations. We introduce a simple technique to make normalized stress scale-invariant and show that it accurately captures expected behavior on a small benchmark.
[ { "created": "Wed, 14 Aug 2024 13:42:47 GMT", "version": "v1" } ]
2024-08-16
[ [ "Smelser", "Kiran", "" ], [ "Miller", "Jacob", "" ], [ "Kobourov", "Stephen", "" ] ]
Stress is among the most commonly employed quality metrics and optimization criteria for dimension reduction projections of high-dimensional data. Complex, high-dimensional data is ubiquitous across many scientific disciplines, including machine learning, biology, and the social sciences. One of the primary methods of visualizing these datasets is with two-dimensional scatter plots that visually capture some properties of the data. Because visually determining the accuracy of these plots is challenging, researchers often use quality metrics to measure projection accuracy or faithfulness to the full data. One of the most commonly employed metrics, normalized stress, is sensitive to uniform scaling of the projection, even though such scaling does not meaningfully change the projection. We investigate the effect of scaling on stress and other distance-based quality metrics analytically and empirically by showing just how much the values change and how this affects dimension reduction technique evaluations. We introduce a simple technique to make normalized stress scale-invariant and show that it accurately captures expected behavior on a small benchmark.
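A small sketch of the scaling problem and one plausible fix (an editor's illustration; whether this matches the paper's exact proposal is an assumption, but the stress-minimizing uniform scale has a standard least-squares closed form):

```python
import numpy as np
from scipy.spatial.distance import pdist

def normalized_stress(X_high, X_low):
    """Normalized stress: sum (d_ij - rho_ij)^2 / sum d_ij^2, where d are
    pairwise distances in the original space and rho in the projection."""
    d, rho = pdist(X_high), pdist(X_low)
    return np.sum((d - rho) ** 2) / np.sum(d ** 2)

def scale_invariant_stress(X_high, X_low):
    """Evaluate stress at the stress-minimizing uniform scale alpha*.
    Minimizing sum (d - alpha*rho)^2 over alpha gives
    alpha* = sum(d*rho) / sum(rho^2), so the result no longer depends on
    how the projection happens to be scaled."""
    d, rho = pdist(X_high), pdist(X_low)
    alpha = np.sum(d * rho) / np.sum(rho ** 2)
    return np.sum((d - alpha * rho) ** 2) / np.sum(d ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
P = X[:, :2]                      # a trivial "projection": drop 8 coordinates
for s in (0.5, 1.0, 2.0):
    print(s, normalized_stress(X, s * P), scale_invariant_stress(X, s * P))
# normalized_stress varies with s; scale_invariant_stress does not.
```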
2002.00084
Seokki Lee
Seokki Lee, Bertram Ludaescher, Boris Glavic
Approximate Summaries for Why and Why-not Provenance (Extended Version)
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Why and why-not provenance have been studied extensively in recent years. However, why-not provenance, and to a lesser degree why provenance, can be very large, resulting in severe scalability and usability challenges. In this paper, we introduce a novel approximate summarization technique for provenance which overcomes these challenges. Our approach uses patterns to encode (why-not) provenance concisely. We develop techniques for efficiently computing provenance summaries that balance informativeness, conciseness, and completeness. To achieve scalability, we integrate sampling techniques into provenance capture and summarization. Our approach is the first to scale to large datasets and to generate comprehensive and meaningful summaries.
[ { "created": "Fri, 31 Jan 2020 22:47:43 GMT", "version": "v1" }, { "created": "Mon, 27 Apr 2020 16:29:09 GMT", "version": "v2" } ]
2020-04-28
[ [ "Lee", "Seokki", "" ], [ "Ludaescher", "Bertram", "" ], [ "Glavic", "Boris", "" ] ]
Why and why-not provenance have been studied extensively in recent years. However, why-not provenance, and to a lesser degree why provenance, can be very large, resulting in severe scalability and usability challenges. In this paper, we introduce a novel approximate summarization technique for provenance which overcomes these challenges. Our approach uses patterns to encode (why-not) provenance concisely. We develop techniques for efficiently computing provenance summaries that balance informativeness, conciseness, and completeness. To achieve scalability, we integrate sampling techniques into provenance capture and summarization. Our approach is the first to scale to large datasets and to generate comprehensive and meaningful summaries.
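To make the idea concrete, here is a toy sketch of pattern-based summarization with sampling (an editor's illustration only: the pairwise generalization and the informativeness/conciseness score below are invented stand-ins, not the paper's algorithm):

```python
import random
from itertools import combinations

# Toy "why provenance": input tuples that contributed to a query answer.
provenance = [
    ("alice", "NYC", "gold"),
    ("bob",   "NYC", "gold"),
    ("carol", "NYC", "silver"),
    ("dave",  "LA",  "gold"),
]

def generalize(t1, t2):
    """Least general pattern covering both tuples: keep agreeing constants,
    replace disagreeing attributes with a wildcard '*'."""
    return tuple(a if a == b else "*" for a, b in zip(t1, t2))

def covers(pattern, t):
    return all(p == "*" or p == v for p, v in zip(pattern, t))

# Candidate patterns from pairs of (sampled) tuples; sampling keeps this
# tractable on large provenance, at the price of approximation.
sample = random.sample(provenance, k=min(4, len(provenance)))
candidates = {generalize(a, b) for a, b in combinations(sample, 2)}

# Score each pattern: tuples covered vs. constants kept (a crude
# informativeness/conciseness trade-off).
def score(p):
    coverage = sum(covers(p, t) for t in provenance)
    specificity = sum(v != "*" for v in p)
    return coverage * (1 + specificity)

best = max(candidates, key=score)
print(best, "covers", sum(covers(best, t) for t in provenance),
      "of", len(provenance), "tuples")
```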
2010.10649
Wei-Fan Chen
Wei-Fan Chen, Khalid Al-Khatib, Benno Stein and Henning Wachsmuth
Detecting Media Bias in News Articles using Gaussian Bias Distributions
null
EMNLP 2020 Findings
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Media plays an important role in shaping public opinion. Biased media can influence people in undesirable directions and hence should be unmasked as such. We observe that feature-based and neural text classification approaches which rely only on the distribution of low-level lexical information fail to detect media bias. This weakness becomes most noticeable for articles on new events, where words appear in new contexts and hence their "bias predictiveness" is unclear. In this paper, we therefore study how second-order information about biased statements in an article helps to improve detection effectiveness. In particular, we utilize the probability distributions of the frequency, positions, and sequential order of lexical and informational sentence-level bias in a Gaussian Mixture Model. On an existing media bias dataset, we find that the frequency and positions of biased statements strongly impact article-level bias, whereas their exact sequential order is secondary. Using a standard model for sentence-level bias detection, we provide empirical evidence that article-level bias detectors that use second-order information clearly outperform those without.
[ { "created": "Tue, 20 Oct 2020 22:20:49 GMT", "version": "v1" } ]
2020-10-22
[ [ "Chen", "Wei-Fan", "" ], [ "Al-Khatib", "Khalid", "" ], [ "Stein", "Benno", "" ], [ "Wachsmuth", "Henning", "" ] ]
Media plays an important role in shaping public opinion. Biased media can influence people in undesirable directions and hence should be unmasked as such. We observe that feature-based and neural text classification approaches which rely only on the distribution of low-level lexical information fail to detect media bias. This weakness becomes most noticeable for articles on new events, where words appear in new contexts and hence their "bias predictiveness" is unclear. In this paper, we therefore study how second-order information about biased statements in an article helps to improve detection effectiveness. In particular, we utilize the probability distributions of the frequency, positions, and sequential order of lexical and informational sentence-level bias in a Gaussian Mixture Model. On an existing media bias dataset, we find that the frequency and positions of biased statements strongly impact article-level bias, whereas their exact sequential order is secondary. Using a standard model for sentence-level bias detection, we provide empirical evidence that article-level bias detectors that use second-order information clearly outperform those without.
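The second-order idea lends itself to a compact sketch (hedged: this is not the paper's implementation; the specific features and toy labels are the editor's assumptions). Each article is reduced to the frequency and positions of its sentence-level bias labels, and a Gaussian Mixture Model from scikit-learn is fit over those features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def article_features(sentence_bias):
    """Second-order features for one article from per-sentence bias labels
    (1 = biased): frequency plus mean and spread of the biased positions."""
    s = np.asarray(sentence_bias, dtype=float)
    pos = np.flatnonzero(s) / max(len(s) - 1, 1)   # normalized positions
    return [s.mean(),
            pos.mean() if pos.size else 0.0,
            pos.std() if pos.size else 0.0]

# Toy corpus: sentence-level bias labels for a few articles (assumed to come
# from a sentence-level bias detector, as in the paper's pipeline).
articles = [
    [0, 1, 1, 0, 0, 0, 0, 0],   # bias concentrated at the start
    [1, 1, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1],   # bias concentrated at the end
    [0, 0, 0, 0, 0, 1, 0, 1],
]
X = np.array([article_features(a) for a in articles])

# One Gaussian component per hypothesized bias "profile".
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.predict(X))   # should separate front-loaded from back-loaded articles
```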
2004.14648
Jifan Chen
Jifan Chen and Greg Durrett
Robust Question Answering Through Sub-part Alignment
NAACL 2021
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current textual question answering models achieve strong performance on in-domain test sets, but often do so by fitting surface-level patterns in the data, so they fail to generalize to out-of-distribution settings. To make a more robust and understandable QA system, we model question answering as an alignment problem. We decompose both the question and context into smaller units based on off-the-shelf semantic representations (here, semantic roles), and align the question to a subgraph of the context in order to find the answer. We formulate our model as a structured SVM, with alignment scores computed via BERT, and we can train end-to-end despite using beam search for approximate inference. Our explicit use of alignments allows us to explore a set of constraints with which we can prohibit certain types of bad model behavior arising in cross-domain settings. Furthermore, by investigating differences in scores across different potential answers, we can seek to understand what particular aspects of the input lead the model to choose the answer without relying on post-hoc explanation techniques. We train our model on SQuAD v1.1 and test it on several adversarial and out-of-domain datasets. The results show that our model is more robust cross-domain than the standard BERT QA model, and constraints derived from alignment scores allow us to effectively trade off coverage and accuracy.
[ { "created": "Thu, 30 Apr 2020 09:10:57 GMT", "version": "v1" }, { "created": "Fri, 1 May 2020 23:58:37 GMT", "version": "v2" }, { "created": "Mon, 19 Apr 2021 20:43:55 GMT", "version": "v3" } ]
2021-04-21
[ [ "Chen", "Jifan", "" ], [ "Durrett", "Greg", "" ] ]
Current textual question answering models achieve strong performance on in-domain test sets, but often do so by fitting surface-level patterns in the data, so they fail to generalize to out-of-distribution settings. To make a more robust and understandable QA system, we model question answering as an alignment problem. We decompose both the question and context into smaller units based on off-the-shelf semantic representations (here, semantic roles), and align the question to a subgraph of the context in order to find the answer. We formulate our model as a structured SVM, with alignment scores computed via BERT, and we can train end-to-end despite using beam search for approximate inference. Our explicit use of alignments allows us to explore a set of constraints with which we can prohibit certain types of bad model behavior arising in cross-domain settings. Furthermore, by investigating differences in scores across different potential answers, we can seek to understand what particular aspects of the input lead the model to choose the answer without relying on post-hoc explanation techniques. We train our model on SQuAD v1.1 and test it on several adversarial and out-of-domain datasets. The results show that our model is more robust cross-domain than the standard BERT QA model, and constraints derived from alignment scores allow us to effectively trade off coverage and accuracy.
2311.14741
Oliver Bendel
Oliver Bendel and Karim N'diaye
@ve: A Chatbot for Latin
15 pages
null
null
null
cs.CL cs.AI cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Dead, extinct, and endangered languages have been preserved primarily through audio conservation and the collection and digitization of scripts, and have been promoted through targeted language acquisition efforts. Another possibility would be to build conversational agents that can master these languages. This would provide an artificial, active conversational partner which has knowledge of the vocabulary and grammar, and one learns with it in a different way. The chatbot @ve, with which one can communicate in Latin, was developed in 2022/2023 based on GPT-3.0. It was additionally equipped with a manually created knowledge base. After conceptual groundwork, this paper presents the preparation and implementation of the project. In addition, it summarizes the test that a Latin expert conducted with the chatbot. A critical discussion elaborates on the advantages and disadvantages. @ve could be a new tool for teaching Latin in a memorable and entertaining way through dialogue. However, the present implementation is still too prone to glitches for stand-alone use - i.e., without the accompaniment of a teacher. The use of GPT-4 could be a solution, as could extending the knowledge base. In conclusion, it can be argued that conversational agents are an innovative approach to promoting and preserving languages.
[ { "created": "Wed, 22 Nov 2023 09:06:11 GMT", "version": "v1" } ]
2023-11-28
[ [ "Bendel", "Oliver", "" ], [ "N'diaye", "Karim", "" ] ]
Dead, extinct, and endangered languages have been preserved primarily through audio conservation and the collection and digitization of scripts, and have been promoted through targeted language acquisition efforts. Another possibility would be to build conversational agents that can master these languages. This would provide an artificial, active conversational partner which has knowledge of the vocabulary and grammar, and one learns with it in a different way. The chatbot @ve, with which one can communicate in Latin, was developed in 2022/2023 based on GPT-3.0. It was additionally equipped with a manually created knowledge base. After conceptual groundwork, this paper presents the preparation and implementation of the project. In addition, it summarizes the test that a Latin expert conducted with the chatbot. A critical discussion elaborates on the advantages and disadvantages. @ve could be a new tool for teaching Latin in a memorable and entertaining way through dialogue. However, the present implementation is still too prone to glitches for stand-alone use - i.e., without the accompaniment of a teacher. The use of GPT-4 could be a solution, as could extending the knowledge base. In conclusion, it can be argued that conversational agents are an innovative approach to promoting and preserving languages.
1606.04011
Tianhua Xu
Tianhua Xu, Gunnar Jacobsen, Jie Li, Mark Leeson, Sergei Popov
Dynamic physical layer equalization in optical communication networks
null
null
null
null
cs.IT math.IT physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In optical transport networks, signal lightpaths between two terminal nodes can differ depending on current network conditions. Thus the transmission distance and accumulated dispersion in the lightpath cannot be predicted. Therefore, the adaptive compensation of dynamic dispersion is necessary in such networks to enable flexible routing and switching. In this paper, we present a detailed analysis of adaptive dispersion compensation using the least-mean-square (LMS) algorithm in coherent optical communication networks. It is found that the variable-step-size LMS equalizer can achieve the same performance as the traditional LMS algorithm at lower complexity.
[ { "created": "Mon, 13 Jun 2016 16:46:26 GMT", "version": "v1" }, { "created": "Tue, 16 May 2017 22:39:54 GMT", "version": "v2" }, { "created": "Thu, 14 Dec 2017 16:27:52 GMT", "version": "v3" }, { "created": "Mon, 7 May 2018 19:07:27 GMT", "version": "v4" } ]
2018-05-09
[ [ "Xu", "Tianhua", "" ], [ "Jacobsen", "Gunnar", "" ], [ "Li", "Jie", "" ], [ "Leeson", "Mark", "" ], [ "Popov", "Sergei", "" ] ]
In optical transport networks, signal lightpaths between two terminal nodes can differ depending on current network conditions. Thus the transmission distance and accumulated dispersion in the lightpath cannot be predicted. Therefore, the adaptive compensation of dynamic dispersion is necessary in such networks to enable flexible routing and switching. In this paper, we present a detailed analysis of adaptive dispersion compensation using the least-mean-square (LMS) algorithm in coherent optical communication networks. It is found that the variable-step-size LMS equalizer can achieve the same performance as the traditional LMS algorithm at lower complexity.
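For readers unfamiliar with the algorithm, here is a generic variable-step-size LMS equalizer on a toy complex baseband channel (an editor's sketch: the Kwong-Johnston-style step update and the channel are illustrative assumptions, not the paper's optical simulation setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dispersive channel: a short complex FIR filter plus noise.
h = np.array([0.9 + 0.1j, 0.3 - 0.2j, 0.1 + 0.05j])
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=5000)
received = np.convolve(symbols, h, mode="full")[: len(symbols)]
received += 0.05 * (rng.normal(size=len(symbols)) + 1j * rng.normal(size=len(symbols)))

def vss_lms(x, d, n_taps=7, mu0=0.01, alpha=0.97, gamma=1e-3,
            mu_min=1e-4, mu_max=0.05):
    """Variable-step-size LMS: the step mu grows with the squared error
    and shrinks as the equalizer converges."""
    w = np.zeros(n_taps, dtype=complex)
    mu, err = mu0, np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]           # tap input vector
        e = d[n] - np.dot(w.conj(), u)      # a-priori error
        w += mu * u * np.conj(e)            # standard complex LMS update
        mu = np.clip(alpha * mu + gamma * abs(e) ** 2, mu_min, mu_max)
        err[n] = e
    return w, err

w, err = vss_lms(received, symbols)
print("steady-state mean |error|^2:", np.mean(np.abs(err[-500:]) ** 2))
```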
2009.08685
Trista Chen
Yu-Sheng Lin, Hung Chang Lu, Yang-Bin Tsao, Yi-Min Chih, Wei-Chao Chen, Shao-Yi Chien
GrateTile: Efficient Sparse Tensor Tiling for CNN Processing
To be published at IEEE Workshop on Signal Processing System (SiPS 2020)
null
null
null
cs.LG cs.AR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose GrateTile, an efficient, hardware-friendly data storage scheme for sparse CNN feature maps (activations). It divides data into uneven-sized sub-tensors and, with small indexing overhead, stores them in a compressed yet randomly accessible format. This design enables modern CNN accelerators to fetch and decompress sub-tensors on the fly in a tiled processing manner. GrateTile is suitable for architectures that favor aligned, coalesced data access, and only requires minimal changes to the overall architectural design. We simulate GrateTile with state-of-the-art CNNs and show an average of 55% DRAM bandwidth reduction while using only 0.6% of feature map size for indexing storage.
[ { "created": "Fri, 18 Sep 2020 08:31:41 GMT", "version": "v1" } ]
2020-09-21
[ [ "Lin", "Yu-Sheng", "" ], [ "Lu", "Hung Chang", "" ], [ "Tsao", "Yang-Bin", "" ], [ "Chih", "Yi-Min", "" ], [ "Chen", "Wei-Chao", "" ], [ "Chien", "Shao-Yi", "" ] ]
We propose GrateTile, an efficient, hardware-friendly data storage scheme for sparse CNN feature maps (activations). It divides data into uneven-sized sub-tensors and, with small indexing overhead, stores them in a compressed yet randomly accessible format. This design enables modern CNN accelerators to fetch and decompress sub-tensors on the fly in a tiled processing manner. GrateTile is suitable for architectures that favor aligned, coalesced data access, and only requires minimal changes to the overall architectural design. We simulate GrateTile with state-of-the-art CNNs and show an average of 55% DRAM bandwidth reduction while using only 0.6% of feature map size for indexing storage.
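A software toy of the storage idea (hedged: the tile sizes, the bitmap format, and the split rule below are invented for illustration; the paper's actual indexing scheme is hardware-oriented): split a feature map into uneven-sized tiles, store each tile as a nonzero bitmap plus packed values, and decompress any single tile on demand.

```python
import numpy as np

rng = np.random.default_rng(0)
fmap = rng.normal(size=(16, 16)) * (rng.random((16, 16)) < 0.2)  # ~80% sparse

def uneven_splits(size, big, small):
    """Split `size` into uneven tiles, e.g. 16 -> [6, 6, 4]."""
    cuts, pos = [], 0
    while pos < size:
        step = big if size - pos >= big + small else size - pos
        cuts.append((pos, pos + step))
        pos += step
    return cuts

def compress(fmap, big=6, small=4):
    """Per-tile compressed storage: a nonzero bitmap plus packed values."""
    tiles = {}
    for r0, r1 in uneven_splits(fmap.shape[0], big, small):
        for c0, c1 in uneven_splits(fmap.shape[1], big, small):
            tile = fmap[r0:r1, c0:c1]
            mask = tile != 0
            tiles[(r0, c0)] = (tile.shape, np.packbits(mask), tile[mask])
    return tiles

def fetch(tiles, key):
    """Randomly access and decompress a single tile on the fly."""
    shape, packed, values = tiles[key]
    mask = np.unpackbits(packed, count=shape[0] * shape[1]).reshape(shape).astype(bool)
    tile = np.zeros(shape)
    tile[mask] = values
    return tile

tiles = compress(fmap)
assert np.allclose(fetch(tiles, (0, 0)), fmap[0:6, 0:6])
```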
1902.00526
Amir Saeidi
Amir Saeidi (Utrecht University, Netherlands), Jurriaan Hage (Utrecht University, Netherlands), Ravi Khadka (Utrecht University, Netherlands), Slinger Jansen (Utrecht University, Netherlands)
Applications of Multi-view Learning Approaches for Software Comprehension
null
The Art, Science, and Engineering of Programming, 2019, Vol. 3, Issue 3, Article 14
10.22152/programming-journal.org/2019/3/14
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Program comprehension concerns the ability of an individual to build an understanding of an existing software system in order to extend or transform it. Software systems comprise data that are noisy and incomplete, which makes program understanding even more difficult. A software system consists of various views, including the module dependency graph, execution logs, evolutionary information, and the vocabulary used in the source code, which collectively define the software system. Each of these views contains unique and complementary information which, taken together, can describe the data more accurately. In this paper, we investigate various techniques for combining different sources of information to improve the performance of a program comprehension task. We employ state-of-the-art techniques from machine learning to 1) find a suitable similarity function for each view, and 2) compare different multi-view learning techniques to decompose a software system into high-level units and give component-level recommendations for refactoring of the system, as well as cross-view source code search. The experiments conducted on 10 relatively large Java software systems show that by fusing knowledge from different views, we can guarantee a lower bound on the quality of the modularization and even improve upon it. We proceed by integrating different sources of information to give a set of high-level recommendations as to how to refactor the software system. Furthermore, we demonstrate how learning a joint subspace allows for performing cross-modal retrieval across views, yielding results that are more aligned with what the user intends by the query. The multi-view approaches outlined in this paper can be employed for addressing problems in software engineering that can be encoded in terms of a learning problem, such as software bug prediction and feature location.
[ { "created": "Fri, 1 Feb 2019 19:07:49 GMT", "version": "v1" } ]
2019-02-05
[ [ "Saeidi", "Amir", "", "Utrecht University, Netherlands" ], [ "Hage", "Jurriaan", "", "Utrecht\n University, Netherlands" ], [ "Khadka", "Ravi", "", "Utrecht University, Netherlands" ], [ "Jansen", "Slinger", "", "Utrecht University, Netherlands" ] ]
Program comprehension concerns the ability of an individual to build an understanding of an existing software system in order to extend or transform it. Software systems comprise data that are noisy and incomplete, which makes program understanding even more difficult. A software system consists of various views, including the module dependency graph, execution logs, evolutionary information, and the vocabulary used in the source code, which collectively define the software system. Each of these views contains unique and complementary information which, taken together, can describe the data more accurately. In this paper, we investigate various techniques for combining different sources of information to improve the performance of a program comprehension task. We employ state-of-the-art techniques from machine learning to 1) find a suitable similarity function for each view, and 2) compare different multi-view learning techniques to decompose a software system into high-level units and give component-level recommendations for refactoring of the system, as well as cross-view source code search. The experiments conducted on 10 relatively large Java software systems show that by fusing knowledge from different views, we can guarantee a lower bound on the quality of the modularization and even improve upon it. We proceed by integrating different sources of information to give a set of high-level recommendations as to how to refactor the software system. Furthermore, we demonstrate how learning a joint subspace allows for performing cross-modal retrieval across views, yielding results that are more aligned with what the user intends by the query. The multi-view approaches outlined in this paper can be employed for addressing problems in software engineering that can be encoded in terms of a learning problem, such as software bug prediction and feature location.
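As a concrete (and heavily simplified) sketch of the multi-view idea: build one similarity matrix per view, fuse them, and cluster the fused similarity into modules. Averaging the views and spectral clustering are the editor's stand-ins for the learned similarity functions and multi-view techniques the paper compares.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n_classes = 12

# View 1: structural features (e.g., rows of a module dependency graph).
dep_view = (rng.random((n_classes, n_classes)) < 0.3).astype(float)
# View 2: lexical features (e.g., tf-idf over identifiers per class).
vocab_view = rng.random((n_classes, 40))

# One similarity function per view, then a simple fusion by averaging.
S_dep = cosine_similarity(dep_view)
S_vocab = cosine_similarity(vocab_view)
S = 0.5 * S_dep + 0.5 * S_vocab

# Decompose the system into high-level units from the fused similarity.
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(S)
print(labels)  # a proposed modularization of the 12 classes
```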
2202.11678
Andrew Wilson
Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Gordon Wilson
Bayesian Model Selection, the Marginal Likelihood, and Generalization
Extended version. Shorter ICML version available at arXiv:2202.11678v2
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How do we compare between hypotheses that are entirely consistent with observations? The marginal likelihood (aka Bayesian evidence), which represents the probability of generating our observations from a prior, provides a distinctive approach to this foundational question, automatically encoding Occam's razor. Although it has been observed that the marginal likelihood can overfit and is sensitive to prior assumptions, its limitations for hyperparameter learning and discrete model comparison have not been thoroughly investigated. We first revisit the appealing properties of the marginal likelihood for learning constraints and hypothesis testing. We then highlight the conceptual and practical issues in using the marginal likelihood as a proxy for generalization. Namely, we show how marginal likelihood can be negatively correlated with generalization, with implications for neural architecture search, and can lead to both underfitting and overfitting in hyperparameter learning. We also re-examine the connection between the marginal likelihood and PAC-Bayes bounds and use this connection to further elucidate the shortcomings of the marginal likelihood for model selection. We provide a partial remedy through a conditional marginal likelihood, which we show is more aligned with generalization, and practically valuable for large-scale hyperparameter learning, such as in deep kernel learning.
[ { "created": "Wed, 23 Feb 2022 18:38:16 GMT", "version": "v1" }, { "created": "Thu, 2 Jun 2022 17:10:24 GMT", "version": "v2" }, { "created": "Tue, 2 May 2023 01:27:39 GMT", "version": "v3" } ]
2023-05-03
[ [ "Lotfi", "Sanae", "" ], [ "Izmailov", "Pavel", "" ], [ "Benton", "Gregory", "" ], [ "Goldblum", "Micah", "" ], [ "Wilson", "Andrew Gordon", "" ] ]
How do we compare between hypotheses that are entirely consistent with observations? The marginal likelihood (aka Bayesian evidence), which represents the probability of generating our observations from a prior, provides a distinctive approach to this foundational question, automatically encoding Occam's razor. Although it has been observed that the marginal likelihood can overfit and is sensitive to prior assumptions, its limitations for hyperparameter learning and discrete model comparison have not been thoroughly investigated. We first revisit the appealing properties of the marginal likelihood for learning constraints and hypothesis testing. We then highlight the conceptual and practical issues in using the marginal likelihood as a proxy for generalization. Namely, we show how marginal likelihood can be negatively correlated with generalization, with implications for neural architecture search, and can lead to both underfitting and overfitting in hyperparameter learning. We also re-examine the connection between the marginal likelihood and PAC-Bayes bounds and use this connection to further elucidate the shortcomings of the marginal likelihood for model selection. We provide a partial remedy through a conditional marginal likelihood, which we show is more aligned with generalization, and practically valuable for large-scale hyperparameter learning, such as in deep kernel learning.
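A worked toy of the full versus conditional marginal likelihood (the conjugate Gaussian model below is the editor's choice for illustration; the identity log p(y_held | y_obs) = log p(y_all) - log p(y_obs) is exact):

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_marginal(y, tau2, sigma2=1.0):
    """log p(y) for the conjugate model y_i = mu + eps_i with
    mu ~ N(0, tau2) and eps_i ~ N(0, sigma2): marginally,
    y ~ N(0, tau2 * 11^T + sigma2 * I)."""
    n = len(y)
    cov = tau2 * np.ones((n, n)) + sigma2 * np.eye(n)
    return multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)

def log_conditional_marginal(y, m, tau2, sigma2=1.0):
    """log p(y_{m+1:n} | y_{1:m}) = log p(y_{1:n}) - log p(y_{1:m}):
    the marginal likelihood of the held-out points after conditioning
    the prior on the first m observations."""
    return log_marginal(y, tau2, sigma2) - log_marginal(y[:m], tau2, sigma2)

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=50)

for tau2 in (1.0, 100.0):
    print(f"tau2={tau2:6.1f}  LML={log_marginal(y, tau2):8.2f}  "
          f"CLML={log_conditional_marginal(y, 10, tau2):8.2f}")
# The diffuse prior (tau2=100) is heavily penalized by the full marginal
# likelihood even though both priors yield near-identical posteriors and
# predictions; the conditional variant barely distinguishes them.
```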
2011.04345
Tamara AlShammari
Tamara Alshammari and Sumudu Samarakoon and Anis Elgabli and Mehdi Bennis
BayGo: Joint Bayesian Learning and Information-Aware Graph Optimization
6 pages, 5 figures, conference
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article deals with the problem of distributed machine learning, in which agents update their models based on their local datasets, and aggregate the updated models collaboratively and in a fully decentralized manner. In this paper, we tackle the problem of information heterogeneity arising in multi-agent networks where the placement of informative agents plays a crucial role in the learning dynamics. Specifically, we propose BayGo, a novel fully decentralized joint Bayesian learning and graph optimization framework with proven fast convergence over a sparse graph. Under our framework, agents are able to learn from and communicate with the agent most informative for their own learning. Unlike prior works, our framework assumes no prior knowledge of the data distribution across agents nor does it assume any knowledge of the true parameter of the system. The proposed alternating-minimization-based framework ensures global connectivity in a fully decentralized way while minimizing the number of communication links. We theoretically show that by optimizing the proposed objective function, the estimation error of the posterior probability distribution decreases exponentially at each iteration. Via extensive simulations, we show that our framework achieves faster convergence and higher accuracy compared to fully-connected and star topology graphs.
[ { "created": "Mon, 9 Nov 2020 11:16:55 GMT", "version": "v1" }, { "created": "Fri, 19 Feb 2021 19:47:14 GMT", "version": "v2" } ]
2021-02-23
[ [ "Alshammari", "Tamara", "" ], [ "Samarakoon", "Sumudu", "" ], [ "Elgabli", "Anis", "" ], [ "Bennis", "Mehdi", "" ] ]
This article deals with the problem of distributed machine learning, in which agents update their models based on their local datasets, and aggregate the updated models collaboratively and in a fully decentralized manner. In this paper, we tackle the problem of information heterogeneity arising in multi-agent networks where the placement of informative agents plays a crucial role in the learning dynamics. Specifically, we propose BayGo, a novel fully decentralized joint Bayesian learning and graph optimization framework with proven fast convergence over a sparse graph. Under our framework, agents are able to learn from and communicate with the agent most informative for their own learning. Unlike prior works, our framework assumes no prior knowledge of the data distribution across agents nor does it assume any knowledge of the true parameter of the system. The proposed alternating-minimization-based framework ensures global connectivity in a fully decentralized way while minimizing the number of communication links. We theoretically show that by optimizing the proposed objective function, the estimation error of the posterior probability distribution decreases exponentially at each iteration. Via extensive simulations, we show that our framework achieves faster convergence and higher accuracy compared to fully-connected and star topology graphs.
1906.08885
Philipp Koehn
Vishrav Chaudhary and Yuqing Tang and Francisco Guzm\'an and Holger Schwenk and Philipp Koehn
Low-Resource Corpus Filtering using Multilingual Sentence Embeddings
6 pages, WMT 2019
Conference on Machine Translation (WMT) 2019
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper, we describe our submission to the WMT19 low-resource parallel corpus filtering shared task. Our main approach is based on the LASER toolkit (Language-Agnostic SEntence Representations), which uses an encoder-decoder architecture trained on a parallel corpus to obtain multilingual sentence representations. We then use the representations directly to score and filter the noisy parallel sentences without additionally training a scoring function. We contrast our approach to other promising methods and show that LASER yields strong results. Finally, we produce an ensemble of different scoring methods and obtain additional gains. Our submission achieved the best overall performance for both the Nepali-English and Sinhala-English 1M tasks by a margin of 1.3 and 1.4 BLEU respectively, as compared to the second best systems. Moreover, our experiments show that this technique is promising for low and even no-resource scenarios.
[ { "created": "Thu, 20 Jun 2019 22:39:44 GMT", "version": "v1" } ]
2019-06-24
[ [ "Chaudhary", "Vishrav", "" ], [ "Tang", "Yuqing", "" ], [ "Guzmán", "Francisco", "" ], [ "Schwenk", "Holger", "" ], [ "Koehn", "Philipp", "" ] ]
In this paper, we describe our submission to the WMT19 low-resource parallel corpus filtering shared task. Our main approach is based on the LASER toolkit (Language-Agnostic SEntence Representations), which uses an encoder-decoder architecture trained on a parallel corpus to obtain multilingual sentence representations. We then use the representations directly to score and filter the noisy parallel sentences without additionally training a scoring function. We contrast our approach to other promising methods and show that LASER yields strong results. Finally, we produce an ensemble of different scoring methods and obtain additional gains. Our submission achieved the best overall performance for both the Nepali-English and Sinhala-English 1M tasks by a margin of 1.3 and 1.4 BLEU respectively, as compared to the second best systems. Moreover, our experiments show that this technique is promising for low and even no-resource scenarios.
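The scoring step reduces to a few lines once embeddings exist. Below is a hedged sketch with synthetic vectors standing in for LASER embeddings (the plain cosine score and the data are illustrative assumptions; the actual submission ensembles several scoring methods):

```python
import numpy as np

def filter_parallel(src_emb, tgt_emb, keep_ratio=0.5):
    """Score candidate pairs by cosine similarity of their multilingual
    sentence embeddings and keep the top-scoring fraction.
    src_emb, tgt_emb: (n_pairs, dim) arrays, row i holding the two sides
    of candidate pair i encoded into the shared embedding space."""
    S = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    T = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    scores = (S * T).sum(axis=1)                     # cosine per pair
    keep = np.argsort(-scores)[: int(len(scores) * keep_ratio)]
    return keep, scores

# Toy demo: "clean" pairs are near-duplicates in the shared space (as real
# translation pairs would be under a multilingual encoder), "noisy" pairs
# are unrelated vectors.
rng = np.random.default_rng(0)
clean_src = rng.normal(size=(80, 32))
clean_tgt = clean_src + 0.1 * rng.normal(size=(80, 32))
noisy_src = rng.normal(size=(20, 32))
noisy_tgt = rng.normal(size=(20, 32))
src = np.vstack([clean_src, noisy_src])
tgt = np.vstack([clean_tgt, noisy_tgt])

keep, scores = filter_parallel(src, tgt, keep_ratio=0.8)
print("kept noisy pairs:", int((keep >= 80).sum()), "of 20")  # near 0
```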
1807.11161
Hao Min Liu
Hao-Min Liu, Yi-Hsuan Yang
Lead Sheet Generation and Arrangement by Conditional Generative Adversarial Network
7 pages, 7 figures and 4 tables
null
null
null
cs.SD cs.AI cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on automatic music generation has seen great progress due to the development of deep neural networks. However, the generation of multi-instrument music of arbitrary genres still remains a challenge. Existing research either works on lead sheets or multi-track piano-rolls found in MIDIs, but both musical notations have their limits. In this work, we propose a new task called lead sheet arrangement to avoid such limits. A new recurrent convolutional generative model for the task is proposed, along with three new symbolic-domain harmonic features to facilitate learning from unpaired lead sheets and MIDIs. Our model can generate lead sheets and their arrangements, each eight bars long. Audio samples of the generated result can be found at https://drive.google.com/open?id=1c0FfODTpudmLvuKBbc23VBCgQizY6-Rk
[ { "created": "Mon, 30 Jul 2018 03:48:04 GMT", "version": "v1" } ]
2018-07-31
[ [ "Liu", "Hao-Min", "" ], [ "Yang", "Yi-Hsuan", "" ] ]
Research on automatic music generation has seen great progress due to the development of deep neural networks. However, the generation of multi-instrument music of arbitrary genres still remains a challenge. Existing research either works on lead sheets or multi-track piano-rolls found in MIDIs, but both musical notations have their limits. In this work, we propose a new task called lead sheet arrangement to avoid such limits. A new recurrent convolutional generative model for the task is proposed, along with three new symbolic-domain harmonic features to facilitate learning from unpaired lead sheets and MIDIs. Our model can generate lead sheets and their arrangements, each eight bars long. Audio samples of the generated result can be found at https://drive.google.com/open?id=1c0FfODTpudmLvuKBbc23VBCgQizY6-Rk
1903.07389
Hamid Karimi
Hamid Karimi and Jiliang Tang
Learning Hierarchical Discourse-level Structure for Fake News Detection
Accepted to 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics June 2-7, 2019 Minneapolis, USA
null
null
null
cs.CL cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
On the one hand, nowadays, fake news articles are easily propagated through various online media platforms and have become a grave threat to the trustworthiness of information. On the other hand, our understanding of the language of fake news is still minimal. Incorporating hierarchical discourse-level structure of fake and real news articles is one crucial step toward a better understanding of how these articles are structured. Nevertheless, this has rarely been investigated in the fake news detection domain and faces tremendous challenges. First, existing methods for capturing discourse-level structure rely on annotated corpora which are not available for fake news datasets. Second, how to extract useful information from such discovered structures is another challenge. To address these challenges, we propose Hierarchical Discourse-level Structure for Fake news detection (HDSF). HDSF learns and constructs a discourse-level structure for fake/real news articles in an automated and data-driven manner. Moreover, we identify insightful structure-related properties, which can explain the discovered structures and boost our understanding of fake news. Our experiments show the effectiveness of the proposed approach. Further structural analysis suggests that real and fake news present substantial differences in their hierarchical discourse-level structures.
[ { "created": "Wed, 27 Feb 2019 00:03:17 GMT", "version": "v1" }, { "created": "Tue, 19 Mar 2019 01:15:14 GMT", "version": "v2" }, { "created": "Tue, 2 Apr 2019 16:18:00 GMT", "version": "v3" }, { "created": "Thu, 4 Apr 2019 02:38:36 GMT", "version": "v4" }, { "created": "Fri, 5 Apr 2019 17:39:05 GMT", "version": "v5" }, { "created": "Wed, 10 Apr 2019 14:20:53 GMT", "version": "v6" } ]
2019-04-11
[ [ "Karimi", "Hamid", "" ], [ "Tang", "Jiliang", "" ] ]
On the one hand, nowadays, fake news articles are easily propagated through various online media platforms and have become a grave threat to the trustworthiness of information. On the other hand, our understanding of the language of fake news is still minimal. Incorporating hierarchical discourse-level structure of fake and real news articles is one crucial step toward a better understanding of how these articles are structured. Nevertheless, this has rarely been investigated in the fake news detection domain and faces tremendous challenges. First, existing methods for capturing discourse-level structure rely on annotated corpora which are not available for fake news datasets. Second, how to extract useful information from such discovered structures is another challenge. To address these challenges, we propose Hierarchical Discourse-level Structure for Fake news detection (HDSF). HDSF learns and constructs a discourse-level structure for fake/real news articles in an automated and data-driven manner. Moreover, we identify insightful structure-related properties, which can explain the discovered structures and boost our understanding of fake news. Our experiments show the effectiveness of the proposed approach. Further structural analysis suggests that real and fake news present substantial differences in their hierarchical discourse-level structures.
0905.0197
Victor Marek
V.W. Marek and J.B. Remmel
An Application of Proof-Theory in Answer Set Programming
22 pages. Short version was published in ICLP08. New version slightly shorter than the previous version
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply proof-theoretic techniques in Answer Set Programming. The main results include: 1. A characterization of continuity properties of the Gelfond-Lifschitz operator for logic programs. 2. A propositional characterization of stable models of logic programs (without referring to loop formulas).
[ { "created": "Sat, 2 May 2009 10:43:30 GMT", "version": "v1" }, { "created": "Mon, 11 Jan 2010 20:12:14 GMT", "version": "v2" } ]
2010-01-11
[ [ "Marek", "V. W.", "" ], [ "Remmel", "J. B.", "" ] ]
We apply proof-theoretic techniques in Answer Set Programming. The main results include: 1. A characterization of continuity properties of the Gelfond-Lifschitz operator for logic programs. 2. A propositional characterization of stable models of logic programs (without referring to loop formulas).
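For context, the Gelfond-Lifschitz operator the first result refers to is easy to state in code. This minimal sketch (standard definitions, not the paper's proof-theoretic results) computes the reduct of a ground normal program and enumerates its stable models by brute force:

```python
from itertools import chain, combinations

# A ground normal rule: (head, positive body atoms, negative body atoms).
# p :- q, not r   ==>   ("p", {"q"}, {"r"})
program = [
    ("p", set(), {"q"}),    # p :- not q
    ("q", set(), {"p"}),    # q :- not p
    ("r", {"p"}, set()),    # r :- p
]
atoms = {a for h, pos, neg in program for a in {h} | pos | neg}

def reduct(program, M):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects M,
    then delete all negative literals from the remaining rules."""
    return [(h, pos) for h, pos, neg in program if not (neg & M)]

def least_model(pos_program):
    """Least model of a definite program by fixpoint iteration (applying
    the Gelfond-Lifschitz operator to the reduct)."""
    M = set()
    while True:
        new = {h for h, pos in pos_program if pos <= M}
        if new <= M:
            return M
        M |= new

def stable_models(program, atoms):
    """M is stable iff it equals the least model of the reduct w.r.t. M."""
    subsets = chain.from_iterable(combinations(sorted(atoms), k)
                                  for k in range(len(atoms) + 1))
    return [set(M) for M in subsets
            if least_model(reduct(program, set(M))) == set(M)]

print(stable_models(program, atoms))  # [{'q'}, {'p', 'r'}]
```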
1801.08024
Grigori Fursin
Grigori Fursin, Anton Lokhmotov, Dmitry Savenko and Eben Upton
A Collective Knowledge workflow for collaborative research into multi-objective autotuning and machine learning techniques
Interactive CK report: http://cKnowledge.org/rpi-crowd-tuning ; CK repository with artifacts: https://github.com/ctuning/ck-rpi-optimization-results ; FigShare data archive: https://doi.org/10.6084/m9.figshare.5789007.v2
null
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developing efficient software and hardware has never been harder, whether for a tiny IoT device or an Exascale supercomputer. Apart from the ever-growing design and optimization complexity, there exist even more fundamental problems such as the lack of interdisciplinary knowledge required for effective software/hardware co-design, and a growing technology transfer gap between academia and industry. We introduce our new educational initiative to tackle these problems by developing Collective Knowledge (CK), a unified experimental framework for computer systems research and development. We use CK to teach the community how to make their research artifacts and experimental workflows portable, reproducible, customizable and reusable while enabling sustainable R&D and facilitating technology transfer. We also demonstrate how to redesign multi-objective autotuning and machine learning as a portable and extensible CK workflow. Such workflows enable researchers to experiment with different applications, data sets and tools; crowdsource experimentation across diverse platforms; share experimental results, models, visualizations; gradually expose more design and optimization choices using a simple JSON API; and ultimately build upon each other's findings. As the first practical step, we have implemented customizable compiler autotuning, crowdsourced optimization of diverse workloads across Raspberry Pi 3 devices, reduced execution time and code size by up to 40%, and applied machine learning to predict optimizations. We hope that such an approach will help teach students how to build upon each other's work to enable an efficient and self-optimizing software/hardware/model stack for emerging workloads.
[ { "created": "Fri, 19 Jan 2018 15:30:39 GMT", "version": "v1" } ]
2018-01-25
[ [ "Fursin", "Grigori", "" ], [ "Lokhmotov", "Anton", "" ], [ "Savenko", "Dmitry", "" ], [ "Upton", "Eben", "" ] ]
Developing efficient software and hardware has never been harder, whether for a tiny IoT device or an Exascale supercomputer. Apart from the ever-growing design and optimization complexity, there exist even more fundamental problems such as the lack of interdisciplinary knowledge required for effective software/hardware co-design, and a growing technology transfer gap between academia and industry. We introduce our new educational initiative to tackle these problems by developing Collective Knowledge (CK), a unified experimental framework for computer systems research and development. We use CK to teach the community how to make their research artifacts and experimental workflows portable, reproducible, customizable and reusable while enabling sustainable R&D and facilitating technology transfer. We also demonstrate how to redesign multi-objective autotuning and machine learning as a portable and extensible CK workflow. Such workflows enable researchers to experiment with different applications, data sets and tools; crowdsource experimentation across diverse platforms; share experimental results, models, visualizations; gradually expose more design and optimization choices using a simple JSON API; and ultimately build upon each other's findings. As the first practical step, we have implemented customizable compiler autotuning, crowdsourced optimization of diverse workloads across Raspberry Pi 3 devices, reduced execution time and code size by up to 40%, and applied machine learning to predict optimizations. We hope that such an approach will help teach students how to build upon each other's work to enable an efficient and self-optimizing software/hardware/model stack for emerging workloads.
1504.04123
Pietro Tesi
G. Battistelli and P. Tesi
Switching Control for Parameter Identifiability of Uncertain Systems
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers the problem of identifying the parameters of an uncertain linear system by means of feedback control. The problem is approached by considering time-varying controllers. It is shown that even when the uncertainty set is not finite, parameter identifiability can be generically ensured by switching among a finite number of linear time-invariant controllers. The results are shown to have several implications, ranging from fault detection and isolation to adaptive and supervisory control. Practical aspects of the problem are also discussed in detail.
[ { "created": "Thu, 16 Apr 2015 08:00:20 GMT", "version": "v1" } ]
2015-04-17
[ [ "Battistelli", "G.", "" ], [ "Tesi", "P.", "" ] ]
This paper considers the problem of identifying the parameters of an uncertain linear system by means of feedback control. The problem is approached by considering time-varying controllers. It is shown that even when the uncertainty set is not finite, parameter identifiability can be generically ensured by switching among a finite number of linear time-invariant controllers. The results are shown to have several implications, ranging from fault detection and isolation to adaptive and supervisory control. Practical aspects of the problem are also discussed in detail.
2108.05196
Peter Zaspel
Drishti Maharjan and Peter Zaspel
Towards data-driven filters in Paraview
null
null
null
null
cs.LG cs.GR cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent progress in scientific visualization has expanded the scope of visualization from being merely a way of presentation to an analysis and discovery tool. A given visualization result is usually generated by applying a series of transformations or filters to the underlying data. Nowadays, such filters use deterministic algorithms to process the data. In this work, we aim at extending this methodology towards data-driven filters, that is, filters that expose the abilities of pre-trained machine learning models to the visualization system. The use of such data-driven filters is of particular interest in fields like segmentation, classification, etc., where machine learning models regularly outperform existing algorithmic approaches. To showcase this idea, we couple Paraview, the well-known flow visualization tool, with PyTorch, a deep learning framework. Paraview is extended by plugins that allow users to load pre-trained models of their choice in the form of newly developed filters. The filters transform the input data by feeding it into the model and then provide the model's output as input to the remaining visualization pipeline. A series of simple use cases for segmentation and classification on image and fluid data is presented to showcase the technical applicability of such data-driven transformations in Paraview for future complex analysis tasks.
[ { "created": "Wed, 11 Aug 2021 13:02:22 GMT", "version": "v1" }, { "created": "Thu, 12 Aug 2021 08:10:47 GMT", "version": "v2" } ]
2021-08-13
[ [ "Maharjan", "Drishti", "" ], [ "Zaspel", "Peter", "" ] ]
Recent progress in scientific visualization has expanded the scope of visualization from being merely a way of presentation to an analysis and discovery tool. A given visualization result is usually generated by applying a series of transformations or filters to the underlying data. Nowadays, such filters use deterministic algorithms to process the data. In this work, we aim at extending this methodology towards data-driven filters, that is, filters that expose the abilities of pre-trained machine learning models to the visualization system. The use of such data-driven filters is of particular interest in fields like segmentation, classification, etc., where machine learning models regularly outperform existing algorithmic approaches. To showcase this idea, we couple Paraview, the well-known flow visualization tool, with PyTorch, a deep learning framework. Paraview is extended by plugins that allow users to load pre-trained models of their choice in the form of newly developed filters. The filters transform the input data by feeding it into the model and then provide the model's output as input to the remaining visualization pipeline. A series of simple use cases for segmentation and classification on image and fluid data is presented to showcase the technical applicability of such data-driven transformations in Paraview for future complex analysis tasks.
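The coupling itself is compact. Here is a hedged sketch of the core of such a data-driven filter (Paraview's plugin API boilerplate is omitted; the TorchScript packaging and the toy model are the editor's assumptions about how one might wire this up, not the paper's plugin code):

```python
import numpy as np
import torch

def make_datadriven_filter(model_path):
    """Wrap a pre-trained TorchScript model as an array-to-array filter.
    Inside a Paraview Python plugin, a function like this would be called
    with the input dataset's point/cell data."""
    model = torch.jit.load(model_path)
    model.eval()

    def apply(field: np.ndarray) -> np.ndarray:
        with torch.no_grad():
            x = torch.from_numpy(field.astype(np.float32)).unsqueeze(0)
            y = model(x)                     # e.g., a segmentation mask
        return y.squeeze(0).numpy()

    return apply

# Demo: save a trivial scripted "model" so the sketch is self-contained,
# then use it as a filter on a synthetic scalar field.
class Threshold(torch.nn.Module):
    def forward(self, x):
        return (x > x.mean()).float()        # stand-in for a real segmenter

torch.jit.script(Threshold()).save("toy_model.pt")
filt = make_datadriven_filter("toy_model.pt")
field = np.random.rand(64, 64)
print(filt(field).shape, filt(field).mean())
```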
2306.02291
Shijie Chang
Shijie Chang, Zeqi Hao, Ben Kang, Xiaoqi Zhao, Jiawen Zhu, Zhenyu Chen, Lihe Zhang, Lu Zhang, Huchuan Lu
3rd Place Solution for PVUW2023 VSS Track: A Large Model for Semantic Segmentation on VSPW
3rd Place Solution for CVPR 2023 PVUW VSS Track
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce the 3rd place solution for the PVUW2023 VSS track. Semantic segmentation is a fundamental task in computer vision with numerous real-world applications. We have explored various image-level visual backbones and segmentation heads to tackle the problem of video semantic segmentation. Through our experimentation, we find that InternImage-H as the backbone and Mask2former as the segmentation head achieve the best performance. In addition, we explore two post-processing methods: CascadePSP and Segment Anything Model (SAM). Ultimately, our approach obtains 62.60% and 64.84% mIoU on the VSPW test set 1 and final test set, respectively, securing the third position in the PVUW2023 VSS track.
[ { "created": "Sun, 4 Jun 2023 07:50:38 GMT", "version": "v1" }, { "created": "Tue, 6 Jun 2023 01:49:09 GMT", "version": "v2" } ]
2023-06-07
[ [ "Chang", "Shijie", "" ], [ "Hao", "Zeqi", "" ], [ "Kang", "Ben", "" ], [ "Zhao", "Xiaoqi", "" ], [ "Zhu", "Jiawen", "" ], [ "Chen", "Zhenyu", "" ], [ "Zhang", "Lihe", "" ], [ "Zhang", "Lu", "" ], [ "Lu", "Huchuan", "" ] ]
In this paper, we introduce the 3rd place solution for the PVUW2023 VSS track. Semantic segmentation is a fundamental task in computer vision with numerous real-world applications. We have explored various image-level visual backbones and segmentation heads to tackle the problem of video semantic segmentation. Through our experimentation, we find that InternImage-H as the backbone and Mask2former as the segmentation head achieve the best performance. In addition, we explore two post-processing methods: CascadePSP and Segment Anything Model (SAM). Ultimately, our approach obtains 62.60% and 64.84% mIoU on the VSPW test set 1 and final test set, respectively, securing the third position in the PVUW2023 VSS track.
1509.08634
Takayuki Osogami
Takayuki Osogami and Makoto Otsuka
Learning dynamic Boltzmann machines with spike-timing dependent plasticity
Preliminary and substantially different version of the paper appeared in http://www.nature.com/articles/srep14149
null
null
null
cs.NE cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a particularly structured Boltzmann machine, which we refer to as a dynamic Boltzmann machine (DyBM), as a stochastic model of a multi-dimensional time-series. The DyBM can have infinitely many layers of units but allows exact and efficient inference and learning when its parameters have a proposed structure. This proposed structure is motivated by postulates and observations, from biological neural networks, that the synaptic weight is strengthened or weakened, depending on the timing of spikes (i.e., spike-timing dependent plasticity or STDP). We show that the learning rule of updating the parameters of the DyBM in the direction of maximizing the likelihood of given time-series can be interpreted as STDP with long term potentiation and long term depression. The learning rule has a guarantee of convergence and can be performed in a distributed manner (i.e., local in space) with limited memory (i.e., local in time).
[ { "created": "Tue, 29 Sep 2015 08:30:12 GMT", "version": "v1" } ]
2015-09-30
[ [ "Osogami", "Takayuki", "" ], [ "Otsuka", "Makoto", "" ] ]
We propose a particularly structured Boltzmann machine, which we refer to as a dynamic Boltzmann machine (DyBM), as a stochastic model of a multi-dimensional time-series. The DyBM can have infinitely many layers of units but allows exact and efficient inference and learning when its parameters have a proposed structure. This proposed structure is motivated by postulates and observations, from biological neural networks, that the synaptic weight is strengthened or weakened, depending on the timing of spikes (i.e., spike-timing dependent plasticity or STDP). We show that the learning rule of updating the parameters of the DyBM in the direction of maximizing the likelihood of given time-series can be interpreted as STDP with long term potentiation and long term depression. The learning rule has a guarantee of convergence and can be performed in a distributed manner (i.e., local in space) with limited memory (i.e., local in time).
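The flavor of such a rule can be sketched generically (an editor's toy with eligibility traces and a Hebbian LTP/LTD-style update; this is not the DyBM's exact likelihood-gradient rule, whose precise form is in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_pre = 200, 5
lam, eta = 0.8, 0.05           # trace decay and learning rate

pre = (rng.random((T, n_pre)) < 0.2).astype(float)   # presynaptic spike trains
post = np.roll(pre[:, 0], 1)   # postsynaptic unit fires just after pre-neuron 0

w = np.zeros(n_pre)
trace = np.zeros(n_pre)        # eligibility trace: decayed sum of past pre spikes
for t in range(T):
    # Local-in-time update: the trace summarizes spike history, so no full
    # history buffer is needed (cf. the DyBM's limited-memory property).
    trace = lam * trace + pre[t]
    # Hebbian-style rule: potentiate synapses whose pre spikes tend to
    # precede post spikes (LTP), depress otherwise (LTD via the mean term).
    w += eta * (post[t] - post.mean()) * trace

print(np.round(w, 2))  # synapse 0 should end up clearly strongest
```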
1901.04713
Chien-Sheng Wu
Chien-Sheng Wu, Richard Socher, Caiming Xiong
Global-to-local Memory Pointer Networks for Task-Oriented Dialogue
ICLR 2019
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
End-to-end task-oriented dialogue is challenging since knowledge bases are usually large, dynamic and hard to incorporate into a learning framework. We propose the global-to-local memory pointer (GLMP) networks to address this issue. In our model, a global memory encoder and a local memory decoder are proposed to share external knowledge. The encoder encodes the dialogue history, modifies the global contextual representation, and generates a global memory pointer. The decoder first generates a sketch response with unfilled slots. Next, it passes the global memory pointer to filter the external knowledge for relevant information, then instantiates the slots via the local memory pointers. We empirically show that our model can improve copy accuracy and mitigate the common out-of-vocabulary problem. As a result, GLMP is able to improve over the previous state-of-the-art models on both the simulated bAbI Dialogue dataset and the human-human Stanford Multi-domain Dialogue dataset in automatic and human evaluations.
[ { "created": "Tue, 15 Jan 2019 08:55:53 GMT", "version": "v1" }, { "created": "Fri, 29 Mar 2019 05:13:11 GMT", "version": "v2" } ]
2019-04-01
[ [ "Wu", "Chien-Sheng", "" ], [ "Socher", "Richard", "" ], [ "Xiong", "Caiming", "" ] ]
End-to-end task-oriented dialogue is challenging since knowledge bases are usually large, dynamic and hard to incorporate into a learning framework. We propose the global-to-local memory pointer (GLMP) networks to address this issue. In our model, a global memory encoder and a local memory decoder are proposed to share external knowledge. The encoder encodes the dialogue history, modifies the global contextual representation, and generates a global memory pointer. The decoder first generates a sketch response with unfilled slots. Next, it passes the global memory pointer to filter the external knowledge for relevant information, then instantiates the slots via the local memory pointers. We empirically show that our model can improve copy accuracy and mitigate the common out-of-vocabulary problem. As a result, GLMP is able to improve over the previous state-of-the-art models on both the simulated bAbI Dialogue dataset and the human-human Stanford Multi-domain Dialogue dataset in automatic and human evaluations.
2003.08915
Olivier Nicole
Olivier Nicole, Matthieu Lemerre, S\'ebastien Bardin, Xavier Rival
Automatically Proving Microkernels Free from Privilege Escalation from their Executable
19 pages, 11 figures, submitted to IEEE Symposium on Security and Privacy 2021
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Operating system kernels are the security keystone of most computer systems, as they provide the core protection mechanisms. Kernels are in particular responsible for their own security, i.e., they must prevent untrusted user tasks from reaching their level of privilege. We demonstrate that proving such absence of privilege escalation is a prerequisite for any definitive security proof of the kernel. While prior OS kernel formal verifications were performed either on source code or on crafted kernels, with manual or semi-automated methods requiring significant human effort in annotations or proofs, we show that it is possible to compute such kernel security proofs using fully-automated methods and starting from the executable code of an existing microkernel with no modification, thus formally verifying the absence of privilege escalation with high confidence and at a low cost. We applied our method to two embedded microkernels, including the industrial kernel AnonymOS: with only 58 lines of annotation and less than 10 minutes of computation, our method finds a vulnerability in a first (buggy) version of AnonymOS and verifies the absence of privilege escalation in a second (secure) version.
[ { "created": "Thu, 19 Mar 2020 17:28:36 GMT", "version": "v1" } ]
2020-03-20
[ [ "Nicole", "Olivier", "" ], [ "Lemerre", "Matthieu", "" ], [ "Bardin", "Sébastien", "" ], [ "Rival", "Xavier", "" ] ]
Operating system kernels are the security keystone of most computer systems, as they provide the core protection mechanisms. Kernels are in particular responsible for their own security, i.e., they must prevent untrusted user tasks from reaching their level of privilege. We demonstrate that proving such absence of privilege escalation is a prerequisite for any definitive security proof of the kernel. While prior OS kernel formal verifications were performed either on source code or on crafted kernels, with manual or semi-automated methods requiring significant human effort in annotations or proofs, we show that it is possible to compute such kernel security proofs using fully-automated methods and starting from the executable code of an existing microkernel with no modification, thus formally verifying the absence of privilege escalation with high confidence and at a low cost. We applied our method to two embedded microkernels, including the industrial kernel AnonymOS: with only 58 lines of annotation and less than 10 minutes of computation, our method finds a vulnerability in a first (buggy) version of AnonymOS and verifies the absence of privilege escalation in a second (secure) version.
1302.6809
Dan Geiger
Dan Geiger, Azaria Paz, Judea Pearl
On Testing Whether an Embedded Bayesian Network Represents a Probability Model
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-244-252
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Testing the validity of probabilistic models containing unmeasured (hidden) variables is shown to be a hard task. We show that the task of testing whether models are structurally incompatible with the data at hand requires an exponential number of independence evaluations, each of the form: "X is conditionally independent of Y, given Z." In contrast, a linear number of such evaluations is required to test a standard Bayesian network (one per vertex). On the positive side, we show that if a network with hidden variables G has a tree skeleton, checking whether G represents a given probability model P requires only a polynomial number of such independence evaluations. Moreover, we provide an algorithm that efficiently constructs a tree-structured Bayesian network (with hidden variables) that represents P if such a network exists, and further recognizes when such a network does not exist.
[ { "created": "Wed, 27 Feb 2013 14:16:13 GMT", "version": "v1" } ]
2013-02-28
[ [ "Geiger", "Dan", "" ], [ "Paz", "Azaria", "" ], [ "Pearl", "Judea", "" ] ]
Testing the validity of probabilistic models containing unmeasured (hidden) variables is shown to be a hard task. We show that the task of testing whether models are structurally incompatible with the data at hand requires an exponential number of independence evaluations, each of the form: "X is conditionally independent of Y, given Z." In contrast, a linear number of such evaluations is required to test a standard Bayesian network (one per vertex). On the positive side, we show that if a network with hidden variables G has a tree skeleton, checking whether G represents a given probability model P requires only a polynomial number of such independence evaluations. Moreover, we provide an algorithm that efficiently constructs a tree-structured Bayesian network (with hidden variables) that represents P if such a network exists, and further recognizes when such a network does not exist.
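The linear case in the record above can be made concrete: for a standard Bayesian network (no hidden variables), the local Markov condition gives exactly one independence evaluation per vertex. A hedged Python sketch using networkx, where ci_test is a hypothetical user-supplied conditional-independence oracle, not a library function:

import networkx as nx

def check_markov(dag, ci_test):
    # One evaluation per vertex: each node must be independent of its
    # non-descendants given its parents for the DAG to represent P.
    for v in dag.nodes:
        parents = set(dag.predecessors(v))
        nondesc = set(dag.nodes) - nx.descendants(dag, v) - parents - {v}
        if nondesc and not ci_test(v, nondesc, parents):
            return False
    return True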
1906.09573
Yang Ai
Yang Ai, Zhen-Hua Ling
A Neural Vocoder with Hierarchical Generation of Amplitude and Phase Spectra for Statistical Parametric Speech Synthesis
Published in IEEE Transactions on Audio, Speech and Language Processing
null
10.1109/TASLP.2020.2970241
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a neural vocoder named HiNet which reconstructs speech waveforms from acoustic features by predicting amplitude and phase spectra hierarchically. Different from existing neural vocoders such as WaveNet, SampleRNN and WaveRNN which directly generate waveform samples using single neural networks, the HiNet vocoder is composed of an amplitude spectrum predictor (ASP) and a phase spectrum predictor (PSP). The ASP is a simple DNN model which predicts log amplitude spectra (LAS) from acoustic features. The predicted LAS are sent into the PSP for phase recovery. Considering the issue of phase warping and the difficulty of phase modeling, the PSP is constructed by concatenating a neural source-filter (NSF) waveform generator with a phase extractor. We also introduce generative adversarial networks (GANs) into both the ASP and the PSP. Finally, the outputs of the ASP and the PSP are combined to reconstruct speech waveforms by short-time Fourier synthesis. Since there are no autoregressive structures in either predictor, the HiNet vocoder can generate speech waveforms with high efficiency. Objective and subjective experimental results show that our proposed HiNet vocoder achieves better naturalness of reconstructed speech than the conventional STRAIGHT vocoder, a 16-bit WaveNet vocoder using an open-source implementation and an NSF vocoder with complexity similar to the PSP, and obtains performance similar to that of a 16-bit WaveRNN vocoder. We also find that the performance of HiNet is insensitive to the complexity of the neural waveform generator in the PSP to some extent. After simplifying its model structure, the time consumed for generating 1 s waveforms of 16 kHz speech using a GPU can be further reduced from 0.34 s to 0.19 s without significant quality degradation.
[ { "created": "Sun, 23 Jun 2019 10:01:33 GMT", "version": "v1" }, { "created": "Wed, 5 Feb 2020 11:05:33 GMT", "version": "v2" } ]
2020-02-06
[ [ "Ai", "Yang", "" ], [ "Ling", "Zhen-Hua", "" ] ]
This paper presents a neural vocoder named HiNet which reconstructs speech waveforms from acoustic features by predicting amplitude and phase spectra hierarchically. Different from existing neural vocoders such as WaveNet, SampleRNN and WaveRNN which directly generate waveform samples using single neural networks, the HiNet vocoder is composed of an amplitude spectrum predictor (ASP) and a phase spectrum predictor (PSP). The ASP is a simple DNN model which predicts log amplitude spectra (LAS) from acoustic features. The predicted LAS are sent into the PSP for phase recovery. Considering the issue of phase warping and the difficulty of phase modeling, the PSP is constructed by concatenating a neural source-filter (NSF) waveform generator with a phase extractor. We also introduce generative adversarial networks (GANs) into both the ASP and the PSP. Finally, the outputs of the ASP and the PSP are combined to reconstruct speech waveforms by short-time Fourier synthesis. Since there are no autoregressive structures in either predictor, the HiNet vocoder can generate speech waveforms with high efficiency. Objective and subjective experimental results show that our proposed HiNet vocoder achieves better naturalness of reconstructed speech than the conventional STRAIGHT vocoder, a 16-bit WaveNet vocoder using an open-source implementation and an NSF vocoder with complexity similar to the PSP, and obtains performance similar to that of a 16-bit WaveRNN vocoder. We also find that the performance of HiNet is insensitive to the complexity of the neural waveform generator in the PSP to some extent. After simplifying its model structure, the time consumed for generating 1 s waveforms of 16 kHz speech using a GPU can be further reduced from 0.34 s to 0.19 s without significant quality degradation.
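The final synthesis step of the record above lends itself to a short Python sketch. Assuming the ASP has produced log amplitude spectra and the PSP a matching phase matrix, both shaped frequency-bins by frames (the names and STFT parameters here are illustrative, not HiNet's actual configuration), the waveform is recovered by inverse STFT:

import numpy as np
from scipy.signal import istft

def synthesize(log_amp, phase, fs=16000, nperseg=512):
    # Combine predicted log amplitude spectra with recovered phase spectra
    # into a complex spectrogram, then apply short-time Fourier synthesis.
    spectrogram = np.exp(log_amp) * np.exp(1j * phase)
    _, waveform = istft(spectrogram, fs=fs, nperseg=nperseg)
    return waveform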
1303.1571
Rodrigo de Lamare
Lei Wang and Rodrigo C. de Lamare
Reduced-rank Adaptive Constrained Constant Modulus Beamforming Algorithms based on Joint Iterative Optimization of Filters
4 figures
DSP, 2011
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a reduced-rank scheme for adaptive beamforming based on the constrained joint iterative optimization of filters. We employ this scheme to devise two novel reduced-rank adaptive algorithms according to the constant modulus (CM) criterion with different constraints. The first devised algorithm is formulated as a constrained joint iterative optimization of a projection matrix and a reduced-rank filter with respect to the CM criterion subject to a constraint on the array response. The constrained constant modulus (CCM) expressions for the projection matrix and the reduced-rank weight vector are derived, and a low-complexity adaptive algorithm is presented to jointly estimate them for implementation. The second proposed algorithm is extended from the first one and implemented according to the CM criterion subject to a constraint on the array response and an orthogonal constraint on the projection matrix. The Gram-Schmidt (GS) technique is employed to achieve this orthogonal constraint and improve the performance. Simulation results are given to show the superior performance of the proposed algorithms in comparison with existing methods.
[ { "created": "Wed, 6 Mar 2013 23:12:24 GMT", "version": "v1" } ]
2013-03-08
[ [ "Wang", "Lei", "" ], [ "de Lamare", "Rodrigo C.", "" ] ]
This paper proposes a reduced-rank scheme for adaptive beamforming based on the constrained joint iterative optimization of filters. We employ this scheme to devise two novel reduced-rank adaptive algorithms according to the constant modulus (CM) criterion with different constraints. The first devised algorithm is formulated as a constrained joint iterative optimization of a projection matrix and a reduced-rank filter with respect to the CM criterion subject to a constraint on the array response. The constrained constant modulus (CCM) expressions for the projection matrix and the reduced-rank weight vector are derived, and a low-complexity adaptive algorithm is presented to jointly estimate them for implementation. The second proposed algorithm is extended from the first one and implemented according to the CM criterion subject to a constraint on the array response and an orthogonal constraint on the projection matrix. The Gram-Schmidt (GS) technique is employed to achieve this orthogonal constraint and improve the performance. Simulation results are given to show the superior performance of the proposed algorithms in comparison with existing methods.
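The constant modulus core of the algorithms in the record above reduces to a simple stochastic-gradient step. The Python sketch below shows only that unconstrained core for one array snapshot; the paper's actual contributions (the reduced-rank projection matrix, the array-response constraint, and the Gram-Schmidt orthogonalization) are omitted.

import numpy as np

def cm_step(w, x, mu=1e-3):
    # One stochastic-gradient step on the constant modulus cost
    # J = E[(|w^H x|^2 - 1)^2] for a single complex snapshot x.
    y = np.vdot(w, x)  # beamformer output w^H x (vdot conjugates w)
    grad = (np.abs(y) ** 2 - 1.0) * np.conj(y) * x
    return w - mu * grad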
1812.01083
Jacqueline Brixey
Jacqueline Brixey, Ramesh Manuvinakurike, Nham Le, Tuan Lai, Walter Chang, Trung Bui
A System for Automated Image Editing from Natural Language Commands
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents the task of modifying images in an image editing program using natural language written commands. We utilize a corpus of over 6000 image edit text requests, collected via crowdsourcing, to alter real-world images. A novel framework composed of actions and entities to map a user's natural language request to executable commands in an image editing program is described. We resolve previously labeled annotator disagreement through a voting process and complete the annotation of the corpus. We experimented with different machine learning models and found that the LSTM, the SVM, and the bidirectional LSTM-CRF joint models are the best at detecting image editing actions and associated entities in a given utterance.
[ { "created": "Mon, 3 Dec 2018 21:12:31 GMT", "version": "v1" } ]
2018-12-05
[ [ "Brixey", "Jacqueline", "" ], [ "Manuvinakurike", "Ramesh", "" ], [ "Le", "Nham", "" ], [ "Lai", "Tuan", "" ], [ "Chang", "Walter", "" ], [ "Bui", "Trung", "" ] ]
This work presents the task of modifying images in an image editing program using natural language written commands. We utilize a corpus of over 6000 image edit text requests, collected via crowdsourcing, to alter real-world images. A novel framework composed of actions and entities to map a user's natural language request to executable commands in an image editing program is described. We resolve previously labeled annotator disagreement through a voting process and complete the annotation of the corpus. We experimented with different machine learning models and found that the LSTM, the SVM, and the bidirectional LSTM-CRF joint models are the best at detecting image editing actions and associated entities in a given utterance.
2109.04399
Corinna Hertweck
Corinna Hertweck and Tim R\"az
Gradual (In)Compatibility of Fairness Criteria
Code available on GitHub: https://github.com/hcorinna/gradual-compatibility, extended version of paper accepted to AAAI'22
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Impossibility results show that important fairness measures (independence, separation, sufficiency) cannot be satisfied at the same time under reasonable assumptions. This paper explores whether we can satisfy and/or improve these fairness measures simultaneously to a certain degree. We introduce information-theoretic formulations of the fairness measures and define degrees of fairness based on these formulations. The information-theoretic formulations suggest unexplored theoretical relations between the three fairness measures. In the experimental part, we use the information-theoretic expressions as regularizers to obtain fairness-regularized predictors for three standard datasets. Our experiments show that a) fairness regularization directly increases fairness measures, in line with existing work, and b) some fairness regularizations indirectly increase other fairness measures, as suggested by our theoretical findings. This establishes that it is possible to increase the degree to which some fairness measures are satisfied at the same time -- some fairness measures are gradually compatible.
[ { "created": "Thu, 9 Sep 2021 16:37:30 GMT", "version": "v1" }, { "created": "Wed, 16 Mar 2022 18:03:52 GMT", "version": "v2" } ]
2022-03-21
[ [ "Hertweck", "Corinna", "" ], [ "Räz", "Tim", "" ] ]
Impossibility results show that important fairness measures (independence, separation, sufficiency) cannot be satisfied at the same time under reasonable assumptions. This paper explores whether we can satisfy and/or improve these fairness measures simultaneously to a certain degree. We introduce information-theoretic formulations of the fairness measures and define degrees of fairness based on these formulations. The information-theoretic formulations suggest unexplored theoretical relations between the three fairness measures. In the experimental part, we use the information-theoretic expressions as regularizers to obtain fairness-regularized predictors for three standard datasets. Our experiments show that a) fairness regularization directly increases fairness measures, in line with existing work, and b) some fairness regularizations indirectly increase other fairness measures, as suggested by our theoretical findings. This establishes that it is possible to increase the degree to which some fairness measures are satisfied at the same time -- some fairness measures are gradually compatible.
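A rough Python illustration of the fairness-regularization idea in the record above — not the paper's information-theoretic formulation — penalizes a simple proxy for independence, the mean-score gap between groups; all names are hypothetical:

import torch

def independence_penalty(scores, group):
    # Proxy regularizer: independence of score and group membership implies
    # equal mean scores across groups, so penalize the gap.
    return (scores[group == 0].mean() - scores[group == 1].mean()).abs()

# A fairness-regularized objective might then read:
# total = task_loss + lam * independence_penalty(model(x).squeeze(-1), a)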
1807.04001
Ismail Elezi
Benjamin Bruno Meier, Ismail Elezi, Mohammadreza Amirian, Oliver Durr and Thilo Stadelmann
Learning Neural Models for End-to-End Clustering
Accepted for publication on ANNPR 2018
null
null
null
cs.LG cs.AI cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel end-to-end neural network architecture that, once trained, directly outputs a probabilistic clustering of a batch of input examples in one pass. It estimates a distribution over the number of clusters $k$, and for each $1 \leq k \leq k_\mathrm{max}$, a distribution over the individual cluster assignment for each data point. The network is trained in advance in a supervised fashion on separate data to learn grouping by any perceptual similarity criterion based on pairwise labels (same/different group). It can then be applied to different data containing different groups. We demonstrate promising performance on high-dimensional data like images (COIL-100) and speech (TIMIT). We call this ``learning to cluster'' and show its conceptual difference from deep metric learning, semi-supervised clustering and other related approaches, while having the advantage of performing learnable clustering fully end-to-end.
[ { "created": "Wed, 11 Jul 2018 08:45:45 GMT", "version": "v1" } ]
2018-07-12
[ [ "Meier", "Benjamin Bruno", "" ], [ "Elezi", "Ismail", "" ], [ "Amirian", "Mohammadreza", "" ], [ "Durr", "Oliver", "" ], [ "Stadelmann", "Thilo", "" ] ]
We propose a novel end-to-end neural network architecture that, once trained, directly outputs a probabilistic clustering of a batch of input examples in one pass. It estimates a distribution over the number of clusters $k$, and for each $1 \leq k \leq k_\mathrm{max}$, a distribution over the individual cluster assignment for each data point. The network is trained in advance in a supervised fashion on separate data to learn grouping by any perceptual similarity criterion based on pairwise labels (same/different group). It can then be applied to different data containing different groups. We demonstrate promising performance on high-dimensional data like images (COIL-100) and speech (TIMIT). We call this ``learning to cluster'' and show its conceptual difference from deep metric learning, semi-supervised clustering and other related approaches, while having the advantage of performing learnable clustering fully end-to-end.
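A toy Python head in the spirit of the output structure described above, assuming per-example embeddings are already available; the dimensions and layer choices are illustrative, not the paper's architecture:

import torch
import torch.nn as nn

class ClusterHead(nn.Module):
    def __init__(self, dim, k_max):
        super().__init__()
        self.k_head = nn.Linear(dim, k_max)  # distribution over k
        self.assign = nn.ModuleList(
            [nn.Linear(dim, k) for k in range(1, k_max + 1)]  # per-k assignments
        )

    def forward(self, h):  # h: (batch, dim) embeddings for one batch
        # Pool over the batch to score how many clusters it contains, then
        # emit soft cluster assignments for every candidate k.
        p_k = torch.softmax(self.k_head(h.mean(dim=0)), dim=-1)
        p_assign = [torch.softmax(head(h), dim=-1) for head in self.assign]
        return p_k, p_assign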
1608.07670
Snehanshu Saha
Sobin CC, Snehanshu Saha, Vaskar Raychoudhury, Hategekimana Fidele and Sumana Sinha
CISER: An Amoebiasis inspired Model for Epidemic Message Propagation in DTN
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Delay Tolerant Networks (DTNs) are sparse mobile networks which experience frequent disruptions in connectivity among nodes. Usually, a DTN follows a store-carry-and-forward mechanism for message forwarding, in which a node stores and carries a message until it finds an appropriate relay node to forward it further into the network. The efficiency of a DTN routing protocol therefore relies on the intelligent selection of a relay node from a set of encountered nodes. Although plenty of DTN routing schemes based on different relay-selection strategies have been proposed in the literature, few mathematical models exist to study the behavior of message forwarding in DTNs. In this paper, we propose a novel epidemic model, called the CISER model, for message propagation in DTNs, based on the propagation of the Amoebiasis disease in a human population. The proposed CISER model is an extension of the SIR epidemic model with additional states to represent the resource-constrained behavior of nodes in a DTN. Experimental results using both synthetic and real-world traces show that the proposed model improves routing performance metrics, such as delivery ratio, overhead ratio and delivery delay, compared to the SIR model.
[ { "created": "Sat, 27 Aug 2016 07:20:39 GMT", "version": "v1" } ]
2016-08-30
[ [ "CC", "Sobin", "" ], [ "Saha", "Snehanshu", "" ], [ "Raychoudhury", "Vaskar", "" ], [ "Fidele", "Hategekimana", "" ], [ "Sinha", "Sumana", "" ] ]
Delay Tolerant Networks (DTNs) are sparse mobile networks which experience frequent disruptions in connectivity among nodes. Usually, a DTN follows a store-carry-and-forward mechanism for message forwarding, in which a node stores and carries a message until it finds an appropriate relay node to forward it further into the network. The efficiency of a DTN routing protocol therefore relies on the intelligent selection of a relay node from a set of encountered nodes. Although plenty of DTN routing schemes based on different relay-selection strategies have been proposed in the literature, few mathematical models exist to study the behavior of message forwarding in DTNs. In this paper, we propose a novel epidemic model, called the CISER model, for message propagation in DTNs, based on the propagation of the Amoebiasis disease in a human population. The proposed CISER model is an extension of the SIR epidemic model with additional states to represent the resource-constrained behavior of nodes in a DTN. Experimental results using both synthetic and real-world traces show that the proposed model improves routing performance metrics, such as delivery ratio, overhead ratio and delivery delay, compared to the SIR model.
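For reference, the SIR backbone that CISER extends fits in a few lines of Python; the extra resource-constrained states of CISER are omitted here, and the rate constants are arbitrary toy values:

import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    # Susceptible-Infected-Recovered dynamics; in the DTN analogy, "infected"
    # nodes carry the message and "recovered" nodes have delivered or dropped it.
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t = np.linspace(0.0, 50.0, 500)
trajectory = odeint(sir, [0.99, 0.01, 0.0], t, args=(0.5, 0.1))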
2312.02339
Derek Lim
Derek Lim and Joshua Robinson and Stefanie Jegelka and Haggai Maron
Expressive Sign Equivariant Networks for Spectral Geometric Learning
NeurIPS 2023 Spotlight
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has shown the utility of developing machine learning models that respect the structure and symmetries of eigenvectors. These works promote sign invariance, since for any eigenvector v the negation -v is also an eigenvector. However, we show that sign invariance is theoretically limited for tasks such as building orthogonally equivariant models and learning node positional encodings for link prediction in graphs. In this work, we demonstrate the benefits of sign equivariance for these tasks. To obtain these benefits, we develop novel sign equivariant neural network architectures. Our models are based on a new analytic characterization of sign equivariant polynomials and thus inherit provable expressiveness properties. Controlled synthetic experiments show that our networks can achieve the theoretically predicted benefits of sign equivariant models. Code is available at https://github.com/cptq/Sign-Equivariant-Nets.
[ { "created": "Mon, 4 Dec 2023 20:48:18 GMT", "version": "v1" } ]
2023-12-06
[ [ "Lim", "Derek", "" ], [ "Robinson", "Joshua", "" ], [ "Jegelka", "Stefanie", "" ], [ "Maron", "Haggai", "" ] ]
Recent work has shown the utility of developing machine learning models that respect the structure and symmetries of eigenvectors. These works promote sign invariance, since for any eigenvector v the negation -v is also an eigenvector. However, we show that sign invariance is theoretically limited for tasks such as building orthogonally equivariant models and learning node positional encodings for link prediction in graphs. In this work, we demonstrate the benefits of sign equivariance for these tasks. To obtain these benefits, we develop novel sign equivariant neural network architectures. Our models are based on a new analytic characterization of sign equivariant polynomials and thus inherit provable expressiveness properties. Controlled synthetic experiments show that our networks can achieve the theoretically predicted benefits of sign equivariant models. Code is available at https://github.com/cptq/Sign-Equivariant-Nets.
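The defining property in the record above is easy to exhibit with the simplest member of such a family, f(v) = v * g(|v|), which negates its output whenever the input eigenvector is negated. The gate g below is an arbitrary small MLP, purely illustrative, and this Python sketch is far simpler than the paper's polynomial characterization:

import torch

def sign_equivariant(v, gate):
    # f(-v) = -f(v): the gate sees only |v|, so only the elementwise
    # multiplication by v carries the sign.
    return v * gate(v.abs())

v = torch.randn(8)
gate = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Tanh())
assert torch.allclose(sign_equivariant(-v, gate), -sign_equivariant(v, gate))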
2308.03648
Richard Nock
Richard Nock and Mathieu Guillame-Bert
Generative Forests
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Tabular data represents one of the most prevalent forms of data. When it comes to data generation, many approaches would learn a density for the data generation process, but would not necessarily end up with a sampler, even less so one exact with respect to the underlying density. A second issue concerns models: while complex modeling based on neural nets thrives in image or text generation (etc.), less is known about powerful generative models for tabular data. A third problem is the visible chasm on tabular data between training algorithms for supervised learning with remarkable properties (e.g. boosting), and a comparative lack of guarantees when it comes to data generation. In this paper, we tackle the three problems, introducing new tree-based generative models convenient for density modeling and tabular data generation that improve on the modeling capabilities of recent proposals, and a training algorithm which simplifies the training setting of previous approaches and displays boosting-compliant convergence. This algorithm has the convenient property of relying on a supervised training scheme that can be implemented by a few tweaks to the most popular induction scheme for two-class decision trees. Experiments are provided on missing data imputation and on comparing generated data to real data, displaying the quality of the results obtained by our approach, in particular against the state of the art.
[ { "created": "Mon, 7 Aug 2023 14:58:53 GMT", "version": "v1" } ]
2023-08-08
[ [ "Nock", "Richard", "" ], [ "Guillame-Bert", "Mathieu", "" ] ]
Tabular data represents one of the most prevalent forms of data. When it comes to data generation, many approaches would learn a density for the data generation process, but would not necessarily end up with a sampler, even less so one exact with respect to the underlying density. A second issue concerns models: while complex modeling based on neural nets thrives in image or text generation (etc.), less is known about powerful generative models for tabular data. A third problem is the visible chasm on tabular data between training algorithms for supervised learning with remarkable properties (e.g. boosting), and a comparative lack of guarantees when it comes to data generation. In this paper, we tackle the three problems, introducing new tree-based generative models convenient for density modeling and tabular data generation that improve on the modeling capabilities of recent proposals, and a training algorithm which simplifies the training setting of previous approaches and displays boosting-compliant convergence. This algorithm has the convenient property of relying on a supervised training scheme that can be implemented by a few tweaks to the most popular induction scheme for two-class decision trees. Experiments are provided on missing data imputation and on comparing generated data to real data, displaying the quality of the results obtained by our approach, in particular against the state of the art.
2108.09134
Jed Mills
Jed Mills, Jia Hu, Geyong Min, Rui Jin, Siwei Zheng, Jin Wang
Accelerating Federated Learning with a Global Biased Optimiser
null
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
Federated Learning (FL) is a recent development in distributed machine learning that collaboratively trains models without training data leaving client devices, preserving data privacy. In real-world FL, the training set is distributed over clients in a highly non-Independent and Identically Distributed (non-IID) fashion, harming model convergence speed and final performance. To address this challenge, we propose a novel, generalised approach for incorporating adaptive optimisation into FL with the Federated Global Biased Optimiser (FedGBO) algorithm. FedGBO accelerates FL by employing a set of global biased optimiser values during training, reducing 'client-drift' from non-IID data whilst benefiting from adaptive optimisation. We show that in FedGBO, updates to the global model can be reformulated as centralised training using biased gradients and optimiser updates, and apply this framework to prove FedGBO's convergence on nonconvex objectives when using the momentum-SGD (SGDm) optimiser. We also conduct extensive experiments using 4 FL benchmark datasets (CIFAR100, Sent140, FEMNIST, Shakespeare) and 3 popular optimisers (SGDm, RMSProp, Adam) to compare FedGBO against six state-of-the-art FL algorithms. The results demonstrate that FedGBO displays superior or competitive performance across the datasets whilst having low data-upload and computational costs, and provide practical insights into the trade-offs associated with different adaptive-FL algorithms and optimisers.
[ { "created": "Fri, 20 Aug 2021 12:08:44 GMT", "version": "v1" }, { "created": "Sun, 12 Sep 2021 10:38:22 GMT", "version": "v2" }, { "created": "Wed, 5 Oct 2022 21:27:40 GMT", "version": "v3" } ]
2022-10-07
[ [ "Mills", "Jed", "" ], [ "Hu", "Jia", "" ], [ "Min", "Geyong", "" ], [ "Jin", "Rui", "" ], [ "Zheng", "Siwei", "" ], [ "Wang", "Jin", "" ] ]
Federated Learning (FL) is a recent development in distributed machine learning that collaboratively trains models without training data leaving client devices, preserving data privacy. In real-world FL, the training set is distributed over clients in a highly non-Independent and Identically Distributed (non-IID) fashion, harming model convergence speed and final performance. To address this challenge, we propose a novel, generalised approach for incorporating adaptive optimisation into FL with the Federated Global Biased Optimiser (FedGBO) algorithm. FedGBO accelerates FL by employing a set of global biased optimiser values during training, reducing 'client-drift' from non-IID data whilst benefiting from adaptive optimisation. We show that in FedGBO, updates to the global model can be reformulated as centralised training using biased gradients and optimiser updates, and apply this framework to prove FedGBO's convergence on nonconvex objectives when using the momentum-SGD (SGDm) optimiser. We also conduct extensive experiments using 4 FL benchmark datasets (CIFAR100, Sent140, FEMNIST, Shakespeare) and 3 popular optimisers (SGDm, RMSProp, Adam) to compare FedGBO against six state-of-the-art FL algorithms. The results demonstrate that FedGBO displays superior or competitive performance across the datasets whilst having low data-upload and computational costs, and provide practical insights into the trade-offs associated with different adaptive-FL algorithms and optimisers.
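A compressed Python view of the idea in the record above — a single global momentum buffer applied at the server instead of per-client optimizer state. This is a sketch under strong simplifying assumptions (full client participation, plain gradients in place of pseudo-gradients, SGDm only), not the FedGBO algorithm itself:

import numpy as np

def server_round(w, m, client_grads, lr=0.1, beta=0.9):
    # Average the clients' (biased) gradients, then take one global
    # SGD-with-momentum step; the buffer m is shared across all clients.
    g = np.mean(client_grads, axis=0)
    m = beta * m + g
    return w - lr * m, m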
2405.03870
Widad Elouataoui
Widad Elouataoui
AI-Driven Frameworks for Enhancing Data Quality in Big Data Ecosystems: Error Detection, Correction, and Metadata Integration
Doctoral thesis
null
null
null
cs.AI cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The widespread adoption of big data has ushered in a new era of data-driven decision-making, transforming numerous industries and sectors. However, the efficacy of these decisions hinges on the quality of the underlying data. Poor data quality can result in inaccurate analyses and deceptive conclusions. Managing the vast volume, velocity, and variety of data sources presents significant challenges, heightening the importance of addressing big data quality issues. While there has been increased attention from both academia and industry, current approaches often lack comprehensiveness and universality. They tend to focus on limited metrics, neglecting other dimensions of data quality. Moreover, existing methods are often context-specific, limiting their applicability across different domains. There is a clear need for intelligent, automated approaches leveraging artificial intelligence (AI) for advanced data quality corrections. To bridge these gaps, this Ph.D. thesis proposes a novel set of interconnected frameworks aimed at enhancing big data quality comprehensively. Firstly, we introduce new quality metrics and a weighted scoring system for precise data quality assessment. Secondly, we present a generic framework for detecting various quality anomalies using AI models. Thirdly, we propose an innovative framework for correcting detected anomalies through predictive modeling. Additionally, we address metadata quality enhancement within big data ecosystems. These frameworks are rigorously tested on diverse datasets, demonstrating their efficacy in improving big data quality. Finally, the thesis concludes with insights and suggestions for future research directions.
[ { "created": "Mon, 6 May 2024 21:36:45 GMT", "version": "v1" } ]
2024-05-08
[ [ "Elouataoui", "Widad", "" ] ]
The widespread adoption of big data has ushered in a new era of data-driven decision-making, transforming numerous industries and sectors. However, the efficacy of these decisions hinges on the quality of the underlying data. Poor data quality can result in inaccurate analyses and deceptive conclusions. Managing the vast volume, velocity, and variety of data sources presents significant challenges, heightening the importance of addressing big data quality issues. While there has been increased attention from both academia and industry, current approaches often lack comprehensiveness and universality. They tend to focus on limited metrics, neglecting other dimensions of data quality. Moreover, existing methods are often context-specific, limiting their applicability across different domains. There is a clear need for intelligent, automated approaches leveraging artificial intelligence (AI) for advanced data quality corrections. To bridge these gaps, this Ph.D. thesis proposes a novel set of interconnected frameworks aimed at enhancing big data quality comprehensively. Firstly, we introduce new quality metrics and a weighted scoring system for precise data quality assessment. Secondly, we present a generic framework for detecting various quality anomalies using AI models. Thirdly, we propose an innovative framework for correcting detected anomalies through predictive modeling. Additionally, we address metadata quality enhancement within big data ecosystems. These frameworks are rigorously tested on diverse datasets, demonstrating their efficacy in improving big data quality. Finally, the thesis concludes with insights and suggestions for future research directions.
2107.03068
Hajime Taira
Hajime Taira, Koki Onbe, Naoyuki Miyashita, Masatoshi Okutomi
Video-Based Camera Localization Using Anchor View Detection and Recursive 3D Reconstruction
This paper have been accepted and will be appeared in the proceedings of 17th International Conference on Machine Vision Applications (MVA2021)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce a new camera localization strategy designed for image sequences captured in challenging industrial situations such as industrial parts inspection. To deal with peculiar appearances that hurt the standard 3D reconstruction pipeline, we exploit prior knowledge of the scene by selecting key frames in the sequence (called anchors) which are roughly connected to a certain location. Our method then seeks the location of each frame in time order, while recursively updating an augmented 3D model which can provide the current camera location and the surrounding 3D structure. In an experiment on a practical industrial situation, our method localizes over 99% of the frames in the input sequence, whereas standard localization methods fail to reconstruct a complete camera trajectory.
[ { "created": "Wed, 7 Jul 2021 08:13:33 GMT", "version": "v1" } ]
2021-07-08
[ [ "Taira", "Hajime", "" ], [ "Onbe", "Koki", "" ], [ "Miyashita", "Naoyuki", "" ], [ "Okutomi", "Masatoshi", "" ] ]
In this paper we introduce a new camera localization strategy designed for image sequences captured in challenging industrial situations such as industrial parts inspection. To deal with peculiar appearances that hurt the standard 3D reconstruction pipeline, we exploit prior knowledge of the scene by selecting key frames in the sequence (called anchors) which are roughly connected to a certain location. Our method then seeks the location of each frame in time order, while recursively updating an augmented 3D model which can provide the current camera location and the surrounding 3D structure. In an experiment on a practical industrial situation, our method localizes over 99% of the frames in the input sequence, whereas standard localization methods fail to reconstruct a complete camera trajectory.
2407.11867
Zikui Cai
Zikui Cai, Yaoteng Tan, M. Salman Asif
Single Layer Single Gradient Unlearning
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Machine unlearning methods seek to revise pretrained models such that effects of certain training samples can be removed. In addition to effective erasure, low computational cost and general utility retention are also highly desirable. Existing unlearning methods usually involve iterative updates over the model parameters, which incurs a high computational cost. In this work, we propose an efficient method that only requires a one-time gradient computation, with which we modify only a single layer of model parameters. Specifically, we first identify a small number of model layers that lie on the Pareto front of high forget importance and low retain influence as critical layers. Then we search for a suitable step size and take a step along the gradient direction of a single critical layer while keeping other layers frozen. This method is highly modular and can be used to unlearn multiple concepts simultaneously in a controllable manner. We demonstrate the effectiveness and efficiency of this method on various models including CLIP, stable diffusion, and VLMs, surpassing other state-of-the-art methods.
[ { "created": "Tue, 16 Jul 2024 15:52:36 GMT", "version": "v1" } ]
2024-07-17
[ [ "Cai", "Zikui", "" ], [ "Tan", "Yaoteng", "" ], [ "Asif", "M. Salman", "" ] ]
Machine unlearning methods seek to revise pretrained models such that effects of certain training samples can be removed. In addition to effective erasure, low computational cost and general utility retention are also highly desirable. Existing unlearning methods usually involve iterative updates over the model parameters, which incurs a high computational cost. In this work, we propose an efficient method that only requires a one-time gradient computation, with which we modify only a single layer of model parameters. Specifically, we first identify a small number of model layers that lie on the Pareto front of high forget importance and low retain influence as critical layers. Then we search for a suitable step size and take a step along the gradient direction of a single critical layer while keeping other layers frozen. This method is highly modular and can be used to unlearn multiple concepts simultaneously in a controllable manner. We demonstrate the effectiveness and efficiency of this method on various models including CLIP, stable diffusion, and VLMs, surpassing other state-of-the-art methods.
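The procedure in the record above admits a very small Python sketch: one gradient of a forget objective, one step, one layer. Critical-layer selection and the step-size search are elided; forget_objective and the gradient-ascent sign convention are assumptions, not the paper's exact recipe:

import torch

def single_layer_unlearn(layer, forget_objective, step_size):
    # One-time gradient computation on the forget objective, restricted to a
    # single critical layer; all other parameters stay frozen.
    (grad,) = torch.autograd.grad(forget_objective(), [layer.weight])
    with torch.no_grad():
        layer.weight += step_size * grad  # ascend the forget objective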
1703.09876
Hongge Chen
Hongge Chen, Duane Boning and Zheng Zhang
Efficient Spatial Variation Characterization via Matrix Completion
null
null
null
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a novel method to estimate and characterize spatial variations on dies or wafers. This new technique exploits recent developments in matrix completion, enabling estimation of spatial variation across wafers or dies with a small number of randomly picked sampling points while still achieving fairly high accuracy. This new approach can be easily generalized, including for estimation of mixed spatial and structure or device type information.
[ { "created": "Wed, 29 Mar 2017 03:48:14 GMT", "version": "v1" } ]
2017-03-30
[ [ "Chen", "Hongge", "" ], [ "Boning", "Duane", "" ], [ "Zhang", "Zheng", "" ] ]
In this paper, we propose a novel method to estimate and characterize spatial variations on dies or wafers. This new technique exploits recent developments in matrix completion, enabling estimation of spatial variation across wafers or dies with a small number of randomly picked sampling points while still achieving fairly high accuracy. This new approach can be easily generalized, including for estimation of mixed spatial and structure or device type information.
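A generic low-rank completion baseline conveys the mechanics of the record above — reconstructing a whole wafer map from a few sampled sites. This Python sketch is iterative truncated-SVD imputation, a standard heuristic, not the authors' exact estimator:

import numpy as np

def complete(observed, mask, rank=5, iters=100):
    # Alternate between a rank-constrained SVD of the current estimate and
    # re-imposing the measured entries (mask is True where sampled).
    x = np.where(mask, observed, 0.0)
    for _ in range(iters):
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        x = np.where(mask, observed, (u[:, :rank] * s[:rank]) @ vt[:rank])
    return x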
1903.00719
Lukas Pfannschmidt
Lukas Pfannschmidt, Christina G\"opfert, Ursula Neumann, Dominik Heider, Barbara Hammer
FRI -- Feature Relevance Intervals for Interpretable and Interactive Data Exploration
Addition of IEEE copyright notice. Accepted for CIBCB 2019 (https://cibcb2019.icas.xyz/)
null
10.1109/CIBCB.2019.8791489
null
cs.LG cs.IR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing feature selection methods are insufficient for analytic purposes as soon as high-dimensional data or redundant sensor signals are dealt with, since features can be selected due to spurious effects or correlations rather than causal effects. To support the finding of causal features in biomedical experiments, we hereby present FRI, an open source Python library that can be used to identify all-relevant variables in linear classification and (ordinal) regression problems. Using the recently proposed feature relevance method, FRI is able to provide the basis for further general experimentation and, in particular, can facilitate the search for alternative biomarkers. It can be used in an interactive context, by providing model manipulation and visualization methods, or in a batch process as a filter method.
[ { "created": "Sat, 2 Mar 2019 15:16:15 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2019 17:21:03 GMT", "version": "v2" }, { "created": "Fri, 21 Jun 2019 14:41:04 GMT", "version": "v3" } ]
2019-08-13
[ [ "Pfannschmidt", "Lukas", "" ], [ "Göpfert", "Christina", "" ], [ "Neumann", "Ursula", "" ], [ "Heider", "Dominik", "" ], [ "Hammer", "Barbara", "" ] ]
Most existing feature selection methods are insufficient for analytic purposes as soon as high-dimensional data or redundant sensor signals are dealt with, since features can be selected due to spurious effects or correlations rather than causal effects. To support the finding of causal features in biomedical experiments, we hereby present FRI, an open source Python library that can be used to identify all-relevant variables in linear classification and (ordinal) regression problems. Using the recently proposed feature relevance method, FRI is able to provide the basis for further general experimentation and, in particular, can facilitate the search for alternative biomarkers. It can be used in an interactive context, by providing model manipulation and visualization methods, or in a batch process as a filter method.
2112.05000
Sebastian U. Stich
Yehao Liu and Matteo Pagliardini and Tatjana Chavdarova and Sebastian U. Stich
The Peril of Popular Deep Learning Uncertainty Estimation Methods
Presented at the Bayesian Deep Learning Workshop at NeurIPS 2021
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Uncertainty estimation (UE) techniques -- such as the Gaussian process (GP), Bayesian neural networks (BNN), Monte Carlo dropout (MCDropout) -- aim to improve the interpretability of machine learning models by assigning an estimated uncertainty value to each of their prediction outputs. However, since unreliable uncertainty estimates can have fatal consequences in practice, this paper analyzes the above techniques. Firstly, we show that GP methods always yield high uncertainty estimates on out of distribution (OOD) data. Secondly, we show on a 2D toy example that both BNNs and MCDropout do not give high uncertainty estimates on OOD samples. Finally, we show empirically that this pitfall of BNNs and MCDropout holds on real world datasets as well. Our insights (i) raise awareness for the more cautious use of currently popular UE methods in Deep Learning, (ii) encourage the development of UE methods that approximate GP-based methods -- instead of BNNs and MCDropout, and (iii) provide empirical setups that can be used for verifying the OOD performance of any other UE method. The source code is available at https://github.com/epfml/uncertainity-estimation.
[ { "created": "Thu, 9 Dec 2021 15:47:21 GMT", "version": "v1" } ]
2021-12-10
[ [ "Liu", "Yehao", "" ], [ "Pagliardini", "Matteo", "" ], [ "Chavdarova", "Tatjana", "" ], [ "Stich", "Sebastian U.", "" ] ]
Uncertainty estimation (UE) techniques -- such as the Gaussian process (GP), Bayesian neural networks (BNN), Monte Carlo dropout (MCDropout) -- aim to improve the interpretability of machine learning models by assigning an estimated uncertainty value to each of their prediction outputs. However, since unreliable uncertainty estimates can have fatal consequences in practice, this paper analyzes the above techniques. Firstly, we show that GP methods always yield high uncertainty estimates on out of distribution (OOD) data. Secondly, we show on a 2D toy example that both BNNs and MCDropout do not give high uncertainty estimates on OOD samples. Finally, we show empirically that this pitfall of BNNs and MCDropout holds on real world datasets as well. Our insights (i) raise awareness for the more cautious use of currently popular UE methods in Deep Learning, (ii) encourage the development of UE methods that approximate GP-based methods -- instead of BNNs and MCDropout, and (iii) provide empirical setups that can be used for verifying the OOD performance of any other UE method. The source code is available at https://github.com/epfml/uncertainity-estimation.
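The first observation in the record above is easy to reproduce in Python with an off-the-shelf GP: far from the training inputs, the predictive standard deviation reverts toward the prior. The kernel defaults and toy data below are arbitrary choices, not the paper's experimental setup:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

x_train = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
y_train = np.sin(3.0 * x_train).ravel()
gp = GaussianProcessRegressor().fit(x_train, y_train)

_, std_in = gp.predict(np.array([[0.0]]), return_std=True)   # in distribution
_, std_out = gp.predict(np.array([[5.0]]), return_std=True)  # far OOD
print(std_in, std_out)  # std_out is much larger: the GP flags the OOD input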
1810.12894
Yuri Burda
Yuri Burda, Harrison Edwards, Amos Storkey, Oleg Klimov
Exploration by Random Network Distillation
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.
[ { "created": "Tue, 30 Oct 2018 17:44:42 GMT", "version": "v1" } ]
2018-10-31
[ [ "Burda", "Yuri", "" ], [ "Edwards", "Harrison", "" ], [ "Storkey", "Amos", "" ], [ "Klimov", "Oleg", "" ] ]
We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.
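The bonus described in the record above takes only a few lines; this Python sketch uses arbitrary fully connected network sizes, whereas the paper works with convolutional networks over Atari frames:

import torch
import torch.nn as nn

class RND(nn.Module):
    def __init__(self, obs_dim, feat_dim=64):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU(),
                                    nn.Linear(feat_dim, feat_dim))
        for p in self.target.parameters():  # fixed, randomly initialized
            p.requires_grad_(False)
        self.predictor = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU(),
                                       nn.Linear(feat_dim, feat_dim))

    def bonus(self, obs):
        # Intrinsic reward: the predictor's error on the frozen random
        # features; novel observations are poorly predicted, hence rewarded.
        return (self.predictor(obs) - self.target(obs)).pow(2).mean(dim=-1)

# Training minimizes rnd.bonus(obs).mean() over the predictor's parameters only.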
1809.00926
Charith Perera
Mahmoud Barhamgi, Charith Perera, Chirine Ghedira, Djamal Benslimane
User-centric Privacy Engineering for the Internet of Things
12 Pages
IEEE Cloud Computing, 2018
null
null
cs.NI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
User privacy concerns are widely regarded as a key obstacle to the success of modern smart cyber-physical systems. In this paper, we analyse, through an example, some of the requirements that future data collection architectures of these systems should implement to provide effective privacy protection for users. Then, we give an example of how these requirements can be implemented in a smart home scenario. Our example architecture allows the user to balance the privacy risks with the potential benefits and take a practical decision determining the extent of the sharing. Based on this example architecture, we identify a number of challenges that must be addressed by future data processing systems in order to achieve effective privacy management for smart cyber-physical systems.
[ { "created": "Tue, 4 Sep 2018 12:53:59 GMT", "version": "v1" } ]
2018-09-10
[ [ "Barhamgi", "Mahmoud", "" ], [ "Perera", "Charith", "" ], [ "Ghedira", "Chirine", "" ], [ "Benslimane", "Djamal", "" ] ]
User privacy concerns are widely regarded as a key obstacle to the success of modern smart cyber-physical systems. In this paper, we analyse, through an example, some of the requirements that future data collection architectures of these systems should implement to provide effective privacy protection for users. Then, we give an example of how these requirements can be implemented in a smart home scenario. Our example architecture allows the user to balance the privacy risks with the potential benefits and take a practical decision determining the extent of the sharing. Based on this example architecture, we identify a number of challenges that must be addressed by future data processing systems in order to achieve effective privacy management for smart cyber-physical systems.
2012.11011
Theodore Omtzigt
E. Theodore L. Omtzigt, Peter Gottschling, Mark Seligman, William Zorn
Universal Numbers Library: design and implementation of a high-performance reproducible number systems library
7 pages, 4 figures
null
null
null
cs.CE cs.MS
http://creativecommons.org/licenses/by/4.0/
With the proliferation of embedded systems requiring intelligent behavior, custom number systems that optimize performance per Watt of the entire system become essential components of successful commercial products. We present the Universal Number Library, a high-performance number systems library that includes arbitrary integer, decimal, fixed-point and floating-point types, and introduces two tapered floating-point types, posit and valid, that support reproducible arithmetic computation in arbitrary concurrency environments. We discuss the design of the Universal library as a run-time for application development, and as a platform for application-driven hardware validation. The library implementation is described, and educational examples are provided to elucidate the number system properties and to show how specialization is used to yield very high-performance emulation on existing x86, ARM, and POWER processors. We highlight the integration of the library in larger application environments in computational science and engineering to enable multi-precision and adaptive-precision algorithms that improve the performance and efficiency of large-scale and real-time applications. We demonstrate the integration of the Universal library into a high-performance reproducible linear algebra run-time, and conclude with a roadmap of additional functionality as we target new application domains, such as Software Defined Radio, instrumentation, sensor fusion, and model-predictive control.
[ { "created": "Sun, 20 Dec 2020 20:07:57 GMT", "version": "v1" } ]
2020-12-22
[ [ "Omtzigt", "E. Theodore L.", "" ], [ "Gottschling", "Peter", "" ], [ "Seligman", "Mark", "" ], [ "Zorn", "William", "" ] ]
With the proliferation of embedded systems requiring intelligent behavior, custom number systems that optimize performance per Watt of the entire system become essential components of successful commercial products. We present the Universal Number Library, a high-performance number systems library that includes arbitrary integer, decimal, fixed-point and floating-point types, and introduces two tapered floating-point types, posit and valid, that support reproducible arithmetic computation in arbitrary concurrency environments. We discuss the design of the Universal library as a run-time for application development, and as a platform for application-driven hardware validation. The library implementation is described, and educational examples are provided to elucidate the number system properties and to show how specialization is used to yield very high-performance emulation on existing x86, ARM, and POWER processors. We highlight the integration of the library in larger application environments in computational science and engineering to enable multi-precision and adaptive-precision algorithms that improve the performance and efficiency of large-scale and real-time applications. We demonstrate the integration of the Universal library into a high-performance reproducible linear algebra run-time, and conclude with a roadmap of additional functionality as we target new application domains, such as Software Defined Radio, instrumentation, sensor fusion, and model-predictive control.
2009.05413
Michael Neuder
Michael Neuder, Daniel J. Moroz, Rithvik Rao, David C. Parkes
Defending Against Malicious Reorgs in Tezos Proof-of-Stake
To appear in the second ACM conference on Advances in Financial Technology (AFT'20)
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blockchains are intended to be immutable, so an attacker who is able to delete transactions through a chain reorganization (a malicious reorg) can perform a profitable double-spend attack. We study the rate at which an attacker can execute reorgs in the Tezos Proof-of-Stake protocol. As an example, an attacker with 40% of the staking power is able to execute a 20-block malicious reorg at an average rate of once per day, and the attack probability increases super-linearly as the staking power grows beyond 40%. Moreover, an attacker of the Tezos protocol knows in advance when an attack opportunity will arise, and can use this knowledge to arrange transactions to double-spend. We show that in particular cases, the Tezos protocol can be adjusted to protect against deep reorgs. For instance, we demonstrate protocol parameters that reduce the rate of length-20 reorg opportunities for a 40% attacker by two orders of magnitude. We also observe a trade-off between optimizing for robustness to deep reorgs (costly deviations that may be net profitable because they enable double-spends) and robustness to selfish mining (mining deviations that result in typically short reorgs that are profitable even without double-spends). That is, the parameters that optimally protect against one make the other attack easy. Finally, we develop a method that monitors the Tezos blockchain health with respect to malicious reorgs using only publicly available information.
[ { "created": "Fri, 11 Sep 2020 12:58:51 GMT", "version": "v1" } ]
2020-09-14
[ [ "Neuder", "Michael", "" ], [ "Moroz", "Daniel J.", "" ], [ "Rao", "Rithvik", "" ], [ "Parkes", "David C.", "" ] ]
Blockchains are intended to be immutable, so an attacker who is able to delete transactions through a chain reorganization (a malicious reorg) can perform a profitable double-spend attack. We study the rate at which an attacker can execute reorgs in the Tezos Proof-of-Stake protocol. As an example, an attacker with 40% of the staking power is able to execute a 20-block malicious reorg at an average rate of once per day, and the attack probability increases super-linearly as the staking power grows beyond 40%. Moreover, an attacker of the Tezos protocol knows in advance when an attack opportunity will arise, and can use this knowledge to arrange transactions to double-spend. We show that in particular cases, the Tezos protocol can be adjusted to protect against deep reorgs. For instance, we demonstrate protocol parameters that reduce the rate of length-20 reorg opportunities for a 40% attacker by two orders of magnitude. We also observe a trade-off between optimizing for robustness to deep reorgs (costly deviations that may be net profitable because they enable double-spends) and robustness to selfish mining (mining deviations that result in typically short reorgs that are profitable even without double-spends). That is, the parameters that optimally protect against one make the other attack easy. Finally, we develop a method that monitors the Tezos blockchain health with respect to malicious reorgs using only publicly available information.
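The super-linear growth of reorg opportunities with staking power can be illustrated with a toy Monte Carlo model in which the attacker independently wins each block slot with probability equal to its stake. This is a gross simplification of the actual Tezos Emmy+ protocol (no baking priority lists, no endorsements), so the absolute rates are illustrative only; the reorg depth is kept small so the rates are visible at this sample size.

```python
# Toy Monte Carlo: how often does an attacker control `depth` consecutive
# block slots? Winning each slot independently with probability alpha is a
# gross simplification of Emmy+, so only the super-linear growth in alpha
# is the point, not the absolute numbers.
import random

def opportunity_rate(alpha, depth, slots=500_000, seed=0):
    rng = random.Random(seed)
    run = hits = 0
    for _ in range(slots):
        if rng.random() < alpha:      # attacker holds this slot's baking right
            run += 1
            if run >= depth:
                hits += 1             # every slot that completes such a window
        else:
            run = 0
    return hits / slots

for alpha in (0.30, 0.40, 0.45):
    print(alpha, opportunity_rate(alpha, depth=5))
```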
1909.10111
Michael Posa
Yu-Ming Chen and Michael Posa
Optimal Reduced-order Modeling of Bipedal Locomotion
Submitted to ICRA 2020
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art approaches to legged locomotion are widely dependent on the use of models like the linear inverted pendulum (LIP) and the spring-loaded inverted pendulum (SLIP), popular because their simplicity enables a wide array of tools for planning, control, and analysis. However, they inevitably limit the ability to execute complex tasks or agile maneuvers. In this work, we aim to automatically synthesize models that remain low-dimensional but retain the capabilities of the high-dimensional system. For example, if one were to restore a small degree of complexity to LIP, SLIP, or a similar model, our approach discovers the form of that additional complexity which optimizes performance. In this paper, we define a class of reduced-order models and provide an algorithm for optimization within this class. To demonstrate our method, we optimize models for walking at a range of speeds and ground inclines, for both a five-link model and the Cassie bipedal robot.
[ { "created": "Mon, 23 Sep 2019 01:13:11 GMT", "version": "v1" } ]
2019-09-24
[ [ "Chen", "Yu-Ming", "" ], [ "Posa", "Michael", "" ] ]
State-of-the-art approaches to legged locomotion are widely dependent on the use of models like the linear inverted pendulum (LIP) and the spring-loaded inverted pendulum (SLIP), popular because their simplicity enables a wide array of tools for planning, control, and analysis. However, they inevitably limit the ability to execute complex tasks or agile maneuvers. In this work, we aim to automatically synthesize models that remain low-dimensional but retain the capabilities of the high-dimensional system. For example, if one were to restore a small degree of complexity to LIP, SLIP, or a similar model, our approach discovers the form of that additional complexity which optimizes performance. In this paper, we define a class of reduced-order models and provide an algorithm for optimization within this class. To demonstrate our method, we optimize models for walking at a range of speeds and ground inclines, for both a five-link model and the Cassie bipedal robot.
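For context, the LIP mentioned above reduces the walking robot to a point mass at constant height h whose horizontal dynamics are xdd = (g/h)(x - p) for stance-foot position p. A minimal simulation sketch follows; the parameter values are arbitrary.

```python
# Minimal LIP rollout: point-mass CoM at constant height h, stance foot at p.
# Values are arbitrary; the point is that the CoM state diverges from the
# foot, which is what step placement / reduced-order planning must regulate.
g, h, dt = 9.81, 0.9, 1e-3
x, xd, p = 0.05, 0.0, 0.0            # CoM offset (m), velocity, foot position

for _ in range(500):                 # 0.5 s of single support
    xdd = (g / h) * (x - p)          # LIP dynamics: xdd = (g/h)(x - p)
    xd += xdd * dt
    x += xd * dt

print(f"x(0) = 0.05 m  ->  x(0.5 s) = {x:.3f} m")
```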
2206.11302
Brendon Boldt
Brendon Boldt, David Mortensen
Recommendations for Systematic Research on Emergent Language
10 pages
null
null
null
cs.MA cs.CL
http://creativecommons.org/licenses/by/4.0/
Emergent language is unique among fields within the discipline of machine learning for its open-endedness, not obviously presenting well-defined problems to be solved. As a result, the current research in the field has largely been exploratory: focusing on establishing new problems, techniques, and phenomena. Yet after these problems have been established, subsequent progress requires research which can measurably demonstrate how it improves on prior approaches. This type of research is what we call systematic research; in this paper, we illustrate this mode of research specifically for emergent language. We first identify the overarching goals of emergent language research, categorizing them as either science or engineering. Using this distinction, we present core methodological elements of science and engineering, analyze their role in current emergent language research, and recommend how to apply these elements.
[ { "created": "Wed, 22 Jun 2022 18:10:44 GMT", "version": "v1" } ]
2022-06-24
[ [ "Boldt", "Brendon", "" ], [ "Mortensen", "David", "" ] ]
Emergent language is unique among fields within the discipline of machine learning for its open-endedness, not obviously presenting well-defined problems to be solved. As a result, the current research in the field has largely been exploratory: focusing on establishing new problems, techniques, and phenomena. Yet after these problems have been established, subsequent progress requires research which can measurably demonstrate how it improves on prior approaches. This type of research is what we call systematic research; in this paper, we illustrate this mode of research specifically for emergent language. We first identify the overarching goals of emergent language research, categorizing them as either science or engineering. Using this distinction, we present core methodological elements of science and engineering, analyze their role in current emergent language research, and recommend how to apply these elements.
1306.1947
Wan Fokkink
Wan Fokkink, Dick Grune, Brinio Hond, Peter Rutgers
Detecting Useless Transitions in Pushdown Automata
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pushdown automata may contain transitions that are never used in any accepting run of the automaton. We present an algorithm for detecting such useless transitions. A finite automaton that captures the possible stack contents during runs of the pushdown automaton is first constructed in a forward procedure to determine which transitions are reachable, and is then employed in a backward procedure to determine which of these transitions can lead to a final state.
[ { "created": "Sat, 8 Jun 2013 19:10:45 GMT", "version": "v1" } ]
2013-06-11
[ [ "Fokkink", "Wan", "" ], [ "Grune", "Dick", "" ], [ "Hond", "Brinio", "" ], [ "Rutgers", "Peter", "" ] ]
Pushdown automata may contain transitions that are never used in any accepting run of the automaton. We present an algorithm for detecting such useless transitions. A finite automaton that captures the possible stack contents during runs of the pushdown automaton is first constructed in a forward procedure to determine which transitions are reachable, and is then employed in a backward procedure to determine which of these transitions can lead to a final state.
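The paper's construction targets pushdown automata; as a simpler illustration of the same forward/backward idea, the sketch below detects useless transitions in a finite automaton, where a transition is useful iff its source is forward-reachable from the start state and its target is backward-reachable from a final state.

```python
# Forward/backward reachability on a finite automaton (a simpler analogue of
# the paper's PDA algorithm): keep a transition only if its source is
# reachable from the start and its target can reach a final state.
def useful_transitions(transitions, start, finals):
    # transitions: set of (src, symbol, dst)
    fwd, changed = {start}, True
    while changed:
        changed = False
        for s, _, d in transitions:
            if s in fwd and d not in fwd:
                fwd.add(d); changed = True
    bwd, changed = set(finals), True
    while changed:
        changed = False
        for s, _, d in transitions:
            if d in bwd and s not in bwd:
                bwd.add(s); changed = True
    return {t for t in transitions if t[0] in fwd and t[2] in bwd}

T = {(0, 'a', 1), (1, 'b', 2), (1, 'c', 3), (3, 'd', 3)}  # state 3 is a trap
print(useful_transitions(T, start=0, finals={2}))  # drops the 'c' and 'd' arcs
```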
1803.10654
Najoua Essoukri Ben Amara
Ines Baccouche, Sabeur Jemmali, Asma Mlayah, Bilal Manai, Najoua Essoukri Ben Amara
Implementation of an Improved Coulomb-Counting Algorithm Based on a Piecewise SOC-OCV Relationship for SOC Estimation of Li-IonBattery
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Considering the expanding use of embedded devices equipped with rechargeable batteries, especially Li-ion batteries that have higher power and energy density, the battery management system is becoming increasingly important. In fact, the estimation accuracy of the amount of remaining charge is critical, as it affects the device's operational autonomy. Therefore, the battery State-Of-Charge (SOC) is defined to indicate its estimated available charge. In this paper, a solution is proposed for Li-ion battery SOC estimation based on an enhanced Coulomb-counting algorithm to be implemented for multimedia applications. However, the Coulomb-counting algorithm suffers from cumulative errors due to the initial SOC and measurement uncertainties; therefore, to overcome these limitations, we use the Open-Circuit Voltage (OCV), thus obtaining a piecewise linear SOC-OCV relationship and performing periodic re-calibration of the battery capacity. This solution is implemented and validated on a hardware platform based on the PIC18F MCU family. The measured results correlate with the theoretical ones and show a reliable estimation, with an error of less than 2%.
[ { "created": "Tue, 27 Mar 2018 16:55:59 GMT", "version": "v1" } ]
2018-03-29
[ [ "Baccouche", "Ines", "" ], [ "Jemmali", "Sabeur", "" ], [ "Mlayah", "Asma", "" ], [ "Manai", "Bilal", "" ], [ "Amara", "Najoua Essoukri Ben", "" ] ]
Considering the expanding use of embedded devices equipped with rechargeable batteries, especially Li-ion batteries that have higher power and energy density, the battery management system is becoming increasingly important. In fact, the estimation accuracy of the amount of remaining charge is critical, as it affects the device's operational autonomy. Therefore, the battery State-Of-Charge (SOC) is defined to indicate its estimated available charge. In this paper, a solution is proposed for Li-ion battery SOC estimation based on an enhanced Coulomb-counting algorithm to be implemented for multimedia applications. However, the Coulomb-counting algorithm suffers from cumulative errors due to the initial SOC and measurement uncertainties; therefore, to overcome these limitations, we use the Open-Circuit Voltage (OCV), thus obtaining a piecewise linear SOC-OCV relationship and performing periodic re-calibration of the battery capacity. This solution is implemented and validated on a hardware platform based on the PIC18F MCU family. The measured results correlate with the theoretical ones and show a reliable estimation, with an error of less than 2%.
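A sketch of the enhanced Coulomb-counting loop with piecewise-linear SOC-OCV re-calibration is shown below. The breakpoint values are illustrative placeholders, not the paper's measured Li-ion curve.

```python
# Improved Coulomb counting: integrate current for SOC, then periodically
# re-calibrate from a rested-cell OCV reading via a piecewise-linear SOC-OCV
# lookup. Breakpoints are assumed values for illustration.
import numpy as np

OCV_PTS = np.array([3.0, 3.4, 3.6, 3.75, 3.9, 4.2])   # volts (assumed)
SOC_PTS = np.array([0.0, 0.1, 0.35, 0.6, 0.85, 1.0])  # fraction

def soc_from_ocv(v_rest):
    """Invert the piecewise-linear SOC-OCV relationship at a rest voltage."""
    return float(np.interp(v_rest, OCV_PTS, SOC_PTS))

def coulomb_step(soc, current_a, dt_s, capacity_ah):
    """One integration step; discharge current is positive."""
    return soc - current_a * dt_s / (capacity_ah * 3600.0)

soc = soc_from_ocv(3.8)                 # initial SOC from a rested cell
for _ in range(3600):                   # 1 h at 0.5 A on a 2.0 Ah cell
    soc = coulomb_step(soc, 0.5, 1.0, 2.0)
soc = soc_from_ocv(3.52)                # periodic OCV re-calibration
print(round(soc, 3))
```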
2010.02029
Quan Zhang
Quan Zhang, Huangjie Zheng, Mingyuan Zhou
MCMC-Interactive Variational Inference
25 pages, 7 figures, 3 tables
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Leveraging well-established MCMC strategies, we propose MCMC-interactive variational inference (MIVI) to not only estimate the posterior in a time constrained manner, but also facilitate the design of MCMC transitions. Constructing a variational distribution followed by a short Markov chain that has parameters to learn, MIVI takes advantage of the complementary properties of variational inference and MCMC to encourage mutual improvement. On one hand, with the variational distribution locating high posterior density regions, the Markov chain is optimized within the variational inference framework to efficiently target the posterior despite a small number of transitions. On the other hand, the optimized Markov chain with considerable flexibility guides the variational distribution towards the posterior and alleviates its underestimation of uncertainty. Furthermore, we prove the optimized Markov chain in MIVI admits extrapolation, which means its marginal distribution gets closer to the true posterior as the chain grows. Therefore, the Markov chain can be used separately as an efficient MCMC scheme. Experiments show that MIVI not only accurately and efficiently approximates the posteriors but also facilitates designs of stochastic gradient MCMC and Gibbs sampling transitions.
[ { "created": "Fri, 2 Oct 2020 17:43:20 GMT", "version": "v1" }, { "created": "Mon, 12 Dec 2022 20:07:54 GMT", "version": "v2" } ]
2022-12-14
[ [ "Zhang", "Quan", "" ], [ "Zheng", "Huangjie", "" ], [ "Zhou", "Mingyuan", "" ] ]
Leveraging well-established MCMC strategies, we propose MCMC-interactive variational inference (MIVI) to not only estimate the posterior in a time constrained manner, but also facilitate the design of MCMC transitions. Constructing a variational distribution followed by a short Markov chain that has parameters to learn, MIVI takes advantage of the complementary properties of variational inference and MCMC to encourage mutual improvement. On one hand, with the variational distribution locating high posterior density regions, the Markov chain is optimized within the variational inference framework to efficiently target the posterior despite a small number of transitions. On the other hand, the optimized Markov chain with considerable flexibility guides the variational distribution towards the posterior and alleviates its underestimation of uncertainty. Furthermore, we prove the optimized Markov chain in MIVI admits extrapolation, which means its marginal distribution gets closer to the true posterior as the chain grows. Therefore, the Markov chain can be used separately as an efficient MCMC scheme. Experiments show that MIVI not only accurately and efficiently approximates the posteriors but also facilitates designs of stochastic gradient MCMC and Gibbs sampling transitions.
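A toy version of the sampling path, a Gaussian variational proposal refined by a short Metropolis chain, is sketched below; MIVI additionally learns the variational parameters and the transition jointly, which this sketch omits.

```python
# Draw from a variational q, then run a few Metropolis steps targeting the
# unnormalized posterior. A toy of the "q followed by a short Markov chain"
# construction; the joint optimization in MIVI is not shown.
import numpy as np

rng = np.random.default_rng(0)
log_post = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2   # unnormalized target

def q_then_short_chain(mu, sigma, n_steps=5, step=0.3):
    x = rng.normal(mu, sigma)                        # sample from q
    for _ in range(n_steps):                         # short MH refinement
        x_new = x + rng.normal(0.0, step)
        if np.log(rng.uniform()) < log_post(x_new) - log_post(x):
            x = x_new
    return x

draws = np.array([q_then_short_chain(mu=0.0, sigma=1.0) for _ in range(5000)])
print(draws.mean(), draws.std())   # pulled from q's mean 0 toward target mean 2
```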
1102.0850
Zoltan Esik
Zoltan Esik
Scattered context-free linear orderings
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that it is decidable in exponential time whether the lexicographic ordering of a context-free language is scattered, or a well-ordering.
[ { "created": "Fri, 4 Feb 2011 08:13:59 GMT", "version": "v1" }, { "created": "Wed, 23 Feb 2011 03:51:12 GMT", "version": "v2" } ]
2015-03-18
[ [ "Esik", "Zoltan", "" ] ]
We show that it is decidable in exponential time whether the lexicographic ordering of a context-free language is scattered, or a well-ordering.
2406.04290
Trevor E. Carlson
Ali Hajiabadi, Trevor E. Carlson
Providing High-Performance Execution with a Sequential Contract for Cryptographic Programs
17 pages, 7 figures, 4 tables
null
null
null
cs.CR cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Constant-time programming is a widely deployed approach to harden cryptographic programs against side channel attacks. However, modern processors violate the underlying assumptions of constant-time policies by speculatively executing unintended paths of the program. In this work, we propose Cassandra, a novel hardware-software mechanism to protect constant-time cryptographic code against speculative control flow based attacks. Cassandra explores the radical design point of disabling the branch predictor and recording-and-replaying sequential control flow of the program. Two key insights that enable our design are that (1) the sequential control flow of a constant-time program is constant over different runs, and (2) cryptographic programs are highly looped and their control flow patterns repeat in a highly compressible way. These insights allow us to perform an offline branch analysis that significantly compresses control flow traces. We add a small component to a typical processor design, the Branch Trace Unit, to store compressed traces and determine fetch redirections according to the sequential model of the program. Moreover, we provide a formal security analysis and prove that our methodology adheres to a strong security contract by design. Despite providing a higher security guarantee, Cassandra counter-intuitively improves performance by 1.77% by eliminating branch misprediction penalties.
[ { "created": "Thu, 6 Jun 2024 17:34:48 GMT", "version": "v1" } ]
2024-06-07
[ [ "Hajiabadi", "Ali", "" ], [ "Carlson", "Trevor E.", "" ] ]
Constant-time programming is a widely deployed approach to harden cryptographic programs against side channel attacks. However, modern processors violate the underlying assumptions of constant-time policies by speculatively executing unintended paths of the program. In this work, we propose Cassandra, a novel hardware-software mechanism to protect constant-time cryptographic code against speculative control flow based attacks. Cassandra explores the radical design point of disabling the branch predictor and recording-and-replaying sequential control flow of the program. Two key insights that enable our design are that (1) the sequential control flow of a constant-time program is constant over different runs, and (2) cryptographic programs are highly looped and their control flow patterns repeat in a highly compressible way. These insights allow us to perform an offline branch analysis that significantly compresses control flow traces. We add a small component to a typical processor design, the Branch Trace Unit, to store compressed traces and determine fetch redirections according to the sequential model of the program. Moreover, we provide a formal security analysis and prove that our methodology adheres to a strong security contract by design. Despite providing a higher security guarantee, Cassandra counter-intuitively improves performance by 1.77% by eliminating branch misprediction penalties.
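The compressibility insight can be made concrete with run-length encoding of a taken/not-taken branch trace: a loop that iterates N times contributes one (outcome, N) pair instead of N entries. The sketch below is a conceptual illustration, not the Branch Trace Unit's actual encoding.

```python
# Run-length encode a branch outcome trace: highly looped cryptographic
# kernels produce long constant runs, so the trace shrinks dramatically.
from itertools import groupby

def rle(trace):
    return [(bit, sum(1 for _ in grp)) for bit, grp in groupby(trace)]

# e.g., a 1000-iteration loop (taken), its exit (not taken), repeated 3 times
trace = ([1] * 1000 + [0]) * 3
compressed = rle(trace)
print(len(trace), "->", len(compressed), compressed)
```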
2311.18085
Shubham Gandhi
Shubham Gandhi, Om Khare, Mihika Dravid, Mihika Sanghvi, Sunil Mane, Aadesh Gajaralwar, Saloni Gandhi
Leveraging a Randomized Key Matrix to Enhance the Security of Symmetric Substitution Ciphers
In Proceedings of the 10th IEEE Asia-Pacific Conference on Computer Science and Data Engineering 2023 (CSDE)
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
An innovative strategy to enhance the security of symmetric substitution ciphers is presented, through the implementation of a randomized key matrix suitable for various file formats, including but not limited to binary and text files. Despite their historical relevance, symmetric substitution ciphers have been limited by vulnerabilities to cryptanalytic methods like frequency analysis and known plaintext attacks. The aim of our research is to mitigate these vulnerabilities by employing a polyalphabetic substitution strategy that incorporates a distinct randomized key matrix. This matrix plays a pivotal role in generating a unique random key, comprising uppercase and lowercase letters, digits, and special characters, to derive the corresponding ciphertext. The effectiveness of the proposed methodology in enhancing the security of conventional substitution methods for file encryption and decryption is supported by comprehensive testing and analysis, which encompass computational speed, frequency analysis, keyspace examination, Kasiski test, entropy analysis, and the utilization of a large language model.
[ { "created": "Wed, 29 Nov 2023 21:13:38 GMT", "version": "v1" } ]
2023-12-01
[ [ "Gandhi", "Shubham", "" ], [ "Khare", "Om", "" ], [ "Dravid", "Mihika", "" ], [ "Sanghvi", "Mihika", "" ], [ "Mane", "Sunil", "" ], [ "Gajaralwar", "Aadesh", "" ], [ "Gandhi", "Saloni", "" ] ]
An innovative strategy to enhance the security of symmetric substitution ciphers is presented, through the implementation of a randomized key matrix suitable for various file formats, including but not limited to binary and text files. Despite their historical relevance, symmetric substitution ciphers have been limited by vulnerabilities to cryptanalytic methods like frequency analysis and known plaintext attacks. The aim of our research is to mitigate these vulnerabilities by employing a polyalphabetic substitution strategy that incorporates a distinct randomized key matrix. This matrix plays a pivotal role in generating a unique random key, comprising uppercase and lowercase letters, digits, and special characters, to derive the corresponding ciphertext. The effectiveness of the proposed methodology in enhancing the security of conventional substitution methods for file encryption and decryption is supported by comprehensive testing and analysis, which encompass computational speed, frequency analysis, keyspace examination, Kasiski test, entropy analysis, and the utilization of a large language model.
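A toy polyalphabetic scheme in the spirit described, where each plaintext position is enciphered with a different shuffled alphabet (a row of a randomized key matrix), is sketched below; it illustrates the general idea rather than the paper's exact construction.

```python
# Each position i uses row i mod n_rows of a randomized key matrix as its
# substitution alphabet, so repeated plaintext characters map to different
# ciphertext characters -- defeating single-alphabet frequency analysis.
import random
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def make_key_matrix(n_rows, seed):
    rng = random.Random(seed)
    return [rng.sample(ALPHABET, len(ALPHABET)) for _ in range(n_rows)]

def encrypt(plaintext, matrix):
    return "".join(
        matrix[i % len(matrix)][ALPHABET.index(c)] if c in ALPHABET else c
        for i, c in enumerate(plaintext))

def decrypt(ciphertext, matrix):
    return "".join(
        ALPHABET[matrix[i % len(matrix)].index(c)] if c in ALPHABET else c
        for i, c in enumerate(ciphertext))

key = make_key_matrix(16, seed=42)
ct = encrypt("attack at dawn!", key)
print(ct, "->", decrypt(ct, key))
```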
1610.09064
Himabindu Lakkaraju
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, Eric Horvitz
Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration
To appear in AAAI 2017; Presented at NIPS Workshop on Reliability in ML, 2016
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predictive models deployed in the real world may assign incorrect labels to instances with high confidence. Such errors or unknown unknowns are rooted in model incompleteness, and typically arise because of the mismatch between training data and the cases encountered at test time. As the models are blind to such errors, input from an oracle is needed to identify these failures. In this paper, we formulate and address the problem of informed discovery of unknown unknowns of any given predictive model where unknown unknowns occur due to systematic biases in the training data. We propose a model-agnostic methodology which uses feedback from an oracle to both identify unknown unknowns and to intelligently guide the discovery. We employ a two-phase approach which first organizes the data into multiple partitions based on the feature similarity of instances and the confidence scores assigned by the predictive model, and then utilizes an explore-exploit strategy for discovering unknown unknowns across these partitions. We demonstrate the efficacy of our framework by varying the underlying causes of unknown unknowns across various applications. To the best of our knowledge, this paper presents the first algorithmic approach to the problem of discovering unknown unknowns of predictive models.
[ { "created": "Fri, 28 Oct 2016 02:55:14 GMT", "version": "v1" }, { "created": "Tue, 6 Dec 2016 03:01:21 GMT", "version": "v2" }, { "created": "Sat, 10 Dec 2016 06:02:38 GMT", "version": "v3" } ]
2016-12-13
[ [ "Lakkaraju", "Himabindu", "" ], [ "Kamar", "Ece", "" ], [ "Caruana", "Rich", "" ], [ "Horvitz", "Eric", "" ] ]
Predictive models deployed in the real world may assign incorrect labels to instances with high confidence. Such errors or unknown unknowns are rooted in model incompleteness, and typically arise because of the mismatch between training data and the cases encountered at test time. As the models are blind to such errors, input from an oracle is needed to identify these failures. In this paper, we formulate and address the problem of informed discovery of unknown unknowns of any given predictive model where unknown unknowns occur due to systematic biases in the training data. We propose a model-agnostic methodology which uses feedback from an oracle to both identify unknown unknowns and to intelligently guide the discovery. We employ a two-phase approach which first organizes the data into multiple partitions based on the feature similarity of instances and the confidence scores assigned by the predictive model, and then utilizes an explore-exploit strategy for discovering unknown unknowns across these partitions. We demonstrate the efficacy of our framework by varying the underlying causes of unknown unknowns across various applications. To the best of our knowledge, this paper presents the first algorithmic approach to the problem of discovering unknown unknowns of predictive models.
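The explore-exploit phase can be sketched as a bandit over partitions: each oracle query of a partition yields reward 1 when a confidently-but-wrongly labeled instance (an unknown unknown) is found, and a UCB rule steers queries toward error-dense partitions. The error rates below are synthetic stand-ins for a real model and oracle.

```python
# UCB over data partitions: the "arm" is a partition, the reward is finding
# an unknown unknown there. Synthetic error rates replace the real oracle.
import math
import random

rng = random.Random(0)
true_error_rates = [0.02, 0.05, 0.30, 0.10]     # hidden; partition 2 is worst

counts = [1] * 4                                # one warm-up pull per arm
rewards = [float(rng.random() < p) for p in true_error_rates]

for t in range(5, 1000):
    ucb = [rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
           for i in range(4)]
    arm = max(range(4), key=lambda i: ucb[i])   # partition to query next
    rewards[arm] += float(rng.random() < true_error_rates[arm])
    counts[arm] += 1

print(counts)   # most oracle queries concentrate on the error-dense partition
```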
2101.04834
Doris Xin
Doris Xin, Eva Yiwei Wu, Doris Jung-Lin Lee, Niloufar Salehi, Aditya Parameswaran
Whither AutoML? Understanding the Role of Automation in Machine Learning Workflows
null
null
null
null
cs.HC cs.LG
http://creativecommons.org/licenses/by/4.0/
Efforts to make machine learning more widely accessible have led to a rapid increase in Auto-ML tools that aim to automate the process of training and deploying machine learning. To understand how Auto-ML tools are used in practice today, we performed a qualitative study with participants ranging from novice hobbyists to industry researchers who use Auto-ML tools. We present insights into the benefits and deficiencies of existing tools, as well as the respective roles of the human and automation in ML workflows. Finally, we discuss design implications for the future of Auto-ML tool development. We argue that instead of full automation being the ultimate goal of Auto-ML, designers of these tools should focus on supporting a partnership between the user and the Auto-ML tool. This means that a range of Auto-ML tools will need to be developed to support varying user goals such as simplicity, reproducibility, and reliability.
[ { "created": "Wed, 13 Jan 2021 02:12:46 GMT", "version": "v1" } ]
2021-01-14
[ [ "Xin", "Doris", "" ], [ "Wu", "Eva Yiwei", "" ], [ "Lee", "Doris Jung-Lin", "" ], [ "Salehi", "Niloufar", "" ], [ "Parameswaran", "Aditya", "" ] ]
Efforts to make machine learning more widely accessible have led to a rapid increase in Auto-ML tools that aim to automate the process of training and deploying machine learning. To understand how Auto-ML tools are used in practice today, we performed a qualitative study with participants ranging from novice hobbyists to industry researchers who use Auto-ML tools. We present insights into the benefits and deficiencies of existing tools, as well as the respective roles of the human and automation in ML workflows. Finally, we discuss design implications for the future of Auto-ML tool development. We argue that instead of full automation being the ultimate goal of Auto-ML, designers of these tools should focus on supporting a partnership between the user and the Auto-ML tool. This means that a range of Auto-ML tools will need to be developed to support varying user goals such as simplicity, reproducibility, and reliability.
2107.11878
Abhishek Aich
Abhishek Aich, Meng Zheng, Srikrishna Karanam, Terrence Chen, Amit K. Roy-Chowdhury, Ziyan Wu
Spatio-Temporal Representation Factorization for Video-based Person Re-Identification
Accepted at IEEE ICCV 2021, Includes Supplementary Material
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Despite much recent progress in video-based person re-identification (re-ID), the current state-of-the-art still suffers from common real-world challenges such as appearance similarity among various people, occlusions, and frame misalignment. To alleviate these problems, we propose Spatio-Temporal Representation Factorization (STRF), a flexible new computational unit that can be used in conjunction with most existing 3D convolutional neural network architectures for re-ID. The key innovations of STRF over prior work include explicit pathways for learning discriminative temporal and spatial features, with each component further factorized to capture complementary person-specific appearance and motion information. Specifically, temporal factorization comprises two branches, one each for static features (e.g., the color of clothes) that do not change much over time, and dynamic features (e.g., walking patterns) that change over time. Further, spatial factorization also comprises two branches to learn both global (coarse segments) as well as local (finer segments) appearance features, with the local features particularly useful in cases of occlusion or spatial misalignment. These two factorization operations taken together result in a modular architecture for our parameter-light STRF unit that can be plugged in between any two 3D convolutional layers, resulting in an end-to-end learning framework. We empirically show that STRF improves performance of various existing baseline architectures while demonstrating new state-of-the-art results using standard person re-ID evaluation protocols on three benchmarks.
[ { "created": "Sun, 25 Jul 2021 19:29:37 GMT", "version": "v1" }, { "created": "Sun, 15 Aug 2021 01:49:08 GMT", "version": "v2" } ]
2021-08-17
[ [ "Aich", "Abhishek", "" ], [ "Zheng", "Meng", "" ], [ "Karanam", "Srikrishna", "" ], [ "Chen", "Terrence", "" ], [ "Roy-Chowdhury", "Amit K.", "" ], [ "Wu", "Ziyan", "" ] ]
Despite much recent progress in video-based person re-identification (re-ID), the current state-of-the-art still suffers from common real-world challenges such as appearance similarity among various people, occlusions, and frame misalignment. To alleviate these problems, we propose Spatio-Temporal Representation Factorization (STRF), a flexible new computational unit that can be used in conjunction with most existing 3D convolutional neural network architectures for re-ID. The key innovations of STRF over prior work include explicit pathways for learning discriminative temporal and spatial features, with each component further factorized to capture complementary person-specific appearance and motion information. Specifically, temporal factorization comprises two branches, one each for static features (e.g., the color of clothes) that do not change much over time, and dynamic features (e.g., walking patterns) that change over time. Further, spatial factorization also comprises two branches to learn both global (coarse segments) as well as local (finer segments) appearance features, with the local features particularly useful in cases of occlusion or spatial misalignment. These two factorization operations taken together result in a modular architecture for our parameter-light STRF unit that can be plugged in between any two 3D convolutional layers, resulting in an end-to-end learning framework. We empirically show that STRF improves performance of various existing baseline architectures while demonstrating new state-of-the-art results using standard person re-ID evaluation protocols on three benchmarks.
0709.4671
Ruoheng Liu
Ruoheng Liu and H. Vincent Poor
Secrecy Capacity Region of a Multi-Antenna Gaussian Broadcast Channel with Confidential Messages
Submitted to the IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
null
In wireless data networks, communication is particularly susceptible to eavesdropping due to its broadcast nature. Security and privacy systems have become critical for wireless providers and enterprise networks. This paper considers the problem of secret communication over the Gaussian broadcast channel, where a multi-antenna transmitter sends independent confidential messages to two users with information-theoretic secrecy. That is, each user would like to obtain its own confidential message in a reliable and safe manner. This communication model is referred to as the multi-antenna Gaussian broadcast channel with confidential messages (MGBC-CM). Under this communication scenario, a secret dirty-paper coding scheme and the corresponding achievable secrecy rate region are first developed based on Gaussian codebooks. Next, a computable Sato-type outer bound on the secrecy capacity region is provided for the MGBC-CM. Furthermore, the Sato-type outer bound proves to be consistent with the boundary of the secret dirty-paper coding achievable rate region, and hence, the secrecy capacity region of the MGBC-CM is established. Finally, two numerical examples demonstrate that both users can achieve positive rates simultaneously under the information-theoretic secrecy requirement.
[ { "created": "Fri, 28 Sep 2007 19:10:03 GMT", "version": "v1" } ]
2007-10-01
[ [ "Liu", "Ruoheng", "" ], [ "Poor", "H. Vincent", "" ] ]
In wireless data networks, communication is particularly susceptible to eavesdropping due to its broadcast nature. Security and privacy systems have become critical for wireless providers and enterprise networks. This paper considers the problem of secret communication over the Gaussian broadcast channel, where a multi-antenna transmitter sends independent confidential messages to two users with information-theoretic secrecy. That is, each user would like to obtain its own confidential message in a reliable and safe manner. This communication model is referred to as the multi-antenna Gaussian broadcast channel with confidential messages (MGBC-CM). Under this communication scenario, a secret dirty-paper coding scheme and the corresponding achievable secrecy rate region are first developed based on Gaussian codebooks. Next, a computable Sato-type outer bound on the secrecy capacity region is provided for the MGBC-CM. Furthermore, the Sato-type outer bound proves to be consistent with the boundary of the secret dirty-paper coding achievable rate region, and hence, the secrecy capacity region of the MGBC-CM is established. Finally, two numerical examples demonstrate that both users can achieve positive rates simultaneously under the information-theoretic secrecy requirement.
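For background, the scalar baseline that the MGBC-CM region generalizes is the single-user Gaussian wiretap channel, whose secrecy capacity is the well-known Cs = [log2(1 + SNR_main) - log2(1 + SNR_eve)]^+; a two-line computation:

```python
# Scalar Gaussian wiretap baseline, not the paper's multi-antenna result:
# Cs = max(0, log2(1 + SNR_main) - log2(1 + SNR_eve)).
import math

def secrecy_capacity(snr_main, snr_eve):
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

print(secrecy_capacity(10.0, 1.0))   # ~2.46 bits per channel use
print(secrecy_capacity(1.0, 10.0))   # 0.0: eavesdropper has the better channel
```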
1702.03791
Hong Yu
Hong Yu, Zheng-Hua Tan, Zhanyu Ma, Jun Guo
DNN Filter Bank Cepstral Coefficients for Spoofing Detection
null
null
null
null
cs.SD cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of speech synthesis techniques, automatic speaker verification systems face the serious challenge of spoofing attacks. In order to improve the reliability of speaker verification systems, we develop a new filter-bank-based cepstral feature, deep neural network filter bank cepstral coefficients (DNN-FBCC), to distinguish between natural and spoofed speech. The deep neural network filter bank is automatically generated by training a filter bank neural network (FBNN) using natural and synthetic speech. By adding restrictions on the training rules, the learned weight matrix of FBNN is band-limited and sorted by frequency, similar to the normal filter bank. Unlike the manually designed filter bank, the learned filter bank has different filter shapes in different channels, which can capture the differences between natural and synthetic speech more effectively. The experimental results on the ASVspoof {2015} database show that the Gaussian mixture model maximum-likelihood (GMM-ML) classifier trained by the new feature performs better than the state-of-the-art linear frequency cepstral coefficients (LFCC) based classifier, especially on detecting unknown attacks.
[ { "created": "Mon, 13 Feb 2017 14:44:17 GMT", "version": "v1" } ]
2017-02-14
[ [ "Yu", "Hong", "" ], [ "Tan", "Zheng-Hua", "" ], [ "Ma", "Zhanyu", "" ], [ "Guo", "Jun", "" ] ]
With the development of speech synthesis techniques, automatic speaker verification systems face the serious challenge of spoofing attacks. In order to improve the reliability of speaker verification systems, we develop a new filter-bank-based cepstral feature, deep neural network filter bank cepstral coefficients (DNN-FBCC), to distinguish between natural and spoofed speech. The deep neural network filter bank is automatically generated by training a filter bank neural network (FBNN) using natural and synthetic speech. By adding restrictions on the training rules, the learned weight matrix of FBNN is band-limited and sorted by frequency, similar to the normal filter bank. Unlike the manually designed filter bank, the learned filter bank has different filter shapes in different channels, which can capture the differences between natural and synthetic speech more effectively. The experimental results on the ASVspoof {2015} database show that the Gaussian mixture model maximum-likelihood (GMM-ML) classifier trained by the new feature performs better than the state-of-the-art linear frequency cepstral coefficients (LFCC) based classifier, especially on detecting unknown attacks.
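The band-limiting and frequency-sorting constraints on the FBNN weight matrix can be sketched as masks with contiguous spectral support per channel. The triangular, mel-like placement below is an assumption for illustration, not the learned filter shapes.

```python
# Mask a filter-bank weight matrix so each of M channels only sees a
# contiguous band of the F spectral bins, with bands sorted by frequency.
# Triangular placement is an illustrative assumption, not the trained FBNN.
import numpy as np

def band_limited_mask(n_bins=257, n_filters=20):
    edges = np.linspace(0, n_bins - 1, n_filters + 2).astype(int)
    mask = np.zeros((n_filters, n_bins))
    for m in range(n_filters):
        lo, mid, hi = edges[m], edges[m + 1], edges[m + 2]
        mask[m, lo:mid + 1] = np.linspace(0, 1, mid - lo + 1)
        mask[m, mid:hi + 1] = np.linspace(1, 0, hi - mid + 1)
    return mask

mask = band_limited_mask()
weights = np.random.rand(20, 257) * mask     # free weights, limited support
spectrum = np.random.rand(257)               # one power-spectrum frame
fbank_energies = weights @ spectrum          # 20 band energies -> log -> DCT
print(np.log(fbank_energies + 1e-8)[:5])
```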
2012.00483
Markus Leippold
Francesco S. Varini and Jordan Boyd-Graber and Massimiliano Ciaramita and Markus Leippold
ClimaText: A Dataset for Climate Change Topic Detection
Accepted for the Tackling Climate Change with Machine Learning Workshop at NeurIPS 2020
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Climate change communication in the mass media and other textual sources may affect and shape public perception. Extracting climate change information from these sources is an important task, e.g., for filtering content and e-discovery, sentiment analysis, automatic summarization, question-answering, and fact-checking. However, automating this process is a challenge, as climate change is a complex, fast-moving, and often ambiguous topic with scarce resources for popular text-based AI tasks. In this paper, we introduce \textsc{ClimaText}, a dataset for sentence-based climate change topic detection, which we make publicly available. We explore different approaches to identify the climate change topic in various text sources. We find that popular keyword-based models are not adequate for such a complex and evolving task. Context-based algorithms like BERT \cite{devlin2018bert} can detect, in addition to many trivial cases, a variety of complex and implicit topic patterns. Nevertheless, our analysis reveals a great potential for improvement in several directions, such as, e.g., capturing the discussion on indirect effects of climate change. Hence, we hope this work can serve as a good starting point for further research on this topic.
[ { "created": "Tue, 1 Dec 2020 13:42:37 GMT", "version": "v1" }, { "created": "Sat, 2 Jan 2021 16:13:06 GMT", "version": "v2" } ]
2021-01-05
[ [ "Varini", "Francesco S.", "" ], [ "Boyd-Graber", "Jordan", "" ], [ "Ciaramita", "Massimiliano", "" ], [ "Leippold", "Markus", "" ] ]
Climate change communication in the mass media and other textual sources may affect and shape public perception. Extracting climate change information from these sources is an important task, e.g., for filtering content and e-discovery, sentiment analysis, automatic summarization, question-answering, and fact-checking. However, automating this process is a challenge, as climate change is a complex, fast-moving, and often ambiguous topic with scarce resources for popular text-based AI tasks. In this paper, we introduce \textsc{ClimaText}, a dataset for sentence-based climate change topic detection, which we make publicly available. We explore different approaches to identify the climate change topic in various text sources. We find that popular keyword-based models are not adequate for such a complex and evolving task. Context-based algorithms like BERT \cite{devlin2018bert} can detect, in addition to many trivial cases, a variety of complex and implicit topic patterns. Nevertheless, our analysis reveals a great potential for improvement in several directions, such as, e.g., capturing the discussion on indirect effects of climate change. Hence, we hope this work can serve as a good starting point for further research on this topic.
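The keyword baseline that the paper finds inadequate fits in a few lines; the lexicon below is an illustrative guess, not ClimaText's actual keyword list. The second example shows the characteristic false negative: climate-relevant content with no trigger term.

```python
# Keyword-based climate topic detection: flag a sentence iff it contains a
# term from a fixed list. The list is a made-up stand-in for illustration.
KEYWORDS = {"climate change", "global warming", "greenhouse gas", "carbon",
            "emissions", "sea level"}

def keyword_label(sentence):
    s = sentence.lower()
    return int(any(k in s for k in KEYWORDS))

print(keyword_label("Rising sea level threatens coastal cities."))        # 1
print(keyword_label("The company reduced its environmental footprint."))  # 0, a miss
```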
1601.06453
Or Ordentlich
Or Ordentlich
Novel Lower Bounds on the Entropy Rate of Binary Hidden Markov Processes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, Samorodnitsky proved a strengthened version of Mrs. Gerber's Lemma, where the output entropy of a binary symmetric channel is bounded in terms of the average entropy of the input projected on a random subset of coordinates. Here, this result is applied for deriving novel lower bounds on the entropy rate of binary hidden Markov processes. For symmetric underlying Markov processes, our bound improves upon the best known bound in the very noisy regime. The nonsymmetric case is also considered, and explicit bounds are derived for Markov processes that satisfy the $(1,\infty)$-RLL constraint.
[ { "created": "Mon, 25 Jan 2016 00:19:36 GMT", "version": "v1" }, { "created": "Mon, 9 May 2016 21:24:26 GMT", "version": "v2" } ]
2016-05-11
[ [ "Ordentlich", "Or", "" ] ]
Recently, Samorodnitsky proved a strengthened version of Mrs. Gerber's Lemma, where the output entropy of a binary symmetric channel is bounded in terms of the average entropy of the input projected on a random subset of coordinates. Here, this result is applied for deriving novel lower bounds on the entropy rate of binary hidden Markov processes. For symmetric underlying Markov processes, our bound improves upon the best known bound in the very noisy regime. The nonsymmetric case is also considered, and explicit bounds are derived for Markov processes that satisfy the $(1,\infty)$-RLL constraint.
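The quantity being bounded can be estimated empirically: for a symmetric binary Markov chain observed through a binary symmetric channel, the entropy rate is approximately -log2 P(x^n)/n for a long sample, computed with the forward algorithm. A Monte Carlo sketch, with arbitrary parameter values:

```python
# Estimate the entropy rate of a binary hidden Markov process (symmetric
# Markov chain through a BSC) via the forward algorithm: H ~ -log2 P(x^n)/n.
import numpy as np

rng = np.random.default_rng(0)
q, eps, n = 0.1, 0.2, 100_000      # transition and crossover probabilities

# simulate the chain and its noisy observation
s = np.empty(n, dtype=int); s[0] = rng.integers(2)
flips = rng.random(n) < q
for t in range(1, n):
    s[t] = s[t - 1] ^ int(flips[t])
x = s ^ (rng.random(n) < eps).astype(int)

# forward algorithm, normalized at each step to avoid underflow
P = np.array([[1 - q, q], [q, 1 - q]])
alpha = np.array([0.5, 0.5]); log_px = 0.0
for t in range(n):
    lik = np.where(np.arange(2) == x[t], 1 - eps, eps)   # emission probs
    alpha = lik * (alpha @ P if t else alpha)
    z = alpha.sum(); log_px += np.log2(z); alpha /= z

print("entropy-rate estimate:", -log_px / n, "bits/symbol")
```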
2310.06762
Xiao Wang
Xiao Wang, Yuansen Zhang, Tianze Chen, Songyang Gao, Senjie Jin, Xianjun Yang, Zhiheng Xi, Rui Zheng, Yicheng Zou, Tao Gui, Qi Zhang, Xuanjing Huang
TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety. However, the continual learning aspect of these aligned LLMs has been largely overlooked. Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs, owing to both their simplicity and the models' potential exposure during instruction tuning. In this paper, we introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs. TRACE consists of 8 distinct datasets spanning challenging tasks including domain-specific tasks, multilingual capabilities, code generation, and mathematical reasoning. All datasets are standardized into a unified format, allowing for effortless automatic evaluation of LLMs. Our experiments show that after training on TRACE, aligned LLMs exhibit significant declines in both general ability and instruction-following capabilities. For example, the accuracy of llama2-chat 13B on the gsm8k dataset declined precipitously from 28.8\% to 2\% after training on our datasets. This highlights the challenge of finding a suitable tradeoff between achieving performance on specific tasks while preserving the original prowess of LLMs. Empirical findings suggest that tasks inherently equipped with reasoning paths contribute significantly to preserving certain capabilities of LLMs against potential declines. Motivated by this, we introduce the Reasoning-augmented Continual Learning (RCL) approach. RCL integrates task-specific cues with meta-rationales, effectively reducing catastrophic forgetting in LLMs while expediting convergence on novel tasks.
[ { "created": "Tue, 10 Oct 2023 16:38:49 GMT", "version": "v1" } ]
2023-10-11
[ [ "Wang", "Xiao", "" ], [ "Zhang", "Yuansen", "" ], [ "Chen", "Tianze", "" ], [ "Gao", "Songyang", "" ], [ "Jin", "Senjie", "" ], [ "Yang", "Xianjun", "" ], [ "Xi", "Zhiheng", "" ], [ "Zheng", "Rui", "" ], [ "Zou", "Yicheng", "" ], [ "Gui", "Tao", "" ], [ "Zhang", "Qi", "" ], [ "Huang", "Xuanjing", "" ] ]
Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety. However, the continual learning aspect of these aligned LLMs has been largely overlooked. Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs, owing to both their simplicity and the models' potential exposure during instruction tuning. In this paper, we introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs. TRACE consists of 8 distinct datasets spanning challenging tasks including domain-specific tasks, multilingual capabilities, code generation, and mathematical reasoning. All datasets are standardized into a unified format, allowing for effortless automatic evaluation of LLMs. Our experiments show that after training on TRACE, aligned LLMs exhibit significant declines in both general ability and instruction-following capabilities. For example, the accuracy of llama2-chat 13B on the gsm8k dataset declined precipitously from 28.8\% to 2\% after training on our datasets. This highlights the challenge of finding a suitable tradeoff between achieving performance on specific tasks while preserving the original prowess of LLMs. Empirical findings suggest that tasks inherently equipped with reasoning paths contribute significantly to preserving certain capabilities of LLMs against potential declines. Motivated by this, we introduce the Reasoning-augmented Continual Learning (RCL) approach. RCL integrates task-specific cues with meta-rationales, effectively reducing catastrophic forgetting in LLMs while expediting convergence on novel tasks.
2403.12830
Chenglong Wang
Cheng-Long Wang, Qi Li, Zihang Xiang, Yinzhi Cao, and Di Wang
Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level Approximate Unlearning Completeness
null
null
null
null
cs.LG cs.CR
http://creativecommons.org/licenses/by/4.0/
By adopting a more flexible definition of unlearning and adjusting the model distribution to simulate training without the targeted data, approximate machine unlearning provides a less resource-demanding alternative to the more laborious exact unlearning methods. Yet, the unlearning completeness of target samples, even when the approximate algorithms are executed faithfully without external threats, remains largely unexamined, raising questions about those approximate algorithms' ability to fulfill their commitment of unlearning during the lifecycle. In this paper, we introduce the task of Lifecycle Unlearning Commitment Management (LUCM) for approximate unlearning and outline its primary challenges. We propose an efficient metric designed to assess the sample-level unlearning completeness. Our empirical results demonstrate its superiority over membership inference techniques in two key areas: the strong correlation of its measurements with unlearning completeness across various unlearning tasks, and its computational efficiency, making it suitable for real-time applications. Additionally, we show that this metric is able to serve as a tool for monitoring unlearning anomalies throughout the unlearning lifecycle, including both under-unlearning and over-unlearning. We apply this metric to evaluate the unlearning commitments of current approximate algorithms. Our analysis, conducted across multiple unlearning benchmarks, reveals that these algorithms inconsistently fulfill their unlearning commitments due to two main issues: 1) unlearning new data can significantly affect the unlearning utility of previously requested data, and 2) approximate algorithms fail to ensure equitable unlearning utility across different groups. These insights emphasize the crucial importance of LUCM throughout the unlearning lifecycle. We will soon open-source our newly developed benchmark.
[ { "created": "Tue, 19 Mar 2024 15:37:27 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2024 23:20:41 GMT", "version": "v2" } ]
2024-05-02
[ [ "Wang", "Cheng-Long", "" ], [ "Li", "Qi", "" ], [ "Xiang", "Zihang", "" ], [ "Cao", "Yinzhi", "" ], [ "Wang", "Di", "" ] ]
By adopting a more flexible definition of unlearning and adjusting the model distribution to simulate training without the targeted data, approximate machine unlearning provides a less resource-demanding alternative to the more laborious exact unlearning methods. Yet, the unlearning completeness of target samples, even when the approximate algorithms are executed faithfully without external threats, remains largely unexamined, raising questions about those approximate algorithms' ability to fulfill their commitment of unlearning during the lifecycle. In this paper, we introduce the task of Lifecycle Unlearning Commitment Management (LUCM) for approximate unlearning and outline its primary challenges. We propose an efficient metric designed to assess the sample-level unlearning completeness. Our empirical results demonstrate its superiority over membership inference techniques in two key areas: the strong correlation of its measurements with unlearning completeness across various unlearning tasks, and its computational efficiency, making it suitable for real-time applications. Additionally, we show that this metric is able to serve as a tool for monitoring unlearning anomalies throughout the unlearning lifecycle, including both under-unlearning and over-unlearning. We apply this metric to evaluate the unlearning commitments of current approximate algorithms. Our analysis, conducted across multiple unlearning benchmarks, reveals that these algorithms inconsistently fulfill their unlearning commitments due to two main issues: 1) unlearning new data can significantly affect the unlearning utility of previously requested data, and 2) approximate algorithms fail to ensure equitable unlearning utility across different groups. These insights emphasize the crucial importance of LUCM throughout the unlearning lifecycle. We will soon open-source our newly developed benchmark.
1603.03727
Hongwei Xi
Hongwei Xi and Zhiqiang Ren and Hanwen Wu and William Blair
Session Types in a Linearly Typed Multi-Threaded Lambda-Calculus
This is the original version of the paper on supporting programming with dyadic session types in ATS
null
null
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a formalization of session types in a multi-threaded lambda-calculus (MTLC) equipped with a linear type system, establishing for the MTLC both type preservation and global progress. The latter (global progress) implies that the evaluation of a well-typed program in the MTLC can never reach a deadlock. As this formulated MTLC can be readily embedded into ATS, a full-fledged language with a functional programming core that supports both dependent types (of DML-style) and linear types, we obtain a direct implementation of session types in ATS. In addition, we gain immediate support for a form of dependent session types based on this embedding into ATS. Compared to various existing formalizations of session types, we see that the one given in this paper is unique in its closeness to a concrete implementation. In particular, we report such an implementation ready for practical use that generates Erlang code from well-typed ATS source (making use of session types), thus taking great advantage of the infrastructural support for distributed computing in Erlang.
[ { "created": "Fri, 11 Mar 2016 19:15:03 GMT", "version": "v1" } ]
2016-03-14
[ [ "Xi", "Hongwei", "" ], [ "Ren", "Zhiqiang", "" ], [ "Wu", "Hanwen", "" ], [ "Blair", "William", "" ] ]
We present a formalization of session types in a multi-threaded lambda-calculus (MTLC) equipped with a linear type system, establishing for the MTLC both type preservation and global progress. The latter (global progress) implies that the evaluation of a well-typed program in the MTLC can never reach a deadlock. As this formulated MTLC can be readily embedded into ATS, a full-fledged language with a functional programming core that supports both dependent types (of DML-style) and linear types, we obtain a direct implementation of session types in ATS. In addition, we gain immediate support for a form of dependent session types based on this embedding into ATS. Compared to various existing formalizations of session types, we see that the one given in this paper is unique in its closeness to a concrete implementation. In particular, we report such an implementation ready for practical use that generates Erlang code from well-typed ATS source (making use of session types), thus taking great advantage of the infrastructural support for distributed computing in Erlang.
2112.13301
Rajagopal Venkatesaramani
Rajagopal Venkatesaramani, Zhiyu Wan, Bradley A. Malin, Yevgeniy Vorobeychik
Defending Against Membership Inference Attacks on Beacon Services
null
null
null
null
cs.CR q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Large genomic datasets are now created through numerous activities, including recreational genealogical investigations, biomedical research, and clinical care. At the same time, genomic data has become valuable for reuse beyond their initial point of collection, but privacy concerns often hinder access. Over the past several years, Beacon services have emerged to broaden accessibility to such data. These services enable users to query for the presence of a particular minor allele in a private dataset, information that can help care providers determine if genomic variation is spurious or has some known clinical indication. However, various studies have shown that even this limited access model can leak whether individuals are members of the underlying dataset. Several approaches for mitigating this vulnerability have been proposed, but they are limited in that they 1) typically rely on heuristics and 2) offer probabilistic privacy guarantees, but neglect utility. In this paper, we present a novel algorithmic framework to ensure privacy in a Beacon service setting with a minimal number of query response flips (e.g., changing a positive response to a negative). Specifically, we represent this problem as combinatorial optimization in both the batch setting (where queries arrive all at once), as well as the online setting (where queries arrive sequentially). The former setting has been the primary focus in prior literature, whereas real Beacons allow sequential queries, motivating the latter investigation. We present principled algorithms in this framework with both privacy and, in some cases, worst-case utility guarantees. Moreover, through an extensive experimental evaluation, we show that the proposed approaches significantly outperform the state of the art in terms of privacy and utility.
[ { "created": "Sat, 25 Dec 2021 23:33:44 GMT", "version": "v1" } ]
2021-12-28
[ [ "Venkatesaramani", "Rajagopal", "" ], [ "Wan", "Zhiyu", "" ], [ "Malin", "Bradley A.", "" ], [ "Vorobeychik", "Yevgeniy", "" ] ]
Large genomic datasets are now created through numerous activities, including recreational genealogical investigations, biomedical research, and clinical care. At the same time, genomic data has become valuable for reuse beyond their initial point of collection, but privacy concerns often hinder access. Over the past several years, Beacon services have emerged to broaden accessibility to such data. These services enable users to query for the presence of a particular minor allele in a private dataset, information that can help care providers determine if genomic variation is spurious or has some known clinical indication. However, various studies have shown that even this limited access model can leak whether individuals are members of the underlying dataset. Several approaches for mitigating this vulnerability have been proposed, but they are limited in that they 1) typically rely on heuristics and 2) offer probabilistic privacy guarantees, but neglect utility. In this paper, we present a novel algorithmic framework to ensure privacy in a Beacon service setting with a minimal number of query response flips (e.g., changing a positive response to a negative). Specifically, we represent this problem as combinatorial optimization in both the batch setting (where queries arrive all at once), as well as the online setting (where queries arrive sequentially). The former setting has been the primary focus in prior literature, whereas real Beacons allow sequential queries, motivating the latter investigation. We present principled algorithms in this framework with both privacy and, in some cases, worst-case utility guarantees. Moreover, through an extensive experimental evaluation, we show that the proposed approaches significantly outperform the state of the art in terms of privacy and utility.
2307.06563
Geoffrey Goodell
Ryan Bowler, Chris Speed, Geoffrey Goodell
Money: Who Has a Stake in the Most Value-Centric Common Design Material?
19 pages, 1 figure
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Money is more than just a numeric value. It embodies trust and moral gravity, and it offers flexible ways to transact. However, the emergence of Central Bank Digital Currency (CBDC) is set to bring about a drastic change in the future of money. This paper invites designers to reflect on their role in shaping material and immaterial monetary change. In this rapidly changing landscape, design could be instrumental in uncovering and showcasing the diverse values that money holds for different stakeholders. Understanding these diversities could promote a more equitable and inclusive financial, social, and global landscape within emergent forms of cash-like digital currency. Without such consideration, certain forms of money we have come to know could disappear, along with the values people attach to them. We report on semi-structured interviews with stakeholders who have current knowledge or involvement in the emerging field of Central Bank Digital Currency (CBDC). Our research indicates that this new form of money presents both challenges and opportunities for designers. Specifically, we emphasise the potential for Central Bank Digital Currency (CBDC) to either positively or negatively reform values through its design. By considering time, reflecting present values, and promoting inclusion in its deployment, we can strive to ensure that Central Bank Digital Currency (CBDC) represents the diverse needs and perspectives of its users.
[ { "created": "Thu, 13 Jul 2023 05:30:41 GMT", "version": "v1" } ]
2023-07-14
[ [ "Bowler", "Ryan", "" ], [ "Speed", "Chris", "" ], [ "Goodell", "Geoffrey", "" ] ]
Money is more than just a numeric value. It embodies trust and moral gravity, and it offers flexible ways to transact. However, the emergence of Central Bank Digital Currency (CBDC) is set to bring about a drastic change in the future of money. This paper invites designers to reflect on their role in shaping material and immaterial monetary change. In this rapidly changing landscape, design could be instrumental in uncovering and showcasing the diverse values that money holds for different stakeholders. Understanding these diversities could promote a more equitable and inclusive financial, social, and global landscape within emergent forms of cash-like digital currency. Without such consideration, certain forms of money we have come to know could disappear, along with the values people attach to them. We report on semi-structured interviews with stakeholders who have current knowledge or involvement in the emerging field of Central Bank Digital Currency (CBDC). Our research indicates that this new form of money presents both challenges and opportunities for designers. Specifically, we emphasise the potential for CBDC to either positively or negatively reform values through its design. By considering time, reflecting present values, and promoting inclusion in its deployment, we can strive to ensure that CBDC represents the diverse needs and perspectives of its users.
2209.07943
Mirza Fuad Adnan
Mirza Fuad Adnan, Nadim Ahmed, Imrez Ishraque, Md. Sifath Al Amin, Md. Sumit Hasan
Traffic Congestion Prediction using Deep Convolutional Neural Networks: A Color-coding Approach
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Traffic video data has become a critical resource for assessing the state of traffic congestion, owing to recent advancements in computer vision. This work proposes a unique technique for traffic video classification that applies a color-coding scheme before training the traffic data in a deep convolutional neural network. First, the video data is transformed into an image dataset; then, vehicle detection is performed using the You Only Look Once algorithm. A color-coding scheme is adopted to transform the image dataset into a binary image dataset. These binary images are fed to a deep convolutional neural network. Using the UCSD dataset, we obtain a classification accuracy of 98.2%.
[ { "created": "Fri, 16 Sep 2022 14:02:20 GMT", "version": "v1" } ]
2022-09-19
[ [ "Adnan", "Mirza Fuad", "" ], [ "Ahmed", "Nadim", "" ], [ "Ishraque", "Imrez", "" ], [ "Amin", "Md. Sifath Al", "" ], [ "Hasan", "Md. Sumit", "" ] ]
Traffic video data has become a critical resource for assessing the state of traffic congestion, owing to recent advancements in computer vision. This work proposes a unique technique for traffic video classification that applies a color-coding scheme before training the traffic data in a deep convolutional neural network. First, the video data is transformed into an image dataset; then, vehicle detection is performed using the You Only Look Once algorithm. A color-coding scheme is adopted to transform the image dataset into a binary image dataset. These binary images are fed to a deep convolutional neural network. Using the UCSD dataset, we obtain a classification accuracy of 98.2%.
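A minimal sketch of the color-coding step as we read it: rasterise YOLO-style detections as filled rectangles on a black canvas, producing the binary image the CNN is trained on. The function name and box format are our assumptions.

```python
import numpy as np

def boxes_to_binary_image(boxes, h=128, w=128):
    """Hedged sketch: turn vehicle detections (x1, y1, x2, y2 in pixels)
    into a binary occupancy image, the input to the CNN classifier."""
    img = np.zeros((h, w), dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        img[max(y1, 0):min(y2, h), max(x1, 0):min(x2, w)] = 1
    return img

# toy usage: two detected vehicles -> one binary training frame
frame = boxes_to_binary_image([(10, 20, 40, 60), (70, 30, 110, 90)])
print(frame.sum(), "foreground pixels")  # congestion roughly tracks coverage
```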
1809.03254
Jakub Michaliszyn
Jakub Michaliszyn
Elementary Multimodal Logics
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study multimodal logics over universally first-order definable classes of frames. We show that even for bimodal logics, there are universal Horn formulas that define sets of frames for which the satisfiability problem is undecidable, even if one or two of the binary relations are transitive.
[ { "created": "Mon, 10 Sep 2018 12:01:13 GMT", "version": "v1" } ]
2018-09-11
[ [ "Michaliszyn", "Jakub", "" ] ]
We study multimodal logics over universally first-order definable classes of frames. We show that even for bimodal logics, there are universal Horn formulas that define sets of frames for which the satisfiability problem is undecidable, even if one or two of the binary relations are transitive.
2303.02880
Yan Qin
Yan Qin, Yong Liang Guan, and Chau Yuen
Spatiotemporal Capsule Neural Network for Vehicle Trajectory Prediction
IEEE TVT has accepted this paper
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Through the advancement of the Vehicle-to-Everything (V2X) network, road safety, energy consumption, and traffic efficiency can be significantly improved. Accurate vehicle trajectory prediction benefits communication traffic management and network resource allocation for the real-time application of the V2X network. Recurrent neural networks and their variants have been reported in recent research to predict vehicle mobility. However, the spatial attribute of vehicle movement behavior has been overlooked, resulting in incomplete information utilization. To bridge this gap, we put forward for the first time a hierarchical trajectory prediction structure using the capsule neural network (CapsNet) with three sequential components. First, the geographic information is transformed into a grid map representation, describing vehicle mobility distribution spatially and temporally. Second, CapsNet serves as the core model to embed local temporal and global spatial correlation through hierarchical capsules. Finally, extensive experiments conducted on actual taxi mobility data collected in Porto city (Portugal) and Singapore show that the proposed method outperforms the state-of-the-art methods.
[ { "created": "Mon, 6 Mar 2023 04:15:29 GMT", "version": "v1" } ]
2023-03-07
[ [ "Qin", "Yan", "" ], [ "Guan", "Yong Liang", "" ], [ "Yuen", "Chau", "" ] ]
Through the advancement of the Vehicle-to-Everything (V2X) network, road safety, energy consumption, and traffic efficiency can be significantly improved. Accurate vehicle trajectory prediction benefits communication traffic management and network resource allocation for the real-time application of the V2X network. Recurrent neural networks and their variants have been reported in recent research to predict vehicle mobility. However, the spatial attribute of vehicle movement behavior has been overlooked, resulting in incomplete information utilization. To bridge this gap, we put forward for the first time a hierarchical trajectory prediction structure using the capsule neural network (CapsNet) with three sequential components. First, the geographic information is transformed into a grid map representation, describing vehicle mobility distribution spatially and temporally. Second, CapsNet serves as the core model to embed local temporal and global spatial correlation through hierarchical capsules. Finally, extensive experiments conducted on actual taxi mobility data collected in Porto city (Portugal) and Singapore show that the proposed method outperforms the state-of-the-art methods.
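A minimal sketch of the first component as we read it — GPS samples discretised into per-time-window occupancy grids; the grid size, window length, and function name are our assumptions, and the grids would then be fed to CapsNet.

```python
import numpy as np

def trajectory_to_grid_maps(points, bounds, grid=(32, 32), window=60.0):
    """Hedged sketch: bin (t, lat, lon) samples into per-window occupancy
    grids, a spatiotemporal grid map representation of vehicle mobility.
    bounds = (lat_min, lat_max, lon_min, lon_max)."""
    lat0, lat1, lon0, lon1 = bounds
    t0 = points[0][0]
    maps = {}
    for t, lat, lon in points:
        k = int((t - t0) // window)                       # time slice index
        i = int((lat - lat0) / (lat1 - lat0) * (grid[0] - 1e-9))
        j = int((lon - lon0) / (lon1 - lon0) * (grid[1] - 1e-9))
        m = maps.setdefault(k, np.zeros(grid))
        m[min(max(i, 0), grid[0]-1), min(max(j, 0), grid[1]-1)] += 1.0
    return [maps[k] for k in sorted(maps)]                # sequence of grids

pts = [(0, 45.01, 7.60), (30, 45.02, 7.61), (90, 45.05, 7.64)]
print(len(trajectory_to_grid_maps(pts, bounds=(45.0, 45.1, 7.5, 7.7))))
```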
1912.03135
Preslav Nakov
Francisco Guzman, Shafiq Joty, Lluis Marquez, Preslav Nakov
Pairwise Neural Machine Translation Evaluation
machine translation evaluation, machine translation, pairwise ranking, learning to rank. arXiv admin note: substantial text overlap with arXiv:1710.02095
Conference of the Association for Computational Linguistics (ACL'2015)
null
null
cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel framework for machine translation evaluation using neural networks in a pairwise setting, where the goal is to select the better translation from a pair of hypotheses, given the reference translation. In this framework, lexical, syntactic and semantic information from the reference and the two hypotheses is compacted into relatively small distributed vector representations, and fed into a multi-layer neural network that models the interaction between each of the hypotheses and the reference, as well as between the two hypotheses. These compact representations are in turn based on word and sentence embeddings, which are learned using neural networks. The framework is flexible, allows for efficient learning and classification, and yields correlation with humans that rivals the state of the art.
[ { "created": "Thu, 5 Dec 2019 05:17:05 GMT", "version": "v1" } ]
2019-12-09
[ [ "Guzman", "Francisco", "" ], [ "Joty", "Shafiq", "" ], [ "Marquez", "Lluis", "" ], [ "Nakov", "Preslav", "" ] ]
We present a novel framework for machine translation evaluation using neural networks in a pairwise setting, where the goal is to select the better translation from a pair of hypotheses, given the reference translation. In this framework, lexical, syntactic and semantic information from the reference and the two hypotheses is compacted into relatively small distributed vector representations, and fed into a multi-layer neural network that models the interaction between each of the hypotheses and the reference, as well as between the two hypotheses. These compact representations are in turn based on word and sentence embeddings, which are learned using neural networks. The framework is flexible, allows for efficient learning and classification, and yields correlation with humans that rivals the state of the art.
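A minimal sketch of the pairwise setting under our assumptions: compact vectors for the reference and both hypotheses, simple elementwise interaction features, and a small MLP whose output above 0.5 prefers hypothesis 1. Embeddings and weights are placeholders, not the paper's learned representations.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_score(ref, hyp1, hyp2, W1, W2):
    """Hedged sketch: concatenate representations with interaction terms
    and pass them through a one-hidden-layer network."""
    x = np.concatenate([ref, hyp1, hyp2, ref * hyp1, ref * hyp2])
    h = np.tanh(W1 @ x)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))   # P(hyp1 better than hyp2)

d = 8
ref, h1, h2 = rng.normal(size=(3, d))
W1, W2 = rng.normal(size=(16, 5 * d)), rng.normal(size=16)
print(pairwise_score(ref, h1, h2, W1, W2))
```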
1805.10383
Christopher Jenkins
Christopher Jenkins, Aaron Stump
Spine-local Type Inference
Submitted to IFL'18 (Implementation and Application of Functional Languages)
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present spine-local type inference, a partial type inference system for inferring omitted type annotations for System F terms based on local type inference. Local type inference relies on bidirectional inference rules to propagate type information into and out of adjacent nodes of the AST and restricts type-argument inference to occur only within a single node. Spine-local inference relaxes the restriction on type-argument inference by allowing it to occur only within an application spine and improves upon it by using contextual type-argument inference. As our goal is to explore the design space of local type inference, we show that, relative to other variants, spine-local type inference enables desirable features such as first-class curried applications, partial type applications, and the ability to infer types for some terms not otherwise possible. Our approach enjoys the usual properties of a bidirectional system, namely a specification for our inference algorithm and predictable requirements for typing annotations, and in particular maintains some of the advantages of local type inference, such as a relatively simple implementation and a tendency to produce good-quality error messages when type inference fails.
[ { "created": "Fri, 25 May 2018 22:44:08 GMT", "version": "v1" } ]
2018-05-29
[ [ "Jenkins", "Christopher", "" ], [ "Stump", "Aaron", "" ] ]
We present spine-local type inference, a partial type inference system for inferring omitted type annotations for System F terms based on local type inference. Local type inference relies on bidirectional inference rules to propagate type information into and out of adjacent nodes of the AST and restricts type-argument inference to occur only within a single node. Spine-local inference relaxes the restriction on type-argument inference by allowing it to occur only within an application spine and improves upon it by using contextual type-argument inference. As our goal is to explore the design space of local type inference, we show that, relative to other variants, spine-local type inference enables desirable features such as first-class curried applications, partial type applications, and the ability to infer types for some terms not otherwise possible. Our approach enjoys the usual properties of a bidirectional system, namely a specification for our inference algorithm and predictable requirements for typing annotations, and in particular maintains some of the advantages of local type inference, such as a relatively simple implementation and a tendency to produce good-quality error messages when type inference fails.
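A toy sketch of the spine idea, far simpler than the paper's system: for an application spine f e1 ... en, synthesise the head's type, then walk the whole spine, instantiating type variables from argument types, so type-argument inference can use the entire spine rather than a single node. The type encoding and matching are our simplifications (one-way matching, no occurs check).

```python
def infer_spine(head_type, args):
    """Hedged toy sketch: head_type is nested tuples ('forall', a, t),
    ('arrow', t1, t2), ('var', a), or base-type strings; args is a list
    of argument types. Returns the spine's result type."""
    subst = {}
    def apply(t):
        if isinstance(t, str):
            return t
        tag = t[0]
        if tag == 'var':
            return subst.get(t[1], t)
        if tag == 'arrow':
            return (tag, apply(t[1]), apply(t[2]))
        return (tag, t[1], apply(t[2]))          # forall
    def unify(t, s):                             # bind vars in t to s
        t = apply(t)
        if isinstance(t, tuple) and t[0] == 'var':
            subst[t[1]] = s
        elif isinstance(t, tuple) and isinstance(s, tuple) and t[0] == s[0] == 'arrow':
            unify(t[1], s[1]); unify(t[2], s[2])
        elif t != s:
            raise TypeError(f"cannot match {t} with {s}")
    t = head_type
    for a in args:
        while isinstance(t, tuple) and t[0] == 'forall':
            t = t[2]                             # peel quantifiers lazily
        assert isinstance(t, tuple) and t[0] == 'arrow'
        unify(t[1], a)
        t = t[2]
    return apply(t)

# id : forall a. a -> a, applied to 'Int' across the spine
print(infer_spine(('forall', 'a', ('arrow', ('var', 'a'), ('var', 'a'))), ['Int']))
```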
2210.06333
Max Chumley
Max M. Chumley, Melih C. Yesilli, Jisheng Chen, Firas A. Khasawneh, Yang Guo
Pattern Characterization Using Topological Data Analysis: Application to Piezo Vibration Striking Treatment
Updated 6/9/23 to include changes from the review process. Main updates: redefined roundness score to be consistent with the outputs from the depth score (percentage), all quantities defined in terms of radius instead of diameter, added noise study to demonstrate noise robustness of the scores
null
10.1016/j.precisioneng.2023.05.005
null
cs.CG math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantifying patterns in visual or tactile textures provides important information about the process or phenomena that generated these patterns. In manufacturing, these patterns can be intentionally introduced as a design feature, or they can be a byproduct of a specific process. Since surface texture has significant impact on the mechanical properties and the longevity of the workpiece, it is important to develop tools for quantifying surface patterns and, when applicable, comparing them to their nominal counterparts. While existing tools may be able to indicate the existence of a pattern, they typically do not provide more information about the pattern structure, or how much it deviates from a nominal pattern. Further, prior works do not provide automatic or algorithmic approaches for quantifying other pattern characteristics such as depths' consistency, and variations in the pattern motifs at different level sets. This paper leverages persistent homology from Topological Data Analysis (TDA) to derive noise-robust scores for quantifying motifs' depth and roundness in a pattern. Specifically, sublevel persistence is used to derive scores that quantify the consistency of indentation depths at any level set in Piezo Vibration Striking Treatment (PVST) surfaces. Moreover, we combine sublevel persistence with the distance transform to quantify the consistency of the indentation radii, and to compare them with the nominal ones. Although the tool in our PVST experiments had a semi-spherical profile, we present a generalization of our approach to tools/motifs of arbitrary shapes thus making our method applicable to other pattern-generating manufacturing processes.
[ { "created": "Wed, 12 Oct 2022 15:53:23 GMT", "version": "v1" }, { "created": "Fri, 9 Jun 2023 22:26:13 GMT", "version": "v2" } ]
2023-06-13
[ [ "Chumley", "Max M.", "" ], [ "Yesilli", "Melih C.", "" ], [ "Chen", "Jisheng", "" ], [ "Khasawneh", "Firas A.", "" ], [ "Guo", "Yang", "" ] ]
Quantifying patterns in visual or tactile textures provides important information about the process or phenomena that generated these patterns. In manufacturing, these patterns can be intentionally introduced as a design feature, or they can be a byproduct of a specific process. Since surface texture has significant impact on the mechanical properties and the longevity of the workpiece, it is important to develop tools for quantifying surface patterns and, when applicable, comparing them to their nominal counterparts. While existing tools may be able to indicate the existence of a pattern, they typically do not provide more information about the pattern structure, or how much it deviates from a nominal pattern. Further, prior works do not provide automatic or algorithmic approaches for quantifying other pattern characteristics such as depths' consistency, and variations in the pattern motifs at different level sets. This paper leverages persistent homology from Topological Data Analysis (TDA) to derive noise-robust scores for quantifying motifs' depth and roundness in a pattern. Specifically, sublevel persistence is used to derive scores that quantify the consistency of indentation depths at any level set in Piezo Vibration Striking Treatment (PVST) surfaces. Moreover, we combine sublevel persistence with the distance transform to quantify the consistency of the indentation radii, and to compare them with the nominal ones. Although the tool in our PVST experiments had a semi-spherical profile, we present a generalization of our approach to tools/motifs of arbitrary shapes thus making our method applicable to other pattern-generating manufacturing processes.
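A minimal sketch of the sublevel-persistence building block: 0-dimensional sublevel-set persistence of a 1D profile via the classic merge procedure (elder rule). The depth/roundness scores of the paper would then be built from these (birth, death) pairs; that scoring step is not reproduced here.

```python
import numpy as np

def sublevel_persistence_1d(f):
    """Hedged sketch: 0-dim sublevel persistence of a 1D signal using
    union-find; returns (birth, death) pairs, with the global minimum's
    component recorded as essential (death = max(f))."""
    f = np.asarray(f, float)
    order = np.argsort(f, kind="stable")   # process samples by increasing value
    parent, birth, pairs = {}, {}, []
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in order:
        parent[i] = i; birth[i] = f[i]
        for j in (i - 1, i + 1):           # merge with already-live neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                young, old = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
                if birth[young] < f[i]:    # skip zero-persistence pairs
                    pairs.append((birth[young], f[i]))
                parent[young] = old        # elder rule: younger component dies
    pairs.append((f[order[0]], float(f.max())))  # essential component
    return pairs

print(sublevel_persistence_1d([3, 1, 2, 0, 4, 1, 3]))  # [(1,2), (1,4), (0,4)]
```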
2407.13363
Chang Liu
Chang Liu, Giulia Rizzoli, Pietro Zanuttigh, Fu Li, Yi Niu
Learning from the Web: Language Drives Weakly-Supervised Incremental Learning for Semantic Segmentation
ECCV 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current weakly-supervised incremental learning for semantic segmentation (WILSS) approaches only consider replacing pixel-level annotations with image-level labels, while the training images are still from well-designed datasets. In this work, we argue that widely available web images can also be considered for the learning of new classes. To achieve this, we first introduce a strategy to select web images which are similar to previously seen examples in the latent space using a Fourier-based domain discriminator. Then, an effective caption-driven rehearsal strategy is proposed to preserve previously learnt classes. To our knowledge, this is the first work to rely solely on web images for both the learning of new concepts and the preservation of the already learned ones in WILSS. Experimental results show that the proposed approach can reach state-of-the-art performances without using manually selected and annotated data in the incremental steps.
[ { "created": "Thu, 18 Jul 2024 10:14:49 GMT", "version": "v1" } ]
2024-07-19
[ [ "Liu", "Chang", "" ], [ "Rizzoli", "Giulia", "" ], [ "Zanuttigh", "Pietro", "" ], [ "Li", "Fu", "" ], [ "Niu", "Yi", "" ] ]
Current weakly-supervised incremental learning for semantic segmentation (WILSS) approaches only consider replacing pixel-level annotations with image-level labels, while the training images are still from well-designed datasets. In this work, we argue that widely available web images can also be considered for the learning of new classes. To achieve this, we first introduce a strategy to select web images which are similar to previously seen examples in the latent space using a Fourier-based domain discriminator. Then, an effective caption-driven rehearsal strategy is proposed to preserve previously learnt classes. To our knowledge, this is the first work to rely solely on web images for both the learning of new concepts and the preservation of the already learned ones in WILSS. Experimental results show that the proposed approach can reach state-of-the-art performances without using manually selected and annotated data in the incremental steps.
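A minimal sketch of the selection idea under our assumptions: the paper uses a learned Fourier-based domain discriminator; here a crude log-amplitude spectrum stands in as the domain feature, and web images closest (cosine similarity) to the mean signature of previously seen examples are kept.

```python
import numpy as np

def fourier_signature(img):
    """Hedged stand-in for a Fourier-domain descriptor: the flattened
    log-amplitude spectrum of the image."""
    return np.log1p(np.abs(np.fft.fft2(img))).ravel()

def select_web_images(web_imgs, seen_imgs, k=2):
    """Keep the k web images whose signatures are closest to the mean
    signature of previously seen examples."""
    seen = np.mean([fourier_signature(x) for x in seen_imgs], axis=0)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    ranked = sorted(web_imgs, key=lambda x: -cos(fourier_signature(x), seen))
    return ranked[:k]

rng = np.random.default_rng(1)
web = [rng.random((16, 16)) for _ in range(5)]
seen = [rng.random((16, 16)) for _ in range(3)]
print(len(select_web_images(web, seen)))
```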
1509.05618
Ioannis Krikidis
Ioannis Krikidis
Relay Selection in Wireless Powered Cooperative Networks with Energy Storage
IEEE Journal on Selected Areas in Communications - Special Issue on Green Communications and Networking
null
10.1109/JSAC.2015.2479015
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper deals with the problem of relay selection in wireless powered cooperative networks, where spatially random relays are equipped with energy storage devices e.g., batteries. In contrast to conventional techniques and in order to reduce complexity, the relay nodes can either harvest energy from the source signal (in case of uncharged battery) or attempt to decode and forward it (in case of charged battery). Several relay selection schemes that correspond to different state information requirements and implementation complexities are proposed. The charging/discharging behavior of the battery is modeled as a two-state Markov chain and analytical expressions for the steady-state distribution and the outage probability performance are derived for each relay selection scheme. We prove that energy storage significantly affects the performance of the system and results in a zeroth diversity gain at high signal-to-noise ratios; the convergence floors depend on the steady-state distribution of the battery and are derived in closed-form by using appropriate approximations. The proposed relay selection schemes are generalized to a large-scale network with multiple access points (APs), where relays assist the closest AP and suffer from multi-user interference.
[ { "created": "Fri, 18 Sep 2015 13:22:43 GMT", "version": "v1" } ]
2016-11-17
[ [ "Krikidis", "Ioannis", "" ] ]
This paper deals with the problem of relay selection in wireless powered cooperative networks, where spatially random relays are equipped with energy storage devices e.g., batteries. In contrast to conventional techniques and in order to reduce complexity, the relay nodes can either harvest energy from the source signal (in case of uncharged battery) or attempt to decode and forward it (in case of charged battery). Several relay selection schemes that correspond to different state information requirements and implementation complexities are proposed. The charging/discharging behavior of the battery is modeled as a two-state Markov chain and analytical expressions for the steady-state distribution and the outage probability performance are derived for each relay selection scheme. We prove that energy storage significantly affects the performance of the system and results in a zeroth diversity gain at high signal-to-noise ratios; the convergence floors depend on the steady-state distribution of the battery and are derived in closed-form by using appropriate approximations. The proposed relay selection schemes are generalized to a large-scale network with multiple access points (APs), where relays assist the closest AP and suffer from multi-user interference.
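A minimal numeric sketch of the battery model as we read it: a two-state Markov chain (uncharged/charged) whose steady-state distribution drives the outage floor. The transition semantics and constants below are illustrative, not the paper's exact parametrisation.

```python
import numpy as np

# 0 = uncharged, 1 = charged.
# p_c = P(successful energy harvest | uncharged),
# p_d = P(battery discharged by relaying | charged).
p_c, p_d = 0.6, 0.4
P = np.array([[1 - p_c, p_c],
              [p_d,     1 - p_d]])        # row-stochastic transition matrix

# steady state: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print("P(uncharged), P(charged) =", pi)   # closed form: [p_d, p_c]/(p_c+p_d)
# With probability pi[0] a relay cannot forward regardless of SNR,
# which is one way to see why an outage floor (and no diversity gain) appears.
```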
1703.00356
Renata Khasanova
Renata Khasanova and Pascal Frossard
Graph-based Isometry Invariant Representation Learning
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning transformation invariant representations of visual data is an important problem in computer vision. Deep convolutional networks have demonstrated remarkable results for image and video classification tasks. However, they have achieved only limited success in the classification of images that undergo geometric transformations. In this work we present a novel Transformation Invariant Graph-based Network (TIGraNet), which learns graph-based features that are inherently invariant to isometric transformations such as rotation and translation of input images. In particular, images are represented as signals on graphs, which permits replacing classical convolution and pooling layers in deep networks with graph spectral convolution and dynamic graph pooling layers that together contribute to invariance to isometric transformations. Our experiments show high performance on rotated and translated images from the test set compared to classical architectures that are very sensitive to transformations in the data. The inherent invariance properties of our framework provide key advantages, such as increased resiliency to data variability and sustained performance with limited training sets.
[ { "created": "Wed, 1 Mar 2017 15:51:13 GMT", "version": "v1" } ]
2017-03-02
[ [ "Khasanova", "Renata", "" ], [ "Frossard", "Pascal", "" ] ]
Learning transformation invariant representations of visual data is an important problem in computer vision. Deep convolutional networks have demonstrated remarkable results for image and video classification tasks. However, they have achieved only limited success in the classification of images that undergo geometric transformations. In this work we present a novel Transformation Invariant Graph-based Network (TIGraNet), which learns graph-based features that are inherently invariant to isometric transformations such as rotation and translation of input images. In particular, images are represented as signals on graphs, which permits replacing classical convolution and pooling layers in deep networks with graph spectral convolution and dynamic graph pooling layers that together contribute to invariance to isometric transformations. Our experiments show high performance on rotated and translated images from the test set compared to classical architectures that are very sensitive to transformations in the data. The inherent invariance properties of our framework provide key advantages, such as increased resiliency to data variability and sustained performance with limited training sets.
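A minimal sketch of what a graph spectral convolution can look like under our assumptions: filter node signals with a polynomial of the normalised graph Laplacian, which realises spectral filtering without an explicit eigendecomposition. This is a generic construction, not TIGraNet's exact layer.

```python
import numpy as np

def spectral_graph_conv(A, X, theta):
    """Hedged sketch: apply the polynomial filter sum_k theta[k] * L^k to
    node signals X, where L is the symmetric normalised Laplacian of A."""
    d = A.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    out, Lk = 0.0, np.eye(len(A))
    for t in theta:
        out = out + t * (Lk @ X)
        Lk = Lk @ L
    return out

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # 3-node path graph
X = np.eye(3)                                            # one-hot node signals
print(spectral_graph_conv(A, X, theta=[0.5, -0.3, 0.1]))
```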
2004.01300
Sabur Baidya
Sabur Baidya, Peyman Tehrani and Marco Levorato
Data-Driven Path Selection for Real-Time Video Streaming at the Network Edge
This article has been accepted for publication in the IEEE International Conference on Communications (ICC) Workshop on "Edge Machine Learning for 5G Mobile Networks and Beyond (EML5G)" 2020
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a framework for the dynamic selection of the wireless channels used to deliver information-rich data streams to edge servers. The approach we propose is data-driven, where a predictor, whose output informs the decision making of the channel selector, is built from available data on the transformation imposed by the network on previously transmitted packets. The proposed technique is contextualized to real-time video streaming for immediate processing. The core of our framework is the notion of probes, that is, short bursts of packets transmitted over unused channels to acquire information while generating a controlled impact on other active links. Results indicate a high accuracy of the prediction output and a significant improvement in terms of received video quality when the prediction output is used to dynamically select the used channel for transmission.
[ { "created": "Thu, 2 Apr 2020 23:08:00 GMT", "version": "v1" } ]
2020-04-06
[ [ "Baidya", "Sabur", "" ], [ "Tehrani", "Peyman", "" ], [ "Levorato", "Marco", "" ] ]
In this paper, we present a framework for the dynamic selection of the wireless channels used to deliver information-rich data streams to edge servers. The approach we propose is data-driven, where a predictor, whose output informs the decision making of the channel selector, is built from available data on the transformation imposed by the network on previously transmitted packets. The proposed technique is contextualized to real-time video streaming for immediate processing. The core of our framework is the notion of probes, that is, short bursts of packets transmitted over unused channels to acquire information while generating a controlled impact on other active links. Results indicate a high accuracy of the prediction output and a significant improvement in terms of received video quality when the prediction output is used to dynamically select the used channel for transmission.
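A minimal sketch of the data-driven pipeline under our assumptions: probe bursts yield per-channel features (e.g., mean delay, jitter, loss), a simple predictor is fit to past outcomes, and transmission switches to the channel with the highest predicted quality. Features and labels below are synthetic; only the pipeline shape is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # probe features per observation
w_true = np.array([-1.0, -0.5, -2.0])    # hidden relation, for simulation only
y = (X @ w_true + 0.3 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(3)
for _ in range(500):                     # plain logistic regression by GD
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

def pick_channel(probe_feats):
    """Select the channel whose probe statistics predict the best quality."""
    scores = 1.0 / (1.0 + np.exp(-(probe_feats @ w)))
    return int(np.argmax(scores))

print(pick_channel(rng.normal(size=(4, 3))))   # 4 candidate channels
```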
cs/0512100
Giorgi Japaridze
Giorgi Japaridze
The logic of interactive Turing reduction
null
Journal of Symbolic Logic 72 (2007), pp. 243-276
10.2178/jsl/1174668394
null
cs.LO cs.AI math.LO
null
The paper gives a soundness and completeness proof for the implicative fragment of intuitionistic calculus with respect to the semantics of computability logic, which understands intuitionistic implication as interactive algorithmic reduction. This concept -- more precisely, the associated concept of reducibility -- is a generalization of Turing reducibility from the traditional, input/output sorts of problems to computational tasks of arbitrary degrees of interactivity. See http://www.cis.upenn.edu/~giorgi/cl.html for a comprehensive online source on computability logic.
[ { "created": "Wed, 28 Dec 2005 07:25:57 GMT", "version": "v1" }, { "created": "Thu, 29 Dec 2005 22:03:06 GMT", "version": "v2" }, { "created": "Fri, 3 Feb 2006 05:44:27 GMT", "version": "v3" }, { "created": "Sat, 11 Feb 2006 15:38:42 GMT", "version": "v4" } ]
2011-04-15
[ [ "Japaridze", "Giorgi", "" ] ]
The paper gives a soundness and completeness proof for the implicative fragment of intuitionistic calculus with respect to the semantics of computability logic, which understands intuitionistic implication as interactive algorithmic reduction. This concept -- more precisely, the associated concept of reducibility -- is a generalization of Turing reducibility from the traditional, input/output sorts of problems to computational tasks of arbitrary degrees of interactivity. See http://www.cis.upenn.edu/~giorgi/cl.html for a comprehensive online source on computability logic.
2309.09228
Nikola Jedli\v{c}kov\'a
Nikola Jedli\v{c}kov\'a, Jan Kratochv\'il
Hamiltonian path and Hamiltonian cycle are solvable in polynomial time in graphs of bounded independence number
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Hamiltonian path (a Hamiltonian cycle) in a graph is a path (a cycle, respectively) that traverses all of its vertices. The problems of deciding their existence in an input graph are well known to be NP-complete; in fact, they belong to the first problems shown to be computationally hard when the theory of NP-completeness was being developed. A lot of research has been devoted to the complexity of Hamiltonian path and Hamiltonian cycle problems for special graph classes, yet only a handful of positive results are known. The complexities of both of these problems have been open even for $4K_1$-free graphs, i.e., graphs of independence number at most $3$. We answer this question in the general setting of graphs of bounded independence number. We also consider a newly introduced problem called \emph{Hamiltonian-$\ell$-Linkage} which is related to the notions of a path cover and of a linkage in a graph. This problem asks whether $\ell$ given pairs of vertices in an input graph can be connected by disjoint paths that altogether traverse all vertices of the graph. For $\ell=1$, Hamiltonian-1-Linkage asks for the existence of a Hamiltonian path connecting a given pair of vertices. Our main result states that for every pair of integers $k$ and $\ell$, the Hamiltonian-$\ell$-Linkage problem is polynomial time solvable for graphs of independence number not exceeding $k$.
[ { "created": "Sun, 17 Sep 2023 09:59:47 GMT", "version": "v1" }, { "created": "Tue, 9 Apr 2024 16:58:02 GMT", "version": "v2" } ]
2024-04-10
[ [ "Jedličková", "Nikola", "" ], [ "Kratochvíl", "Jan", "" ] ]
A Hamiltonian path (a Hamiltonian cycle) in a graph is a path (a cycle, respectively) that traverses all of its vertices. The problems of deciding their existence in an input graph are well known to be NP-complete; in fact, they belong to the first problems shown to be computationally hard when the theory of NP-completeness was being developed. A lot of research has been devoted to the complexity of Hamiltonian path and Hamiltonian cycle problems for special graph classes, yet only a handful of positive results are known. The complexities of both of these problems have been open even for $4K_1$-free graphs, i.e., graphs of independence number at most $3$. We answer this question in the general setting of graphs of bounded independence number. We also consider a newly introduced problem called \emph{Hamiltonian-$\ell$-Linkage} which is related to the notions of a path cover and of a linkage in a graph. This problem asks whether $\ell$ given pairs of vertices in an input graph can be connected by disjoint paths that altogether traverse all vertices of the graph. For $\ell=1$, Hamiltonian-1-Linkage asks for the existence of a Hamiltonian path connecting a given pair of vertices. Our main result states that for every pair of integers $k$ and $\ell$, the Hamiltonian-$\ell$-Linkage problem is polynomial time solvable for graphs of independence number not exceeding $k$.
2101.12691
Tao Wang
Tao Wang, Xiangrui Yang, Gianni Antichi, Anirudh Sivaraman, Aurojit Panda
Isolation mechanisms for high-speed packet-processing pipelines
null
The 19th USENIX Symposium on Networked Systems Design and Implementation (NSDI '22), 2022
null
null
cs.NI cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data-plane programmability is now mainstream. As we find more use cases, deployments need to be able to run multiple packet-processing modules in a single device. These are likely to be developed by independent teams, either within the same organization or from multiple organizations. Therefore, we need isolation mechanisms to ensure that modules on the same device do not interfere with each other. This paper presents Menshen, an extension of the Reconfigurable Match Tables (RMT) pipeline that enforces isolation between different packet-processing modules. Menshen comprises a set of lightweight hardware primitives and an extension to the open source P4-16 reference compiler that act in conjunction to meet this goal. We have prototyped Menshen on two FPGA platforms (NetFPGA and Corundum). We show that our design provides isolation, and allows new modules to be loaded without impacting the ones already running. Finally, we demonstrate the feasibility of implementing Menshen on ASICs by using the FreePDK45nm technology library and the Synopsys DC synthesis software, showing that our design meets timing at a 1GHz clock frequency and needs approximately 6% additional chip area. We have open sourced the code for Menshen's hardware and software at https://isolation.quest/.
[ { "created": "Fri, 29 Jan 2021 17:21:27 GMT", "version": "v1" }, { "created": "Mon, 10 May 2021 20:48:20 GMT", "version": "v2" }, { "created": "Fri, 17 Sep 2021 03:45:01 GMT", "version": "v3" }, { "created": "Wed, 2 Mar 2022 17:26:01 GMT", "version": "v4" } ]
2022-04-19
[ [ "Wang", "Tao", "" ], [ "Yang", "Xiangrui", "" ], [ "Antichi", "Gianni", "" ], [ "Sivaraman", "Anirudh", "" ], [ "Panda", "Aurojit", "" ] ]
Data-plane programmability is now mainstream. As we find more use cases, deployments need to be able to run multiple packet-processing modules in a single device. These are likely to be developed by independent teams, either within the same organization or from multiple organizations. Therefore, we need isolation mechanisms to ensure that modules on the same device do not interfere with each other. This paper presents Menshen, an extension of the Reconfigurable Match Tables (RMT) pipeline that enforces isolation between different packet-processing modules. Menshen comprises a set of lightweight hardware primitives and an extension to the open source P4-16 reference compiler that act in conjunction to meet this goal. We have prototyped Menshen on two FPGA platforms (NetFPGA and Corundum). We show that our design provides isolation, and allows new modules to be loaded without impacting the ones already running. Finally, we demonstrate the feasibility of implementing Menshen on ASICs by using the FreePDK45nm technology library and the Synopsys DC synthesis software, showing that our design meets timing at a 1GHz clock frequency and needs approximately 6% additional chip area. We have open sourced the code for Menshen's hardware and software at https://isolation.quest/.
2301.03512
Julian Schmidt
Thomas Monninger, Julian Schmidt, Jan Rupprecht, David Raba, Julian Jordan, Daniel Frank, Steffen Staab, Klaus Dietmayer
SCENE: Reasoning about Traffic Scenes using Heterogeneous Graph Neural Networks
Thomas Monninger and Julian Schmidt are co-first authors. The order was determined alphabetically
IEEE Robotics and Automation Letters (RA-L), 2023
10.1109/LRA.2023.3234771
null
cs.CV cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding traffic scenes requires considering heterogeneous information about dynamic agents and the static infrastructure. In this work we propose SCENE, a methodology to encode diverse traffic scenes in heterogeneous graphs and to reason about these graphs using a heterogeneous Graph Neural Network encoder and task-specific decoders. The heterogeneous graphs, whose structures are defined by an ontology, consist of different nodes with type-specific node features and different relations with type-specific edge features. In order to exploit all the information given by these graphs, we propose to use cascaded layers of graph convolution. The result is an encoding of the scene. Task-specific decoders can be applied to predict desired attributes of the scene. Extensive evaluation on two diverse binary node classification tasks show the main strength of this methodology: despite being generic, it even manages to outperform task-specific baselines. The further application of our methodology to the task of node classification in various knowledge graphs shows its transferability to other domains.
[ { "created": "Mon, 9 Jan 2023 17:05:28 GMT", "version": "v1" } ]
2023-01-10
[ [ "Monninger", "Thomas", "" ], [ "Schmidt", "Julian", "" ], [ "Rupprecht", "Jan", "" ], [ "Raba", "David", "" ], [ "Jordan", "Julian", "" ], [ "Frank", "Daniel", "" ], [ "Staab", "Steffen", "" ], [ "Dietmayer", "Klaus", "" ] ]
Understanding traffic scenes requires considering heterogeneous information about dynamic agents and the static infrastructure. In this work we propose SCENE, a methodology to encode diverse traffic scenes in heterogeneous graphs and to reason about these graphs using a heterogeneous Graph Neural Network encoder and task-specific decoders. The heterogeneous graphs, whose structures are defined by an ontology, consist of different nodes with type-specific node features and different relations with type-specific edge features. In order to exploit all the information given by these graphs, we propose to use cascaded layers of graph convolution. The result is an encoding of the scene. Task-specific decoders can be applied to predict desired attributes of the scene. Extensive evaluation on two diverse binary node classification tasks show the main strength of this methodology: despite being generic, it even manages to outperform task-specific baselines. The further application of our methodology to the task of node classification in various knowledge graphs shows its transferability to other domains.
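A minimal sketch of one heterogeneous graph convolution step in the spirit of the abstract: each relation type gets its own weight matrix (R-GCN style), per-relation messages are averaged and added to a transformed self feature. Node names, relation names, and the aggregation are illustrative, not SCENE's exact layer.

```python
import numpy as np

def hetero_graph_conv(node_feats, edges, W_rel, W_self):
    """Hedged sketch: one message-passing step over a heterogeneous graph.
    node_feats: {node: feature vector}; edges: (src, relation, dst) triples;
    W_rel: {relation: weight matrix}; W_self: self-loop weight matrix."""
    out = {v: W_self @ x for v, x in node_feats.items()}
    for v in node_feats:
        for r, W in W_rel.items():
            nbrs = [src for (src, rel, dst) in edges if dst == v and rel == r]
            if nbrs:
                msg = np.mean([node_feats[src] for src in nbrs], axis=0)
                out[v] = out[v] + W @ msg
    return {v: np.maximum(h, 0.0) for v, h in out.items()}   # ReLU

d = 4
rng = np.random.default_rng(0)
feats = {"car": rng.normal(size=d), "lane": rng.normal(size=d)}
edges = [("car", "drives_on", "lane"), ("lane", "carries", "car")]
W_rel = {r: rng.normal(size=(d, d)) for r in ("drives_on", "carries")}
print(hetero_graph_conv(feats, edges, W_rel, rng.normal(size=(d, d)))["car"])
```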
1502.02481
Shahbaz Khan
Surender Baswana, Shreejit Ray Chaudhury, Keerti Choudhary and Shahbaz Khan
Dynamic DFS Tree in Undirected Graphs: breaking the $O(m)$ barrier
27 pages, SODA 2016
null
10.1137/1.9781611974331.ch52
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Depth first search (DFS) tree is a fundamental data structure for solving various problems in graphs. It is well known that it takes $O(m+n)$ time to build a DFS tree for a given undirected graph $G=(V,E)$ on $n$ vertices and $m$ edges. We address the problem of maintaining a DFS tree when the graph is undergoing {\em updates} (insertion and deletion of vertices or edges). We present the following results for this problem. (a) Fault tolerant DFS tree: There exists a data structure of size ${O}(m ~polylog~ n)$ such that given any set ${\cal F}$ of failed vertices or edges, a DFS tree of the graph $G\setminus {\cal F}$ can be reported in ${O}(n|{\cal F}| ~polylog~ n)$ time. (b) Fully dynamic DFS tree: There exists a fully dynamic algorithm for maintaining a DFS tree that takes worst case ${O}(\sqrt{mn} ~polylog~ n)$ time per update for any arbitrary online sequence of updates. (c) Incremental DFS tree: Given any arbitrary online sequence of edge insertions, we can maintain a DFS tree in ${O}(n ~polylog~ n)$ worst case time per edge insertion. These are the first $o(m)$ worst case time results for maintaining a DFS tree in a dynamic environment. Moreover, our fully dynamic algorithm provides, in a seamless manner, the first deterministic algorithm with $O(1)$ query time and $o(m)$ worst case update time for the dynamic subgraph connectivity, biconnectivity, and 2-edge connectivity.
[ { "created": "Mon, 9 Feb 2015 13:36:20 GMT", "version": "v1" }, { "created": "Fri, 3 Apr 2015 10:11:02 GMT", "version": "v2" }, { "created": "Mon, 28 Dec 2015 17:34:08 GMT", "version": "v3" }, { "created": "Wed, 7 Feb 2018 15:42:54 GMT", "version": "v4" } ]
2018-02-08
[ [ "Baswana", "Surender", "" ], [ "Chaudhury", "Shreejit Ray", "" ], [ "Choudhary", "Keerti", "" ], [ "Khan", "Shahbaz", "" ] ]
Depth first search (DFS) tree is a fundamental data structure for solving various problems in graphs. It is well known that it takes $O(m+n)$ time to build a DFS tree for a given undirected graph $G=(V,E)$ on $n$ vertices and $m$ edges. We address the problem of maintaining a DFS tree when the graph is undergoing {\em updates} (insertion and deletion of vertices or edges). We present the following results for this problem. (a) Fault tolerant DFS tree: There exists a data structure of size ${O}(m ~polylog~ n)$ such that given any set ${\cal F}$ of failed vertices or edges, a DFS tree of the graph $G\setminus {\cal F}$ can be reported in ${O}(n|{\cal F}| ~polylog~ n)$ time. (b) Fully dynamic DFS tree: There exists a fully dynamic algorithm for maintaining a DFS tree that takes worst case ${O}(\sqrt{mn} ~polylog~ n)$ time per update for any arbitrary online sequence of updates. (c) Incremental DFS tree: Given any arbitrary online sequence of edge insertions, we can maintain a DFS tree in ${O}(n ~polylog~ n)$ worst case time per edge insertion. These are the first $o(m)$ worst case time results for maintaining a DFS tree in a dynamic environment. Moreover, our fully dynamic algorithm provides, in a seamless manner, the first deterministic algorithm with $O(1)$ query time and $o(m)$ worst case update time for the dynamic subgraph connectivity, biconnectivity, and 2-edge connectivity.
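A minimal sketch of the semantics of the fault-tolerant query — a DFS tree of G \ F — shown as the naive O(m + n) recomputation. The paper's data structure answers the same query far faster; reproducing it is beyond a sketch, so this only pins down what must be computed.

```python
def dfs_tree_after_failures(adj, root, failed_vertices=(), failed_edges=()):
    """Hedged sketch: recompute a DFS tree of the graph with the given
    vertices/edges removed; returns parent pointers. Recursive, so suited
    only to small demo graphs."""
    dead_v = set(failed_vertices)
    dead_e = {frozenset(e) for e in failed_edges}
    parent = {root: None}
    def dfs(u):
        for v in adj[u]:
            if v in dead_v or frozenset((u, v)) in dead_e or v in parent:
                continue
            parent[v] = u
            dfs(v)
    dfs(root)
    return parent

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(dfs_tree_after_failures(adj, 0, failed_edges=[(0, 2)]))
```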
1105.6010
Tayeb Bouhadiba
Tayeb Bouhadiba (INRIA Grenoble Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble), Quentin Sabah (INRIA Grenoble Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble), Gwena\"el Delaval (INRIA Grenoble Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble), \'Eric Rutten (INRIA Grenoble Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble)
Synchronous Control of Reconfiguration in Fractal Component-based Systems -- a Case Study
null
N° RR-7631 (2011)
null
RR-7631, RR-7631
cs.SE cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the context of component-based embedded systems, the management of dynamic reconfiguration in adaptive systems is an increasingly important feature. The Fractal component-based framework, and its industrial instantiation MIND, provide for support for control operations in the lifecycle of components. Nevertheless, the use of complex and integrated architectures make the management of this reconfiguration operations difficult to handle by programmers. To address this issue, we propose to use Synchronous languages, which are a complete approach to the design of reactive systems, based on behavior models in the form of transition systems. Furthermore, the design of closed-loop reactive managers of reconfigurations can benefit from formal tools like Discrete Controller Synthesis. In this paper we describe an approach to concretely integrate synchronous reconfiguration managers in Fractal component-based systems. We describe how to model the state space of the control problem, and how to specify the control objectives. We describe the implementation of the resulting manager with the Fractal/Cecilia programming environment, taking advantage of the Comete distributed middleware. We illustrate and validate it with the case study of the Comanche HTTP server on a multi-core execution platform.
[ { "created": "Mon, 30 May 2011 14:46:00 GMT", "version": "v1" }, { "created": "Mon, 6 Jun 2011 11:49:30 GMT", "version": "v2" } ]
2011-06-07
[ [ "Bouhadiba", "Tayeb", "", "INRIA Grenoble Rhône-Alpes / LIG Laboratoire\n d'Informatique de Grenoble" ], [ "Sabah", "Quentin", "", "INRIA Grenoble Rhône-Alpes /\n LIG Laboratoire d'Informatique de Grenoble" ], [ "Delaval", "Gwenaël", "", "INRIA\n Grenoble Rhône-Alpes / LIG Laboratoire d'Informatique de Grenoble" ], [ "Rutten", "Éric", "", "INRIA Grenoble Rhône-Alpes / LIG Laboratoire d'Informatique de\n Grenoble" ] ]
In the context of component-based embedded systems, the management of dynamic reconfiguration in adaptive systems is an increasingly important feature. The Fractal component-based framework, and its industrial instantiation MIND, provide for support for control operations in the lifecycle of components. Nevertheless, the use of complex and integrated architectures make the management of this reconfiguration operations difficult to handle by programmers. To address this issue, we propose to use Synchronous languages, which are a complete approach to the design of reactive systems, based on behavior models in the form of transition systems. Furthermore, the design of closed-loop reactive managers of reconfigurations can benefit from formal tools like Discrete Controller Synthesis. In this paper we describe an approach to concretely integrate synchronous reconfiguration managers in Fractal component-based systems. We describe how to model the state space of the control problem, and how to specify the control objectives. We describe the implementation of the resulting manager with the Fractal/Cecilia programming environment, taking advantage of the Comete distributed middleware. We illustrate and validate it with the case study of the Comanche HTTP server on a multi-core execution platform.
2105.15100
Suman Kumar
Suman Kumar, Kazi Amanul Islam Siddiqui, Mukesh Kumary
Skin-Health Monitoring system using a Wireless Body Area Network
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-nd/4.0/
A new class of sensing paradigm known as lab-on-skin, where stretchable and flexible smart sensor devices are integrated into the skin, provides direct monitoring and diagnostic interfaces to the body. Distributed lab-on-skin wireless sensors have the ability to provide continuous long-term assessment of skin health. This paper proposes a distributed skin health monitoring system using a wireless body area network. The system is responsive to the dynamic changes in skin health, and remotely reports on them. The proposed algorithm detects abnormal skin and creates an energy-efficient data aggregation tree covering the affected area while putting the unnecessary sensors into sleep mode. The algorithm responds to the changing conditions of the skin by dynamically adapting the size and shape of the monitoring trees to that of the abnormal skin areas, thus providing comprehensive monitoring. Simulation results demonstrate the application and utility of the proposed algorithm for changing wound shapes and sizes.
[ { "created": "Thu, 15 Apr 2021 20:32:54 GMT", "version": "v1" } ]
2021-06-01
[ [ "Kumar", "Suman", "" ], [ "Siddiqui", "Kazi Amanul Islam", "" ], [ "Kumary", "Mukesh", "" ] ]
A new class of sensing paradigm known as lab-on-skin, where stretchable and flexible smart sensor devices are integrated into the skin, provides direct monitoring and diagnostic interfaces to the body. Distributed lab-on-skin wireless sensors have the ability to provide continuous long-term assessment of skin health. This paper proposes a distributed skin health monitoring system using a wireless body area network. The system is responsive to the dynamic changes in skin health, and remotely reports on them. The proposed algorithm detects abnormal skin and creates an energy-efficient data aggregation tree covering the affected area while putting the unnecessary sensors into sleep mode. The algorithm responds to the changing conditions of the skin by dynamically adapting the size and shape of the monitoring trees to that of the abnormal skin areas, thus providing comprehensive monitoring. Simulation results demonstrate the application and utility of the proposed algorithm for changing wound shapes and sizes.
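A minimal sketch of the core tree-building idea as we read it: grow a BFS tree from the sink over sensors covering the abnormal region and put every sensor outside the tree to sleep. A real deployment would also relay through healthy nodes when the abnormal region is disconnected; that refinement is omitted here.

```python
from collections import deque

def build_aggregation_tree(adj, abnormal, sink):
    """Hedged sketch: BFS aggregation tree over abnormal sensors rooted at
    the sink; returns (parent pointers, set of sensors put to sleep)."""
    allowed = set(abnormal) | {sink}
    parent, q = {sink: None}, deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in allowed and v not in parent:
                parent[v] = u
                q.append(v)
    sleeping = set(adj) - set(parent)
    return parent, sleeping

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
tree, asleep = build_aggregation_tree(adj, abnormal={1, 3}, sink=0)
print(tree, asleep)   # node 2 is healthy and goes to sleep
```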
2302.08135
Youjia Zhang
Youjia Zhang, Pingzhong Tang
A Truthful Referral Auction Over Networks
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies a mechanism design problem over a network, where agents can only participate by referrals. The Bulow-Klemperer theorem suggests that expanding the number of participants is a more effective approach to increasing revenue than modifying the auction format. However, agents lack the motivation to invite others because doing so intensifies competition among them. On the other hand, misreporting social networks is also a common problem that can reduce revenue. Examples of misreporting include Sybil attacks (an agent pretending to be multiple bidders) and coalition groups (multiple agents pretending to be one agent). To address these challenges, we introduce a novel mechanism called the Truthful Referral Diffusion Mechanism (TRDM). TRDM incentivizes agents to report their social networks truthfully, and some of them are rewarded by the seller for improving revenue. Although some agents overbid in TRDM, the revenue is fixed, and it is higher than the revenue of any mechanism without referrals. TRDM is budget-balanced (non-negative revenue) and generates an efficient outcome (maximized social welfare), making it attractive for both the seller and the buyers as it improves revenue and reward.
[ { "created": "Thu, 16 Feb 2023 08:06:55 GMT", "version": "v1" }, { "created": "Thu, 23 Feb 2023 11:51:42 GMT", "version": "v2" }, { "created": "Thu, 16 Mar 2023 07:51:52 GMT", "version": "v3" } ]
2023-03-17
[ [ "Zhang", "Youjia", "" ], [ "Tang", "Pingzhong", "" ] ]
This paper studies a mechanism design problem over a network, where agents can only participate by referrals. The Bulow-Klemperer theorem suggests that expanding the number of participants is a more effective approach to increasing revenue than modifying the auction format. However, agents lack the motivation to invite others because doing so intensifies competition among them. On the other hand, misreporting social networks is also a common problem that can reduce revenue. Examples of misreporting include Sybil attacks (an agent pretending to be multiple bidders) and coalition groups (multiple agents pretending to be one agent). To address these challenges, we introduce a novel mechanism called the Truthful Referral Diffusion Mechanism (TRDM). TRDM incentivizes agents to report their social networks truthfully, and some of them are rewarded by the seller for improving revenue. Although some agents overbid in TRDM, the revenue is fixed, and it is higher than the revenue of any mechanism without referrals. TRDM is budget-balanced (non-negative revenue) and generates an efficient outcome (maximized social welfare), making it attractive for both the seller and the buyers as it improves revenue and reward.
2307.08364
Lennart Purucker
Lennart Purucker, Lennart Schneider, Marie Anastacio, Joeran Beel, Bernd Bischl, Holger Hoos
Q(D)O-ES: Population-based Quality (Diversity) Optimisation for Post Hoc Ensemble Selection in AutoML
10 pages main paper, 24 pages references and appendix, 4 figures, 16 subfigures, 13 tables, to be published in: International Conference on Automated Machine Learning 2023; affiliations corrected. arXiv admin note: text overlap with arXiv:2307.00286
null
null
null
cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Automated machine learning (AutoML) systems commonly ensemble models post hoc to improve predictive performance, typically via greedy ensemble selection (GES). However, we believe that GES may not always be optimal, as it performs a simple deterministic greedy search. In this work, we introduce two novel population-based ensemble selection methods, QO-ES and QDO-ES, and compare them to GES. While QO-ES optimises solely for predictive performance, QDO-ES also considers the diversity of ensembles within the population, maintaining a diverse set of well-performing ensembles during optimisation based on ideas of quality diversity optimisation. The methods are evaluated using 71 classification datasets from the AutoML benchmark, demonstrating that QO-ES and QDO-ES often outrank GES, although the differences are statistically significant only on validation data. Our results further suggest that diversity can be beneficial for post hoc ensembling but also increases the risk of overfitting.
[ { "created": "Mon, 17 Jul 2023 10:02:01 GMT", "version": "v1" }, { "created": "Wed, 2 Aug 2023 16:09:56 GMT", "version": "v2" } ]
2023-08-03
[ [ "Purucker", "Lennart", "" ], [ "Schneider", "Lennart", "" ], [ "Anastacio", "Marie", "" ], [ "Beel", "Joeran", "" ], [ "Bischl", "Bernd", "" ], [ "Hoos", "Holger", "" ] ]
Automated machine learning (AutoML) systems commonly ensemble models post hoc to improve predictive performance, typically via greedy ensemble selection (GES). However, we believe that GES may not always be optimal, as it performs a simple deterministic greedy search. In this work, we introduce two novel population-based ensemble selection methods, QO-ES and QDO-ES, and compare them to GES. While QO-ES optimises solely for predictive performance, QDO-ES also considers the diversity of ensembles within the population, maintaining a diverse set of well-performing ensembles during optimisation based on ideas of quality diversity optimisation. The methods are evaluated using 71 classification datasets from the AutoML benchmark, demonstrating that QO-ES and QDO-ES often outrank GES, although the differences are statistically significant only on validation data. Our results further suggest that diversity can be beneficial for post hoc ensembling but also increases the risk of overfitting.
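A minimal sketch of the GES baseline (Caruana-style greedy ensemble selection) that the new methods are compared against: repeatedly add, with replacement, the model whose inclusion most lowers validation error of the averaged prediction. The data below is synthetic.

```python
import numpy as np

def greedy_ensemble_selection(preds, y, rounds=10):
    """Hedged sketch of plain GES: preds is a list of (n_samples,)
    probability vectors on validation data; returns a multiset of model
    indices whose multiplicities act as ensemble weights."""
    chosen, current = [], np.zeros_like(y, dtype=float)
    for _ in range(rounds):
        best, best_err = None, np.inf
        for m, p in enumerate(preds):
            err = np.mean(((current + p) / (len(chosen) + 1) > 0.5) != y)
            if err < best_err:
                best, best_err = m, err
        chosen.append(best)
        current += preds[best]
    return chosen

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)
preds = [np.clip(y + 0.4 * rng.normal(size=100), 0, 1) for _ in range(5)]
print(greedy_ensemble_selection(preds, y, rounds=5))
```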
2111.13156
Ammarah Farooq
Ammarah Farooq, Muhammad Awais, Sara Ahmed, Josef Kittler
Global Interaction Modelling in Vision Transformer via Super Tokens
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
With the popularity of Transformer architectures in computer vision, the research focus has shifted towards developing computationally efficient designs. Window-based local attention is one of the major techniques being adopted in recent works. These methods begin with a very small patch size and small embedding dimensions and then perform strided convolution (patch merging) in order to reduce the feature map size and increase the embedding dimensions, hence forming a pyramidal, Convolutional Neural Network (CNN)-like design. In this work, we investigate local and global information modelling in transformers by presenting a novel isotropic architecture that adopts local windows and special tokens, called Super tokens, for self-attention. Specifically, a single Super token is assigned to each image window which captures the rich local details for that window. These tokens are then employed for cross-window communication and global representation learning. Hence, most of the learning is independent of the image patches $(N)$ in the higher layers, and the class embedding is learned solely based on the Super tokens $(N/M^2)$, where $M^2$ is the window size. In standard image classification on ImageNet-1K, the proposed Super tokens based transformer (STT-S25) achieves 83.5\% accuracy, which is equivalent to the Swin transformer (Swin-B) with circa half the number of parameters (49M) and double the inference throughput. The proposed Super token transformer offers a lightweight and promising backbone for visual recognition tasks.
[ { "created": "Thu, 25 Nov 2021 16:22:57 GMT", "version": "v1" } ]
2021-11-29
[ [ "Farooq", "Ammarah", "" ], [ "Awais", "Muhammad", "" ], [ "Ahmed", "Sara", "" ], [ "Kittler", "Josef", "" ] ]
With the popularity of Transformer architectures in computer vision, the research focus has shifted towards developing computationally efficient designs. Window-based local attention is one of the major techniques being adopted in recent works. These methods begin with a very small patch size and small embedding dimensions and then perform strided convolution (patch merging) in order to reduce the feature map size and increase the embedding dimensions, hence forming a pyramidal, Convolutional Neural Network (CNN)-like design. In this work, we investigate local and global information modelling in transformers by presenting a novel isotropic architecture that adopts local windows and special tokens, called Super tokens, for self-attention. Specifically, a single Super token is assigned to each image window which captures the rich local details for that window. These tokens are then employed for cross-window communication and global representation learning. Hence, most of the learning is independent of the image patches $(N)$ in the higher layers, and the class embedding is learned solely based on the Super tokens $(N/M^2)$, where $M^2$ is the window size. In standard image classification on ImageNet-1K, the proposed Super tokens based transformer (STT-S25) achieves 83.5\% accuracy, which is equivalent to the Swin transformer (Swin-B) with circa half the number of parameters (49M) and double the inference throughput. The proposed Super token transformer offers a lightweight and promising backbone for visual recognition tasks.
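A minimal sketch of the two-stage attention the abstract describes, under our simplifications (single head, no learned projections): each window's Super token first attends to that window's patches to gather local detail, then the Super tokens attend among themselves for cross-window, global interaction.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def super_token_layer(patches, sup):
    """Hedged sketch: patches is (n_windows, M2, d), sup is (n_windows, d).
    Stage 1: one query (the Super token) per window over its patches.
    Stage 2: Super tokens attend among themselves (global)."""
    n_windows = patches.shape[0]
    sup = np.stack([attend(sup[i:i+1], patches[i], patches[i])[0]
                    for i in range(n_windows)])
    return attend(sup, sup, sup)

rng = np.random.default_rng(0)
patches = rng.normal(size=(4, 9, 8))   # 4 windows of 3x3 patches, dim 8
sup = rng.normal(size=(4, 8))
print(super_token_layer(patches, sup).shape)   # (4, 8)
```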
2304.08176
Saber Elsayed
Saber Elsayed
Towards Mitigating ChatGPT's Negative Impact on Education: Optimizing Question Design through Bloom's Taxonomy
null
null
null
null
cs.CY cs.CL
http://creativecommons.org/licenses/by/4.0/
The popularity of generative text AI tools in answering questions has led to concerns regarding their potential negative impact on students' academic performance and the challenges that educators face in evaluating student learning. To address these concerns, this paper introduces an evolutionary approach that aims to identify the best set of Bloom's taxonomy keywords to generate questions that these tools have low confidence in answering. The effectiveness of this approach is evaluated through a case study that uses questions from a Data Structures and Representation course being taught at the University of New South Wales in Canberra, Australia. The results demonstrate that the optimization algorithm is able to find keywords from different cognitive levels to create questions that ChatGPT has low confidence in answering. This study is a step toward offering valuable insights to educators seeking to create more effective questions that promote critical thinking among students.
[ { "created": "Fri, 31 Mar 2023 00:01:59 GMT", "version": "v1" } ]
2023-04-18
[ [ "Elsayed", "Saber", "" ] ]
The popularity of generative text AI tools in answering questions has led to concerns regarding their potential negative impact on students' academic performance and the challenges that educators face in evaluating student learning. To address these concerns, this paper introduces an evolutionary approach that aims to identify the best set of Bloom's taxonomy keywords to generate questions that these tools have low confidence in answering. The effectiveness of this approach is evaluated through a case study that uses questions from a Data Structures and Representation course being taught at the University of New South Wales in Canberra, Australia. The results demonstrate that the optimization algorithm is able to find keywords from different cognitive levels to create questions that ChatGPT has low confidence in answering. This study is a step toward offering valuable insights to educators seeking to create more effective questions that promote critical thinking among students.
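A minimal sketch of the kind of evolutionary keyword search the abstract describes follows; the keyword pool, genetic operators, and the `confidence` scorer (which in practice would wrap queries to the generative tool) are all hypothetical placeholders.

```python
import random

BLOOM = ["define", "list", "explain", "apply", "analyse",
         "evaluate", "design", "justify", "critique"]

def evolve_keyword_set(confidence, set_size=3, pop_size=20, generations=50):
    """Toy evolutionary search for a Bloom-keyword set that minimises the
    answering tool's confidence; `confidence` maps a keyword tuple to [0, 1]."""
    pop = [tuple(random.sample(BLOOM, set_size)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=confidence)                    # lowest confidence = fittest
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            genes = set(a) | set(b)                 # crossover of two parents
            if random.random() < 0.2:               # mutation: inject a keyword
                genes.add(random.choice(BLOOM))
            children.append(tuple(random.sample(sorted(genes), set_size)))
        pop = survivors + children
    return min(pop, key=confidence)

# toy scorer: pretend higher-order keywords lower the tool's confidence
print(evolve_keyword_set(
    lambda ks: 1 - 0.1 * sum(k in ("design", "critique", "justify") for k in ks)))
```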
2209.13815
Yuntao Wang
Yuntao Wang, Zhou Su, Abderrahim Benslimane, Qichao Xu, Minghui Dai, and Ruidong Li
A Learning-based Honeypot Game for Collaborative Defense in UAV Networks
Accepted by IEEE Globecom2022
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The proliferation of unmanned aerial vehicles (UAVs) opens up new opportunities for on-demand service provisioning anywhere and anytime, but it also exposes UAVs to various cyber threats. Low/medium-interaction honeypots are regarded as a promising lightweight defense for actively protecting the mobile Internet of Things, especially UAV networks. While existing works have primarily focused on honeypot design and attack pattern recognition, the incentive issue of motivating UAVs to participate (e.g., by sharing attack data trapped in honeypots) in collaboratively resisting distributed and sophisticated attacks remains under-explored. This paper proposes a novel game-based collaborative defense approach to address optimal, fair, and feasible incentive mechanism design, in the presence of network dynamics and UAVs' multi-dimensional private information (e.g., valid defense data (VDD) volume, communication delay, and UAV cost). Specifically, we first develop a honeypot game between UAVs under both partial and complete information asymmetry scenarios. We then devise a contract-theoretic method to solve the optimal VDD-reward contract design problem under partial information asymmetry, while ensuring truthfulness, fairness, and computational efficiency. Furthermore, under complete information asymmetry, we devise a reinforcement learning based distributed method to dynamically design optimal contracts for distinct types of UAVs in the fast-changing network. Experimental simulations show that the proposed scheme can motivate UAVs' collaboration in VDD sharing and enhance defensive effectiveness, compared with existing solutions.
[ { "created": "Wed, 28 Sep 2022 03:40:06 GMT", "version": "v1" } ]
2022-09-29
[ [ "Wang", "Yuntao", "" ], [ "Su", "Zhou", "" ], [ "Benslimane", "Abderrahim", "" ], [ "Xu", "Qichao", "" ], [ "Dai", "Minghui", "" ], [ "Li", "Ruidong", "" ] ]
The proliferation of unmanned aerial vehicles (UAVs) opens up new opportunities for on-demand service provisioning anywhere and anytime, but it also exposes UAVs to various cyber threats. Low/medium-interaction honeypots are regarded as a promising lightweight defense for actively protecting the mobile Internet of Things, especially UAV networks. While existing works have primarily focused on honeypot design and attack pattern recognition, the incentive issue of motivating UAVs to participate (e.g., by sharing attack data trapped in honeypots) in collaboratively resisting distributed and sophisticated attacks remains under-explored. This paper proposes a novel game-based collaborative defense approach to address optimal, fair, and feasible incentive mechanism design, in the presence of network dynamics and UAVs' multi-dimensional private information (e.g., valid defense data (VDD) volume, communication delay, and UAV cost). Specifically, we first develop a honeypot game between UAVs under both partial and complete information asymmetry scenarios. We then devise a contract-theoretic method to solve the optimal VDD-reward contract design problem under partial information asymmetry, while ensuring truthfulness, fairness, and computational efficiency. Furthermore, under complete information asymmetry, we devise a reinforcement learning based distributed method to dynamically design optimal contracts for distinct types of UAVs in the fast-changing network. Experimental simulations show that the proposed scheme can motivate UAVs' collaboration in VDD sharing and enhance defensive effectiveness, compared with existing solutions.
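The contract-theoretic part of the abstract can be illustrated with a tiny numeric check of individual rationality (IR) and incentive compatibility (IC) for a type-dependent VDD-reward menu; the types, costs, and menu values below are toy assumptions, not the paper's derived contracts.

```python
# Toy IR/IC check for a type-dependent (VDD, reward) contract menu.
types = {"costly": 2.0, "efficient": 1.0}            # per-unit cost of sharing VDD
menu = {"costly": (1.0, 2.2), "efficient": (2.0, 3.5)}  # type -> (vdd, reward)

def utility(cost, contract):
    vdd, reward = contract
    return reward - cost * vdd          # linear utility: reward minus sharing cost

for t, c in types.items():
    own = utility(c, menu[t])
    assert own >= 0, f"IR violated for {t}"                         # participation
    for s, other in menu.items():
        assert own >= utility(c, other) - 1e-9, f"IC violated: {t} prefers {s}"
print("menu is IR and IC for these toy parameters")
```

The efficient (low-cost) type keeps a positive information rent, which is the standard screening outcome under information asymmetry.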
2202.03460
Ji Gao
Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan
Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning
Full version of a paper appearing in the 22nd Privacy Enhancing Technologies Symposium (PETS 2022)
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Privacy attacks on machine learning models aim to identify the data that is used to train such models. Such attacks have traditionally been studied on static models that are trained once and are accessible by the adversary. Motivated by new legal requirements, many machine learning methods have recently been extended to support machine unlearning, i.e., updating models as if certain examples were removed from their training sets. However, privacy attacks could potentially become more devastating in this new setting, since an attacker could now access both the original model before deletion and the new model after the deletion. In fact, the very act of deletion might make the deleted record more vulnerable to privacy attacks. Inspired by cryptographic definitions and the differential privacy framework, we formally study the privacy implications of machine unlearning. We formalize (various forms of) deletion inference and deletion reconstruction attacks, in which the adversary aims to either identify which record is deleted or to reconstruct (perhaps part of) the deleted records. We then present successful deletion inference and reconstruction attacks for a variety of machine learning models and tasks such as classification, regression, and language models. Finally, we show that our attacks would provably be precluded if the schemes satisfy (variants of) Deletion Compliance (Garg, Goldwasser, and Vasudevan, Eurocrypt' 20).
[ { "created": "Mon, 7 Feb 2022 19:02:58 GMT", "version": "v1" } ]
2022-02-09
[ [ "Gao", "Ji", "" ], [ "Garg", "Sanjam", "" ], [ "Mahmoody", "Mohammad", "" ], [ "Vasudevan", "Prashant Nalini", "" ] ]
Privacy attacks on machine learning models aim to identify the data that is used to train such models. Such attacks have traditionally been studied on static models that are trained once and are accessible by the adversary. Motivated by new legal requirements, many machine learning methods have recently been extended to support machine unlearning, i.e., updating models as if certain examples were removed from their training sets. However, privacy attacks could potentially become more devastating in this new setting, since an attacker could now access both the original model before deletion and the new model after the deletion. In fact, the very act of deletion might make the deleted record more vulnerable to privacy attacks. Inspired by cryptographic definitions and the differential privacy framework, we formally study the privacy implications of machine unlearning. We formalize (various forms of) deletion inference and deletion reconstruction attacks, in which the adversary aims to either identify which record is deleted or to reconstruct (perhaps part of) the deleted records. We then present successful deletion inference and reconstruction attacks for a variety of machine learning models and tasks such as classification, regression, and language models. Finally, we show that our attacks would provably be precluded if the schemes satisfy (variants of) Deletion Compliance (Garg, Goldwasser, and Vasudevan, Eurocrypt' 20).
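A deletion inference attack of the kind formalized above can be sketched in a few lines: with query access to the model before and after an unlearning update, score each candidate record by its loss increase; the sklearn-style `predict_proba` interface is an assumption.

```python
import numpy as np

def deletion_inference(model_before, model_after, candidates, labels):
    """Toy deletion-inference attack: flag the candidate record whose
    negative log-likelihood increased the most across the unlearning update.
    `model_*` are assumed to expose a sklearn-style predict_proba(X)."""
    def nll(model, X, y):
        p = model.predict_proba(X)
        return -np.log(p[np.arange(len(y)), y] + 1e-12)
    delta = nll(model_after, candidates, labels) - nll(model_before, candidates, labels)
    return int(np.argmax(delta))   # index of the record inferred to be deleted
```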
1308.0686
Arpan Chattopadhyay
Arpan Chattopadhyay, Marceau Coupechoux, and Anurag Kumar
As-You-Go Deployment of a Wireless Network with On-Line Measurements and Backtracking
16 pages; 6 figures; submitted
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are motivated by the need, in some applications, for impromptu or as-you-go deployment of wireless sensor networks. A person walks along a line, making link quality measurements with the previous relay at equally spaced locations, and deploys relays at some of these locations, so as to connect a sensor placed on the line with a sink at the start of the line. In this paper, we extend our earlier work on the problem (see [1]) to incorporate two new aspects: (i) inclusion of path outage in the deployment objective, and (ii) permitting the deployment agent to make measurements over several consecutive steps before selecting a placement location among them (which we call backtracking). We consider a light traffic regime, and formulate the problem as a Markov decision process. Placement algorithms are obtained for two cases: (i) the distance to the source is geometrically distributed with known mean, and (ii) the average cost per step case. We motivate the per-step cost function in terms of several known forwarding protocols for sleep-wake cycling wireless sensor networks. We obtain the structures of the optimal policies for the various formulations, and provide some sensitivity results about the policies and the optimal values. We then provide a numerical study of the algorithms, thus providing insights into the advantage of backtracking, and a comparison with simple heuristic placement policies.
[ { "created": "Sat, 3 Aug 2013 11:37:14 GMT", "version": "v1" }, { "created": "Thu, 22 Aug 2013 04:06:02 GMT", "version": "v2" } ]
2013-08-23
[ [ "Chattopadhyay", "Arpan", "" ], [ "Coupechoux", "Marceau", "" ], [ "Kumar", "Anurag", "" ] ]
We are motivated by the need, in some applications, for impromptu or as-you-go deployment of wireless sensor networks. A person walks along a line, making link quality measurements with the previous relay at equally spaced locations, and deploys relays at some of these locations, so as to connect a sensor placed on the line with a sink at the start of the line. In this paper, we extend our earlier work on the problem (see [1]) to incorporate two new aspects: (i) inclusion of path outage in the deployment objective, and (ii) permitting the deployment agent to make measurements over several consecutive steps before selecting a placement location among them (which we call backtracking). We consider a light traffic regime, and formulate the problem as a Markov decision process. Placement algorithms are obtained for two cases: (i) the distance to the source is geometrically distributed with known mean, and (ii) the average cost per step case. We motivate the per-step cost function in terms of several known forwarding protocols for sleep-wake cycling wireless sensor networks. We obtain the structures of the optimal policies for the various formulations, and provide some sensitivity results about the policies and the optimal values. We then provide a numerical study of the algorithms, thus providing insights into the advantage of backtracking, and a comparison with simple heuristic placement policies.
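To illustrate the Markov-decision-process flavour of the placement problem, here is a toy value-iteration sketch in which the source is found at each step with probability q and the decision trades a relay-placement charge against an outage term that grows with the last hop; the cost shapes and numbers are invented for illustration, not the paper's model.

```python
import numpy as np

# Toy as-you-go placement MDP: 'r' counts steps since the last relay;
# at each step the source is found w.p. q (geometric distance), incurring
# an outage cost that grows with the last-hop length r.
q, c_relay, R = 0.1, 1.0, 30
outage = lambda r: 0.02 * r ** 2

V = np.zeros(R + 2)
for _ in range(500):                          # value iteration to a fixed point
    Vn = V.copy()
    for r in range(1, R + 1):
        place = c_relay + q * outage(1) + (1 - q) * V[1]       # place a relay now
        skip = q * outage(r + 1) + (1 - q) * V[min(r + 1, R)]  # walk one more step
        Vn[r] = min(place, skip)
    V = Vn

threshold = next(r for r in range(1, R + 1)
                 if c_relay + q * outage(1) + (1 - q) * V[1]
                 <= q * outage(r + 1) + (1 - q) * V[min(r + 1, R)])
print("toy policy: place a relay once r >=", threshold)
```

The optimal policy in this toy model is a threshold rule in r, which matches the structural flavour of results in this line of work.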
2008.09706
Pengjie Ren
Yangjun Zhang, Pengjie Ren, Maarten de Rijke
Detecting and Classifying Malevolent Dialogue Responses: Taxonomy, Data and Methodology
under review at JASIST
null
null
null
cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conversational interfaces are increasingly popular as a way of connecting people to information. Corpus-based conversational interfaces are able to generate more diverse and natural responses than template-based or retrieval-based agents. With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses that are inappropriate in terms of content and dialogue acts. Previous studies on the topic of recognizing and classifying inappropriate content are mostly focused on a certain category of malevolence or on single sentences instead of an entire dialogue. In this paper, we define the task of Malevolent Dialogue Response Detection and Classification (MDRDC). We make three contributions to advance research on this task. First, we present a Hierarchical Malevolent Dialogue Taxonomy (HMDT). Second, we create a labelled multi-turn dialogue dataset and formulate the MDRDC task as a hierarchical classification task over this taxonomy. Third, we apply state-of-the-art text classification methods to the MDRDC task and report on extensive experiments aimed at assessing the performance of these approaches.
[ { "created": "Fri, 21 Aug 2020 22:43:27 GMT", "version": "v1" } ]
2020-08-25
[ [ "Zhang", "Yangjun", "" ], [ "Ren", "Pengjie", "" ], [ "de Rijke", "Maarten", "" ] ]
Conversational interfaces are increasingly popular as a way of connecting people to information. Corpus-based conversational interfaces are able to generate more diverse and natural responses than template-based or retrieval-based agents. With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses that are inappropriate in terms of content and dialogue acts. Previous studies on the topic of recognizing and classifying inappropriate content are mostly focused on a certain category of malevolence or on single sentences instead of an entire dialogue. In this paper, we define the task of Malevolent Dialogue Response Detection and Classification (MDRDC). We make three contributions to advance research on this task. First, we present a Hierarchical Malevolent Dialogue Taxonomy (HMDT). Second, we create a labelled multi-turn dialogue dataset and formulate the MDRDC task as a hierarchical classification task over this taxonomy. Third, we apply state-of-the-art text classification methods to the MDRDC task and report on extensive experiments aimed at assessing the performance of these approaches.
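As a toy instance of hierarchical classification over such a taxonomy, the sketch below trains one classifier per level and predicts top-down; the utterances and label sets are placeholders, not the HMDT taxonomy or the paper's state-of-the-art models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: a coarse malevolent/benign level and a finer level.
train_utts = ["you are an idiot", "happy to help you", "give me your password"]
coarse = ["malevolent", "benign", "malevolent"]
fine = ["insult", "none", "phishing"]

level1 = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)).fit(train_utts, coarse)
level2 = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)).fit(train_utts, fine)

utt = ["send me your password now"]
if level1.predict(utt)[0] == "malevolent":     # classify top-down through the hierarchy
    print("fine-grained category:", level2.predict(utt)[0])
```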
2306.00488
Ruizhong Qiu
Ruizhong Qiu, Dingsu Wang, Lei Ying, H. Vincent Poor, Yifang Zhang, Hanghang Tong
Reconstructing Graph Diffusion History from a Single Snapshot
Full version of the KDD 2023 paper (including the appendix)
null
10.1145/3580305.3599488
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion on graphs is ubiquitous with numerous high-impact applications. In these applications, complete diffusion histories play an essential role in terms of identifying dynamical patterns, reflecting on precaution actions, and forecasting intervention effects. Despite their importance, complete diffusion histories are rarely available and are highly challenging to reconstruct due to ill-posedness, explosive search space, and scarcity of training data. To date, few methods exist for diffusion history reconstruction. They are exclusively based on the maximum likelihood estimation (MLE) formulation and require the true diffusion parameters to be known. In this paper, we study an even harder problem, namely reconstructing Diffusion history from A single SnapsHot (DASH), where we seek to reconstruct the history from only the final snapshot without knowing the true diffusion parameters. We start with theoretical analyses that reveal a fundamental limitation of the MLE formulation. We prove: (a) estimation error of diffusion parameters is unavoidable due to the NP-hardness of diffusion parameter estimation, and (b) the MLE formulation is sensitive to the estimation error of diffusion parameters. To overcome the inherent limitation of the MLE formulation, we propose a novel barycenter formulation: finding the barycenter of the posterior distribution of histories, which is provably stable against the estimation error of diffusion parameters. We further develop an effective solver named DIffusion hiTting Times with Optimal proposal (DITTO) by reducing the problem to estimating posterior expected hitting times via the Metropolis--Hastings Markov chain Monte Carlo method (M--H MCMC) and employing an unsupervised graph neural network to learn an optimal proposal to accelerate the convergence of M--H MCMC. We conduct extensive experiments to demonstrate the efficacy of the proposed method.
[ { "created": "Thu, 1 Jun 2023 09:39:32 GMT", "version": "v1" }, { "created": "Sun, 4 Jun 2023 21:25:25 GMT", "version": "v2" }, { "created": "Sat, 1 Jul 2023 06:46:07 GMT", "version": "v3" }, { "created": "Fri, 31 May 2024 23:25:07 GMT", "version": "v4" } ]
2024-06-04
[ [ "Qiu", "Ruizhong", "" ], [ "Wang", "Dingsu", "" ], [ "Ying", "Lei", "" ], [ "Poor", "H. Vincent", "" ], [ "Zhang", "Yifang", "" ], [ "Tong", "Hanghang", "" ] ]
Diffusion on graphs is ubiquitous with numerous high-impact applications. In these applications, complete diffusion histories play an essential role in terms of identifying dynamical patterns, reflecting on precaution actions, and forecasting intervention effects. Despite their importance, complete diffusion histories are rarely available and are highly challenging to reconstruct due to ill-posedness, explosive search space, and scarcity of training data. To date, few methods exist for diffusion history reconstruction. They are exclusively based on the maximum likelihood estimation (MLE) formulation and require the true diffusion parameters to be known. In this paper, we study an even harder problem, namely reconstructing Diffusion history from A single SnapsHot (DASH), where we seek to reconstruct the history from only the final snapshot without knowing the true diffusion parameters. We start with theoretical analyses that reveal a fundamental limitation of the MLE formulation. We prove: (a) estimation error of diffusion parameters is unavoidable due to the NP-hardness of diffusion parameter estimation, and (b) the MLE formulation is sensitive to the estimation error of diffusion parameters. To overcome the inherent limitation of the MLE formulation, we propose a novel barycenter formulation: finding the barycenter of the posterior distribution of histories, which is provably stable against the estimation error of diffusion parameters. We further develop an effective solver named DIffusion hiTting Times with Optimal proposal (DITTO) by reducing the problem to estimating posterior expected hitting times via the Metropolis--Hastings Markov chain Monte Carlo method (M--H MCMC) and employing an unsupervised graph neural network to learn an optimal proposal to accelerate the convergence of M--H MCMC. We conduct extensive experiments to demonstrate the efficacy of the proposed method.
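The Metropolis--Hastings step at the heart of DITTO can be illustrated generically: the sketch below estimates a posterior expectation (standing in for an expected hitting time) with a symmetric random-walk proposal; the log-normal target and proposal scale are illustrative assumptions.

```python
import numpy as np

def metropolis_hastings(log_post, propose, x0, n=10_000, burn=1_000):
    """Generic Metropolis--Hastings sketch. With a symmetric proposal the
    acceptance ratio reduces to a posterior ratio; post-burn-in samples are
    averaged to estimate a posterior expectation."""
    rng = np.random.default_rng(0)
    x, lp = x0, log_post(x0)
    samples = []
    for i in range(n):
        cand = propose(x, rng)
        lp_c = log_post(cand)
        if np.log(rng.random()) < lp_c - lp:   # accept w.p. min(1, ratio)
            x, lp = cand, lp_c
        if i >= burn:
            samples.append(x)
    return np.mean(samples)

# toy target: posterior over a positive "hitting time" ~ log-normal(0, 1)
est = metropolis_hastings(
    lambda t: -np.inf if t <= 0 else -(np.log(t)) ** 2 / 2 - np.log(t),
    lambda t, rng: t + rng.normal(0, 0.5), 1.0)
print(round(est, 2))   # should be near exp(1/2) ~ 1.65
```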
2309.02528
Jayanth Yetukuri
Ian Hardy, Jayanth Yetukuri and Yang Liu
Adaptive Adversarial Training Does Not Increase Recourse Costs
null
In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES '23). Association for Computing Machinery, New York, NY, USA, 432-442
10.1145/3600211.3604704
null
cs.LG cs.CR
http://creativecommons.org/licenses/by/4.0/
Recent work has connected adversarial attack methods and algorithmic recourse methods: both seek minimal changes to an input instance which alter a model's classification decision. It has been shown that traditional adversarial training, which seeks to minimize a classifier's susceptibility to malicious perturbations, increases the cost of generated recourse; with larger adversarial training radii correlating with higher recourse costs. From the perspective of algorithmic recourse, however, the appropriate adversarial training radius has always been unknown. Another recent line of work has motivated adversarial training with adaptive training radii to address the issue of instance-wise variable adversarial vulnerability, showing success in domains with unknown attack radii. This work studies the effects of adaptive adversarial training on algorithmic recourse costs. We establish that the improvements in model robustness induced by adaptive adversarial training show little effect on algorithmic recourse costs, providing a potential avenue for affordable robustness in domains where recoursability is critical.
[ { "created": "Tue, 5 Sep 2023 18:40:22 GMT", "version": "v1" } ]
2023-09-07
[ [ "Hardy", "Ian", "" ], [ "Yetukuri", "Jayanth", "" ], [ "Liu", "Yang", "" ] ]
Recent work has connected adversarial attack methods and algorithmic recourse methods: both seek minimal changes to an input instance which alter a model's classification decision. It has been shown that traditional adversarial training, which seeks to minimize a classifier's susceptibility to malicious perturbations, increases the cost of generated recourse; with larger adversarial training radii correlating with higher recourse costs. From the perspective of algorithmic recourse, however, the appropriate adversarial training radius has always been unknown. Another recent line of work has motivated adversarial training with adaptive training radii to address the issue of instance-wise variable adversarial vulnerability, showing success in domains with unknown attack radii. This work studies the effects of adaptive adversarial training on algorithmic recourse costs. We establish that the improvements in model robustness induced by adaptive adversarial training show little effect on algorithmic recourse costs, providing a potential avenue for affordable robustness in domains where recoursability is critical.
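Recourse cost, as discussed above, can be probed with a simple gradient walk on a linear classifier: move the instance along the minimal-norm direction until the decision flips and report the L2 distance moved; the model and numbers are illustrative.

```python
import numpy as np

def recourse_cost(w, b, x, target=1, step=0.05, max_iters=500):
    """Toy recourse-cost probe for a linear classifier sign(w.x + b):
    walk along the (normalised) weight direction until the decision flips
    and report the L2 distance moved."""
    x = x.astype(float).copy()
    x0 = x.copy()
    for _ in range(max_iters):
        if np.sign(w @ x + b) == target:
            return float(np.linalg.norm(x - x0))    # cost of recourse
        x += step * target * w / np.linalg.norm(w)  # minimal-norm direction
    return float("inf")

w, b = np.array([1.0, -2.0]), -0.5
print(recourse_cost(w, b, np.array([0.0, 1.0])))    # ~1.15 vs analytic 2.5/sqrt(5)
```

Comparing this cost for models trained with fixed versus adaptive adversarial radii is the kind of measurement the study performs.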
1806.11248
Rory Mitchell
Rory Mitchell, Andrey Adinets, Thejaswi Rao, Eibe Frank
XGBoost: Scalable GPU Accelerated Learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe the multi-GPU gradient boosting algorithm implemented in the XGBoost library (https://github.com/dmlc/xgboost). Our algorithm allows fast, scalable training on multi-GPU systems with all of the features of the XGBoost library. We employ data compression techniques to minimise the usage of scarce GPU memory while still allowing highly efficient implementation. Using our algorithm we show that it is possible to process 115 million training instances in under three minutes on a publicly available cloud computing instance. The algorithm is implemented using end-to-end GPU parallelism, with prediction, gradient calculation, feature quantisation, decision tree construction and evaluation phases all computed on device.
[ { "created": "Fri, 29 Jun 2018 02:05:32 GMT", "version": "v1" } ]
2018-07-02
[ [ "Mitchell", "Rory", "" ], [ "Adinets", "Andrey", "" ], [ "Rao", "Thejaswi", "" ], [ "Frank", "Eibe", "" ] ]
We describe the multi-GPU gradient boosting algorithm implemented in the XGBoost library (https://github.com/dmlc/xgboost). Our algorithm allows fast, scalable training on multi-GPU systems with all of the features of the XGBoost library. We employ data compression techniques to minimise the usage of scarce GPU memory while still allowing highly efficient implementation. Using our algorithm we show that it is possible to process 115 million training instances in under three minutes on a publicly available cloud computing instance. The algorithm is implemented using end-to-end GPU parallelism, with prediction, gradient calculation, feature quantisation, decision tree construction and evaluation phases all computed on device.
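A hedged usage sketch of GPU training with the 2018-era XGBoost API follows; `gpu_hist` selects the GPU histogram algorithm the paper describes and requires a CUDA-enabled build of the library (the data here is synthetic).

```python
import numpy as np
import xgboost as xgb

# Synthetic binary classification data.
X = np.random.rand(100_000, 20).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "tree_method": "gpu_hist",   # GPU histogram algorithm (CUDA build required)
    "max_depth": 6,
    "eta": 0.1,
}
booster = xgb.train(params, dtrain, num_boost_round=100)
```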
2303.06987
Georges Gagnere
Georges Gagner\'e (INREV, UP8, UPL), Andy Lavender, C\'edric Plessiet (INREV, AIAC, UP8, UPL), Tim White
Challenges of movement quality using motion capture in theatre
null
MOCO '18: 5th International Conference on Movement and Computing, Jun 2018, Genoa, Italy. pp.1-6
10.1145/3212721.3212883
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe two case studies of the AvatarStaging theatrical mixed reality framework, combining avatars and performers acting in an artistic context. We outline a qualitative approach toward the conditions for stage presence of the avatars. We describe the motion control solutions we experimented with, from the perspective of building a protocol of avatar direction in a mixed reality appropriate to live performance.
[ { "created": "Mon, 13 Mar 2023 10:37:20 GMT", "version": "v1" } ]
2023-03-14
[ [ "Gagneré", "Georges", "", "INREV, UP8, UPL" ], [ "Lavender", "Andy", "", "INREV, AIAC, UP8, UPL" ], [ "Plessiet", "Cédric", "", "INREV, AIAC, UP8, UPL" ], [ "White", "Tim", "" ] ]
We describe two case studies of the AvatarStaging theatrical mixed reality framework, combining avatars and performers acting in an artistic context. We outline a qualitative approach toward the conditions for stage presence of the avatars. We describe the motion control solutions we experimented with, from the perspective of building a protocol of avatar direction in a mixed reality appropriate to live performance.
2012.01468
Yuqi Ouyang
Yuqi Ouyang, Victor Sanchez
Video Anomaly Detection by Estimating Likelihood of Representations
Accepted to ICPR 2020
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video anomaly detection is a challenging task not only because it involves solving many sub-tasks such as motion representation, object localization and action recognition, but also because it is commonly considered as an unsupervised learning problem that involves detecting outliers. Traditionally, solutions to this task have focused on the mapping between video frames and their low-dimensional features, while ignoring the spatial connections of those features. Recent solutions focus on analyzing these spatial connections by using hard clustering techniques, such as K-Means, or applying neural networks to map latent features to a general understanding, such as action attributes. In order to solve video anomaly detection in the latent feature space, we propose a deep probabilistic model to transfer this task into a density estimation problem where latent manifolds are generated by a deep denoising autoencoder and clustered by expectation maximization. Evaluations on several benchmark datasets show the strengths of our model, achieving outstanding performance on challenging datasets.
[ { "created": "Wed, 2 Dec 2020 19:16:22 GMT", "version": "v1" } ]
2020-12-04
[ [ "Ouyang", "Yuqi", "" ], [ "Sanchez", "Victor", "" ] ]
Video anomaly detection is a challenging task not only because it involves solving many sub-tasks such as motion representation, object localization and action recognition, but also because it is commonly considered as an unsupervised learning problem that involves detecting outliers. Traditionally, solutions to this task have focused on the mapping between video frames and their low-dimensional features, while ignoring the spatial connections of those features. Recent solutions focus on analyzing these spatial connections by using hard clustering techniques, such as K-Means, or applying neural networks to map latent features to a general understanding, such as action attributes. In order to solve video anomaly detection in the latent feature space, we propose a deep probabilistic model to transfer this task into a density estimation problem where latent manifolds are generated by a deep denoising autoencoder and clustered by expectation maximization. Evaluations on several benchmark datasets show the strengths of our model, achieving outstanding performance on challenging datasets.
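The likelihood-of-representations idea can be mimicked with classical stand-ins: below, PCA plays the role of the deep denoising autoencoder and a Gaussian mixture fit by expectation maximization models the latent density, with low log-likelihood flagged as anomalous; this is a conceptual sketch, not the paper's deep model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_anomaly_scorer(train_frames, n_latent=16, n_components=5):
    """Sketch: encode frames to a latent space (PCA stand-in for a deep
    autoencoder), fit a GMM by EM, and score by negative log-likelihood."""
    enc = PCA(n_components=n_latent).fit(train_frames)
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(enc.transform(train_frames))
    return lambda frames: -gmm.score_samples(enc.transform(frames))  # anomaly score

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 64))              # stand-in for normal frame features
score = fit_anomaly_scorer(normal)
print(score(rng.normal(5, 1, size=(3, 64))) > score(normal[:3]))  # outliers score higher
```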
1812.10265
Zheng Xin
Xin Zheng, Yanqing Guo, Huaibo Huang, Yi Li, Ran He
A Survey of Deep Facial Attribute Analysis
submitted to International Journal of Computer Vision (IJCV)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Facial attribute analysis has received considerable attention when deep learning techniques made remarkable breakthroughs in this field over the past few years. Deep learning based facial attribute analysis consists of two basic sub-issues: facial attribute estimation (FAE), which recognizes whether facial attributes are present in given images, and facial attribute manipulation (FAM), which synthesizes or removes desired facial attributes. In this paper, we provide a comprehensive survey of deep facial attribute analysis from the perspectives of both estimation and manipulation. First, we summarize a general pipeline that deep facial attribute analysis follows, which comprises two stages: data preprocessing and model construction. Additionally, we introduce the underlying theories of this two-stage pipeline for both FAE and FAM. Second, the datasets and performance metrics commonly used in facial attribute analysis are presented. Third, we create a taxonomy of state-of-the-art methods and review deep FAE and FAM algorithms in detail. Furthermore, several additional facial attribute related issues are introduced, as well as relevant real-world applications. Finally, we discuss possible challenges and promising future research directions.
[ { "created": "Wed, 26 Dec 2018 09:24:07 GMT", "version": "v1" }, { "created": "Thu, 7 Mar 2019 06:58:40 GMT", "version": "v2" }, { "created": "Sun, 27 Oct 2019 03:13:51 GMT", "version": "v3" } ]
2019-10-29
[ [ "Zheng", "Xin", "" ], [ "Guo", "Yanqing", "" ], [ "Huang", "Huaibo", "" ], [ "Li", "Yi", "" ], [ "He", "Ran", "" ] ]
Facial attribute analysis has received considerable attention when deep learning techniques made remarkable breakthroughs in this field over the past few years. Deep learning based facial attribute analysis consists of two basic sub-issues: facial attribute estimation (FAE), which recognizes whether facial attributes are present in given images, and facial attribute manipulation (FAM), which synthesizes or removes desired facial attributes. In this paper, we provide a comprehensive survey of deep facial attribute analysis from the perspectives of both estimation and manipulation. First, we summarize a general pipeline that deep facial attribute analysis follows, which comprises two stages: data preprocessing and model construction. Additionally, we introduce the underlying theories of this two-stage pipeline for both FAE and FAM. Second, the datasets and performance metrics commonly used in facial attribute analysis are presented. Third, we create a taxonomy of state-of-the-art methods and review deep FAE and FAM algorithms in detail. Furthermore, several additional facial attribute related issues are introduced, as well as relevant real-world applications. Finally, we discuss possible challenges and promising future research directions.
1812.09383
Franziska Roesner
John Akers, Gagan Bansal, Gabriel Cadamuro, Christine Chen, Quanze Chen, Lucy Lin, Phoebe Mulcaire, Rajalakshmi Nandakumar, Matthew Rockett, Lucy Simko, John Toman, Tongshuang Wu, Eric Zeng, Bill Zorn, Franziska Roesner
Technology-Enabled Disinformation: Summary, Lessons, and Recommendations
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Technology is increasingly used -- unintentionally (misinformation) or intentionally (disinformation) -- to spread false information at scale, with potentially broad-reaching societal effects. For example, technology enables increasingly realistic false images and videos, and hyper-personal targeting means different people may see different versions of reality. This report is the culmination of a PhD-level special topics course (https://courses.cs.washington.edu/courses/cse599b/18au/) in Computer Science & Engineering at the University of Washington's Paul G. Allen School in the fall of 2018. The goals of this course were to study (1) how technologies and today's technical platforms enable and support the creation and spread of such mis- and disinformation, as well as (2) how technical approaches could be used to mitigate these issues. In this report, we summarize the space of technology-enabled mis- and disinformation based on our investigations, and then surface our lessons and recommendations for technologists, researchers, platform designers, policymakers, and users.
[ { "created": "Fri, 21 Dec 2018 21:46:34 GMT", "version": "v1" }, { "created": "Thu, 3 Jan 2019 14:35:55 GMT", "version": "v2" } ]
2019-01-04
[ [ "Akers", "John", "" ], [ "Bansal", "Gagan", "" ], [ "Cadamuro", "Gabriel", "" ], [ "Chen", "Christine", "" ], [ "Chen", "Quanze", "" ], [ "Lin", "Lucy", "" ], [ "Mulcaire", "Phoebe", "" ], [ "Nandakumar", "Rajalakshmi", "" ], [ "Rockett", "Matthew", "" ], [ "Simko", "Lucy", "" ], [ "Toman", "John", "" ], [ "Wu", "Tongshuang", "" ], [ "Zeng", "Eric", "" ], [ "Zorn", "Bill", "" ], [ "Roesner", "Franziska", "" ] ]
Technology is increasingly used -- unintentionally (misinformation) or intentionally (disinformation) -- to spread false information at scale, with potentially broad-reaching societal effects. For example, technology enables increasingly realistic false images and videos, and hyper-personal targeting means different people may see different versions of reality. This report is the culmination of a PhD-level special topics course (https://courses.cs.washington.edu/courses/cse599b/18au/) in Computer Science & Engineering at the University of Washington's Paul G. Allen School in the fall of 2018. The goals of this course were to study (1) how technologies and today's technical platforms enable and support the creation and spread of such mis- and disinformation, as well as (2) how technical approaches could be used to mitigate these issues. In this report, we summarize the space of technology-enabled mis- and disinformation based on our investigations, and then surface our lessons and recommendations for technologists, researchers, platform designers, policymakers, and users.
1602.06058
Sung-Il Pae
Sung-il Pae
Binarization Trees and Random Number Generation
8 pages
null
null
null
cs.DS cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An m-extracting procedure produces unbiased random bits from a loaded die with m faces. A binarization takes inputs from an m-faced die and produces bit sequences to be fed into a (binary) extracting procedure to obtain random bits. Thus, binary extracting procedures give rise to an m-extracting procedure via a binarization. An entropy-preserving binarization is called complete, and such a procedure has been proposed by Zhou and Bruck. We show that complete binarizations exist in abundance, arising naturally from binary trees with m leaves. The well-known leaf entropy theorem and a closely related structure lemma play important roles in the arguments.
[ { "created": "Fri, 19 Feb 2016 07:12:02 GMT", "version": "v1" }, { "created": "Fri, 11 May 2018 21:58:45 GMT", "version": "v2" } ]
2018-05-15
[ [ "Pae", "Sung-il", "" ] ]
An m-extracting procedure produces unbiased random bits from a loaded die with m faces. A binarization takes inputs from an m-faced die and produces bit sequences to be fed into a (binary) extracting procedure to obtain random bits. Thus, binary extracting procedures give rise to an m-extracting procedure via a binarization. An entropy-preserving binarization is called complete, and such a procedure has been proposed by Zhou and Bruck. We show that complete binarizations exist in abundance, arising naturally from binary trees with m leaves. The well-known leaf entropy theorem and a closely related structure lemma play important roles in the arguments.
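The tree construction can be made concrete for a 3-faced die: each outcome maps to a leaf of a binary tree, the path bits are routed to the internal nodes that emitted them, and a classic von Neumann extractor is applied to each node's stream; the die, its bias, and the tree shape are illustrative.

```python
import random

# leaf -> root-to-leaf path in a binary tree with 3 leaves
# (left leaf is face 0; the right subtree splits into faces 1 and 2).
paths = {0: "0", 1: "10", 2: "11"}

def binarize(rolls):
    """Route each roll's path bits to the internal node that emitted them."""
    streams = {p[:i]: [] for p in paths.values() for i in range(len(p))}
    for r in rolls:
        p = paths[r]
        for i, bit in enumerate(p):
            streams[p[:i]].append(int(bit))
    return streams

def von_neumann(bits):
    """Classic binary extractor: 01 -> 0, 10 -> 1, discard 00/11 pairs."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

rolls = random.choices([0, 1, 2], weights=[0.5, 0.3, 0.2], k=10_000)
unbiased = [bit for s in binarize(rolls).values() for bit in von_neumann(s)]
print(len(unbiased), "unbiased bits from", len(rolls), "loaded rolls")
```

Each internal node sees i.i.d. (possibly biased) bits, so any binary extractor applied per node yields unbiased output, which is the sense in which the tree construction reduces the m-faced problem to binary extraction.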
2405.15994
Junlin Wu
Junlin Wu, Huan Zhang, Yevgeniy Vorobeychik
Verified Safe Reinforcement Learning for Neural Network Dynamic Models
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning reliably safe autonomous control is one of the core problems in trustworthy autonomy. However, training a controller that can be formally verified to be safe remains a major challenge. We introduce a novel approach for learning verified safe control policies in nonlinear neural dynamical systems while maximizing overall performance. Our approach aims to achieve safety in the sense of finite-horizon reachability proofs, and comprises three key parts. The first is a novel curriculum learning scheme that iteratively increases the verified safe horizon. The second exploits the iterative nature of gradient-based learning to enable incremental verification, reusing information from prior verification runs. Finally, we learn multiple verified initial-state-dependent controllers, an idea that is especially valuable for more complex domains where learning a single universal verified safe controller is extremely challenging. Our experiments on five safe control problems demonstrate that our trained controllers can achieve verified safety over horizons that are as much as an order of magnitude longer than state-of-the-art baselines, while maintaining high reward, as well as a perfect safety record over entire episodes.
[ { "created": "Sat, 25 May 2024 00:35:39 GMT", "version": "v1" } ]
2024-05-28
[ [ "Wu", "Junlin", "" ], [ "Zhang", "Huan", "" ], [ "Vorobeychik", "Yevgeniy", "" ] ]
Learning reliably safe autonomous control is one of the core problems in trustworthy autonomy. However, training a controller that can be formally verified to be safe remains a major challenge. We introduce a novel approach for learning verified safe control policies in nonlinear neural dynamical systems while maximizing overall performance. Our approach aims to achieve safety in the sense of finite-horizon reachability proofs, and comprises three key parts. The first is a novel curriculum learning scheme that iteratively increases the verified safe horizon. The second exploits the iterative nature of gradient-based learning to enable incremental verification, reusing information from prior verification runs. Finally, we learn multiple verified initial-state-dependent controllers, an idea that is especially valuable for more complex domains where learning a single universal verified safe controller is extremely challenging. Our experiments on five safe control problems demonstrate that our trained controllers can achieve verified safety over horizons that are as much as an order of magnitude longer than state-of-the-art baselines, while maintaining high reward, as well as a perfect safety record over entire episodes.
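The curriculum part of the approach can be sketched as a loop that extends the horizon only once the current one is verified; `train_step` and `verify_horizon` are hypothetical placeholders for the learner and the reachability verifier, not a real API.

```python
def curriculum_train(policy, train_step, verify_horizon, max_horizon=100):
    """Curriculum sketch over the verified-safe horizon.

    train_step(policy, h): one optimisation step against horizon h (hypothetical).
    verify_horizon(policy, h) -> bool: reachability proof attempt (hypothetical).
    Returns the policy and the longest horizon proved safe."""
    h = 1
    while h <= max_horizon:
        for _ in range(50):                    # optimise at the current horizon
            train_step(policy, h)
            if verify_horizon(policy, h):
                break
        if not verify_horizon(policy, h):
            return policy, h - 1               # longest horizon proved safe
        h += 1                                 # curriculum: extend the proof
    return policy, max_horizon
```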
2205.04816
Jiaqiang Zhang
Jiaqiang Zhang, Senzhang Wang, Songcan Chen
Reconstruction Enhanced Multi-View Contrastive Learning for Anomaly Detection on Attributed Networks
Accepted at IJCAI-ECAI 2022
IJCAI2022
10.24963/ijcai.2022/330
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting abnormal nodes from attributed networks is of great importance in many real applications, such as financial fraud detection and cyber security. This task is challenging due to both the complex interactions between the anomalous nodes and their counterparts and their inconsistency in terms of attributes. This paper proposes a self-supervised learning framework that jointly optimizes a multi-view contrastive learning-based module and an attribute reconstruction-based module to more accurately detect anomalies on attributed networks. Specifically, two contrastive learning views are first established, which allow the model to better encode rich local and global information related to the abnormality. Motivated by the attribute consistency principle between neighboring nodes, a masked autoencoder-based reconstruction module is also introduced to identify nodes with large reconstruction errors, which are then regarded as anomalies. Finally, the two complementary modules are integrated for more accurate detection of the anomalous nodes. Extensive experiments conducted on five benchmark datasets show our model outperforms current state-of-the-art models.
[ { "created": "Tue, 10 May 2022 11:35:32 GMT", "version": "v1" } ]
2023-10-02
[ [ "Zhang", "Jiaqiang", "" ], [ "Wang", "Senzhang", "" ], [ "Chen", "Songcan", "" ] ]
Detecting abnormal nodes from attributed networks is of great importance in many real applications, such as financial fraud detection and cyber security. This task is challenging due to both the complex interactions between the anomalous nodes and their counterparts and their inconsistency in terms of attributes. This paper proposes a self-supervised learning framework that jointly optimizes a multi-view contrastive learning-based module and an attribute reconstruction-based module to more accurately detect anomalies on attributed networks. Specifically, two contrastive learning views are first established, which allow the model to better encode rich local and global information related to the abnormality. Motivated by the attribute consistency principle between neighboring nodes, a masked autoencoder-based reconstruction module is also introduced to identify nodes with large reconstruction errors, which are then regarded as anomalies. Finally, the two complementary modules are integrated for more accurate detection of the anomalous nodes. Extensive experiments conducted on five benchmark datasets show our model outperforms current state-of-the-art models.
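The final integration of the two modules can be sketched as a simple score fusion: z-normalise the contrastive score and the reconstruction error per node and mix them; the weighting `alpha` and the z-normalisation are illustrative choices, not the paper's exact combination rule.

```python
import numpy as np

def combined_anomaly_score(contrast_scores, recon_errors, alpha=0.5):
    """Fuse a contrastive anomaly score with a masked-reconstruction error:
    z-normalise each signal across nodes, then take a convex combination."""
    z = lambda v: (v - v.mean()) / (v.std() + 1e-12)
    return alpha * z(np.asarray(contrast_scores)) + (1 - alpha) * z(np.asarray(recon_errors))

scores = combined_anomaly_score([0.1, 0.2, 0.9], [0.3, 0.2, 1.5])
print(np.argsort(scores)[::-1][:1])   # top-ranked anomalous node
```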
2112.14680
Sebastian Drost
Sebastian Drost, Arne Vogt, Christian Danowski-Buhren, Simon Jirka, Verena Kirstein, Kian Pakzad, and Matthes Rieke
WaCoDiS: Automated Earth Observation Data Processing within an Event-Driven Architecture for Water Monitoring
null
null
10.1016/j.cageo.2021.105003
null
cs.DC cs.SE
http://creativecommons.org/licenses/by/4.0/
To ensure efficient and environmentally friendly water resource management, water management associations need means for efficient water monitoring as well as novel strategies to reduce the pollution of surface and ground water. Traditionally, water management associations operate large sensor networks to meet their needs for hydrological and meteorological measurement data to monitor and model physical processes within catchments. Implementing a comprehensive monitoring system often suffers from sparse coverage of in-situ data. With the evolution of the Copernicus satellite platforms, the broader availability of satellite data provides great potential for deriving complementary information from Earth Observation data. Although the number of satellite data platforms that provide online processing environments is growing, it is still a big challenge to integrate those platforms into the traditional workflows of users from environmental domains such as hydrology. Thus, in this paper, we introduce a software architecture that facilitates the generation of Earth Observation information targeted towards hydrology. The presented WaCoDiS System comprises several microservices as well as standardized interfaces that enable platform-independent processing of satellite data. First, we discuss the contribution of Earth Observation data to water monitoring and derive several challenges regarding the facilitation of satellite data processing. We then describe our system design with a brief overview of the different system components, which form an automated processing pipeline. The suitability of our system is demonstrated as part of a pre-operational deployment for a German water management association. In addition, we demonstrate how our system is capable of integrating satellite data platforms, using the Copernicus Data and Exploitation Platform - Deutschland (CODE-DE) as a reference example.
[ { "created": "Thu, 23 Dec 2021 15:37:10 GMT", "version": "v1" } ]
2021-12-30
[ [ "Drost", "Sebastian", "" ], [ "Vogt", "Arne", "" ], [ "Danowski-Buhren", "Christian", "" ], [ "Jirka", "Simon", "" ], [ "Kirstein", "Verena", "" ], [ "Pakzad", "Kian", "" ], [ "Rieke", "Matthes", "" ] ]
To ensure efficient and environmentally friendly water resource management, water management associations need means for efficient water monitoring as well as novel strategies to reduce the pollution of surface and ground water. Traditionally, water management associations operate large sensor networks to meet their needs for hydrological and meteorological measurement data to monitor and model physical processes within catchments. Implementing a comprehensive monitoring system often suffers from sparse coverage of in-situ data. With the evolution of the Copernicus satellite platforms, the broader availability of satellite data provides great potential for deriving complementary information from Earth Observation data. Although the number of satellite data platforms that provide online processing environments is growing, it is still a big challenge to integrate those platforms into the traditional workflows of users from environmental domains such as hydrology. Thus, in this paper, we introduce a software architecture that facilitates the generation of Earth Observation information targeted towards hydrology. The presented WaCoDiS System comprises several microservices as well as standardized interfaces that enable platform-independent processing of satellite data. First, we discuss the contribution of Earth Observation data to water monitoring and derive several challenges regarding the facilitation of satellite data processing. We then describe our system design with a brief overview of the different system components, which form an automated processing pipeline. The suitability of our system is demonstrated as part of a pre-operational deployment for a German water management association. In addition, we demonstrate how our system is capable of integrating satellite data platforms, using the Copernicus Data and Exploitation Platform - Deutschland (CODE-DE) as a reference example.
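The event-driven style of such an architecture can be miniaturised with a queue-backed worker: a job-definition event triggers a processing handler, loosely mirroring a publish/subscribe pipeline; the event fields and handler are illustrative, not the WaCoDiS API.

```python
import queue
import threading

# Minimal event-driven pipeline sketch: events arrive on a queue and a
# worker thread processes them asynchronously (illustrative fields only).
events = queue.Queue()

def worker():
    while True:
        evt = events.get()
        if evt is None:          # sentinel shuts the worker down
            break
        print(f"processing scene {evt['scene_id']} for product {evt['product']}")
        events.task_done()

threading.Thread(target=worker, daemon=True).start()
events.put({"scene_id": "S2A_20211223", "product": "turbidity"})
events.join()                    # wait until all published events are handled
events.put(None)
```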
1403.3376
Xiang Gao
Xiang Gao, Ove Edfors, Fredrik Rusek, Fredrik Tufvesson
Massive MIMO performance evaluation based on measured propagation data
IEEE Transactions on Wireless Communications, 2015
null
10.1109/TWC.2015.2414413
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA) which has a physically large aperture, and a practical uniform cylindrical array (UCA) which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.
[ { "created": "Thu, 13 Mar 2014 19:22:17 GMT", "version": "v1" }, { "created": "Mon, 16 Mar 2015 15:17:54 GMT", "version": "v2" }, { "created": "Wed, 8 Apr 2015 20:17:35 GMT", "version": "v3" } ]
2016-11-17
[ [ "Gao", "Xiang", "" ], [ "Edfors", "Ove", "" ], [ "Rusek", "Fredrik", "" ], [ "Tufvesson", "Fredrik", "" ] ]
Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA) which has a physically large aperture, and a practical uniform cylindrical array (UCA) which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.
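One standard way to turn such channel matrices into a performance number is a zero-forcing downlink sum rate, sketched below for M=128 base-station antennas and K=8 users; the equal power split and the i.i.d. Rayleigh stand-in for H are illustrative assumptions (a measured H can be substituted directly).

```python
import numpy as np

def zf_sum_rate(H, snr_db=10.0):
    """Zero-forcing downlink sum rate for a K x M channel matrix H,
    with unit-norm ZF beams and equal power split across K users."""
    K, M = H.shape
    snr = 10 ** (snr_db / 10)
    G = np.linalg.pinv(H)                            # ZF precoder (columns = beams)
    G = G / np.linalg.norm(G, axis=0, keepdims=True) # normalise beam power
    eff = np.abs(np.diag(H @ G)) ** 2                # per-user effective gains
    return float(np.sum(np.log2(1 + (snr / K) * eff)))

rng = np.random.default_rng(0)
H_iid = (rng.normal(size=(8, 128)) + 1j * rng.normal(size=(8, 128))) / np.sqrt(2)
print(zf_sum_rate(H_iid), "bit/s/Hz for i.i.d. Rayleigh, M=128, K=8")
```

Running the same function on a measured H and on the i.i.d. baseline gives the kind of comparison the study reports.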