column          type           min    max
id              stringlengths  9      10
submitter       stringlengths  1      64
authors         stringlengths  4      20.7k
title           stringlengths  4      246
comments        stringlengths  1      523
journal-ref     stringlengths  4      404
doi             stringlengths  11     153
report-no       stringlengths  2      254
categories      stringlengths  5      98
license         stringclasses  9 values
orig_abstract   stringlengths  14     3.35k
versions        listlengths    1      60
update_date     stringlengths  10     10
authors_parsed  listlengths    1      1.35k
abstract        stringlengths  11     3.34k
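The schema above can be sketched as a record type. The following is a minimal illustration in Python: the field names come from the schema, while the `to_record` helper and the sample values (drawn from the first row below) are hypothetical, assuming each row carries the fifteen columns in order.

```python
# Minimal sketch of one record under the schema above.
# Column names are taken from the schema; sample values are illustrative.

FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def to_record(values):
    """Zip one flat row of column values into a dict keyed by column name."""
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} columns, got {len(values)}")
    return dict(zip(FIELDS, values))

row = to_record([
    "1807.11836", "Jean Pierre Char", "Jean Pierre Char",
    "Inferring the ground truth through crowdsourcing",
    "6 pages, 1 figure, Intelligent Systems seminar SS18",
    None, None, None,                      # journal-ref, doi, report-no: null
    "cs.LG cs.HC stat.ML",
    "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "Universally valid ground truth ...",  # abstract text elided
    [{"created": "Tue, 31 Jul 2018 14:21:32 GMT", "version": "v1"}],
    "2018-08-01",
    [["Char", "Jean Pierre", ""]],
    "Universally valid ground truth ...",  # abstract text elided
])

# `categories` holds space-separated arXiv category codes.
print(row["categories"].split())
```

Note that `versions` and `authors_parsed` are list-valued columns (hence the `listlengths` type), while every other column is a plain string or null.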
1807.11836
Jean Pierre Char
Jean Pierre Char
Inferring the ground truth through crowdsourcing
6 pages, 1 figure, Intelligent Systems seminar SS18
null
null
null
cs.LG cs.HC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Universally valid ground truth is almost impossible to obtain, or would come at a very high cost. For supervised learning without universally valid ground truth, a recommended approach is crowdsourcing: gathering a large data set annotated by multiple individuals of varying expertise levels, and inferring the ground truth to be used as labels to train the classifier. Nevertheless, due to the sensitivity of the problem at hand (e.g. mitosis detection in breast cancer histology images), the obtained data needs verification and proper assessment before being used for classifier training. Even in the context of organic computing systems, an indisputable ground truth might not always exist. Therefore, it should be inferred through the aggregation and verification of the local knowledge of each autonomous agent.
[ { "created": "Tue, 31 Jul 2018 14:21:32 GMT", "version": "v1" } ]
2018-08-01
[ [ "Char", "Jean Pierre", "" ] ]
2102.13355
Andreas Liesenfeld
Andreas Liesenfeld, G\'abor Parti, Yu-Yin Hsu, Chu-Ren Huang
Predicting gender and age categories in English conversations using lexical, non-lexical, and turn-taking features
10 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines gender and age salience and (stereo)typicality in British English talk with the aim of predicting gender and age categories based on lexical, phrasal and turn-taking features. We examine the SpokenBNC, a corpus of around 11.4 million words of British English conversations, and identify behavioural differences between speakers who are labelled for gender and age categories. We explore differences in language use and turn-taking dynamics and identify a range of characteristics that set the categories apart. We find that female speakers tend to produce more and slightly longer turns, while turns by male speakers feature a higher type-token ratio and a distinct range of minimal particles such as "eh", "uh" and "em". Across age groups, we observe, for instance, that swear words and laughter characterize young speakers' talk, while old speakers tend to produce more truncated words. We then use the observed characteristics to predict gender and age labels of speakers per conversation and per turn as a classification task, showing that non-lexical utterances such as minimal particles that are usually left out of dialog data can contribute to setting the categories apart.
[ { "created": "Fri, 26 Feb 2021 08:23:08 GMT", "version": "v1" } ]
2021-03-01
[ [ "Liesenfeld", "Andreas", "" ], [ "Parti", "Gábor", "" ], [ "Hsu", "Yu-Yin", "" ], [ "Huang", "Chu-Ren", "" ] ]
1804.11088
Wei-Yu Lai
Wei-Yu Lai and Tien-Ruey Hsiang
A Linear-Time Approximation Algorithm for the Orthogonal Terrain Guarding Problem
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the 1.5-dimensional orthogonal terrain guarding problem. In this problem, we are given an x-monotone chain T in which each edge is either horizontal or vertical, and we must determine the minimum number of vertex guards that together see all vertices of T. A vertex v_i sees a point p on T if the line segment connecting v_i to p is on or above T. We provide an optimal O(n)-time algorithm for a subproblem of the orthogonal terrain guarding problem, in which we determine the minimum number of vertex guards for all right (left) convex vertices of T. Finally, we provide a 2-approximation algorithm that solves the 1.5-dimensional orthogonal terrain guarding problem in O(n) time.
[ { "created": "Mon, 30 Apr 2018 09:12:57 GMT", "version": "v1" }, { "created": "Wed, 9 May 2018 09:36:59 GMT", "version": "v2" } ]
2018-05-10
[ [ "Lai", "Wei-Yu", "" ], [ "Hsiang", "Tien-Ruey", "" ] ]
2005.04177
Jay DeYoung
Jay DeYoung, Eric Lehman, Ben Nye, Iain J. Marshall, Byron C. Wallace
Evidence Inference 2.0: More Data, Better Models
Accepted as a workshop paper at BioNLP; updated results from SciBERT to Biomed RoBERTa
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How do we most effectively treat a disease or condition? Ideally, we could consult a database of evidence gleaned from clinical trials to answer such questions. Unfortunately, no such database exists; clinical trial results are instead disseminated primarily via lengthy natural language articles. Perusing all such articles would be prohibitively time-consuming for healthcare practitioners; they instead tend to depend on manually compiled systematic reviews of medical literature to inform care. NLP may speed this process up, and eventually facilitate immediate consult of published evidence. The Evidence Inference dataset was recently released to facilitate research toward this end. This task entails inferring the comparative performance of two treatments, with respect to a given outcome, from a particular article (describing a clinical trial) and identifying supporting evidence. For instance: Does this article report that chemotherapy performed better than surgery for five-year survival rates of operable cancers? In this paper, we collect additional annotations to expand the Evidence Inference dataset by 25\%, provide stronger baseline models, systematically inspect the errors that these make, and probe dataset quality. We also release an abstract only (as opposed to full-texts) version of the task for rapid model prototyping. The updated corpus, documentation, and code for new baselines and evaluations are available at http://evidence-inference.ebm-nlp.com/.
[ { "created": "Fri, 8 May 2020 17:16:35 GMT", "version": "v1" }, { "created": "Thu, 14 May 2020 14:55:33 GMT", "version": "v2" } ]
2020-05-15
[ [ "DeYoung", "Jay", "" ], [ "Lehman", "Eric", "" ], [ "Nye", "Ben", "" ], [ "Marshall", "Iain J.", "" ], [ "Wallace", "Byron C.", "" ] ]
2211.16773
Xiao Yu
Xiao Yu, Qingyang Wu, Kun Qian, Zhou Yu
KRLS: Improving End-to-End Response Generation in Task Oriented Dialog with Reinforced Keywords Learning
Accepted at EMNLP 2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In task-oriented dialogs (TOD), reinforcement learning (RL) algorithms train a model to directly optimize response for task-related metrics. However, RL needs to perform exploration, which can be time-consuming due to the slow auto-regressive sequence generation process. We investigate an approach to create a more efficient RL-based algorithm to improve TOD performance in an offline setting. First, we use a faster generation procedure that samples from independent next-word distributions after training the language model (LM) with supervised learning. We then introduce a fine-grained reward function to help the model focus on learning key information in a dialog, by measuring the importance and semantic closeness of each generated token. Experiments on the MultiWoZ dataset show our new training algorithm, Keywords Reinforcement Learning with Next-word Sampling (KRLS), achieves state-of-the-art performance on the end-to-end response generation task, with a 15% training time reduction compared to a standard RL algorithm using auto-regressive generation.
[ { "created": "Wed, 30 Nov 2022 06:27:46 GMT", "version": "v1" }, { "created": "Tue, 13 Dec 2022 06:56:04 GMT", "version": "v2" }, { "created": "Tue, 20 Dec 2022 05:31:07 GMT", "version": "v3" }, { "created": "Tue, 23 May 2023 04:09:31 GMT", "version": "v4" }, { "created": "Thu, 19 Oct 2023 19:16:12 GMT", "version": "v5" } ]
2023-10-23
[ [ "Yu", "Xiao", "" ], [ "Wu", "Qingyang", "" ], [ "Qian", "Kun", "" ], [ "Yu", "Zhou", "" ] ]
2104.04040
Paul Beaujean
Paul Beaujean and Florian Sikora and Florian Yger
Scaling up graph homomorphism for classification via sampling
17 pages, 1 figure
null
null
null
cs.LG cs.DS
http://creativecommons.org/licenses/by/4.0/
Feature generation is an open topic of investigation in graph machine learning. In this paper, we study the use of graph homomorphism density features as a scalable alternative to homomorphism numbers which retain similar theoretical properties and ability to take into account inductive bias. For this, we propose a high-performance implementation of a simple sampling algorithm which computes additive approximations of homomorphism densities. In the context of graph machine learning, we demonstrate in experiments that simple linear models trained on sample homomorphism densities can achieve performance comparable to graph neural networks on standard graph classification datasets. Finally, we show in experiments on synthetic data that this algorithm scales to very large graphs when implemented with Bloom filters.
[ { "created": "Thu, 8 Apr 2021 20:25:37 GMT", "version": "v1" } ]
2021-04-12
[ [ "Beaujean", "Paul", "" ], [ "Sikora", "Florian", "" ], [ "Yger", "Florian", "" ] ]
1606.00892
Mary Tate
Jacques Louis Du Preez, Mary Tate and Alireza Nili
Developing a Methodology for Online Service Failure Prevention: Reporting on an Action Design Research Project-in-Progress
ISBN# 978-0-646-95337-3. Presented at the Australasian Conference on Information Systems 2015 (arXiv:1605.01032)
null
null
ACIS/2015/47
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
The increasing use of online channels for service delivery raises new challenges in service failure prevention. This work-in-progress paper reports on the first phase of an action-design research project to develop a service failure prevention methodology. In this paper we review the literature on online services, failure prevention and failure recovery and develop a theoretical framework for online service failure prevention. This provides the theoretical grounding for the artefact (the methodology) to be developed. We use this framework to develop an initial draft of our methodology. We then outline the remaining phases of the research, and offer some initial conclusions gained from the project to date.
[ { "created": "Sat, 28 May 2016 03:38:17 GMT", "version": "v1" } ]
2016-06-06
[ [ "Preez", "Jacques Louis Du", "" ], [ "Tate", "Mary", "" ], [ "Nili", "Alireza", "" ] ]
1707.04892
Ahmad Steef
Ahmad Steef, M. N. Shamma, A. Alkhatib
A secure approach for embedding message text on an elliptic curve defined over prime fields, and building 'EC-RSA-ELGamal' Cryptographic System
Elliptic Curve Cryptography, embedding message text on an elliptic curve, RSA Algorithm, https://sites.google.com/site/ijcsis/vol-15-no-6-jun-2017
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new probabilistic approach to embedding message text on an elliptic curve that relies on the concept of the RSA algorithm and its security; the approach allows the message to be recovered from the point only according to the security of the RSA algorithm. By combining the concept of this approach with the concept of the EC-ElGamal cryptographic system, we build a cryptographic system that we name 'EC-RSA-ELGamal'.
[ { "created": "Sun, 16 Jul 2017 14:49:36 GMT", "version": "v1" } ]
2017-07-18
[ [ "Steef", "Ahmad", "" ], [ "Shamma", "M. N.", "" ], [ "Alkhatib", "A.", "" ] ]
2006.04787
Sitan Chen
Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau
Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Connections to Evolvability
52 pages, v2: updated references
null
null
null
cs.LG cs.DS math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we revisit some classic problems on classification under misspecification. In particular, we study the problem of learning halfspaces under Massart noise with rate $\eta$. In a recent work, Diakonikolas, Goulekakis, and Tzamos resolved a long-standing problem by giving the first efficient algorithm for learning to accuracy $\eta + \epsilon$ for any $\epsilon > 0$. However, their algorithm outputs a complicated hypothesis, which partitions space into $\text{poly}(d,1/\epsilon)$ regions. Here we give a much simpler algorithm and in the process resolve a number of outstanding open questions: (1) We give the first proper learner for Massart halfspaces that achieves $\eta + \epsilon$. We also give improved bounds on the sample complexity achievable by polynomial time algorithms. (2) Based on (1), we develop a blackbox knowledge distillation procedure to convert an arbitrarily complex classifier to an equally good proper classifier. (3) By leveraging a simple but overlooked connection to evolvability, we show any SQ algorithm requires super-polynomially many queries to achieve $\mathsf{OPT} + \epsilon$. Moreover we study generalized linear models where $\mathbb{E}[Y|\mathbf{X}] = \sigma(\langle \mathbf{w}^*, \mathbf{X}\rangle)$ for any odd, monotone, and Lipschitz function $\sigma$. This family includes the previously mentioned halfspace models as a special case, but is much richer and includes other fundamental models like logistic regression. We introduce a challenging new corruption model that generalizes Massart noise, and give a general algorithm for learning in this setting. Our algorithms are based on a small set of core recipes for learning to classify in the presence of misspecification. Finally we study our algorithm for learning halfspaces under Massart noise empirically and find that it exhibits some appealing fairness properties.
[ { "created": "Mon, 8 Jun 2020 17:59:11 GMT", "version": "v1" }, { "created": "Wed, 20 Sep 2023 14:40:02 GMT", "version": "v2" } ]
2023-09-21
[ [ "Chen", "Sitan", "" ], [ "Koehler", "Frederic", "" ], [ "Moitra", "Ankur", "" ], [ "Yau", "Morris", "" ] ]
2407.03877
Daniel P. Szabo
Krist\'of B\'erczi, Tam\'as Kir\'aly, Daniel P. Szabo
Multiway Cuts with a Choice of Representatives
null
null
10.4230/LIPIcs.MFCS.2024.18
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study several generalizations of multiway cut where the terminals can be chosen as \emph{representatives} from sets of \emph{candidates} $T_1,\ldots,T_q$. In this setting, one is allowed to choose these representatives so that the minimum-weight cut separating these sets \emph{via their representatives} is as small as possible. We distinguish different cases depending on (A) whether the representative of a candidate set has to be separated from the other candidate sets completely or only from the representatives, and (B) whether there is a single representative for each candidate set or the choice of representative is independent for each pair of candidate sets. For fixed $q$, we give approximation algorithms for each of these problems that match the best known approximation guarantee for multiway cut. Our technical contribution is a new extension of the CKR relaxation that preserves approximation guarantees. For general $q$, we show $o(\log q)$-inapproximability for all cases where the choice of representatives may depend on the pair of candidate sets, as well as for the case where the goal is to separate a fixed node from a single representative from each candidate set. As a positive result, we give a $2$-approximation algorithm for the case where we need to choose a single representative from each candidate set. This is a generalization of the $(2-2/k)$-approximation for k-cut, and we can solve it by relating the tree case to optimization over a gammoid.
[ { "created": "Thu, 4 Jul 2024 12:14:37 GMT", "version": "v1" } ]
2024-07-08
[ [ "Bérczi", "Kristóf", "" ], [ "Király", "Tamás", "" ], [ "Szabo", "Daniel P.", "" ] ]
2405.10845
Jin L.C. Guo
Jin L.C. Guo, Jan-Philipp Stegh\"ofer, Andreas Vogelsang, Jane Cleland-Huang
Natural Language Processing for Requirements Traceability
Book Chapter in the Handbook of Natural Language Processing for Requirements Engineering
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traceability, the ability to trace relevant software artifacts to support reasoning about the quality of the software and its development process, plays a crucial role in requirements and software engineering, particularly for safety-critical systems. In this chapter, we provide a comprehensive overview of the representative tasks in requirement traceability for which natural language processing (NLP) and related techniques have made considerable progress in the past decade. We first present the definition of traceability in the context of requirements and the overall engineering process, as well as other important concepts related to traceability tasks. Then, we discuss two tasks in detail, including trace link recovery and trace link maintenance. We also introduce two other related tasks concerning when trace links are used in practical contexts. For each task, we explain the characteristics of the task, how it can be approached through NLP techniques, and how to design and conduct the experiment to demonstrate the performance of the NLP techniques. We further discuss practical considerations on how to effectively apply NLP techniques and assess their effectiveness regarding the data set collection, the metrics selection, and the role of humans when evaluating the NLP approaches. Overall, this chapter prepares the readers with the fundamental knowledge of designing automated traceability solutions enabled by NLP in practice.
[ { "created": "Fri, 17 May 2024 15:17:00 GMT", "version": "v1" } ]
2024-05-20
[ [ "Guo", "Jin L. C.", "" ], [ "Steghöfer", "Jan-Philipp", "" ], [ "Vogelsang", "Andreas", "" ], [ "Cleland-Huang", "Jane", "" ] ]
2306.16050
Yao Li
Jie Ning, Jiebao Sun, Yao Li, Zhichang Guo, Wangmeng Zuo
Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack
null
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Deep neural networks (DNNs) have shown superior performance compared to traditional image denoising algorithms. However, DNNs are inevitably vulnerable to adversarial attacks. In this paper, we propose an adversarial attack method named denoising-PGD which can successfully attack all current deep denoising models while keeping the noise distribution almost unchanged. We surprisingly find that the current mainstream non-blind denoising models (DnCNN, FFDNet, ECNDNet, BRDNet), blind denoising models (DnCNN-B, Noise2Noise, RDDCNN-B, FAN), plug-and-play models (DPIR, CurvPnP) and unfolding denoising models (DeamNet) almost share the same adversarial sample set on grayscale and color images, respectively. The shared adversarial sample set indicates that all these models are similar in terms of local behavior in the neighborhood of the test samples. Thus, we further propose an indicator to measure the local similarity of models, called robustness similitude. Non-blind denoising models are found to have high robustness similitude with each other, and hybrid-driven models are also found to have high robustness similitude with pure data-driven non-blind denoising models. According to our robustness assessment, data-driven non-blind denoising models are the most robust. We use adversarial training to mitigate the vulnerability to adversarial attacks. Moreover, the model-driven image denoising method BM3D shows resistance to adversarial attacks.
[ { "created": "Wed, 28 Jun 2023 09:30:59 GMT", "version": "v1" }, { "created": "Fri, 7 Jul 2023 02:40:02 GMT", "version": "v2" } ]
2023-07-10
[ [ "Ning", "Jie", "" ], [ "Sun", "Jiebao", "" ], [ "Li", "Yao", "" ], [ "Guo", "Zhichang", "" ], [ "Zuo", "Wangmeng", "" ] ]
Deep neural networks (DNNs) have shown superior performance compared to traditional image denoising algorithms. However, DNNs are inevitably vulnerable to adversarial attacks. In this paper, we propose an adversarial attack method named denoising-PGD which can successfully attack all current deep denoising models while keeping the noise distribution almost unchanged. We surprisingly find that the current mainstream non-blind denoising models (DnCNN, FFDNet, ECNDNet, BRDNet), blind denoising models (DnCNN-B, Noise2Noise, RDDCNN-B, FAN), plug-and-play models (DPIR, CurvPnP) and unfolding denoising models (DeamNet) almost share the same adversarial sample set on both grayscale and color images, respectively. This shared adversarial sample set indicates that all these models are similar in terms of local behavior in the neighborhood of the test samples. Thus, we further propose an indicator, called robustness similitude, to measure the local similarity of models. Non-blind denoising models are found to have high robustness similitude with each other, while hybrid-driven models are also found to have high robustness similitude with pure data-driven non-blind denoising models. According to our robustness assessment, data-driven non-blind denoising models are the most robust. We use adversarial training to mitigate the vulnerability to adversarial attacks. Moreover, the model-driven image denoising method BM3D shows resistance to adversarial attacks.
2209.14426
Kaveh Fathian
Jacqueline Ankenbauer, Kaveh Fathian, Jonathan P. How
View-Invariant Localization using Semantic Objects in Changing Environments
null
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper proposes a novel framework for real-time localization and egomotion tracking of a vehicle in a reference map. The core idea is to map the semantic objects observed by the vehicle and register them to their corresponding objects in the reference map. While several recent works have leveraged semantic information for cross-view localization, the main contribution of this work is a view-invariant formulation that makes the approach directly applicable to any viewpoint configuration for which objects are detectable. Another distinctive feature is robustness to changes in the environment/objects due to a data association scheme suited for extreme outlier regimes (e.g., 90% association outliers). To demonstrate our framework, we consider an example of localizing a ground vehicle in a reference object map using only cars as objects. While only a stereo camera is used for the ground vehicle, we consider reference maps constructed a priori from ground viewpoints using stereo cameras and Lidar scans, and georeferenced aerial images captured at a different date to demonstrate the framework's robustness to different modalities, viewpoints, and environment changes. Evaluations on the KITTI dataset show that over a 3.7 km trajectory, localization occurs in 36 sec and is followed by real-time egomotion tracking with an average position error of 8.5 m in a Lidar reference map, and on an aerial object map where 77% of objects are outliers, localization is achieved in 71 sec with an average position error of 7.9 m.
[ { "created": "Wed, 28 Sep 2022 21:26:38 GMT", "version": "v1" } ]
2022-09-30
[ [ "Ankenbauer", "Jacqueline", "" ], [ "Fathian", "Kaveh", "" ], [ "How", "Jonathan P.", "" ] ]
This paper proposes a novel framework for real-time localization and egomotion tracking of a vehicle in a reference map. The core idea is to map the semantic objects observed by the vehicle and register them to their corresponding objects in the reference map. While several recent works have leveraged semantic information for cross-view localization, the main contribution of this work is a view-invariant formulation that makes the approach directly applicable to any viewpoint configuration for which objects are detectable. Another distinctive feature is robustness to changes in the environment/objects due to a data association scheme suited for extreme outlier regimes (e.g., 90% association outliers). To demonstrate our framework, we consider an example of localizing a ground vehicle in a reference object map using only cars as objects. While only a stereo camera is used for the ground vehicle, we consider reference maps constructed a priori from ground viewpoints using stereo cameras and Lidar scans, and georeferenced aerial images captured at a different date to demonstrate the framework's robustness to different modalities, viewpoints, and environment changes. Evaluations on the KITTI dataset show that over a 3.7 km trajectory, localization occurs in 36 sec and is followed by real-time egomotion tracking with an average position error of 8.5 m in a Lidar reference map, and on an aerial object map where 77% of objects are outliers, localization is achieved in 71 sec with an average position error of 7.9 m.
1306.3767
Simon Harper
Simon Harper, Tianyi Chen, and Yeliz Yesilada
Controlled Experimentation in Naturalistic Mobile Settings
12 pages, 3 tables
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Performing controlled user experiments on small devices in naturalistic mobile settings has always proved to be a difficult undertaking for many Human Factors researchers. Difficulties exist, not least, because mimicking natural small device usage suffers from a lack of unobtrusive data to guide experimental design, and then validate that the experiment is proceeding naturally. Here we use observational data to derive a set of protocols and a simple checklist of validations which can be built into the design of any controlled experiment focused on the user interface of a small device. These have been used within a series of experimental designs to measure the utility and application of experimental software. The key point is the validation checks -- based on the observed behaviour of 400 mobile users -- to ratify that a controlled experiment is being perceived as natural by the user. While the design of the experimental route which the user follows is a major factor in the experimental setup, without check validations based on unobtrusive observed data there can be no certainty that an experiment designed to be natural is actually progressing as the design implies.
[ { "created": "Mon, 17 Jun 2013 08:36:11 GMT", "version": "v1" }, { "created": "Tue, 18 Jun 2013 06:57:13 GMT", "version": "v2" } ]
2013-06-19
[ [ "Harper", "Simon", "" ], [ "Chen", "Tianyi", "" ], [ "Yesilada", "Yeliz", "" ] ]
Performing controlled user experiments on small devices in naturalistic mobile settings has always proved to be a difficult undertaking for many Human Factors researchers. Difficulties exist, not least, because mimicking natural small device usage suffers from a lack of unobtrusive data to guide experimental design, and then validate that the experiment is proceeding naturally. Here we use observational data to derive a set of protocols and a simple checklist of validations which can be built into the design of any controlled experiment focused on the user interface of a small device. These have been used within a series of experimental designs to measure the utility and application of experimental software. The key point is the validation checks -- based on the observed behaviour of 400 mobile users -- to ratify that a controlled experiment is being perceived as natural by the user. While the design of the experimental route which the user follows is a major factor in the experimental setup, without check validations based on unobtrusive observed data there can be no certainty that an experiment designed to be natural is actually progressing as the design implies.
1909.05700
Ante Qu
Alejandro M. Castro, Ante Qu, Naveen Kuppuswamy, Alex Alspach, and Michael Sherman
A Transition-Aware Method for the Simulation of Compliant Contact with Regularized Friction
Published in IEEE RA-L and accepted to ICRA 2020. The first two authors contributed equally to this work. Copyright 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media. The supplemental video is available publicly at https://youtu.be/p2p0Z1Bf91Y . 8 pages with 9 figures
in IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1859-1866, April 2020
10.1109/LRA.2020.2969933
null
cs.RO cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multibody simulation with frictional contact has been a challenging subject of research for the past thirty years. Rigid-body assumptions are commonly used to approximate the physics of contact, and together with Coulomb friction, lead to challenging-to-solve nonlinear complementarity problems (NCP). On the other hand, robot grippers often introduce significant compliance. Compliant contact, combined with regularized friction, can be modeled entirely with ODEs, avoiding NCP solves. Unfortunately, regularized friction introduces high-frequency stiff dynamics and even implicit methods struggle with these systems, especially during slip-stick transitions. To improve the performance of implicit integration for these systems we introduce a Transition-Aware Line Search (TALS), which greatly improves the convergence of the Newton-Raphson iterations performed by implicit integrators. We find that TALS works best with semi-implicit integration, but that the explicit treatment of normal compliance can be problematic. To address this, we develop a Transition-Aware Modified Semi-Implicit (TAMSI) integrator that has similar computational cost to semi-implicit methods but implicitly couples compliant contact forces, leading to a more robust method. We evaluate the robustness, accuracy and performance of TAMSI and demonstrate our approach alongside relevant sim-to-real manipulation tasks.
[ { "created": "Thu, 12 Sep 2019 14:16:23 GMT", "version": "v1" }, { "created": "Tue, 17 Sep 2019 15:16:33 GMT", "version": "v2" }, { "created": "Mon, 20 Apr 2020 00:49:53 GMT", "version": "v3" } ]
2020-04-21
[ [ "Castro", "Alejandro M.", "" ], [ "Qu", "Ante", "" ], [ "Kuppuswamy", "Naveen", "" ], [ "Alspach", "Alex", "" ], [ "Sherman", "Michael", "" ] ]
Multibody simulation with frictional contact has been a challenging subject of research for the past thirty years. Rigid-body assumptions are commonly used to approximate the physics of contact, and together with Coulomb friction, lead to challenging-to-solve nonlinear complementarity problems (NCP). On the other hand, robot grippers often introduce significant compliance. Compliant contact, combined with regularized friction, can be modeled entirely with ODEs, avoiding NCP solves. Unfortunately, regularized friction introduces high-frequency stiff dynamics and even implicit methods struggle with these systems, especially during slip-stick transitions. To improve the performance of implicit integration for these systems we introduce a Transition-Aware Line Search (TALS), which greatly improves the convergence of the Newton-Raphson iterations performed by implicit integrators. We find that TALS works best with semi-implicit integration, but that the explicit treatment of normal compliance can be problematic. To address this, we develop a Transition-Aware Modified Semi-Implicit (TAMSI) integrator that has similar computational cost to semi-implicit methods but implicitly couples compliant contact forces, leading to a more robust method. We evaluate the robustness, accuracy and performance of TAMSI and demonstrate our approach alongside relevant sim-to-real manipulation tasks.
2406.07933
Chris Yuhao Liu
Chris Yuhao Liu, Yaxuan Wang, Jeffrey Flanigan, Yang Liu
Large Language Model Unlearning via Embedding-Corrupted Prompts
55 pages, 4 figures, 66 tables
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have advanced to encompass extensive knowledge across diverse domains. Yet controlling what a large language model should not know is important for ensuring alignment and thus safe use. However, accurately and efficiently unlearning knowledge from an LLM remains challenging due to the potential collateral damage caused by the fuzzy boundary between retention and forgetting, and the large computational requirements for optimization across state-of-the-art models with hundreds of billions of parameters. In this work, we present Embedding-COrrupted (ECO) Prompts, a lightweight unlearning framework for large language models to address both the challenges of knowledge entanglement and unlearning efficiency. Instead of relying on the LLM itself to unlearn, we enforce an unlearned state during inference by employing a prompt classifier to identify and safeguard prompts to forget. We learn corruptions added to prompt embeddings via zeroth order optimization toward the unlearning objective offline and corrupt prompts flagged by the classifier during inference. We find that these embedding-corrupted prompts not only lead to desirable outputs that satisfy the unlearning objective but also closely approximate the output from a model that has never been trained on the data intended for forgetting. Through extensive experiments on unlearning, we demonstrate the superiority of our method in achieving promising unlearning at nearly zero side effects in general domains and domains closely related to the unlearned ones. Additionally, we highlight the scalability of our method to 100 LLMs, ranging from 0.5B to 236B parameters, incurring no additional cost as the number of parameters increases.
[ { "created": "Wed, 12 Jun 2024 06:56:20 GMT", "version": "v1" } ]
2024-06-13
[ [ "Liu", "Chris Yuhao", "" ], [ "Wang", "Yaxuan", "" ], [ "Flanigan", "Jeffrey", "" ], [ "Liu", "Yang", "" ] ]
Large language models (LLMs) have advanced to encompass extensive knowledge across diverse domains. Yet controlling what a large language model should not know is important for ensuring alignment and thus safe use. However, accurately and efficiently unlearning knowledge from an LLM remains challenging due to the potential collateral damage caused by the fuzzy boundary between retention and forgetting, and the large computational requirements for optimization across state-of-the-art models with hundreds of billions of parameters. In this work, we present Embedding-COrrupted (ECO) Prompts, a lightweight unlearning framework for large language models to address both the challenges of knowledge entanglement and unlearning efficiency. Instead of relying on the LLM itself to unlearn, we enforce an unlearned state during inference by employing a prompt classifier to identify and safeguard prompts to forget. We learn corruptions added to prompt embeddings via zeroth order optimization toward the unlearning objective offline and corrupt prompts flagged by the classifier during inference. We find that these embedding-corrupted prompts not only lead to desirable outputs that satisfy the unlearning objective but also closely approximate the output from a model that has never been trained on the data intended for forgetting. Through extensive experiments on unlearning, we demonstrate the superiority of our method in achieving promising unlearning at nearly zero side effects in general domains and domains closely related to the unlearned ones. Additionally, we highlight the scalability of our method to 100 LLMs, ranging from 0.5B to 236B parameters, incurring no additional cost as the number of parameters increases.
1101.0133
Nihar Shah
K. V. Rashmi, Nihar B. Shah, P. Vijay Kumar
Enabling Node Repair in Any Erasure Code for Distributed Storage
IEEE International Symposium on Information Theory (ISIT) 2011 (to be presented)
null
10.1109/ISIT.2011.6033732
null
cs.IT cs.DC cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two 'types' and encoded using two codes in a manner that reduces the problem of node-repair to that of erasure-decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair-bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
[ { "created": "Thu, 30 Dec 2010 19:00:00 GMT", "version": "v1" }, { "created": "Thu, 30 Jun 2011 10:10:00 GMT", "version": "v2" } ]
2016-11-17
[ [ "Rashmi", "K. V.", "" ], [ "Shah", "Nihar B.", "" ], [ "Kumar", "P. Vijay", "" ] ]
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two 'types' and encoded using two codes in a manner that reduces the problem of node-repair to that of erasure-decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair-bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
2401.17168
Sergey Pupyrev
Amir Ayupov and Maksim Panchenko and Sergey Pupyrev
Stale Profile Matching
ACM SIGPLAN 33rd International Conference on Compiler Construction (CC 2024)
null
null
null
cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Profile-guided optimizations rely on profile data for directing compilers to generate optimized code. To achieve the maximum performance boost, profile data needs to be collected on the same version of the binary that is being optimized. In practice, however, there is typically a gap between the profile collection and the release, which makes a portion of the profile invalid for optimizations. This phenomenon is known as profile staleness, and it is a serious practical problem for data-center workloads, both for compilers and binary optimizers. In this paper we thoroughly study the staleness problem and propose the first practical solution for utilizing profiles collected on binaries built from several revisions behind the release. Our algorithm is developed and implemented in a mainstream open-source post-link optimizer, BOLT. An extensive evaluation on a variety of standalone benchmarks and production services indicates that the new method recovers up to $0.8$ of the maximum BOLT benefit, even when most of the input profile data is stale and would have been discarded by the optimizer otherwise.
[ { "created": "Tue, 30 Jan 2024 16:56:32 GMT", "version": "v1" } ]
2024-01-31
[ [ "Ayupov", "Amir", "" ], [ "Panchenko", "Maksim", "" ], [ "Pupyrev", "Sergey", "" ] ]
Profile-guided optimizations rely on profile data for directing compilers to generate optimized code. To achieve the maximum performance boost, profile data needs to be collected on the same version of the binary that is being optimized. In practice, however, there is typically a gap between the profile collection and the release, which makes a portion of the profile invalid for optimizations. This phenomenon is known as profile staleness, and it is a serious practical problem for data-center workloads, both for compilers and binary optimizers. In this paper we thoroughly study the staleness problem and propose the first practical solution for utilizing profiles collected on binaries built from several revisions behind the release. Our algorithm is developed and implemented in a mainstream open-source post-link optimizer, BOLT. An extensive evaluation on a variety of standalone benchmarks and production services indicates that the new method recovers up to $0.8$ of the maximum BOLT benefit, even when most of the input profile data is stale and would have been discarded by the optimizer otherwise.
2011.11860
Yixin Liu
Zhao Li, Yixin Liu, Zhen Zhang, Shirui Pan, Jianliang Gao, Jiajun Bu
Cyclic Label Propagation for Graph Semi-supervised Learning
19 pages, 4 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks (GNNs) have emerged as effective approaches for graph analysis, especially in the scenario of semi-supervised learning. Despite their success, GNNs often suffer from over-smoothing and over-fitting problems, which affect their performance on node classification tasks. Our analysis shows that an alternative method, the label propagation algorithm (LPA), avoids the aforementioned problems and is thus a promising choice for graph semi-supervised learning. Nevertheless, the intrinsic limitations of LPA in feature exploitation and relation modeling make label propagation less effective. To overcome these limitations, we introduce a novel framework for graph semi-supervised learning termed Cyclic Label Propagation (CycProp for short), which integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner to exploit the advantages of both GNNs and LPA. In particular, our proposed CycProp updates the node embeddings learned by the GNN module with information augmented by label propagation, while in turn fine-tuning the weighted graph of label propagation with the help of the node embeddings. After the model converges, reliably predicted labels and informative node embeddings are obtained from the LPA and GNN modules, respectively. Extensive experiments on various real-world datasets are conducted, and the experimental results empirically demonstrate that the proposed CycProp model can achieve relatively significant gains over the state-of-the-art methods.
[ { "created": "Tue, 24 Nov 2020 02:55:40 GMT", "version": "v1" } ]
2020-11-25
[ [ "Li", "Zhao", "" ], [ "Liu", "Yixin", "" ], [ "Zhang", "Zhen", "" ], [ "Pan", "Shirui", "" ], [ "Gao", "Jianliang", "" ], [ "Bu", "Jiajun", "" ] ]
Graph neural networks (GNNs) have emerged as effective approaches for graph analysis, especially in the scenario of semi-supervised learning. Despite their success, GNNs often suffer from over-smoothing and over-fitting problems, which affect their performance on node classification tasks. Our analysis shows that an alternative method, the label propagation algorithm (LPA), avoids the aforementioned problems and is thus a promising choice for graph semi-supervised learning. Nevertheless, the intrinsic limitations of LPA in feature exploitation and relation modeling make label propagation less effective. To overcome these limitations, we introduce a novel framework for graph semi-supervised learning termed Cyclic Label Propagation (CycProp for short), which integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner to exploit the advantages of both GNNs and LPA. In particular, our proposed CycProp updates the node embeddings learned by the GNN module with information augmented by label propagation, while in turn fine-tuning the weighted graph of label propagation with the help of the node embeddings. After the model converges, reliably predicted labels and informative node embeddings are obtained from the LPA and GNN modules, respectively. Extensive experiments on various real-world datasets are conducted, and the experimental results empirically demonstrate that the proposed CycProp model can achieve relatively significant gains over the state-of-the-art methods.
1308.3174
Jesse Shore
Benjamin Lubin, Jesse Shore and Vatche Ishakian
Communication Network Design: Balancing Modularity and Mixing via Optimal Graph Spectra
null
null
null
null
cs.SI cs.GT cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By leveraging information technologies, organizations now have the ability to design their communication networks and crowdsourcing platforms to pursue various performance goals, but existing research on network design does not account for the specific features of social networks, such as the notion of teams. We fill this gap by demonstrating how desirable aspects of organizational structure can be mapped parsimoniously onto the spectrum of the graph Laplacian, allowing the specification of structural objectives, and by building on recent advances in non-convex programming to optimize them. This design framework is general, but we focus here on the problem of creating graphs that balance high modularity and low mixing time, and show how "liaisons" rather than brokers maximize this objective.
[ { "created": "Wed, 14 Aug 2013 16:29:28 GMT", "version": "v1" } ]
2013-08-15
[ [ "Lubin", "Benjamin", "" ], [ "Shore", "Jesse", "" ], [ "Ishakian", "Vatche", "" ] ]
By leveraging information technologies, organizations now have the ability to design their communication networks and crowdsourcing platforms to pursue various performance goals, but existing research on network design does not account for the specific features of social networks, such as the notion of teams. We fill this gap by demonstrating how desirable aspects of organizational structure can be mapped parsimoniously onto the spectrum of the graph Laplacian, allowing the specification of structural objectives, and by building on recent advances in non-convex programming to optimize them. This design framework is general, but we focus here on the problem of creating graphs that balance high modularity and low mixing time, and show how "liaisons" rather than brokers maximize this objective.
2106.08013
Man Zhou
Man Zhou, Qian Wang (Senior Member, IEEE), Qi Li (Senior Member, IEEE), Peipei Jiang, Jingxiao Yang, Chao Shen (Senior Member, IEEE), Cong Wang (Fellow, IEEE), and Shouhong Ding
Securing Face Liveness Detection Using Unforgeable Lip Motion Patterns
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face authentication usually utilizes deep learning models to verify users with high recognition accuracy. However, face authentication systems are vulnerable to various attacks that cheat the models by manipulating the digital counterparts of human faces. So far, many liveness detection schemes have been developed to prevent such attacks. Unfortunately, the attacker can still bypass these schemes by constructing wide-ranging sophisticated attacks. We study the security of existing face authentication services (e.g., Microsoft, Amazon, and Face++) and typical liveness detection approaches. In particular, we develop a new type of attack, i.e., the low-cost 3D projection attack that projects manipulated face videos on a 3D face model, which can easily evade these face authentication services and liveness detection approaches. To this end, we propose FaceLip, a novel liveness detection scheme for face authentication, which utilizes unforgeable lip motion patterns built upon well-designed acoustic signals to enable a strong security guarantee. The unique lip motion patterns for each user are unforgeable because FaceLip verifies the patterns by capturing and analyzing the acoustic signals that are dynamically generated according to random challenges, which ensures that our signals for liveness detection cannot be manipulated. Specifically, we develop robust algorithms for FaceLip to eliminate the impact of noisy signals in the environment and thus can accurately infer the lip motions at larger distances. We prototype FaceLip on off-the-shelf smartphones and conduct extensive experiments under different settings. Our evaluation with 44 participants validates the effectiveness and robustness of FaceLip.
[ { "created": "Tue, 15 Jun 2021 09:46:46 GMT", "version": "v1" } ]
2021-06-16
[ [ "Zhou", "Man", "", "Senior Member, IEEE" ], [ "Wang", "Qian", "", "Senior Member, IEEE" ], [ "Li", "Qi", "", "Senior Member,\n IEEE" ], [ "Jiang", "Peipei", "", "Senior Member, IEEE" ], [ "Yang", "Jingxiao", "", "Senior Member, IEEE" ], [ "Shen", "Chao", "", "Senior Member, IEEE" ], [ "Wang", "Cong", "", "Fellow, IEEE" ], [ "Ding", "Shouhong", "" ] ]
Face authentication usually utilizes deep learning models to verify users with high recognition accuracy. However, face authentication systems are vulnerable to various attacks that cheat the models by manipulating the digital counterparts of human faces. So far, many liveness detection schemes have been developed to prevent such attacks. Unfortunately, the attacker can still bypass these schemes by constructing wide-ranging sophisticated attacks. We study the security of existing face authentication services (e.g., Microsoft, Amazon, and Face++) and typical liveness detection approaches. In particular, we develop a new type of attack, i.e., the low-cost 3D projection attack that projects manipulated face videos on a 3D face model, which can easily evade these face authentication services and liveness detection approaches. To this end, we propose FaceLip, a novel liveness detection scheme for face authentication, which utilizes unforgeable lip motion patterns built upon well-designed acoustic signals to enable a strong security guarantee. The unique lip motion patterns for each user are unforgeable because FaceLip verifies the patterns by capturing and analyzing the acoustic signals that are dynamically generated according to random challenges, which ensures that our signals for liveness detection cannot be manipulated. Specifically, we develop robust algorithms for FaceLip to eliminate the impact of noisy signals in the environment and thus can accurately infer the lip motions at larger distances. We prototype FaceLip on off-the-shelf smartphones and conduct extensive experiments under different settings. Our evaluation with 44 participants validates the effectiveness and robustness of FaceLip.
2404.17452
Richard Michael
Richard Michael, Simon Bartels, Miguel Gonz\'alez-Duque, Yevgen Zainchkovskyy, Jes Frellsen, S{\o}ren Hauberg, Wouter Boomsma
A Continuous Relaxation for Discrete Bayesian Optimization
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
Optimizing efficiently over discrete data with only a few available target observations is a challenge in Bayesian optimization. We propose a continuous relaxation of the objective function and show that inference and optimization can be computationally tractable. We consider in particular the optimization domain where very few observations and strict budgets exist, motivated by optimizing protein sequences for expensive-to-evaluate biochemical properties. The advantages of our approach are two-fold: the problem is treated in the continuous setting, and available prior knowledge over sequences can be incorporated directly. More specifically, we utilize available and learned distributions over the problem domain for a weighting of the Hellinger distance, which yields a covariance function. We show that the resulting acquisition function can be optimized with both continuous and discrete optimization algorithms, and we empirically assess our method on two biochemical sequence optimization tasks.
[ { "created": "Fri, 26 Apr 2024 14:47:40 GMT", "version": "v1" } ]
2024-04-29
[ [ "Michael", "Richard", "" ], [ "Bartels", "Simon", "" ], [ "González-Duque", "Miguel", "" ], [ "Zainchkovskyy", "Yevgen", "" ], [ "Frellsen", "Jes", "" ], [ "Hauberg", "Søren", "" ], [ "Boomsma", "Wouter", "" ] ]
To optimize efficiently over discrete data and with only few available target observations is a challenge in Bayesian optimization. We propose a continuous relaxation of the objective function and show that inference and optimization can be computationally tractable. We consider in particular the optimization domain where very few observations and strict budgets exist; motivated by optimizing protein sequences for expensive-to-evaluate bio-chemical properties. The advantages of our approach are two-fold: the problem is treated in the continuous setting, and available prior knowledge over sequences can be incorporated directly. More specifically, we utilize available and learned distributions over the problem domain for a weighting of the Hellinger distance which yields a covariance function. We show that the resulting acquisition function can be optimized with either continuous or discrete optimization algorithms and empirically assess our method on two bio-chemical sequence optimization tasks.
2309.10801
Diane Uwacu
Diane Uwacu, Ananya Yammanuru, Keerthana Nallamotu, Vasu Chalasani, Marco Morales, Nancy M. Amato
Hierarchical Annotated Skeleton-Guided Tree-based Motion Planning
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a hierarchical tree-based motion planning strategy, HAS-RRT, guided by the workspace skeleton to solve motion planning problems in robotics and computational biology. Relying on the information about the connectivity of the workspace and the ranking of available paths in the workspace, the strategy prioritizes paths indicated by the workspace guidance to find a valid motion plan for the moving object efficiently. In instances of suboptimal guidance, the strategy adapts its reliance on the guidance by hierarchically reverting to local exploration of the planning space. We offer an extensive comparative analysis against other tree-based planning strategies and demonstrate that HAS-RRT reliably and efficiently finds low-cost paths. In contrast to methods prone to inconsistent performance across different environments or reliance on specific parameters, HAS-RRT is robust to workspace variability.
[ { "created": "Tue, 19 Sep 2023 17:46:36 GMT", "version": "v1" } ]
2023-09-20
[ [ "Uwacu", "Diane", "" ], [ "Yammanuru", "Ananya", "" ], [ "Nallamotu", "Keerthana", "" ], [ "Chalasani", "Vasu", "" ], [ "Morales", "Marco", "" ], [ "Amato", "Nancy M.", "" ] ]
We present a hierarchical tree-based motion planning strategy, HAS-RRT, guided by the workspace skeleton to solve motion planning problems in robotics and computational biology. Relying on the information about the connectivity of the workspace and the ranking of available paths in the workspace, the strategy prioritizes paths indicated by the workspace guidance to find a valid motion plan for the moving object efficiently. In instances of suboptimal guidance, the strategy adapts its reliance on the guidance by hierarchically reverting to local exploration of the planning space. We offer an extensive comparative analysis against other tree-based planning strategies and demonstrate that HAS-RRT reliably and efficiently finds low-cost paths. In contrast to methods prone to inconsistent performance across different environments or reliance on specific parameters, HAS-RRT is robust to workspace variability.
2104.12574
Yang Liu
Yang Liu, Luiz G. Hafemann, Michael Jamieson, Mehrsan Javan
Detecting and Matching Related Objects with One Proposal Multiple Predictions
CVPR workshop 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tracking players in sports videos is commonly done in a tracking-by-detection framework, first detecting players in each frame, and then performing association over time. While for some sports tracking players is sufficient for game analysis, sports like hockey, tennis and polo may require additional detections, that include the object the player is holding (e.g. racket, stick). The baseline solution for this problem involves detecting these objects as separate classes, and matching them to player detections based on the intersection over union (IoU). This approach, however, leads to poor matching performance in crowded situations, as it does not model the relationship between players and objects. In this paper, we propose a simple yet efficient way to detect and match players and related objects at once without extra cost, by considering an implicit association for prediction of multiple objects through the same proposal box. We evaluate the method on a dataset of broadcast ice hockey videos, and also a new public dataset we introduce called COCO +Torso. On the ice hockey dataset, the proposed method boosts matching performance from 57.1% to 81.4%, while also improving the meanAP of player+stick detections from 68.4% to 88.3%. On the COCO +Torso dataset, we see matching improving from 47.9% to 65.2%. The COCO +Torso dataset, code and pre-trained models will be released at https://github.com/foreverYoungGitHub/detect-and-match-related-objects.
[ { "created": "Fri, 23 Apr 2021 14:37:10 GMT", "version": "v1" } ]
2021-04-27
[ [ "Liu", "Yang", "" ], [ "Hafemann", "Luiz G.", "" ], [ "Jamieson", "Michael", "" ], [ "Javan", "Mehrsan", "" ] ]
Tracking players in sports videos is commonly done in a tracking-by-detection framework, first detecting players in each frame, and then performing association over time. While for some sports tracking players is sufficient for game analysis, sports like hockey, tennis and polo may require additional detections, that include the object the player is holding (e.g. racket, stick). The baseline solution for this problem involves detecting these objects as separate classes, and matching them to player detections based on the intersection over union (IoU). This approach, however, leads to poor matching performance in crowded situations, as it does not model the relationship between players and objects. In this paper, we propose a simple yet efficient way to detect and match players and related objects at once without extra cost, by considering an implicit association for prediction of multiple objects through the same proposal box. We evaluate the method on a dataset of broadcast ice hockey videos, and also a new public dataset we introduce called COCO +Torso. On the ice hockey dataset, the proposed method boosts matching performance from 57.1% to 81.4%, while also improving the meanAP of player+stick detections from 68.4% to 88.3%. On the COCO +Torso dataset, we see matching improving from 47.9% to 65.2%. The COCO +Torso dataset, code and pre-trained models will be released at https://github.com/foreverYoungGitHub/detect-and-match-related-objects.
2202.11055
Mihir Kulkarni
Paolo De Petris, Huan Nguyen, Mihir Dharmadhikari, Mihir Kulkarni, Nikhil Khedekar, Frank Mascarich, Kostas Alexis
RMF-Owl: A Collision-Tolerant Flying Robot for Autonomous Subterranean Exploration
8 pages, 9 figures. Submitted to the International Conference on Unmanned Aircraft Systems, 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents the design, hardware realization, autonomous exploration and object detection capabilities of RMF-Owl, a new collision-tolerant aerial robot tailored for resilient autonomous subterranean exploration. The system is custom built for underground exploration with focus on collision tolerance, resilient autonomy with robust localization and mapping, alongside high-performance exploration path planning in confined, obstacle-filled and topologically complex underground environments. Moreover, RMF-Owl offers the ability to search, detect and locate objects of interest which can be particularly useful in search and rescue missions. A series of results from field experiments are presented in order to demonstrate the system's ability to autonomously explore challenging unknown underground environments.
[ { "created": "Tue, 22 Feb 2022 17:36:29 GMT", "version": "v1" } ]
2022-02-23
[ [ "De Petris", "Paolo", "" ], [ "Nguyen", "Huan", "" ], [ "Dharmadhikari", "Mihir", "" ], [ "Kulkarni", "Mihir", "" ], [ "Khedekar", "Nikhil", "" ], [ "Mascarich", "Frank", "" ], [ "Alexis", "Kostas", "" ] ]
This work presents the design, hardware realization, autonomous exploration and object detection capabilities of RMF-Owl, a new collision-tolerant aerial robot tailored for resilient autonomous subterranean exploration. The system is custom built for underground exploration with focus on collision tolerance, resilient autonomy with robust localization and mapping, alongside high-performance exploration path planning in confined, obstacle-filled and topologically complex underground environments. Moreover, RMF-Owl offers the ability to search, detect and locate objects of interest which can be particularly useful in search and rescue missions. A series of results from field experiments are presented in order to demonstrate the system's ability to autonomously explore challenging unknown underground environments.
1601.01228
Maumita Bhattacharya
J. West and Maumita Bhattacharya
Some Experimental Issues in Financial Fraud Detection: An Investigation
J. West and Maumita Bhattacharya. "Some Experimental Issues in Financial Fraud Detection: An Investigation", In the Proceedings of The 5th International Symposium on Cloud and Service Computing (SC2 2015), IEEE CS Press
null
null
null
cs.CR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Financial fraud detection is an important problem with a number of design aspects to consider. Issues such as algorithm selection and performance analysis will affect the perceived ability of proposed solutions, so for auditors and researchers to be able to sufficiently detect financial fraud it is necessary that these issues be thoroughly explored. In this paper we will revisit the key performance metrics used for financial fraud detection with a focus on credit card fraud, critiquing the prevailing ideas and offering our own understandings. There are many different performance metrics that have been employed in prior financial fraud detection research. We will analyse several of the popular metrics and compare their effectiveness at measuring the ability of detection mechanisms. We further investigated the performance of a range of computational intelligence techniques when applied to this problem domain, and explored the efficacy of several binary classification methods.
[ { "created": "Wed, 6 Jan 2016 16:18:43 GMT", "version": "v1" } ]
2016-01-07
[ [ "West", "J.", "" ], [ "Bhattacharya", "Maumita", "" ] ]
Financial fraud detection is an important problem with a number of design aspects to consider. Issues such as algorithm selection and performance analysis will affect the perceived ability of proposed solutions, so for auditors and researchers to be able to sufficiently detect financial fraud it is necessary that these issues be thoroughly explored. In this paper we will revisit the key performance metrics used for financial fraud detection with a focus on credit card fraud, critiquing the prevailing ideas and offering our own understandings. There are many different performance metrics that have been employed in prior financial fraud detection research. We will analyse several of the popular metrics and compare their effectiveness at measuring the ability of detection mechanisms. We further investigated the performance of a range of computational intelligence techniques when applied to this problem domain, and explored the efficacy of several binary classification methods.
1904.00338
Huazhen Fang
Chuan Yan and Huazhen Fang
Observer-Based Distributed Leader-Follower Tracking Control: A New Perspective and Results
International Journal of Control 2019
null
10.1080/00207179.2019.1580770
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Leader-follower tracking control design has received significant attention in recent years due to its important and wide applications. Considering a multi-agent system composed of a leader and multiple followers, this paper proposes and investigates a new perspective into this problem: can we enable a follower to estimate the leader's driving input and leverage this idea to develop new observer-based tracking control approaches? With this motivation, we develop an input-observer-based leader-follower tracking control framework, which features distributed input observers that allow a follower to locally estimate the leader's input toward enhancing tracking control. This work first studies the first-order tracking problem. It then extends to the more sophisticated case of second-order tracking and considers a challenging situation when the leader's and followers' velocities are not measured. The proposed approaches exhibit interesting and useful advantages as revealed by a comparison with the literature. Convergence properties of the proposed approaches are rigorously analyzed. Simulation results further illustrate the efficacy of the proposed perspective, framework and approaches.
[ { "created": "Sun, 31 Mar 2019 04:25:21 GMT", "version": "v1" } ]
2019-04-02
[ [ "Yan", "Chuan", "" ], [ "Fang", "Huazhen", "" ] ]
Leader-follower tracking control design has received significant attention in recent years due to its important and wide applications. Considering a multi-agent system composed of a leader and multiple followers, this paper proposes and investigates a new perspective into this problem: can we enable a follower to estimate the leader's driving input and leverage this idea to develop new observer-based tracking control approaches? With this motivation, we develop an input-observer-based leader-follower tracking control framework, which features distributed input observers that allow a follower to locally estimate the leader's input toward enhancing tracking control. This work first studies the first-order tracking problem. It then extends to the more sophisticated case of second-order tracking and considers a challenging situation when the leader's and followers' velocities are not measured. The proposed approaches exhibit interesting and useful advantages as revealed by a comparison with the literature. Convergence properties of the proposed approaches are rigorously analyzed. Simulation results further illustrate the efficacy of the proposed perspective, framework and approaches.
1903.08969
Sayed Chhattan Shah
Sayed Chhattan Shah
An Energy-Efficient Resource Management System for a Mobile Ad Hoc Cloud
19 Pages
IEEE ACCESS 2019
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, mobile ad hoc clouds have emerged as a promising technology for mobile cyber-physical system applications, such as mobile intelligent video surveillance and smart homes. Resource management plays a key role in maximizing resource utilization and application performance in mobile ad hoc clouds. Unlike resource management in traditional distributed computing systems, such as clouds, resource management in a mobile ad hoc cloud poses numerous challenges owing to the node mobility, limited battery power, high latency, and the dynamic network environment. The real-time requirements associated with mobile cyber-physical system applications make the problem even more challenging. Currently, existing resource management systems for mobile ad hoc clouds are not designed to support mobile cyber-physical system applications and energy-efficient communication between application tasks. In this paper, we propose a new energy-efficient resource management system for mobile ad hoc clouds. The proposed system consists of two layers: a network layer and a middleware layer. The network layer provides ad hoc network and communication services to the middleware layer and shares the collected information in order to allow efficient and robust resource management decisions. It uses (1) a transmission power control mechanism to improve energy efficiency and network capacity, (2) link lifetimes to reduce communication and energy consumption costs, and (3) link quality to estimate data transfer times. The middleware layer is responsible for the discovery, monitoring, migration, and allocation of resources. It receives application tasks from users and allocates tasks to nodes on the basis of network and node-level information.
[ { "created": "Thu, 21 Mar 2019 13:03:53 GMT", "version": "v1" } ]
2019-03-22
[ [ "Shah", "Sayed Chhattan", "" ] ]
Recently, mobile ad hoc clouds have emerged as a promising technology for mobile cyber-physical system applications, such as mobile intelligent video surveillance and smart homes. Resource management plays a key role in maximizing resource utilization and application performance in mobile ad hoc clouds. Unlike resource management in traditional distributed computing systems, such as clouds, resource management in a mobile ad hoc cloud poses numerous challenges owing to the node mobility, limited battery power, high latency, and the dynamic network environment. The real-time requirements associated with mobile cyber-physical system applications make the problem even more challenging. Currently, existing resource management systems for mobile ad hoc clouds are not designed to support mobile cyber-physical system applications and energy-efficient communication between application tasks. In this paper, we propose a new energy-efficient resource management system for mobile ad hoc clouds. The proposed system consists of two layers: a network layer and a middleware layer. The network layer provides ad hoc network and communication services to the middleware layer and shares the collected information in order to allow efficient and robust resource management decisions. It uses (1) a transmission power control mechanism to improve energy efficiency and network capacity, (2) link lifetimes to reduce communication and energy consumption costs, and (3) link quality to estimate data transfer times. The middleware layer is responsible for the discovery, monitoring, migration, and allocation of resources. It receives application tasks from users and allocates tasks to nodes on the basis of network and node-level information.
2309.15782
Vipin Gautam
Vipin Gautam, Shitala Prasad and Sharad Sinha
Joint-YODNet: A Light-weight Object Detector for UAVs to Achieve Above 100fps
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Small object detection via UAV (Unmanned Aerial Vehicle) images captured from drones and radar is a complex task with several formidable challenges. This domain encompasses numerous complexities that impede the accurate detection and localization of small objects. To address these challenges, we propose a novel method called JointYODNet for UAVs to detect small objects, leveraging a joint loss function specifically designed for this task. Our method revolves around the development of a joint loss function tailored to enhance the detection performance of small objects. Through extensive experimentation on a diverse dataset of UAV images captured under varying environmental conditions, we evaluated different variations of the loss function and determined the most effective formulation. The results demonstrate that our proposed joint loss function outperforms existing methods in accurately localizing small objects. Specifically, our method achieves a recall of 0.971 and an F1Score of 0.975, surpassing state-of-the-art techniques. Additionally, our method achieves a mAP@.5(%) of 98.6, indicating its robustness in detecting small objects across varying scales.
[ { "created": "Wed, 27 Sep 2023 16:57:04 GMT", "version": "v1" } ]
2023-09-28
[ [ "Gautam", "Vipin", "" ], [ "Prasad", "Shitala", "" ], [ "Sinha", "Sharad", "" ] ]
Small object detection via UAV (Unmanned Aerial Vehicle) images captured from drones and radar is a complex task with several formidable challenges. This domain encompasses numerous complexities that impede the accurate detection and localization of small objects. To address these challenges, we propose a novel method called JointYODNet for UAVs to detect small objects, leveraging a joint loss function specifically designed for this task. Our method revolves around the development of a joint loss function tailored to enhance the detection performance of small objects. Through extensive experimentation on a diverse dataset of UAV images captured under varying environmental conditions, we evaluated different variations of the loss function and determined the most effective formulation. The results demonstrate that our proposed joint loss function outperforms existing methods in accurately localizing small objects. Specifically, our method achieves a recall of 0.971 and an F1Score of 0.975, surpassing state-of-the-art techniques. Additionally, our method achieves a mAP@.5(%) of 98.6, indicating its robustness in detecting small objects across varying scales.
2104.12874
Soo Hyun Ryu
Soo Hyun Ryu and Richard L. Lewis
Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention
CMCL 2021
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
We advance a novel explanation of similarity-based interference effects in subject-verb and reflexive pronoun agreement processing, grounded in surprisal values computed from a pretrained large-scale Transformer model, GPT-2. Specifically, we show that surprisal of the verb or reflexive pronoun predicts facilitatory interference effects in ungrammatical sentences, where a distractor noun that matches in number with the verb or pronoun leads to faster reading times, despite the distractor not participating in the agreement relation. We review the human empirical evidence for such effects, including recent meta-analyses and large-scale studies. We also show that attention patterns (indexed by entropy and other measures) in the Transformer show patterns of diffuse attention in the presence of similar distractors, consistent with cue-based retrieval models of parsing. But in contrast to these models, the attentional cues and memory representations are learned entirely from the simple self-supervised task of predicting the next word.
[ { "created": "Mon, 26 Apr 2021 20:46:54 GMT", "version": "v1" } ]
2021-04-28
[ [ "Ryu", "Soo Hyun", "" ], [ "Lewis", "Richard L.", "" ] ]
We advance a novel explanation of similarity-based interference effects in subject-verb and reflexive pronoun agreement processing, grounded in surprisal values computed from a pretrained large-scale Transformer model, GPT-2. Specifically, we show that surprisal of the verb or reflexive pronoun predicts facilitatory interference effects in ungrammatical sentences, where a distractor noun that matches in number with the verb or pronoun leads to faster reading times, despite the distractor not participating in the agreement relation. We review the human empirical evidence for such effects, including recent meta-analyses and large-scale studies. We also show that attention patterns (indexed by entropy and other measures) in the Transformer show patterns of diffuse attention in the presence of similar distractors, consistent with cue-based retrieval models of parsing. But in contrast to these models, the attentional cues and memory representations are learned entirely from the simple self-supervised task of predicting the next word.
2009.03586
Aijun Zhang
Zebin Yang and Aijun Zhang
Hyperparameter Optimization via Sequential Uniform Designs
null
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hyperparameter optimization (HPO) plays a central role in automated machine learning (AutoML). It is a challenging task as the response surfaces of hyperparameters are generally unknown, hence essentially a global optimization problem. This paper reformulates HPO as a computer experiment and proposes a novel sequential uniform design (SeqUD) strategy with three-fold advantages: a) the hyperparameter space is adaptively explored with evenly spread design points, without the need of expensive meta-modeling and acquisition optimization; b) the batch-by-batch design points are sequentially generated with parallel processing support; c) a new augmented uniform design algorithm is developed for the efficient real-time generation of follow-up design points. Extensive experiments are conducted on both global optimization tasks and HPO applications. The numerical results show that the proposed SeqUD strategy outperforms benchmark HPO methods, and it can therefore be a promising and competitive alternative to existing AutoML tools.
[ { "created": "Tue, 8 Sep 2020 08:55:02 GMT", "version": "v1" }, { "created": "Thu, 17 Jun 2021 09:12:26 GMT", "version": "v2" } ]
2021-06-18
[ [ "Yang", "Zebin", "" ], [ "Zhang", "Aijun", "" ] ]
Hyperparameter optimization (HPO) plays a central role in automated machine learning (AutoML). It is a challenging task as the response surfaces of hyperparameters are generally unknown, hence essentially a global optimization problem. This paper reformulates HPO as a computer experiment and proposes a novel sequential uniform design (SeqUD) strategy with three-fold advantages: a) the hyperparameter space is adaptively explored with evenly spread design points, without the need of expensive meta-modeling and acquisition optimization; b) the batch-by-batch design points are sequentially generated with parallel processing support; c) a new augmented uniform design algorithm is developed for the efficient real-time generation of follow-up design points. Extensive experiments are conducted on both global optimization tasks and HPO applications. The numerical results show that the proposed SeqUD strategy outperforms benchmark HPO methods, and it can therefore be a promising and competitive alternative to existing AutoML tools.
2007.05828
Ka-Ho Chow
Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu
Understanding Object Detection Through An Adversarial Lens
null
null
null
null
cs.CR cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural network based object detection models have revolutionized computer vision and fueled the development of a wide range of visual recognition applications. However, recent studies have revealed that deep object detectors can be compromised under adversarial attacks, causing a victim detector to detect no object, fake objects, or mislabeled objects. With object detection being used pervasively in many security-critical applications, such as autonomous vehicles and smart cities, we argue that a holistic approach for an in-depth understanding of adversarial attacks and vulnerabilities of deep object detection systems is of utmost importance for the research community to develop robust defense mechanisms. This paper presents a framework for analyzing and evaluating vulnerabilities of the state-of-the-art object detectors under an adversarial lens, aiming to analyze and demystify the attack strategies, adverse effects, and costs, as well as the cross-model and cross-resolution transferability of attacks. Using a set of quantitative metrics, extensive experiments are performed on six representative deep object detectors from three popular families (YOLOv3, SSD, and Faster R-CNN) with two benchmark datasets (PASCAL VOC and MS COCO). We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems. We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
[ { "created": "Sat, 11 Jul 2020 18:41:47 GMT", "version": "v1" } ]
2020-07-14
[ [ "Chow", "Ka-Ho", "" ], [ "Liu", "Ling", "" ], [ "Gursoy", "Mehmet Emre", "" ], [ "Truex", "Stacey", "" ], [ "Wei", "Wenqi", "" ], [ "Wu", "Yanzhao", "" ] ]
Deep neural network based object detection models have revolutionized computer vision and fueled the development of a wide range of visual recognition applications. However, recent studies have revealed that deep object detectors can be compromised under adversarial attacks, causing a victim detector to detect no object, fake objects, or mislabeled objects. With object detection being used pervasively in many security-critical applications, such as autonomous vehicles and smart cities, we argue that a holistic approach for an in-depth understanding of adversarial attacks and vulnerabilities of deep object detection systems is of utmost importance for the research community to develop robust defense mechanisms. This paper presents a framework for analyzing and evaluating vulnerabilities of the state-of-the-art object detectors under an adversarial lens, aiming to analyze and demystify the attack strategies, adverse effects, and costs, as well as the cross-model and cross-resolution transferability of attacks. Using a set of quantitative metrics, extensive experiments are performed on six representative deep object detectors from three popular families (YOLOv3, SSD, and Faster R-CNN) with two benchmark datasets (PASCAL VOC and MS COCO). We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems. We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
2105.01423
Bilal Thonnam Thodi
Bilal Thonnam Thodi, Zaid Saeed Khan, Saif Eddin Jabari and Monica Menendez
Learning Traffic Speed Dynamics from Visualizations
8 pages, 9 figures; Submitted to the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC)
The 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), 2021, pp.1239-1244
10.1109/ITSC48978.2021.9564541
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Space-time visualizations of macroscopic or microscopic traffic variables are a qualitative tool used by traffic engineers to understand and analyze different aspects of road traffic dynamics. We present a deep learning method to learn the macroscopic traffic speed dynamics from these space-time visualizations, and demonstrate its application in the framework of traffic state estimation. Compared to existing estimation approaches, our approach allows a finer estimation resolution, eliminates the dependence on the initial conditions, and is agnostic to external factors such as traffic demand, road inhomogeneities and driving behaviors. Our model respects causality in traffic dynamics, which improves the robustness of estimation. We present the high-resolution traffic speed fields estimated for several freeway sections using the data obtained from the Next Generation Simulation Program (NGSIM) and German Highway (HighD) datasets. We further demonstrate the quality and utility of the estimation by inferring vehicle trajectories from the estimated speed fields, and discuss the benefits of deep neural network models in approximating the traffic dynamics.
[ { "created": "Tue, 4 May 2021 11:17:43 GMT", "version": "v1" } ]
2022-04-12
[ [ "Thodi", "Bilal Thonnam", "" ], [ "Khan", "Zaid Saeed", "" ], [ "Jabari", "Saif Eddin", "" ], [ "Menendez", "Monica", "" ] ]
Space-time visualizations of macroscopic or microscopic traffic variables are a qualitative tool used by traffic engineers to understand and analyze different aspects of road traffic dynamics. We present a deep learning method to learn the macroscopic traffic speed dynamics from these space-time visualizations, and demonstrate its application in the framework of traffic state estimation. Compared to existing estimation approaches, our approach allows a finer estimation resolution, eliminates the dependence on the initial conditions, and is agnostic to external factors such as traffic demand, road inhomogeneities and driving behaviors. Our model respects causality in traffic dynamics, which improves the robustness of estimation. We present the high-resolution traffic speed fields estimated for several freeway sections using the data obtained from the Next Generation Simulation Program (NGSIM) and German Highway (HighD) datasets. We further demonstrate the quality and utility of the estimation by inferring vehicle trajectories from the estimated speed fields, and discuss the benefits of deep neural network models in approximating the traffic dynamics.
1911.12303
Shantanu Chakraborty D.Eng.
Shantanu Chakraborty, Tim Baarslag, Michael Kaisers
Automated Peer-to-peer Negotiation for Energy Contract Settlements in Residential Cooperatives
arXiv admin note: substantial text overlap with arXiv:1807.10978
null
null
null
cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an automated peer-to-peer negotiation strategy for settling energy contracts among prosumers in a Residential Energy Cooperative considering heterogeneous prosumer preferences. The heterogeneity arises from prosumers' evaluation of energy contracts through multiple societal and environmental criteria and the prosumers' private preferences over those criteria. The prosumers engage in bilateral negotiations with peers to mutually agree on periodical energy contracts/loans consisting of the energy volume to be exchanged at that period and the return time of the exchanged energy. The negotiating prosumers navigate through a common negotiation domain consisting of potential energy contracts and evaluate those contracts from their valuations on the entailed criteria against a utility function that is robust against generation and demand uncertainty. From the repeated interactions, a prosumer gradually learns about the compatibility of its peers in reaching energy contracts that are closer to Nash solutions. Empirical evaluation on real demand, generation and storage profiles -- in multiple system scales -- illustrates that the proposed negotiation-based strategy can increase the system efficiency (measured by utilitarian social welfare) and fairness (measured by Nash social welfare) over a baseline strategy and an individual flexibility control strategy representing the status quo strategy. We thus elicit system benefits from peer-to-peer flexibility exchange already without any central coordination and market operator, providing a simple yet flexible and effective paradigm that complements existing markets.
[ { "created": "Tue, 26 Nov 2019 02:28:01 GMT", "version": "v1" } ]
2019-11-28
[ [ "Chakraborty", "Shantanu", "" ], [ "Baarslag", "Tim", "" ], [ "Kaisers", "Michael", "" ] ]
This paper presents an automated peer-to-peer negotiation strategy for settling energy contracts among prosumers in a Residential Energy Cooperative considering heterogeneous prosumer preferences. The heterogeneity arises from prosumers' evaluation of energy contracts through multiple societal and environmental criteria and the prosumers' private preferences over those criteria. The prosumers engage in bilateral negotiations with peers to mutually agree on periodical energy contracts/loans consisting of the energy volume to be exchanged at that period and the return time of the exchanged energy. The negotiating prosumers navigate through a common negotiation domain consisting of potential energy contracts and evaluate those contracts from their valuations on the entailed criteria against a utility function that is robust against generation and demand uncertainty. From the repeated interactions, a prosumer gradually learns about the compatibility of its peers in reaching energy contracts that are closer to Nash solutions. Empirical evaluation on real demand, generation and storage profiles -- in multiple system scales -- illustrates that the proposed negotiation-based strategy can increase the system efficiency (measured by utilitarian social welfare) and fairness (measured by Nash social welfare) over a baseline strategy and an individual flexibility control strategy representing the status quo strategy. We thus elicit system benefits from peer-to-peer flexibility exchange already without any central coordination and market operator, providing a simple yet flexible and effective paradigm that complements existing markets.
2102.11530
Kanji Tanaka
Kanji Tanaka
Domain-invariant NBV Planner for Active Cross-domain Self-localization
5 pages, 5 figures, technical report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pole-like landmarks have received increasing attention as a domain-invariant visual cue for visual robot self-localization across domains (e.g., seasons, times of day, weather conditions). However, self-localization using pole-like landmarks can be ill-posed for a passive observer, as many viewpoints may not provide any pole-like landmark view. To alleviate this problem, we consider an active observer and explore a novel "domain-invariant" next-best-view (NBV) planner that attains consistent performance over different domains (i.e., maintenance-free), without requiring the expensive task of training data collection and retraining. In our approach, a novel multi-encoder deep convolutional neural network enables the detection of domain-invariant pole-like landmarks, which are then used as the sole input to a model-free deep reinforcement learning-based domain-invariant NBV planner. Further, we develop a practical system for active self-localization using sparse invariant landmarks and dense discriminative landmarks. In experiments, we demonstrate that the proposed method is effective both in efficient landmark detection and in discriminative self-localization.
[ { "created": "Tue, 23 Feb 2021 07:36:45 GMT", "version": "v1" } ]
2021-02-24
[ [ "Tanaka", "Kanji", "" ] ]
Pole-like landmarks have received increasing attention as a domain-invariant visual cue for visual robot self-localization across domains (e.g., seasons, times of day, weather conditions). However, self-localization using pole-like landmarks can be ill-posed for a passive observer, as many viewpoints may not provide any pole-like landmark view. To alleviate this problem, we consider an active observer and explore a novel "domain-invariant" next-best-view (NBV) planner that attains consistent performance over different domains (i.e., maintenance-free), without requiring the expensive task of training data collection and retraining. In our approach, a novel multi-encoder deep convolutional neural network enables the detection of domain-invariant pole-like landmarks, which are then used as the sole input to a model-free deep reinforcement learning-based domain-invariant NBV planner. Further, we develop a practical system for active self-localization using sparse invariant landmarks and dense discriminative landmarks. In experiments, we demonstrate that the proposed method is effective both in efficient landmark detection and in discriminative self-localization.
2212.13245
Roozbeh Aghili
Roozbeh Aghili, Heng Li, Foutse Khomh
Studying the Characteristics of AIOps Projects on GitHub
46 pages, 8 pages of references, 14 figures, 16 tables
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Artificial Intelligence for IT Operations (AIOps) leverages AI approaches to handle the massive amount of data generated during the operations of software systems. Prior works have proposed various AIOps solutions to support different tasks in system operations and maintenance, such as anomaly detection. In this study, we conduct an in-depth analysis of open-source AIOps projects to understand the characteristics of AIOps in practice. We first carefully identify a set of AIOps projects from GitHub and analyze their repository metrics (e.g., the used programming languages). Then, we qualitatively examine the projects to understand their input data, analysis techniques, and goals. Finally, we assess the quality of these projects using different quality metrics, such as the number of bugs. To provide context, we also sample two sets of baseline projects from GitHub: a random sample of machine learning projects and a random sample of general-purpose projects. By comparing different metrics between our identified AIOps projects and these baselines, we derive meaningful insights. Our results reveal a recent and growing interest in AIOps solutions. However, the quality metrics indicate that AIOps projects suffer from more issues than our baseline projects. We also pinpoint the most common issues in AIOps approaches and discuss potential solutions to address these challenges. Our findings offer valuable guidance to researchers and practitioners, enabling them to comprehend the current state of AIOps practices and shed light on different ways of improving AIOps' weaker aspects. To the best of our knowledge, this work marks the first attempt to characterize open-source AIOps projects.
[ { "created": "Mon, 26 Dec 2022 18:24:45 GMT", "version": "v1" }, { "created": "Tue, 5 Sep 2023 22:02:17 GMT", "version": "v2" } ]
2023-09-07
[ [ "Aghili", "Roozbeh", "" ], [ "Li", "Heng", "" ], [ "Khomh", "Foutse", "" ] ]
Artificial Intelligence for IT Operations (AIOps) leverages AI approaches to handle the massive amount of data generated during the operations of software systems. Prior works have proposed various AIOps solutions to support different tasks in system operations and maintenance, such as anomaly detection. In this study, we conduct an in-depth analysis of open-source AIOps projects to understand the characteristics of AIOps in practice. We first carefully identify a set of AIOps projects from GitHub and analyze their repository metrics (e.g., the used programming languages). Then, we qualitatively examine the projects to understand their input data, analysis techniques, and goals. Finally, we assess the quality of these projects using different quality metrics, such as the number of bugs. To provide context, we also sample two sets of baseline projects from GitHub: a random sample of machine learning projects and a random sample of general-purpose projects. By comparing different metrics between our identified AIOps projects and these baselines, we derive meaningful insights. Our results reveal a recent and growing interest in AIOps solutions. However, the quality metrics indicate that AIOps projects suffer from more issues than our baseline projects. We also pinpoint the most common issues in AIOps approaches and discuss potential solutions to address these challenges. Our findings offer valuable guidance to researchers and practitioners, enabling them to comprehend the current state of AIOps practices and shed light on different ways of improving AIOps' weaker aspects. To the best of our knowledge, this work marks the first attempt to characterize open-source AIOps projects.
2011.07368
Bhaskar Mitra
Bhaskar Mitra, Sebastian Hofstatter, Hamed Zamani and Nick Craswell
Conformer-Kernel with Query Term Independence at TREC 2020 Deep Learning Track
null
null
null
null
cs.IR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We benchmark Conformer-Kernel models under the strict blind evaluation setting of the TREC 2020 Deep Learning track. In particular, we study the impact of incorporating: (i) Explicit term matching to complement matching based on learned representations (i.e., the "Duet principle"), (ii) query term independence (i.e., the "QTI assumption") to scale the model to the full retrieval setting, and (iii) the ORCAS click data as an additional document description field. We find evidence which supports that all three aforementioned strategies can lead to improved retrieval quality.
[ { "created": "Sat, 14 Nov 2020 19:03:24 GMT", "version": "v1" }, { "created": "Thu, 11 Feb 2021 23:57:45 GMT", "version": "v2" } ]
2021-02-15
[ [ "Mitra", "Bhaskar", "" ], [ "Hofstatter", "Sebastian", "" ], [ "Zamani", "Hamed", "" ], [ "Craswell", "Nick", "" ] ]
We benchmark Conformer-Kernel models under the strict blind evaluation setting of the TREC 2020 Deep Learning track. In particular, we study the impact of incorporating: (i) Explicit term matching to complement matching based on learned representations (i.e., the "Duet principle"), (ii) query term independence (i.e., the "QTI assumption") to scale the model to the full retrieval setting, and (iii) the ORCAS click data as an additional document description field. We find evidence which supports that all three aforementioned strategies can lead to improved retrieval quality.
1505.00157
Yang Huang
Yang Huang and Bruno Clerckx
Joint Wireless Information and Power Transfer for an Autonomous Multiple Antenna Relay System
Accepted to IEEE Communications Letters
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Considering a three-node multiple antenna relay system, this paper proposes a two-phase amplify-and-forward (AF) relaying protocol, which enables the autonomous relay to simultaneously harvest wireless power from the source information signal and from an energy signal conveyed by the destination. We first study this energy-flow-assisted (EFA) relaying in a single-input single-output (SISO) relay system and aim at maximizing the rate. By transforming the optimization problem into an equivalent convex form, a global optimum can be found. We then extend the protocol to a multiple antenna relay system. The relay processing matrix is optimized to maximize the rate. The optimization problem can be efficiently solved by eigenvalue decomposition, after linear algebra manipulation. It is observed that the benefits of the energy flow are interestingly shown only in the multiple antenna case, and it is revealed that the received information signal and the energy leakage at the relay can be nearly separated by making use of the signal space, such that the desired signal can be amplified with a larger coefficient.
[ { "created": "Fri, 1 May 2015 11:23:30 GMT", "version": "v1" } ]
2015-05-04
[ [ "Huang", "Yang", "" ], [ "Clerckx", "Bruno", "" ] ]
Considering a three-node multiple antenna relay system, this paper proposes a two-phase amplify-and-forward (AF) relaying protocol, which enables the autonomous relay to simultaneously harvest wireless power from the source information signal and from an energy signal conveyed by the destination. We first study this energy-flow-assisted (EFA) relaying in a single-input single-output (SISO) relay system and aim at maximizing the rate. By transforming the optimization problem into an equivalent convex form, a global optimum can be found. We then extend the protocol to a multiple antenna relay system. The relay processing matrix is optimized to maximize the rate. The optimization problem can be efficiently solved by eigenvalue decomposition, after linear algebra manipulation. It is observed that the benefits of the energy flow are interestingly shown only in the multiple antenna case, and it is revealed that the received information signal and the energy leakage at the relay can be nearly separated by making use of the signal space, such that the desired signal can be amplified with a larger coefficient.
2312.12133
Wooju Lee
Wooju Lee, Dasol Hong, Hyungtae Lim, and Hyun Myung
Object-Aware Domain Generalization for Object Detection
Accepted by AAAI-24. The first two authors contributed equally
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-domain generalization (S-DG) aims to generalize a model to unseen environments with a single-source domain. However, most S-DG approaches have been conducted in the field of classification. When these approaches are applied to object detection, the semantic features of some objects can be damaged, which can lead to imprecise object localization and misclassification. To address these problems, we propose an object-aware domain generalization (OA-DG) method for single-domain generalization in object detection. Our method consists of a data augmentation strategy and a training strategy, which are called OA-Mix and OA-Loss, respectively. OA-Mix generates multi-domain data with a multi-level transformation and object-aware mixing strategy. OA-Loss enables models to learn domain-invariant representations for objects and backgrounds from the original and OA-Mixed images. Our proposed method outperforms state-of-the-art works on standard benchmarks. Our code is available at https://github.com/WoojuLee24/OA-DG.
[ { "created": "Tue, 19 Dec 2023 13:11:35 GMT", "version": "v1" } ]
2023-12-20
[ [ "Lee", "Wooju", "" ], [ "Hong", "Dasol", "" ], [ "Lim", "Hyungtae", "" ], [ "Myung", "Hyun", "" ] ]
Single-domain generalization (S-DG) aims to generalize a model to unseen environments with a single-source domain. However, most S-DG approaches have been conducted in the field of classification. When these approaches are applied to object detection, the semantic features of some objects can be damaged, which can lead to imprecise object localization and misclassification. To address these problems, we propose an object-aware domain generalization (OA-DG) method for single-domain generalization in object detection. Our method consists of a data augmentation strategy and a training strategy, which are called OA-Mix and OA-Loss, respectively. OA-Mix generates multi-domain data with a multi-level transformation and object-aware mixing strategy. OA-Loss enables models to learn domain-invariant representations for objects and backgrounds from the original and OA-Mixed images. Our proposed method outperforms state-of-the-art works on standard benchmarks. Our code is available at https://github.com/WoojuLee24/OA-DG.
1708.05847
Patricia Bouyer
Patricia Bouyer, Serge Haddad, Vincent Jug\'e
Unbounded product-form Petri nets
31 pages
null
null
null
cs.PF cs.DM cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing steady-state distributions in infinite-state stochastic systems is in general a very difficult task. Product-form Petri nets are those Petri nets for which the steady-state distribution can be described as a natural product corresponding, up to a normalising constant, to an exponentiation of the markings. However, even though some classes of nets are known to have a product-form distribution, computing the normalising constant can be hard. The class of (closed) {\Pi}3-nets has been proposed in an earlier work, for which it is shown that one can compute the steady-state distribution efficiently. However these nets are bounded. In this paper, we generalise queuing Markovian networks and closed {\Pi}3-nets to obtain the class of open {\Pi}3-nets, that generate infinite-state systems. We show interesting properties of these nets: (1) we prove that liveness can be decided in polynomial time, and that reachability in live {\Pi}3-nets can be decided in polynomial time; (2) we show that we can decide ergodicity of such nets in polynomial time as well; (3) we provide a pseudo-polynomial time algorithm to compute the normalising constant.
[ { "created": "Sat, 19 Aug 2017 14:06:03 GMT", "version": "v1" } ]
2017-08-22
[ [ "Bouyer", "Patricia", "" ], [ "Haddad", "Serge", "" ], [ "Jugé", "Vincent", "" ] ]
Computing steady-state distributions in infinite-state stochastic systems is in general a very difficult task. Product-form Petri nets are those Petri nets for which the steady-state distribution can be described as a natural product corresponding, up to a normalising constant, to an exponentiation of the markings. However, even though some classes of nets are known to have a product-form distribution, computing the normalising constant can be hard. The class of (closed) {\Pi}3-nets has been proposed in an earlier work, for which it is shown that one can compute the steady-state distribution efficiently. However these nets are bounded. In this paper, we generalise queuing Markovian networks and closed {\Pi}3-nets to obtain the class of open {\Pi}3-nets, that generate infinite-state systems. We show interesting properties of these nets: (1) we prove that liveness can be decided in polynomial time, and that reachability in live {\Pi}3-nets can be decided in polynomial time; (2) we show that we can decide ergodicity of such nets in polynomial time as well; (3) we provide a pseudo-polynomial time algorithm to compute the normalising constant.
1608.06132
J\"orn Fischer
J. Fischer, P. Manoonpong, S. Lackner
Reconstructing Neural Parameters and Synapses of arbitrary interconnected Neurons from their Simulated Spiking Activity
6 pages, 7 figures
null
null
null
cs.NE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To understand the behavior of a neural circuit it is a presupposition that we have a model of the dynamical system describing this circuit. This model is determined by several parameters, including not only the synaptic weights, but also the parameters of each neuron. Existing works mainly concentrate on either the synaptic weights or the neural parameters. In this paper we present an algorithm to reconstruct all parameters including the synaptic weights of a spiking neuron model. The model, based on the work of Eugene M. Izhikevich (Izhikevich 2007), consists of two differential equations and covers different types of cortical neurons. It combines the dynamical properties of Hodgkin-Huxley-type dynamics with a high computational efficiency. The presented algorithm uses the recordings of the corresponding membrane potentials of the model for the reconstruction and consists of two main components. The first component is a rank-based Genetic Algorithm (GA) which is used to find the neural parameters of the model. The second one is a Least Mean Squares approach which computes the synaptic weights of all interconnected neurons by minimizing the squared error between the calculated and the measured membrane potentials for each time step. In preparation for the reconstruction of the neural parameters and of the synaptic weights from real measured membrane potentials, promising results based on simulated data generated with a randomly parametrized Izhikevich model are presented. The reconstruction not only converges to a global minimum of neural parameters, but also approximates the synaptic weights with high precision.
[ { "created": "Mon, 22 Aug 2016 11:47:18 GMT", "version": "v1" } ]
2016-08-23
[ [ "Fischer", "J.", "" ], [ "Manoonpong", "P.", "" ], [ "Lackner", "S.", "" ] ]
To understand the behavior of a neural circuit it is a presupposition that we have a model of the dynamical system describing this circuit. This model is determined by several parameters, including not only the synaptic weights, but also the parameters of each neuron. Existing works mainly concentrate on either the synaptic weights or the neural parameters. In this paper we present an algorithm to reconstruct all parameters including the synaptic weights of a spiking neuron model. The model, based on the work of Eugene M. Izhikevich (Izhikevich 2007), consists of two differential equations and covers different types of cortical neurons. It combines the dynamical properties of Hodgkin-Huxley-type dynamics with a high computational efficiency. The presented algorithm uses the recordings of the corresponding membrane potentials of the model for the reconstruction and consists of two main components. The first component is a rank-based Genetic Algorithm (GA) which is used to find the neural parameters of the model. The second one is a Least Mean Squares approach which computes the synaptic weights of all interconnected neurons by minimizing the squared error between the calculated and the measured membrane potentials for each time step. In preparation for the reconstruction of the neural parameters and of the synaptic weights from real measured membrane potentials, promising results based on simulated data generated with a randomly parametrized Izhikevich model are presented. The reconstruction not only converges to a global minimum of neural parameters, but also approximates the synaptic weights with high precision.
2202.03814
Maarten Buyl
Maarten Buyl, Tijl De Bie
Optimal Transport of Classifiers to Fairness
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
In past work on fairness in machine learning, the focus has been on forcing the prediction of classifiers to have similar statistical properties for people of different demographics. To reduce the violation of these properties, fairness methods usually simply rescale the classifier scores, ignoring similarities and dissimilarities between members of different groups. Yet, we hypothesize that such information is relevant in quantifying the unfairness of a given classifier. To validate this hypothesis, we introduce Optimal Transport to Fairness (OTF), a method that quantifies the violation of fairness constraints as the smallest Optimal Transport cost between a probabilistic classifier and any score function that satisfies these constraints. For a flexible class of linear fairness constraints, we construct a practical way to compute OTF as a differentiable fairness regularizer that can be added to any standard classification setting. Experiments show that OTF can be used to achieve an improved trade-off between predictive power and fairness.
[ { "created": "Tue, 8 Feb 2022 12:16:24 GMT", "version": "v1" }, { "created": "Tue, 31 May 2022 14:20:06 GMT", "version": "v2" }, { "created": "Tue, 29 Nov 2022 22:11:08 GMT", "version": "v3" } ]
2022-12-02
[ [ "Buyl", "Maarten", "" ], [ "De Bie", "Tijl", "" ] ]
In past work on fairness in machine learning, the focus has been on forcing the prediction of classifiers to have similar statistical properties for people of different demographics. To reduce the violation of these properties, fairness methods usually simply rescale the classifier scores, ignoring similarities and dissimilarities between members of different groups. Yet, we hypothesize that such information is relevant in quantifying the unfairness of a given classifier. To validate this hypothesis, we introduce Optimal Transport to Fairness (OTF), a method that quantifies the violation of fairness constraints as the smallest Optimal Transport cost between a probabilistic classifier and any score function that satisfies these constraints. For a flexible class of linear fairness constraints, we construct a practical way to compute OTF as a differentiable fairness regularizer that can be added to any standard classification setting. Experiments show that OTF can be used to achieve an improved trade-off between predictive power and fairness.
2010.15665
Noah Goodall
Noah J. Goodall
Machine Ethics and Automated Vehicles
12 pages
In: Meyer G., Beiker S. (eds) Road Vehicle Automation. Lecture Notes in Mobility. Springer, Cham (2014)
10.1007/978-3-319-05990-7_9
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Road vehicle travel at a reasonable speed involves some risk, even when using computer-controlled driving with failure-free hardware and perfect sensing. A fully-automated vehicle must continuously decide how to allocate this risk without a human driver's oversight. These are ethical decisions, particularly in instances where an automated vehicle cannot avoid crashing. In this chapter, I introduce the concept of moral behavior for an automated vehicle, argue the need for research in this area through responses to anticipated critiques, and discuss relevant applications from machine ethics and moral modeling research.
[ { "created": "Thu, 29 Oct 2020 15:14:47 GMT", "version": "v1" } ]
2020-10-30
[ [ "Goodall", "Noah J.", "" ] ]
Road vehicle travel at a reasonable speed involves some risk, even when using computer-controlled driving with failure-free hardware and perfect sensing. A fully-automated vehicle must continuously decide how to allocate this risk without a human driver's oversight. These are ethical decisions, particularly in instances where an automated vehicle cannot avoid crashing. In this chapter, I introduce the concept of moral behavior for an automated vehicle, argue the need for research in this area through responses to anticipated critiques, and discuss relevant applications from machine ethics and moral modeling research.
1603.07779
Thomas Watson
Thomas Watson
Nonnegative Rank vs. Binary Rank
null
Chicago Journal of Theoretical Computer Science 2016, Article 2, pages 1-13
10.4086/cjtcs.2016.002
null
cs.CC cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by (and using tools from) communication complexity, we investigate the relationship between the following two ranks of a $0$-$1$ matrix: its nonnegative rank and its binary rank (the $\log$ of the latter being the unambiguous nondeterministic communication complexity). We prove that for partial $0$-$1$ matrices, there can be an exponential separation. For total $0$-$1$ matrices, we show that if the nonnegative rank is at most $3$ then the two ranks are equal, and we show a separation by exhibiting a matrix with nonnegative rank $4$ and binary rank $5$, as well as a family of matrices for which the binary rank is $4/3$ times the nonnegative rank.
[ { "created": "Thu, 24 Mar 2016 23:21:41 GMT", "version": "v1" } ]
2016-03-28
[ [ "Watson", "Thomas", "" ] ]
Motivated by (and using tools from) communication complexity, we investigate the relationship between the following two ranks of a $0$-$1$ matrix: its nonnegative rank and its binary rank (the $\log$ of the latter being the unambiguous nondeterministic communication complexity). We prove that for partial $0$-$1$ matrices, there can be an exponential separation. For total $0$-$1$ matrices, we show that if the nonnegative rank is at most $3$ then the two ranks are equal, and we show a separation by exhibiting a matrix with nonnegative rank $4$ and binary rank $5$, as well as a family of matrices for which the binary rank is $4/3$ times the nonnegative rank.
2012.05901
Jia-Bin Huang
Johannes Kopf, Xuejian Rong, Jia-Bin Huang
Robust Consistent Video Depth Estimation
Project website: https://robust-cvd.github.io/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video. We integrate a learning-based depth prior, in the form of a convolutional neural network trained for single-image depth estimation, with geometric optimization, to estimate a smooth camera trajectory as well as detailed and stable depth reconstruction. Our algorithm combines two complementary techniques: (1) flexible deformation-splines for low-frequency large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details. In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations. Our method quantitatively outperforms the state of the art on the Sintel benchmark for both depth and pose estimation and attains favorable qualitative results across diverse wild datasets.
[ { "created": "Thu, 10 Dec 2020 18:59:48 GMT", "version": "v1" }, { "created": "Tue, 22 Jun 2021 03:33:03 GMT", "version": "v2" } ]
2021-06-23
[ [ "Kopf", "Johannes", "" ], [ "Rong", "Xuejian", "" ], [ "Huang", "Jia-Bin", "" ] ]
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video. We integrate a learning-based depth prior, in the form of a convolutional neural network trained for single-image depth estimation, with geometric optimization, to estimate a smooth camera trajectory as well as detailed and stable depth reconstruction. Our algorithm combines two complementary techniques: (1) flexible deformation-splines for low-frequency large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details. In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations. Our method quantitatively outperforms the state of the art on the Sintel benchmark for both depth and pose estimation and attains favorable qualitative results across diverse wild datasets.
1410.0478
Subhadip Basu
Nibaran Das, Sandip Pramanik, Subhadip Basu, Punam Kumar Saha, Ram Sarkar, Mahantapas Kundu, Mita Nasipuri
Recognition of Handwritten Bangla Basic Characters and Digits using Convex Hull based Feature Set
null
2009 International Conference on Artificial Intelligence and Pattern Recognition, At Orlando, Florida pp. 380-386
10.13140/2.1.3689.4089
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In dealing with the problem of recognition of handwritten character patterns of varying shapes and sizes, selection of a proper feature set is important to achieve high recognition performance. The current research aims to evaluate the performance of the convex hull based feature set, i.e. 125 features in all computed over different bays attributes of the convex hull of a pattern, for effective recognition of isolated handwritten Bangla basic characters and digits. On experimentation with a database of 10000 samples, the maximum recognition rate of 76.86% is observed for handwritten Bangla characters. For Bangla numerals, the maximum success rate of 99.45% is achieved on a database of 12000 samples. The current work validates the usefulness of a new kind of feature set for recognition of handwritten Bangla basic characters and numerals.
[ { "created": "Thu, 2 Oct 2014 08:26:38 GMT", "version": "v1" } ]
2014-10-03
[ [ "Das", "Nibaran", "" ], [ "Pramanik", "Sandip", "" ], [ "Basu", "Subhadip", "" ], [ "Saha", "Punam Kumar", "" ], [ "Sarkar", "Ram", "" ], [ "Kundu", "Mahantapas", "" ], [ "Nasipuri", "Mita", "" ] ]
In dealing with the problem of recognition of handwritten character patterns of varying shapes and sizes, selection of a proper feature set is important to achieve high recognition performance. The current research aims to evaluate the performance of the convex hull based feature set, i.e. 125 features in all computed over different bays attributes of the convex hull of a pattern, for effective recognition of isolated handwritten Bangla basic characters and digits. On experimentation with a database of 10000 samples, the maximum recognition rate of 76.86% is observed for handwritten Bangla characters. For Bangla numerals, the maximum success rate of 99.45% is achieved on a database of 12000 samples. The current work validates the usefulness of a new kind of feature set for recognition of handwritten Bangla basic characters and numerals.
1403.6331
P{\aa}l Gr{\o}n{\aa}s Drange
P{\aa}l Gr{\o}n{\aa}s Drange, Markus Sortland Dregi, Pim van 't Hof
On the Computational Complexity of Vertex Integrity and Component Order Connectivity
A preliminary version of this paper already appeared in the conference proceedings of ISAAC 2014
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Weighted Vertex Integrity (wVI) problem takes as input an $n$-vertex graph $G$, a weight function $w:V(G)\to\mathbb{N}$, and an integer $p$. The task is to decide if there exists a set $X\subseteq V(G)$ such that the weight of $X$ plus the weight of a heaviest component of $G-X$ is at most $p$. Among other results, we prove that: (1) wVI is NP-complete on co-comparability graphs, even if each vertex has weight $1$; (2) wVI can be solved in $O(p^{p+1}n)$ time; (3) wVI admits a kernel with at most $p^3$ vertices. Result (1) refutes a conjecture by Ray and Deogun and answers an open question by Ray et al. It also complements a result by Kratsch et al., stating that the unweighted version of the problem can be solved in polynomial time on co-comparability graphs of bounded dimension, provided that an intersection model of the input graph is given as part of the input. An instance of the Weighted Component Order Connectivity (wCOC) problem consists of an $n$-vertex graph $G$, a weight function $w:V(G)\to \mathbb{N}$, and two integers $k$ and $l$, and the task is to decide if there exists a set $X\subseteq V(G)$ such that the weight of $X$ is at most $k$ and the weight of a heaviest component of $G-X$ is at most $l$. In some sense, the wCOC problem can be seen as a refined version of the wVI problem. We prove, among other results, that: (4) wCOC can be solved in $O(\min\{k,l\}\cdot n^3)$ time on interval graphs, while the unweighted version can be solved in $O(n^2)$ time on this graph class; (5) wCOC is W[1]-hard on split graphs when parameterized by $k$ or by $l$; (6) wCOC can be solved in $2^{O(k\log l)} n$ time; (7) wCOC admits a kernel with at most $kl(k+l)+k$ vertices. We also show that result (6) is essentially tight by proving that wCOC cannot be solved in $2^{o(k \log l)}n^{O(1)}$ time, unless the ETH fails.
[ { "created": "Tue, 25 Mar 2014 13:11:09 GMT", "version": "v1" }, { "created": "Thu, 4 Dec 2014 11:32:17 GMT", "version": "v2" } ]
2014-12-05
[ [ "Drange", "Pål Grønås", "" ], [ "Dregi", "Markus Sortland", "" ], [ "Hof", "Pim van 't", "" ] ]
The Weighted Vertex Integrity (wVI) problem takes as input an $n$-vertex graph $G$, a weight function $w:V(G)\to\mathbb{N}$, and an integer $p$. The task is to decide if there exists a set $X\subseteq V(G)$ such that the weight of $X$ plus the weight of a heaviest component of $G-X$ is at most $p$. Among other results, we prove that: (1) wVI is NP-complete on co-comparability graphs, even if each vertex has weight $1$; (2) wVI can be solved in $O(p^{p+1}n)$ time; (3) wVI admits a kernel with at most $p^3$ vertices. Result (1) refutes a conjecture by Ray and Deogun and answers an open question by Ray et al. It also complements a result by Kratsch et al., stating that the unweighted version of the problem can be solved in polynomial time on co-comparability graphs of bounded dimension, provided that an intersection model of the input graph is given as part of the input. An instance of the Weighted Component Order Connectivity (wCOC) problem consists of an $n$-vertex graph $G$, a weight function $w:V(G)\to \mathbb{N}$, and two integers $k$ and $l$, and the task is to decide if there exists a set $X\subseteq V(G)$ such that the weight of $X$ is at most $k$ and the weight of a heaviest component of $G-X$ is at most $l$. In some sense, the wCOC problem can be seen as a refined version of the wVI problem. We prove, among other results, that: (4) wCOC can be solved in $O(\min\{k,l\}\cdot n^3)$ time on interval graphs, while the unweighted version can be solved in $O(n^2)$ time on this graph class; (5) wCOC is W[1]-hard on split graphs when parameterized by $k$ or by $l$; (6) wCOC can be solved in $2^{O(k\log l)} n$ time; (7) wCOC admits a kernel with at most $kl(k+l)+k$ vertices. We also show that result (6) is essentially tight by proving that wCOC cannot be solved in $2^{o(k \log l)}n^{O(1)}$ time, unless the ETH fails.
2403.05049
Yunpeng Qu
Yunpeng Qu, Kun Yuan, Kai Zhao, Qizhi Xie, Jinhua Hao, Ming Sun and Chao Zhou
XPSR: Cross-modal Priors for Diffusion-based Image Super-Resolution
19 pages, 7 figures; including supplementary material
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion-based methods, endowed with a formidable generative prior, have received increasing attention in Image Super-Resolution (ISR) recently. However, as low-resolution (LR) images often undergo severe degradation, it is challenging for ISR models to perceive the semantic and degradation information, resulting in restored images with incorrect content or unrealistic artifacts. To address these issues, we propose a \textit{Cross-modal Priors for Super-Resolution (XPSR)} framework. Within XPSR, to acquire precise and comprehensive semantic conditions for the diffusion model, cutting-edge Multimodal Large Language Models (MLLMs) are utilized. To facilitate better fusion of cross-modal priors, a \textit{Semantic-Fusion Attention} is introduced. To distill semantic-preserved information instead of undesired degradations, a \textit{Degradation-Free Constraint} is attached between LR and its high-resolution (HR) counterpart. Quantitative and qualitative results show that XPSR is capable of generating high-fidelity and high-realism images across synthetic and real-world datasets. Codes are released at \url{https://github.com/qyp2000/XPSR}.
[ { "created": "Fri, 8 Mar 2024 04:52:22 GMT", "version": "v1" }, { "created": "Fri, 19 Jul 2024 16:31:19 GMT", "version": "v2" } ]
2024-07-22
[ [ "Qu", "Yunpeng", "" ], [ "Yuan", "Kun", "" ], [ "Zhao", "Kai", "" ], [ "Xie", "Qizhi", "" ], [ "Hao", "Jinhua", "" ], [ "Sun", "Ming", "" ], [ "Zhou", "Chao", "" ] ]
Diffusion-based methods, endowed with a formidable generative prior, have received increasing attention in Image Super-Resolution (ISR) recently. However, as low-resolution (LR) images often undergo severe degradation, it is challenging for ISR models to perceive the semantic and degradation information, resulting in restored images with incorrect content or unrealistic artifacts. To address these issues, we propose a \textit{Cross-modal Priors for Super-Resolution (XPSR)} framework. Within XPSR, to acquire precise and comprehensive semantic conditions for the diffusion model, cutting-edge Multimodal Large Language Models (MLLMs) are utilized. To facilitate better fusion of cross-modal priors, a \textit{Semantic-Fusion Attention} is introduced. To distill semantic-preserved information instead of undesired degradations, a \textit{Degradation-Free Constraint} is attached between LR and its high-resolution (HR) counterpart. Quantitative and qualitative results show that XPSR is capable of generating high-fidelity and high-realism images across synthetic and real-world datasets. Codes are released at \url{https://github.com/qyp2000/XPSR}.
2001.11164
Zihan Liu
Zihan Liu, Genta Indra Winata, Samuel Cahyawijaya, Andrea Madotto, Zhaojiang Lin, Pascale Fung
On the Importance of Word Order Information in Cross-lingual Sequence Labeling
Accepted in AAAI-2021
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Word order generally varies across languages. In this paper, we hypothesize that cross-lingual models that fit the word order of the source language might fail to handle target languages. To verify this hypothesis, we investigate whether making models insensitive to the word order of the source language can improve the adaptation performance in target languages. To do so, we reduce the source language word order information fitted to sequence encoders and observe the performance changes. In addition, based on this hypothesis, we propose a new method for fine-tuning multilingual BERT in downstream cross-lingual sequence labeling tasks. Experimental results on dialogue natural language understanding, part-of-speech tagging, and named entity recognition tasks show that reducing the word order information fitted to the model can achieve better zero-shot cross-lingual performance. Furthermore, our proposed methods can also be applied to strong cross-lingual baselines and improve their performance.
[ { "created": "Thu, 30 Jan 2020 03:35:44 GMT", "version": "v1" }, { "created": "Wed, 26 Feb 2020 12:18:32 GMT", "version": "v2" }, { "created": "Thu, 19 Mar 2020 15:31:19 GMT", "version": "v3" }, { "created": "Tue, 8 Dec 2020 11:04:04 GMT", "version": "v4" } ]
2020-12-09
[ [ "Liu", "Zihan", "" ], [ "Winata", "Genta Indra", "" ], [ "Cahyawijaya", "Samuel", "" ], [ "Madotto", "Andrea", "" ], [ "Lin", "Zhaojiang", "" ], [ "Fung", "Pascale", "" ] ]
Word order generally varies across languages. In this paper, we hypothesize that cross-lingual models that fit the word order of the source language might fail to handle target languages. To verify this hypothesis, we investigate whether making models insensitive to the word order of the source language can improve the adaptation performance in target languages. To do so, we reduce the source language word order information fitted to sequence encoders and observe the performance changes. In addition, based on this hypothesis, we propose a new method for fine-tuning multilingual BERT in downstream cross-lingual sequence labeling tasks. Experimental results on dialogue natural language understanding, part-of-speech tagging, and named entity recognition tasks show that reducing the word order information fitted to the model can achieve better zero-shot cross-lingual performance. Furthermore, our proposed methods can also be applied to strong cross-lingual baselines and improve their performance.
2308.13249
Yunzhu Pan
Yunzhu Pan, Nian Li, Chen Gao, Jianxin Chang, Yanan Niu, Yang Song, Depeng Jin, Yong Li
Learning and Optimization of Implicit Negative Feedback for Industrial Short-video Recommender System
Accepted by CIKM'23
null
10.1145/3583780.3615482
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Short-video recommendation is one of the most important recommendation applications in today's industrial information systems. Compared with other recommendation tasks, its most distinctive characteristic is the enormous amount of feedback. Specifically, in short-video recommendation, the easiest-to-collect user feedback is the skipping behavior, which leads to two critical challenges for the recommendation model. First, the skipping behavior reflects implicit user preferences, making interest extraction challenging. Second, this kind of special feedback involves multiple objectives, such as total watching time and skipping rate, which is also very challenging. In this paper, we present our industrial solution in Kuaishou, which serves billion-level users every day. Specifically, we deploy a feedback-aware encoding module that extracts user preferences while taking the impact of context into consideration. We further design a multi-objective prediction module which clearly distinguishes the relations and differences among the different model objectives in short-video recommendation. We conduct extensive online A/B tests, along with detailed and careful analysis, which verify the effectiveness of our solution.
[ { "created": "Fri, 25 Aug 2023 08:54:27 GMT", "version": "v1" }, { "created": "Tue, 5 Mar 2024 11:51:01 GMT", "version": "v2" } ]
2024-03-06
[ [ "Pan", "Yunzhu", "" ], [ "Li", "Nian", "" ], [ "Gao", "Chen", "" ], [ "Chang", "Jianxin", "" ], [ "Niu", "Yanan", "" ], [ "Song", "Yang", "" ], [ "Jin", "Depeng", "" ], [ "Li", "Yong", "" ] ]
Short-video recommendation is one of the most important recommendation applications in today's industrial information systems. Compared with other recommendation tasks, its most distinctive characteristic is the enormous amount of feedback. Specifically, in short-video recommendation, the easiest-to-collect user feedback is the skipping behavior, which leads to two critical challenges for the recommendation model. First, the skipping behavior reflects implicit user preferences, making interest extraction challenging. Second, this kind of special feedback involves multiple objectives, such as total watching time and skipping rate, which is also very challenging. In this paper, we present our industrial solution in Kuaishou, which serves billion-level users every day. Specifically, we deploy a feedback-aware encoding module that extracts user preferences while taking the impact of context into consideration. We further design a multi-objective prediction module which clearly distinguishes the relations and differences among the different model objectives in short-video recommendation. We conduct extensive online A/B tests, along with detailed and careful analysis, which verify the effectiveness of our solution.
1607.06676
Grasha Jacob Mrs
Grasha Jacob, R. Shenbagavalli, S. Karthika
Detection of surface defects on ceramic tiles based on morphological techniques
9 pages, 11 figures
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Ceramic tiles have become very popular and are used in the flooring of offices and shopping malls. As manually testing the quality of tiles in the highly polluted environment of the manufacturing industry is a labor-intensive and time-consuming process, analysis is carried out on the tile images instead. This paper discusses an automated system to detect defects on the surface of ceramic tiles based on dilation, erosion, SMEE and boundary detection techniques.
[ { "created": "Thu, 21 Jul 2016 08:25:41 GMT", "version": "v1" } ]
2016-07-25
[ [ "Jacob", "Grasha", "" ], [ "Shenbagavalli", "R.", "" ], [ "Karthika", "S.", "" ] ]
Ceramic tiles have become very popular and are used in the flooring of offices and shopping malls. As manually testing the quality of tiles in the highly polluted environment of the manufacturing industry is a labor-intensive and time-consuming process, analysis is carried out on the tile images instead. This paper discusses an automated system to detect defects on the surface of ceramic tiles based on dilation, erosion, SMEE and boundary detection techniques.
2010.10117
Peter Gangl
Peter Gangl and Stefan K\"othe and Christiane Mellak and Alessio Cesarano and Annette M\"utze
Multi-objective free-form shape optimization of a synchronous reluctance machine
6 pages, 7 figures, proceedings to IGTE Symposium Graz 2020
null
null
null
cs.CE math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper deals with the design optimization of a synchronous reluctance machine to be used in an X-ray tube, where the goal is to maximize the torque, by means of gradient-based free-form shape optimization. The presented approach is based on the mathematical concept of shape derivatives and allows to obtain new motor designs without the need to introduce a geometric parametrization. We validate our results by comparing them to a parametric geometry optimization in JMAG by means of a stochastic optimization algorithm. While the obtained designs are of similar shape, the computational time used by the gradient-based algorithm is in the order of minutes, compared to several hours taken by the stochastic optimization algorithm. Finally, we show an extension of the free-form shape optimization algorithm to the case of multiple objective functions and illustrate a way to obtain an approximate Pareto front.
[ { "created": "Tue, 20 Oct 2020 08:16:54 GMT", "version": "v1" } ]
2020-10-21
[ [ "Gangl", "Peter", "" ], [ "Köthe", "Stefan", "" ], [ "Mellak", "Christiane", "" ], [ "Cesarano", "Alessio", "" ], [ "Mütze", "Annette", "" ] ]
This paper deals with the design optimization of a synchronous reluctance machine to be used in an X-ray tube, where the goal is to maximize the torque, by means of gradient-based free-form shape optimization. The presented approach is based on the mathematical concept of shape derivatives and allows to obtain new motor designs without the need to introduce a geometric parametrization. We validate our results by comparing them to a parametric geometry optimization in JMAG by means of a stochastic optimization algorithm. While the obtained designs are of similar shape, the computational time used by the gradient-based algorithm is in the order of minutes, compared to several hours taken by the stochastic optimization algorithm. Finally, we show an extension of the free-form shape optimization algorithm to the case of multiple objective functions and illustrate a way to obtain an approximate Pareto front.
1709.08359
Robert Brijder
Robert Brijder, Floris Geerts, Jan Van den Bussche, Timmy Weerwag
On the expressive power of query languages for matrices
21 pages, 3 figures
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the expressive power of $\mathsf{MATLANG}$, a formal language for matrix manipulation based on common matrix operations and linear algebra. The language can be extended with the operation $\mathsf{inv}$ of inverting a matrix. In $\mathsf{MATLANG}+\mathsf{inv}$ we can compute the transitive closure of directed graphs, whereas we show that this is not possible without inversion. Indeed we show that the basic language can be simulated in the relational algebra with arithmetic operations, grouping, and summation. We also consider an operation $\mathsf{eigen}$ for diagonalizing a matrix, which is defined so that different eigenvectors returned for a same eigenvalue are orthogonal. We show that $\mathsf{inv}$ can be expressed in $\mathsf{MATLANG}+\mathsf{eigen}$. We put forward the open question whether there are boolean queries about matrices, or generic queries about graphs, expressible in $\mathsf{MATLANG} + \mathsf{eigen}$ but not in $\mathsf{MATLANG}+\mathsf{inv}$. The evaluation problem for $\mathsf{MATLANG} + \mathsf{eigen}$ is shown to be complete for the complexity class $\exists \mathbf{R}$.
[ { "created": "Mon, 25 Sep 2017 08:05:00 GMT", "version": "v1" } ]
2017-09-26
[ [ "Brijder", "Robert", "" ], [ "Geerts", "Floris", "" ], [ "Bussche", "Jan Van den", "" ], [ "Weerwag", "Timmy", "" ] ]
We investigate the expressive power of $\mathsf{MATLANG}$, a formal language for matrix manipulation based on common matrix operations and linear algebra. The language can be extended with the operation $\mathsf{inv}$ of inverting a matrix. In $\mathsf{MATLANG}+\mathsf{inv}$ we can compute the transitive closure of directed graphs, whereas we show that this is not possible without inversion. Indeed we show that the basic language can be simulated in the relational algebra with arithmetic operations, grouping, and summation. We also consider an operation $\mathsf{eigen}$ for diagonalizing a matrix, which is defined so that different eigenvectors returned for a same eigenvalue are orthogonal. We show that $\mathsf{inv}$ can be expressed in $\mathsf{MATLANG}+\mathsf{eigen}$. We put forward the open question whether there are boolean queries about matrices, or generic queries about graphs, expressible in $\mathsf{MATLANG} + \mathsf{eigen}$ but not in $\mathsf{MATLANG}+\mathsf{inv}$. The evaluation problem for $\mathsf{MATLANG} + \mathsf{eigen}$ is shown to be complete for the complexity class $\exists \mathbf{R}$.
2209.04126
Tatsuya Hiraoka
Tatsuya Hiraoka
MaxMatch-Dropout: Subword Regularization for WordPiece
Accepted to appear at COLING2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a subword regularization method for WordPiece, which uses a maximum matching algorithm for tokenization. The proposed method, MaxMatch-Dropout, randomly drops words in a search using the maximum matching algorithm. It realizes finetuning with subword regularization for popular pretrained language models such as BERT-base. The experimental results demonstrate that MaxMatch-Dropout improves the performance of text classification and machine translation tasks as well as other subword regularization methods. Moreover, we provide a comparative analysis of subword regularization methods: subword regularization with SentencePiece (Unigram), BPE-Dropout, and MaxMatch-Dropout.
[ { "created": "Fri, 9 Sep 2022 05:41:26 GMT", "version": "v1" } ]
2022-09-12
[ [ "Hiraoka", "Tatsuya", "" ] ]
We present a subword regularization method for WordPiece, which uses a maximum matching algorithm for tokenization. The proposed method, MaxMatch-Dropout, randomly drops words in a search using the maximum matching algorithm. It realizes finetuning with subword regularization for popular pretrained language models such as BERT-base. The experimental results demonstrate that MaxMatch-Dropout improves the performance of text classification and machine translation tasks as well as other subword regularization methods. Moreover, we provide a comparative analysis of subword regularization methods: subword regularization with SentencePiece (Unigram), BPE-Dropout, and MaxMatch-Dropout.
2206.04992
Xiaoxia Xu
Xiaoxia Xu, Yuanwei Liu, Xidong Mu, Qimei Chen, Hao Jiang, and Zhiguo Ding
Artificial Intelligence Enabled NOMA Towards Next Generation Multiple Access
This article has been accepted by IEEE Wireless Communications Magazine
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article focuses on the application of artificial intelligence (AI) in non-orthogonal multiple-access (NOMA), which aims to achieve automated, adaptive, and high-efficiency multi-user communications towards next generation multiple access (NGMA). First, the limitations of current scenario-specific multiple-antenna NOMA schemes are discussed, and the importance of AI for NGMA is highlighted. Then, to achieve the vision of NGMA, a novel cluster-free NOMA framework is proposed for providing scenario-adaptive NOMA communications, and several promising machine learning solutions are identified. To elaborate further, novel centralized and distributed machine learning paradigms are conceived for efficiently employing the proposed cluster-free NOMA framework in single-cell and multi-cell networks, where numerical results are provided to demonstrate the effectiveness. Furthermore, the interplays between the proposed cluster-free NOMA and emerging wireless techniques are presented. Finally, several open research issues of AI enabled NGMA are discussed.
[ { "created": "Fri, 10 Jun 2022 10:59:56 GMT", "version": "v1" }, { "created": "Thu, 1 Sep 2022 19:29:03 GMT", "version": "v2" }, { "created": "Tue, 13 Dec 2022 09:28:57 GMT", "version": "v3" } ]
2022-12-14
[ [ "Xu", "Xiaoxia", "" ], [ "Liu", "Yuanwei", "" ], [ "Mu", "Xidong", "" ], [ "Chen", "Qimei", "" ], [ "Jiang", "Hao", "" ], [ "Ding", "Zhiguo", "" ] ]
This article focuses on the application of artificial intelligence (AI) in non-orthogonal multiple-access (NOMA), which aims to achieve automated, adaptive, and high-efficiency multi-user communications towards next generation multiple access (NGMA). First, the limitations of current scenario-specific multiple-antenna NOMA schemes are discussed, and the importance of AI for NGMA is highlighted. Then, to achieve the vision of NGMA, a novel cluster-free NOMA framework is proposed for providing scenario-adaptive NOMA communications, and several promising machine learning solutions are identified. To elaborate further, novel centralized and distributed machine learning paradigms are conceived for efficiently employing the proposed cluster-free NOMA framework in single-cell and multi-cell networks, where numerical results are provided to demonstrate the effectiveness. Furthermore, the interplays between the proposed cluster-free NOMA and emerging wireless techniques are presented. Finally, several open research issues of AI enabled NGMA are discussed.
1907.01647
Yong Liu Stephen
Yong Liu, Yingtai Xiao, Qiong Wu, Chunyan Miao, Juyong Zhang
Bandit Learning for Diversified Interactive Recommendation
null
null
null
null
cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactive recommender systems that enable interactions between users and the recommender system have attracted increasing research attention. Previous methods mainly focus on optimizing recommendation accuracy; however, they usually ignore the diversity of the recommendation results, which often leads to unsatisfying user experiences. In this paper, we propose a novel diversified recommendation model, named Diversified Contextual Combinatorial Bandit (DC$^2$B), for interactive recommendation with users' implicit feedback. Specifically, DC$^2$B employs a determinantal point process in the recommendation procedure to promote diversity of the recommendation results. To learn the model parameters, a Thompson sampling-type algorithm based on variational Bayesian inference is proposed. In addition, theoretical regret analysis is also provided to guarantee the performance of DC$^2$B. Extensive experiments on real datasets are performed to demonstrate the effectiveness of the proposed method.
[ { "created": "Mon, 1 Jul 2019 03:52:55 GMT", "version": "v1" } ]
2019-07-04
[ [ "Liu", "Yong", "" ], [ "Xiao", "Yingtai", "" ], [ "Wu", "Qiong", "" ], [ "Miao", "Chunyan", "" ], [ "Zhang", "Juyong", "" ] ]
Interactive recommender systems that enable interactions between users and the recommender system have attracted increasing research attention. Previous methods mainly focus on optimizing recommendation accuracy; however, they usually ignore the diversity of the recommendation results, which often leads to unsatisfying user experiences. In this paper, we propose a novel diversified recommendation model, named Diversified Contextual Combinatorial Bandit (DC$^2$B), for interactive recommendation with users' implicit feedback. Specifically, DC$^2$B employs a determinantal point process in the recommendation procedure to promote diversity of the recommendation results. To learn the model parameters, a Thompson sampling-type algorithm based on variational Bayesian inference is proposed. In addition, theoretical regret analysis is also provided to guarantee the performance of DC$^2$B. Extensive experiments on real datasets are performed to demonstrate the effectiveness of the proposed method.
2107.05328
Xiaofeng Liu
Yinchuan Li, Xiaofeng Liu, Yunfeng Shao, Qing Wang and Yanhui Geng
Structured Directional Pruning via Perturbation Orthogonal Projection
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structured pruning is an effective compression technique to reduce the computation of neural networks, which is usually achieved by adding perturbations to reduce network parameters at the cost of slightly increasing training loss. A more reasonable approach is to find a sparse minimizer along the flat minimum valley found by optimizers, i.e. stochastic gradient descent, which keeps the training loss constant. To achieve this goal, we propose the structured directional pruning based on orthogonal projecting the perturbations onto the flat minimum valley. We also propose a fast solver sDprun and further prove that it achieves directional pruning asymptotically after sufficient training. Experiments using VGG-Net and ResNet on CIFAR-10 and CIFAR-100 datasets show that our method obtains the state-of-the-art pruned accuracy (i.e. 93.97% on VGG16, CIFAR-10 task) without retraining. Experiments using DNN, VGG-Net and WRN28X10 on MNIST, CIFAR-10 and CIFAR-100 datasets demonstrate our method performs structured directional pruning, reaching the same minimum valley as the optimizer.
[ { "created": "Mon, 12 Jul 2021 11:35:47 GMT", "version": "v1" }, { "created": "Thu, 21 Oct 2021 14:18:35 GMT", "version": "v2" } ]
2021-10-22
[ [ "Li", "Yinchuan", "" ], [ "Liu", "Xiaofeng", "" ], [ "Shao", "Yunfeng", "" ], [ "Wang", "Qing", "" ], [ "Geng", "Yanhui", "" ] ]
Structured pruning is an effective compression technique to reduce the computation of neural networks, which is usually achieved by adding perturbations to reduce network parameters at the cost of slightly increasing training loss. A more reasonable approach is to find a sparse minimizer along the flat minimum valley found by optimizers, i.e. stochastic gradient descent, which keeps the training loss constant. To achieve this goal, we propose the structured directional pruning based on orthogonal projecting the perturbations onto the flat minimum valley. We also propose a fast solver sDprun and further prove that it achieves directional pruning asymptotically after sufficient training. Experiments using VGG-Net and ResNet on CIFAR-10 and CIFAR-100 datasets show that our method obtains the state-of-the-art pruned accuracy (i.e. 93.97% on VGG16, CIFAR-10 task) without retraining. Experiments using DNN, VGG-Net and WRN28X10 on MNIST, CIFAR-10 and CIFAR-100 datasets demonstrate our method performs structured directional pruning, reaching the same minimum valley as the optimizer.
2406.12769
Xiangming Zhu
Xiangming Zhu, Huayu Deng, Haochen Yuan, Yunbo Wang, Xiaokang Yang
Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video
Published as a conference paper at ICLR 2024
null
null
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce latent intuitive physics, a transfer learning framework for physics simulation that can infer hidden properties of fluids from a single 3D video and simulate the observed fluid in novel scenes. Our key insight is to use latent features drawn from a learnable prior distribution conditioned on the underlying particle states to capture the invisible and complex physical properties. To achieve this, we train a parametrized prior learner given visual observations to approximate the visual posterior of inverse graphics, and both the particle states and the visual posterior are obtained from a learned neural renderer. The converged prior learner is embedded in our probabilistic physics engine, allowing us to perform novel simulations on unseen geometries, boundaries, and dynamics without knowledge of the true physical parameters. We validate our model in three ways: (i) novel scene simulation with the learned visual-world physics, (ii) future prediction of the observed fluid dynamics, and (iii) supervised particle simulation. Our model demonstrates strong performance in all three tasks.
[ { "created": "Tue, 18 Jun 2024 16:37:44 GMT", "version": "v1" } ]
2024-08-06
[ [ "Zhu", "Xiangming", "" ], [ "Deng", "Huayu", "" ], [ "Yuan", "Haochen", "" ], [ "Wang", "Yunbo", "" ], [ "Yang", "Xiaokang", "" ] ]
We introduce latent intuitive physics, a transfer learning framework for physics simulation that can infer hidden properties of fluids from a single 3D video and simulate the observed fluid in novel scenes. Our key insight is to use latent features drawn from a learnable prior distribution conditioned on the underlying particle states to capture the invisible and complex physical properties. To achieve this, we train a parametrized prior learner given visual observations to approximate the visual posterior of inverse graphics, and both the particle states and the visual posterior are obtained from a learned neural renderer. The converged prior learner is embedded in our probabilistic physics engine, allowing us to perform novel simulations on unseen geometries, boundaries, and dynamics without knowledge of the true physical parameters. We validate our model in three ways: (i) novel scene simulation with the learned visual-world physics, (ii) future prediction of the observed fluid dynamics, and (iii) supervised particle simulation. Our model demonstrates strong performance in all three tasks.
2308.10649
Pawan Kumar
Sajal Khandelwal, Pawan Kumar, Syed Azeemuddin
Reinforcement Learning Based Sensor Optimization for Bio-markers
7 pages, 4 tables
null
null
null
cs.LG cs.NE eess.SP
http://creativecommons.org/licenses/by/4.0/
Radio frequency (RF) biosensors, in particular those based on inter-digitated capacitors (IDCs), are pivotal in areas like biomedical diagnosis, remote sensing, and wireless communication. Despite their advantages of low cost and easy fabrication, their sensitivity can be hindered by design imperfections, environmental factors, and circuit noise. This paper investigates enhancing the sensitivity of IDC-based RF sensors using novel reinforcement learning based Binary Particle Swarm Optimization (RLBPSO), and it is compared to Ant Colony Optimization (ACO), and other state-of-the-art methods. By focusing on optimizing design parameters like electrode design and finger width, the proposed study found notable improvements in sensor sensitivity. The proposed RLBPSO method shows best optimized design for various frequency ranges when compared to current state-of-the-art methods.
[ { "created": "Mon, 21 Aug 2023 11:36:54 GMT", "version": "v1" } ]
2023-08-22
[ [ "Khandelwal", "Sajal", "" ], [ "Kumar", "Pawan", "" ], [ "Azeemuddin", "Syed", "" ] ]
Radio frequency (RF) biosensors, in particular those based on inter-digitated capacitors (IDCs), are pivotal in areas like biomedical diagnosis, remote sensing, and wireless communication. Despite their advantages of low cost and easy fabrication, their sensitivity can be hindered by design imperfections, environmental factors, and circuit noise. This paper investigates enhancing the sensitivity of IDC-based RF sensors using novel reinforcement learning based Binary Particle Swarm Optimization (RLBPSO), and it is compared to Ant Colony Optimization (ACO), and other state-of-the-art methods. By focusing on optimizing design parameters like electrode design and finger width, the proposed study found notable improvements in sensor sensitivity. The proposed RLBPSO method shows best optimized design for various frequency ranges when compared to current state-of-the-art methods.
2302.01984
Nathan Roll
Nathan Roll, Calbert Graham, Simon Todd
PSST! Prosodic Speech Segmentation with Transformers
5 pages, 3 figures. For associated repository, see https://github.com/Nathan-Roll1/psst
null
10.18653/v1/2023.conll-1.31
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Self-attention mechanisms have enabled transformers to achieve superhuman-level performance on many speech-to-text (STT) tasks, yet the challenge of automatic prosodic segmentation has remained unsolved. In this paper we finetune Whisper, a pretrained STT model, to annotate intonation unit (IU) boundaries by repurposing low-frequency tokens. Our approach achieves an accuracy of 95.8%, outperforming previous methods without the need for large-scale labeled data or enterprise grade compute resources. We also diminish input signals by applying a series of filters, finding that low pass filters at a 3.2 kHz level improve segmentation performance in out of sample and out of distribution contexts. We release our model as both a transcription tool and a baseline for further improvements in prosodic segmentation.
[ { "created": "Fri, 3 Feb 2023 20:09:17 GMT", "version": "v1" } ]
2024-03-19
[ [ "Roll", "Nathan", "" ], [ "Graham", "Calbert", "" ], [ "Todd", "Simon", "" ] ]
Self-attention mechanisms have enabled transformers to achieve superhuman-level performance on many speech-to-text (STT) tasks, yet the challenge of automatic prosodic segmentation has remained unsolved. In this paper we finetune Whisper, a pretrained STT model, to annotate intonation unit (IU) boundaries by repurposing low-frequency tokens. Our approach achieves an accuracy of 95.8%, outperforming previous methods without the need for large-scale labeled data or enterprise grade compute resources. We also diminish input signals by applying a series of filters, finding that low pass filters at a 3.2 kHz level improve segmentation performance in out of sample and out of distribution contexts. We release our model as both a transcription tool and a baseline for further improvements in prosodic segmentation.
2004.04686
Niklas K\"uhl
Niklas K\"uhl, Marc Goutier, Robin Hirt, Gerhard Satzger
Machine Learning in Artificial Intelligence: Towards a Common Understanding
Hawaii International Conference on System Sciences (HICSS-52) 2019
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The application of "machine learning" and "artificial intelligence" has become popular within the last decade. Both terms are frequently used in science and media, sometimes interchangeably, sometimes with different meanings. In this work, we aim to clarify the relationship between these terms and, in particular, to specify the contribution of machine learning to artificial intelligence. We review relevant literature and present a conceptual framework which clarifies the role of machine learning to build (artificial) intelligent agents. Hence, we seek to provide more terminological clarity and a starting point for (interdisciplinary) discussions and future research.
[ { "created": "Fri, 27 Mar 2020 19:09:57 GMT", "version": "v1" } ]
2020-04-10
[ [ "Kühl", "Niklas", "" ], [ "Goutier", "Marc", "" ], [ "Hirt", "Robin", "" ], [ "Satzger", "Gerhard", "" ] ]
The application of "machine learning" and "artificial intelligence" has become popular within the last decade. Both terms are frequently used in science and media, sometimes interchangeably, sometimes with different meanings. In this work, we aim to clarify the relationship between these terms and, in particular, to specify the contribution of machine learning to artificial intelligence. We review relevant literature and present a conceptual framework which clarifies the role of machine learning to build (artificial) intelligent agents. Hence, we seek to provide more terminological clarity and a starting point for (interdisciplinary) discussions and future research.
2302.11443
Tim Niklas Uhl
Peter Sanders and Tim Niklas Uhl
Engineering a Distributed-Memory Triangle Counting Algorithm
11 pages, 8 figures, to be published in 2023 IEEE International Parallel and Distributed Processing Symposium (IPDPS), St. Petersburg, FL, USA, pp. 702-712
2023 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
10.1109/IPDPS54959.2023.00076
null
cs.DC cs.DS cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Counting triangles in a graph and incident to each vertex is a fundamental and frequently considered task of graph analysis. We consider how to efficiently do this for huge graphs using massively parallel distributed-memory machines. Unsurprisingly, the main issue is to reduce communication between processors. We achieve this by counting locally whenever possible and reducing the amount of information that needs to be sent in order to handle (possible) nonlocal triangles. We also achieve linear memory requirements despite superlinear communication volume by introducing a new asynchronous sparse-all-to-all operation. Furthermore, we dramatically reduce startup overheads by allowing this communication to use indirect routing. Our algorithms scale (at least) up to 32 768 cores and are up to 18 times faster than the previous state of the art.
[ { "created": "Wed, 22 Feb 2023 15:26:44 GMT", "version": "v1" }, { "created": "Fri, 21 Jul 2023 11:03:53 GMT", "version": "v2" } ]
2023-07-24
[ [ "Sanders", "Peter", "" ], [ "Uhl", "Tim Niklas", "" ] ]
Counting triangles in a graph and incident to each vertex is a fundamental and frequently considered task of graph analysis. We consider how to efficiently do this for huge graphs using massively parallel distributed-memory machines. Unsurprisingly, the main issue is to reduce communication between processors. We achieve this by counting locally whenever possible and reducing the amount of information that needs to be sent in order to handle (possible) nonlocal triangles. We also achieve linear memory requirements despite superlinear communication volume by introducing a new asynchronous sparse-all-to-all operation. Furthermore, we dramatically reduce startup overheads by allowing this communication to use indirect routing. Our algorithms scale (at least) up to 32 768 cores and are up to 18 times faster than the previous state of the art.
2303.00284
Yichi Zhang
Yichi Zhang, Zijian Zhu, Hang Su, Jun Zhu, Shibao Zheng, Yuan He, Hui Xue
To Make Yourself Invisible with Adversarial Semantic Contours
11 pages, 7 figures, published in Computer Vision and Image Understanding in 2023
Computer Vision and Image Understanding 230C (2023) 103659
10.1016/j.cviu.2023.103659
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Modern object detectors are vulnerable to adversarial examples, which may bring risks to real-world applications. The sparse attack is an important task which, compared with the popular adversarial perturbation on the whole image, needs to select the potential pixels that is generally regularized by an $\ell_0$-norm constraint, and simultaneously optimize the corresponding texture. The non-differentiability of $\ell_0$ norm brings challenges and many works on attacking object detection adopted manually-designed patterns to address them, which are meaningless and independent of objects, and therefore lead to relatively poor attack performance. In this paper, we propose Adversarial Semantic Contour (ASC), an MAP estimate of a Bayesian formulation of sparse attack with a deceived prior of object contour. The object contour prior effectively reduces the search space of pixel selection and improves the attack by introducing more semantic bias. Extensive experiments demonstrate that ASC can corrupt the prediction of 9 modern detectors with different architectures (e.g., one-stage, two-stage and Transformer) by modifying fewer than 5\% of the pixels of the object area in COCO in white-box scenario and around 10\% of those in black-box scenario. We further extend the attack to datasets for autonomous driving systems to verify the effectiveness. We conclude with cautions about contour being the common weakness of object detectors with various architecture and the care needed in applying them in safety-sensitive scenarios.
[ { "created": "Wed, 1 Mar 2023 07:22:39 GMT", "version": "v1" } ]
2023-03-02
[ [ "Zhang", "Yichi", "" ], [ "Zhu", "Zijian", "" ], [ "Su", "Hang", "" ], [ "Zhu", "Jun", "" ], [ "Zheng", "Shibao", "" ], [ "He", "Yuan", "" ], [ "Xue", "Hui", "" ] ]
Modern object detectors are vulnerable to adversarial examples, which may bring risks to real-world applications. The sparse attack is an important task which, compared with the popular adversarial perturbation on the whole image, needs to select the potential pixels that is generally regularized by an $\ell_0$-norm constraint, and simultaneously optimize the corresponding texture. The non-differentiability of $\ell_0$ norm brings challenges and many works on attacking object detection adopted manually-designed patterns to address them, which are meaningless and independent of objects, and therefore lead to relatively poor attack performance. In this paper, we propose Adversarial Semantic Contour (ASC), an MAP estimate of a Bayesian formulation of sparse attack with a deceived prior of object contour. The object contour prior effectively reduces the search space of pixel selection and improves the attack by introducing more semantic bias. Extensive experiments demonstrate that ASC can corrupt the prediction of 9 modern detectors with different architectures (e.g., one-stage, two-stage and Transformer) by modifying fewer than 5\% of the pixels of the object area in COCO in white-box scenario and around 10\% of those in black-box scenario. We further extend the attack to datasets for autonomous driving systems to verify the effectiveness. We conclude with cautions about contour being the common weakness of object detectors with various architecture and the care needed in applying them in safety-sensitive scenarios.
2308.14029
Zhenghao Liu PhD.
Zhenghao Liu, Sen Mei, Chenyan Xiong, Xiaohua Li, Shi Yu, Zhiyuan Liu, Yu Gu, Ge Yu
Text Matching Improves Sequential Recommendation by Reducing Popularity Biases
Accepted by CIKM 2023
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper proposes Text mAtching based SequenTial rEcommendation model (TASTE), which maps items and users in an embedding space and recommends items by matching their text representations. TASTE verbalizes items and user-item interactions using identifiers and attributes of items. To better characterize user behaviors, TASTE additionally proposes an attention sparsity method, which enables TASTE to model longer user-item interactions by reducing the self-attention computations during encoding. Our experiments show that TASTE outperforms the state-of-the-art methods on widely used sequential recommendation datasets. TASTE alleviates the cold start problem by representing long-tail items using full-text modeling and bringing the benefits of pretrained language models to recommendation systems. Our further analyses illustrate that TASTE significantly improves the recommendation accuracy by reducing the popularity bias of previous item id based recommendation models and returning more appropriate and text-relevant items to satisfy users. All codes are available at https://github.com/OpenMatch/TASTE.
[ { "created": "Sun, 27 Aug 2023 07:44:33 GMT", "version": "v1" } ]
2023-08-29
[ [ "Liu", "Zhenghao", "" ], [ "Mei", "Sen", "" ], [ "Xiong", "Chenyan", "" ], [ "Li", "Xiaohua", "" ], [ "Yu", "Shi", "" ], [ "Liu", "Zhiyuan", "" ], [ "Gu", "Yu", "" ], [ "Yu", "Ge", "" ] ]
This paper proposes Text mAtching based SequenTial rEcommendation model (TASTE), which maps items and users in an embedding space and recommends items by matching their text representations. TASTE verbalizes items and user-item interactions using identifiers and attributes of items. To better characterize user behaviors, TASTE additionally proposes an attention sparsity method, which enables TASTE to model longer user-item interactions by reducing the self-attention computations during encoding. Our experiments show that TASTE outperforms the state-of-the-art methods on widely used sequential recommendation datasets. TASTE alleviates the cold start problem by representing long-tail items using full-text modeling and bringing the benefits of pretrained language models to recommendation systems. Our further analyses illustrate that TASTE significantly improves the recommendation accuracy by reducing the popularity bias of previous item id based recommendation models and returning more appropriate and text-relevant items to satisfy users. All codes are available at https://github.com/OpenMatch/TASTE.
1204.6445
Heng Guo
Jin-Yi Cai, Heng Guo and Tyson Williams
A Complete Dichotomy Rises from the Capture of Vanishing Signatures
Author accepted manuscript
SIAM J. Comput. 45(5), 1671-1728, 2016
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove a complexity dichotomy theorem for Holant problems over an arbitrary set of complex-valued symmetric constraint functions F on Boolean variables. This extends and unifies all previous dichotomies for Holant problems on symmetric constraint functions (taking values without a finite modulus). We define and characterize all symmetric vanishing signatures. They turned out to be essential to the complete classification of Holant problems. The dichotomy theorem has an explicit tractability criterion expressible in terms of holographic transformations. A Holant problem defined by a set of constraint functions F is solvable in polynomial time if it satisfies this tractability criterion, and is #P-hard otherwise. The tractability criterion can be intuitively stated as follows: A set F is tractable if (1) every function in F has arity at most two, or (2) F is transformable to an affine type, or (3) F is transformable to a product type, or (4) F is vanishing, combined with the right type of binary functions, or (5) F belongs to a special category of vanishing type Fibonacci gates. The proof of this theorem utilizes many previous dichotomy theorems on Holant problems and Boolean #CSP. Holographic transformations play an indispensable role as both a proof technique and in the statement of the tractability criterion.
[ { "created": "Sun, 29 Apr 2012 00:22:39 GMT", "version": "v1" }, { "created": "Mon, 22 Jul 2013 13:42:13 GMT", "version": "v2" }, { "created": "Wed, 10 Jan 2018 13:53:53 GMT", "version": "v3" } ]
2018-01-11
[ [ "Cai", "Jin-Yi", "" ], [ "Guo", "Heng", "" ], [ "Williams", "Tyson", "" ] ]
We prove a complexity dichotomy theorem for Holant problems over an arbitrary set of complex-valued symmetric constraint functions F on Boolean variables. This extends and unifies all previous dichotomies for Holant problems on symmetric constraint functions (taking values without a finite modulus). We define and characterize all symmetric vanishing signatures. They turned out to be essential to the complete classification of Holant problems. The dichotomy theorem has an explicit tractability criterion expressible in terms of holographic transformations. A Holant problem defined by a set of constraint functions F is solvable in polynomial time if it satisfies this tractability criterion, and is #P-hard otherwise. The tractability criterion can be intuitively stated as follows: A set F is tractable if (1) every function in F has arity at most two, or (2) F is transformable to an affine type, or (3) F is transformable to a product type, or (4) F is vanishing, combined with the right type of binary functions, or (5) F belongs to a special category of vanishing type Fibonacci gates. The proof of this theorem utilizes many previous dichotomy theorems on Holant problems and Boolean #CSP. Holographic transformations play an indispensable role as both a proof technique and in the statement of the tractability criterion.
1808.03736
Renata Wong
Renata Wong
An Implementation, Empirical Evaluation and Proposed Improvement for Bidirectional Splitting Method for Argumentation Frameworks under Stable Semantics
19 pages
Journal of Artificial Intelligence and Applications, Vol.9, No.4, 2018, pp. 11-29
10.5121/ijaia.2018.9402
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract argumentation frameworks are formal systems that facilitate obtaining conclusions from non-monotonic knowledge systems. Within such a system, an argumentation semantics is defined as a set of arguments with some desired qualities, for example, that the elements are not in conflict with each other. Splitting an argumentation framework can efficiently speed up the computation of argumentation semantics. With respect to stable semantics, two methods have been proposed to split an argumentation framework either in a unidirectional or bidirectional fashion. The advantage of bidirectional splitting is that it is not structure-dependent and, unlike unidirectional splitting, it can be used for frameworks consisting of a single strongly connected component. Bidirectional splitting makes use of a minimum cut. In this paper, we implement and test the performance of the bidirectional splitting method, along with two types of graph cut algorithms. Experimental data suggest that using a minimum cut will not improve the performance of computing stable semantics in most cases. Hence, instead of a minimum cut, we propose to use a balanced cut, where the framework is split into two sub-frameworks of equal size. Experimental results conducted on bidirectional splitting using the balanced cut show a significant improvement in the performance of computing semantics.
[ { "created": "Sat, 11 Aug 2018 01:52:57 GMT", "version": "v1" } ]
2018-08-14
[ [ "Wong", "Renata", "" ] ]
Abstract argumentation frameworks are formal systems that facilitate obtaining conclusions from non-monotonic knowledge systems. Within such a system, an argumentation semantics is defined as a set of arguments with some desired qualities, for example, that the elements are not in conflict with each other. Splitting an argumentation framework can efficiently speed up the computation of argumentation semantics. With respect to stable semantics, two methods have been proposed to split an argumentation framework either in a unidirectional or bidirectional fashion. The advantage of bidirectional splitting is that it is not structure-dependent and, unlike unidirectional splitting, it can be used for frameworks consisting of a single strongly connected component. Bidirectional splitting makes use of a minimum cut. In this paper, we implement and test the performance of the bidirectional splitting method, along with two types of graph cut algorithms. Experimental data suggest that using a minimum cut will not improve the performance of computing stable semantics in most cases. Hence, instead of a minimum cut, we propose to use a balanced cut, where the framework is split into two sub-frameworks of equal size. Experimental results conducted on bidirectional splitting using the balanced cut show a significant improvement in the performance of computing semantics.
1807.01620
Dominique Duval
Dominique Duval (LJK)
Logical rules as fractions and logics as sketches
null
null
null
null
cs.LO math.CT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this short paper, using category theory, we argue that logical rules can be seen as fractions and logics as limit sketches.
[ { "created": "Wed, 4 Jul 2018 14:44:44 GMT", "version": "v1" } ]
2018-07-05
[ [ "Duval", "Dominique", "", "LJK" ] ]
In this short paper, using category theory, we argue that logical rules can be seen as fractions and logics as limit sketches.
2406.19804
Cheng Peng
Cheng Peng, Rulong Wang, Yong Xiao
Rateless Stochastic Coding for Delay-constrained Semantic Communication
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of joint source-channel coding with distortion and perception constraints from a rateless perspective, the purpose of which is to settle the balance between reliability (distortion/perception) and effectiveness (rate) of transmission over uncertain channels. We find a new finite-blocklength bound for the achievable joint source-channel code rate with the above two constraints. To achieve a superior rateless characteristic of JSCC coding, we perform multi-level optimization on various finite-blocklength codes. Based on these two, we then propose a new JSCC coding scheme called rateless stochastic coding (RSC). We experimentally demonstrate that the proposed RSC can achieve variable rates of transmission maintaining an excellent trade-off between distortion and perception.
[ { "created": "Fri, 28 Jun 2024 10:27:06 GMT", "version": "v1" } ]
2024-07-01
[ [ "Peng", "Cheng", "" ], [ "Wang", "Rulong", "" ], [ "Xiao", "Yong", "" ] ]
We consider the problem of joint source-channel coding with distortion and perception constraints from a rateless perspective, the purpose of which is to settle the balance between reliability (distortion/perception) and effectiveness (rate) of transmission over uncertain channels. We find a new finite-blocklength bound for the achievable joint source-channel code rate with the above two constraints. To achieve a superior rateless characteristic of JSCC coding, we perform multi-level optimization on various finite-blocklength codes. Based on these two, we then propose a new JSCC coding scheme called rateless stochastic coding (RSC). We experimentally demonstrate that the proposed RSC can achieve variable rates of transmission maintaining an excellent trade-off between distortion and perception.
1902.01107
Boya Di
Boya Di, Lingyang Song, Yonghui Li, Geoffrey Ye Li
TCM-NOMA: Joint Multi-user Codeword Design and Detection in Trellis Coded Modulation based NOMA for Beyond 5G
null
null
10.1109/JSTSP.2019.2899500
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel non-orthogonal multiple access (NOMA) scheme based on trellis-coded modulation (TCM). Different from those in the traditional code-domain NOMA, the incoming data streams of multiple users are jointly coded and mapped to the codewords so as to improve the coding gain of the system without any bandwidth extension. Both codeword design and multi-user detection are considered to optimize the TCM for NOMA. The multi-dimensional (MD) constellation is carefully selected and a new set partitioning algorithm is developed for MD signal set labeling. To achieve the trade-off between the BER performance and complexity, a suboptimal two-layer Viterbi algorithm is proposed for joint decoding. Simulation results show that our proposed TCM-based NOMA scheme performs significantly better than the traditional code-domain NOMA and OFDMA schemes in terms of the BER performance under the same channel conditions.
[ { "created": "Mon, 4 Feb 2019 10:13:14 GMT", "version": "v1" } ]
2019-06-26
[ [ "Di", "Boya", "" ], [ "Song", "Lingyang", "" ], [ "Li", "Yonghui", "" ], [ "Li", "Geoffrey Ye", "" ] ]
In this paper, we propose a novel non-orthogonal multiple access (NOMA) scheme based on trellis-coded modulation (TCM). Different from those in the traditional code-domain NOMA, the incoming data streams of multiple users are jointly coded and mapped to the codewords so as to improve the coding gain of the system without any bandwidth extension. Both codeword design and multi-user detection are considered to optimize the TCM for NOMA. The multi-dimensional (MD) constellation is carefully selected and a new set partitioning algorithm is developed for MD signal set labeling. To achieve the trade-off between the BER performance and complexity, a suboptimal two-layer Viterbi algorithm is proposed for joint decoding. Simulation results show that our proposed TCM-based NOMA scheme performs significantly better than the traditional code-domain NOMA and OFDMA schemes in terms of the BER performance under the same channel conditions.
1507.00522
Young JIn Chun
Young Jin Chun, Simon L. Cotton, Mazen O. Hasna, Ali Ghrayeb
A Stochastic Geometry Based Approach to Modeling Interference Correlation in Cooperative Relay Networks
Submitted to IEEE Transactions on Wireless Communications
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Future wireless networks are expected to be a convergence of many diverse network technologies and architectures, such as cellular networks, wireless local area networks, sensor networks, and device to device communications. Through cooperation between dissimilar wireless devices, this new combined network topology promises to unlock ever larger data rates and provide truly ubiquitous coverage for end users, as well as enabling higher spectral efficiency. However, it also increases the risk of co-channel interference and introduces the possibility of correlation in the aggregated interference that not only impacts the communication performance, but also makes the associated mathematical analysis much more complex. To address this problem and evaluate the communication performance of cooperative relay networks, we adopt a stochastic geometry based approach by assuming that the interfering nodes are randomly distributed according to a Poisson point process (PPP). We also use a random medium access protocol to counteract the effects of interference correlation. Using this approach, we derive novel closed-form expressions for the successful transmission probability and local delay of a relay network with correlated interference. As well as this, we find the optimal transmission probability $p$ that jointly maximizes the successful transmission probability and minimizes the local delay. Finally numerical results are provided to confirm that the proposed joint optimization strategy achieves a significant performance gain compared to a conventional scheme.
[ { "created": "Thu, 2 Jul 2015 11:09:19 GMT", "version": "v1" } ]
2015-07-03
[ [ "Chun", "Young Jin", "" ], [ "Cotton", "Simon L.", "" ], [ "Hasna", "Mazen O.", "" ], [ "Ghrayeb", "Ali", "" ] ]
Future wireless networks are expected to be a convergence of many diverse network technologies and architectures, such as cellular networks, wireless local area networks, sensor networks, and device to device communications. Through cooperation between dissimilar wireless devices, this new combined network topology promises to unlock ever larger data rates and provide truly ubiquitous coverage for end users, as well as enabling higher spectral efficiency. However, it also increases the risk of co-channel interference and introduces the possibility of correlation in the aggregated interference that not only impacts the communication performance, but also makes the associated mathematical analysis much more complex. To address this problem and evaluate the communication performance of cooperative relay networks, we adopt a stochastic geometry based approach by assuming that the interfering nodes are randomly distributed according to a Poisson point process (PPP). We also use a random medium access protocol to counteract the effects of interference correlation. Using this approach, we derive novel closed-form expressions for the successful transmission probability and local delay of a relay network with correlated interference. As well as this, we find the optimal transmission probability $p$ that jointly maximizes the successful transmission probability and minimizes the local delay. Finally numerical results are provided to confirm that the proposed joint optimization strategy achieves a significant performance gain compared to a conventional scheme.
2209.01252
Jeannie Lee
Emran Poh, Kyrin Liong, Jeannie Lee
Mixed Reality for Mechanical Design and Assembly Planning
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Design for Manufacturing and Assembly (DFMA) is a crucial design stage within the heavy vehicle manufacturing process that involves optimising the order and feasibility of the parts assembly process to reduce manufacturing complexity and overall cost. Existing work has focused on conducting DFMA within virtual environments to reduce manufacturing costs, but users are less able to relate and compare physical characteristics of a virtual component with real physical objects. Therefore, a Mixed Reality (MR) application is developed for engineers to visualise and manipulate assembly parts virtually, conduct and plan out an assembly within its intended physical environment. Two pilot evaluations were conducted with both engineering professionals and non-engineers to assess effectiveness of the software for assembly planning. Usability results suggest that the application is overall usable (M=56.1, SD=7.89), and participants felt a sense of involvement in the activity (M=13.1, SD=3.3). Engineering professionals see the application as a useful and cost-effective tool for optimising their mechanical assembly designs.
[ { "created": "Fri, 2 Sep 2022 19:41:29 GMT", "version": "v1" } ]
2022-09-07
[ [ "Poh", "Emran", "" ], [ "Liong", "Kyrin", "" ], [ "Lee", "Jeannie", "" ] ]
Design for Manufacturing and Assembly (DFMA) is a crucial design stage within the heavy vehicle manufacturing process that involves optimising the order and feasibility of the parts assembly process to reduce manufacturing complexity and overall cost. Existing work has focused on conducting DFMA within virtual environments to reduce manufacturing costs, but users are less able to relate and compare physical characteristics of a virtual component with real physical objects. Therefore, a Mixed Reality (MR) application is developed for engineers to visualise and manipulate assembly parts virtually, conduct and plan out an assembly within its intended physical environment. Two pilot evaluations were conducted with both engineering professionals and non-engineers to assess effectiveness of the software for assembly planning. Usability results suggest that the application is overall usable (M=56.1, SD=7.89), and participants felt a sense of involvement in the activity (M=13.1, SD=3.3). Engineering professionals see the application as a useful and cost-effective tool for optimising their mechanical assembly designs.
2105.14520
Jianfeng Li
Jianfeng Li, Junqiao Zhao, Shuangfu Song, Tiantian Feng
Unsupervised Joint Learning of Depth, Optical Flow, Ego-motion from Video
9 pages, 4 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Estimating geometric elements such as depth, camera motion, and optical flow from images is an important part of the robot's visual perception. We use a joint self-supervised method to estimate the three geometric elements. The depth network, optical flow network and camera motion network are independent of each other but are jointly optimized during the training phase. Compared with independent training, joint training can make full use of the geometric relationship between geometric elements and provide dynamic and static information of the scene. In this paper, we improve the joint self-supervision method from three aspects: network structure, dynamic object segmentation, and geometric constraints. In terms of network structure, we apply the attention mechanism to the camera motion network, which helps to take advantage of the similarity of camera movement between frames. And following the attention mechanism in the Transformer, we propose a plug-and-play convolutional attention module. In terms of dynamic objects, according to the different influences of dynamic objects in the optical flow self-supervised framework and the depth-pose self-supervised framework, we propose a threshold algorithm to detect dynamic regions, and mask them in the respective loss functions. In terms of geometric constraints, we use traditional methods to estimate the fundamental matrix from the corresponding points to constrain the camera motion network. We demonstrate the effectiveness of our method on the KITTI dataset. Compared with other joint self-supervised methods, our method achieves state-of-the-art performance in the estimation of pose and optical flow, and the depth estimation has also achieved competitive results. Code will be available at https://github.com/jianfenglihg/Unsupervised_geometry.
[ { "created": "Sun, 30 May 2021 12:39:48 GMT", "version": "v1" } ]
2021-06-01
[ [ "Li", "Jianfeng", "" ], [ "Zhao", "Junqiao", "" ], [ "Song", "Shuangfu", "" ], [ "Feng", "Tiantian", "" ] ]
Estimating geometric elements such as depth, camera motion, and optical flow from images is an important part of the robot's visual perception. We use a joint self-supervised method to estimate the three geometric elements. The depth network, optical flow network and camera motion network are independent of each other but are jointly optimized during the training phase. Compared with independent training, joint training can make full use of the geometric relationship between geometric elements and provide dynamic and static information of the scene. In this paper, we improve the joint self-supervision method from three aspects: network structure, dynamic object segmentation, and geometric constraints. In terms of network structure, we apply the attention mechanism to the camera motion network, which helps to take advantage of the similarity of camera movement between frames. And following the attention mechanism in the Transformer, we propose a plug-and-play convolutional attention module. In terms of dynamic objects, according to the different influences of dynamic objects in the optical flow self-supervised framework and the depth-pose self-supervised framework, we propose a threshold algorithm to detect dynamic regions, and mask them in the respective loss functions. In terms of geometric constraints, we use traditional methods to estimate the fundamental matrix from the corresponding points to constrain the camera motion network. We demonstrate the effectiveness of our method on the KITTI dataset. Compared with other joint self-supervised methods, our method achieves state-of-the-art performance in the estimation of pose and optical flow, and the depth estimation has also achieved competitive results. Code will be available at https://github.com/jianfenglihg/Unsupervised_geometry.
2012.00302
Masaki Aida
Kakeru Ohki, Ayako Hashizume, Masaki Aida
Independence of the Fundamental Equation of the Oscillation Model on Algebraic Representations: Social Media Echo Chamber Effect
4 pages, no figure, IEICE ICETC 2020. arXiv admin note: substantial text overlap with arXiv:2011.13372
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the oscillation model that describes the user dynamics of online social networks, it is known that the fundamental equation can explicitly describe the causal relationship between the network structure and user dynamics. The fundamental equation uses algebra that satisfies the anti-commutation relation, and its matrix representation is not unique. However, even if the matrix representations are different, the same results should be derived from different representations of the fundamental equation if they are describing the same phenomenon. In this paper, we confirm, using the echo-chamber effect as an example, that the fundamental equations of different matrix representations lead to the same result.
[ { "created": "Tue, 1 Dec 2020 06:55:33 GMT", "version": "v1" } ]
2020-12-02
[ [ "Ohki", "Kakeru", "" ], [ "Hashizume", "Ayako", "" ], [ "Aida", "Masaki", "" ] ]
In the oscillation model that describes the user dynamics of online social networks, it is known that the fundamental equation can explicitly describe the causal relationship between the network structure and user dynamics. The fundamental equation uses algebra that satisfies the anti-commutation relation, and its matrix representation is not unique. However, even if the matrix representations are different, the same results should be derived from different representations of the fundamental equation if they are describing the same phenomenon. In this paper, we confirm, using the echo-chamber effect as an example, that the fundamental equations of different matrix representations lead to the same result.
1601.07157
Robert Merkel
Robert Merkel and James Georgeson
Cloud-Based Distributed Mutation Analysis
12 pages including appendix
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutation Testing is a fault-based software testing technique which is too computationally expensive for industrial use. Cloud-based distributed computing clusters, taking advantage of the MapReduce programming paradigm, represent a method by which the long running time can be reduced. In this paper, we describe an architecture, and a prototype implementation, of such a cloud-based distributed mutation testing system. To evaluate the system, we compared the performance of the prototype, with various cluster sizes, to an existing "state-of-the-art" non-distributed tool, PiT. We also analysed different approaches to work distribution, to determine how to most efficiently divide the mutation analysis task. Our tool outperformed PiT, and analysis of the results showed opportunities for substantial further performance improvement.
[ { "created": "Tue, 26 Jan 2016 20:35:45 GMT", "version": "v1" }, { "created": "Thu, 28 Jan 2016 03:38:43 GMT", "version": "v2" } ]
2016-01-29
[ [ "Merkel", "Robert", "" ], [ "Georgeson", "James", "" ] ]
Mutation Testing is a fault-based software testing technique which is too computationally expensive for industrial use. Cloud-based distributed computing clusters, taking advantage of the MapReduce programming paradigm, represent a method by which the long running time can be reduced. In this paper, we describe an architecture, and a prototype implementation, of such a cloud-based distributed mutation testing system. To evaluate the system, we compared the performance of the prototype, with various cluster sizes, to an existing "state-of-the-art" non-distributed tool, PiT. We also analysed different approaches to work distribution, to determine how to most efficiently divide the mutation analysis task. Our tool outperformed PiT, and analysis of the results showed opportunities for substantial further performance improvement.
1910.11073
Geoffroy Chaussonnet
Geoffroy Chaussonnet, Christian Lieber, Yan Yikang, Wenda Gu, Andreas Bartschat, Markus Reischl, Rainer Koch, Ralf Mikut and Hans-J\"org Bauer
Towards DeepSpray: Using Convolutional Neural Network to post-process Shadowgraphy Images of Liquid Atomization
Technical report, 22 pages, 29 figures
null
10.5445/IR/1000097897/v3
null
cs.CV physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This technical report investigates the potential of Convolutional Neural Networks to post-process images from primary atomization. Three tasks are investigated. First, the detection and segmentation of liquid droplets in degraded optical conditions. Second, the detection of overlapping ellipses and the prediction of their geometrical characteristics. This task corresponds to extrapolating the hidden contour of an ellipse with reduced visual information. Third, several features of the liquid surface during primary breakup (ligaments, bags, rims) are manually annotated on 15 experimental images. The detector is trained on this minimal database using simple data augmentation and then applied to other images from numerical simulation and from another experiment. In these three tasks, models from the literature based on Convolutional Neural Networks showed very promising results, thus demonstrating the high potential of Deep Learning to post-process liquid atomization. The next step is to embed these models into a unified framework, DeepSpray.
[ { "created": "Fri, 11 Oct 2019 03:00:52 GMT", "version": "v1" } ]
2019-10-25
[ [ "Chaussonnet", "Geoffroy", "" ], [ "Lieber", "Christian", "" ], [ "Yikang", "Yan", "" ], [ "Gu", "Wenda", "" ], [ "Bartschat", "Andreas", "" ], [ "Reischl", "Markus", "" ], [ "Koch", "Rainer", "" ], [ "Mikut", "Ralf", "" ], [ "Bauer", "Hans-Jörg", "" ] ]
This technical report investigates the potential of Convolutional Neural Networks to post-process images from primary atomization. Three tasks are investigated. First, the detection and segmentation of liquid droplets in degraded optical conditions. Second, the detection of overlapping ellipses and the prediction of their geometrical characteristics. This task corresponds to extrapolating the hidden contour of an ellipse with reduced visual information. Third, several features of the liquid surface during primary breakup (ligaments, bags, rims) are manually annotated on 15 experimental images. The detector is trained on this minimal database using simple data augmentation and then applied to other images from numerical simulation and from another experiment. In these three tasks, models from the literature based on Convolutional Neural Networks showed very promising results, thus demonstrating the high potential of Deep Learning to post-process liquid atomization. The next step is to embed these models into a unified framework, DeepSpray.
2208.00902
Yitong Zhang
Yitong Zhang, Sophia Bano, Ann-Sophie Page, Jan Deprest, Danail Stoyanov, Francisco Vasconcelos
Retrieval of surgical phase transitions using reinforcement learning
Accepted by MICCAI 2022
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In minimally invasive surgery, surgical workflow segmentation from video analysis is a well studied topic. The conventional approach defines it as a multi-class classification problem, where individual video frames are attributed a surgical phase label. We introduce a novel reinforcement learning formulation for offline phase transition retrieval. Instead of attempting to classify every video frame, we identify the timestamp of each phase transition. By construction, our model does not produce spurious and noisy phase transitions, but contiguous phase blocks. We investigate two different configurations of this model. The first does not require processing all frames in a video (only <60% and <20% of frames in 2 different applications), while producing results slightly under the state-of-the-art accuracy. The second configuration processes all video frames, and outperforms the state-of-the art at a comparable computational cost. We compare our method against the recent top-performing frame-based approaches TeCNO and Trans-SVNet on the public dataset Cholec80 and also on an in-house dataset of laparoscopic sacrocolpopexy. We perform both a frame-based (accuracy, precision, recall and F1-score) and an event-based (event ratio) evaluation of our algorithms.
[ { "created": "Mon, 1 Aug 2022 14:43:15 GMT", "version": "v1" } ]
2022-08-02
[ [ "Zhang", "Yitong", "" ], [ "Bano", "Sophia", "" ], [ "Page", "Ann-Sophie", "" ], [ "Deprest", "Jan", "" ], [ "Stoyanov", "Danail", "" ], [ "Vasconcelos", "Francisco", "" ] ]
In minimally invasive surgery, surgical workflow segmentation from video analysis is a well studied topic. The conventional approach defines it as a multi-class classification problem, where individual video frames are attributed a surgical phase label. We introduce a novel reinforcement learning formulation for offline phase transition retrieval. Instead of attempting to classify every video frame, we identify the timestamp of each phase transition. By construction, our model does not produce spurious and noisy phase transitions, but contiguous phase blocks. We investigate two different configurations of this model. The first does not require processing all frames in a video (only <60% and <20% of frames in 2 different applications), while producing results slightly under the state-of-the-art accuracy. The second configuration processes all video frames, and outperforms the state-of-the art at a comparable computational cost. We compare our method against the recent top-performing frame-based approaches TeCNO and Trans-SVNet on the public dataset Cholec80 and also on an in-house dataset of laparoscopic sacrocolpopexy. We perform both a frame-based (accuracy, precision, recall and F1-score) and an event-based (event ratio) evaluation of our algorithms.
1903.10885
Kripasindhu Sarkar
Kripasindhu Sarkar, Kiran Varanasi, Didier Stricker
Learning Quadrangulated Patches For 3D Shape Processing
arXiv admin note: substantial text overlap with arXiv:1709.06868
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a system for surface completion and inpainting of 3D shapes using generative models, learnt on local patches. Our method uses a novel encoding of height map based local patches parameterized using 3D mesh quadrangulation of the low resolution input shape. This provides us with a sufficient amount of local 3D patches to learn a generative model for the task of repairing moderate sized holes. Following the ideas from the recent progress in 2D inpainting, we investigated both a linear dictionary based model and a convolutional denoising autoencoder based model for the task of inpainting, and show our results to be better than the previous geometry based method of surface inpainting. We validate our method on both synthetic shapes and real world scans.
[ { "created": "Mon, 25 Mar 2019 16:54:59 GMT", "version": "v1" } ]
2019-03-27
[ [ "Sarkar", "Kripasindhu", "" ], [ "Varanasi", "Kiran", "" ], [ "Stricker", "Didier", "" ] ]
We propose a system for surface completion and inpainting of 3D shapes using generative models, learnt on local patches. Our method uses a novel encoding of height map based local patches parameterized using 3D mesh quadrangulation of the low resolution input shape. This provides us with a sufficient amount of local 3D patches to learn a generative model for the task of repairing moderate sized holes. Following the ideas from the recent progress in 2D inpainting, we investigated both a linear dictionary based model and a convolutional denoising autoencoder based model for the task of inpainting, and show our results to be better than the previous geometry based method of surface inpainting. We validate our method on both synthetic shapes and real world scans.
1912.11735
Soheil Abbasloo
Soheil Abbasloo, Chen-Yu Yen, H. Jonathan Chao
Wanna Make Your TCP Scheme Great for Cellular Networks? Let Machines Do It for You!
Under Review
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Can we, instead of designing just another new TCP, design a TCP \textit{plug-in} which can boost the performance of the existing/future TCP designs in cellular networks? To answer this question, we introduce the DeepCC plug-in. DeepCC leverages deep reinforcement learning (DRL), a modern decision-making tool, to steer TCP toward achieving applications' desired delay and high throughput in a highly dynamic network such as the cellular network. The fact that DeepCC does not try to reinvent/replace TCP but aims to boost the performance of it differentiates it from most (if not all) of the existing reinforcement learning (RL) systems, where RL systems are considered clean-slate alternative designs replacing the traditional ones. We used the DeepCC plug-in to boost the performance of various old and new TCP schemes including TCP Cubic, Google's BBR, TCP Westwood, and TCP Illinois in cellular networks. Through both extensive trace-based evaluations and in-field tests, we show that not only can DeepCC significantly improve the performance of TCP, but also, after being accompanied by DeepCC, these schemes can outperform state-of-the-art TCP protocols including Aurora, Sprout, Verus, C2TCP, Copa, Indigo, Remy, PCC-Vivace, and LEDBAT in cellular networks.
[ { "created": "Thu, 26 Dec 2019 00:49:26 GMT", "version": "v1" }, { "created": "Sat, 4 Jan 2020 22:59:01 GMT", "version": "v2" }, { "created": "Tue, 23 Jun 2020 14:14:04 GMT", "version": "v3" } ]
2020-06-24
[ [ "Abbasloo", "Soheil", "" ], [ "Yen", "Chen-Yu", "" ], [ "Chao", "H. Jonathan", "" ] ]
Can we, instead of designing just another new TCP, design a TCP \textit{plug-in} which can boost the performance of the existing/future TCP designs in cellular networks? To answer this question, we introduce the DeepCC plug-in. DeepCC leverages deep reinforcement learning (DRL), a modern decision-making tool, to steer TCP toward achieving applications' desired delay and high throughput in a highly dynamic network such as the cellular network. The fact that DeepCC does not try to reinvent/replace TCP but aims to boost the performance of it differentiates it from most (if not all) of the existing reinforcement learning (RL) systems, where RL systems are considered clean-slate alternative designs replacing the traditional ones. We used the DeepCC plug-in to boost the performance of various old and new TCP schemes including TCP Cubic, Google's BBR, TCP Westwood, and TCP Illinois in cellular networks. Through both extensive trace-based evaluations and in-field tests, we show that not only can DeepCC significantly improve the performance of TCP, but also, after being accompanied by DeepCC, these schemes can outperform state-of-the-art TCP protocols including Aurora, Sprout, Verus, C2TCP, Copa, Indigo, Remy, PCC-Vivace, and LEDBAT in cellular networks.
1510.07254
Jian-Jia Chen
Jian-Jia Chen
Federated Scheduling Admits No Constant Speedup Factors for Constrained-Deadline DAG Task Systems
in Real-Time Systems Journal 2016
null
10.1007/s11241-016-9255-2
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the federated scheduling approaches in multiprocessor systems, a task either 1) is restricted to execute sequentially on a single processor or 2) has exclusive access to the assigned processors. There have been several positive results to conduct good federated scheduling policies, which have constant speedup factors with respect to any optimal federated scheduling algorithm. This paper answers an open question: "For constrained-deadline task systems with directed acyclic graph (DAG) dependency structures, do federated scheduling policies have a constant speedup factor with respect to any optimal scheduling algorithm?" The answer is "No!" This paper presents an example, which demonstrates that any federated scheduling algorithm has a speedup factor of at least $\Omega(\min\{M, N\})$ with respect to any optimal scheduling algorithm, where $N$ is the number of tasks and $M$ is the number of processors.
[ { "created": "Sun, 25 Oct 2015 14:46:12 GMT", "version": "v1" }, { "created": "Wed, 22 Jun 2016 19:56:30 GMT", "version": "v2" } ]
2016-06-23
[ [ "Chen", "Jian-Jia", "" ] ]
In the federated scheduling approaches in multiprocessor systems, a task either 1) is restricted to execute sequentially on a single processor or 2) has exclusive access to the assigned processors. There have been several positive results to conduct good federated scheduling policies, which have constant speedup factors with respect to any optimal federated scheduling algorithm. This paper answers an open question: "For constrained-deadline task systems with directed acyclic graph (DAG) dependency structures, do federated scheduling policies have a constant speedup factor with respect to any optimal scheduling algorithm?" The answer is "No!" This paper presents an example, which demonstrates that any federated scheduling algorithm has a speedup factor of at least $\Omega(\min\{M, N\})$ with respect to any optimal scheduling algorithm, where $N$ is the number of tasks and $M$ is the number of processors.
2305.17832
Takahiro Wada
Yujiro Tamura, Takahiro Wada, and Hailong Liu
Generating Visual Information for Motion Sickness Reduction Using a Computational Model Based on SVC Theory
null
2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)
10.1109/ITSC57777.2023.10422244
null
cs.HC q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
With the advancements in automated driving, there is concern that motion sickness will increase as non-driving-related tasks increase. Therefore, techniques to reduce motion sickness have drawn much attention. Research studies have attempted to estimate motion sickness using computational models for controlling it. Among them, a computational model for estimating motion sickness incidence (MSI) with visual information as input, based on subjective vertical conflict theories, was developed. In addition, some studies attempt to mitigate motion sickness by controlling visual information. In particular, it has been confirmed that motion sickness is suppressed by matching head movement and visual information. However, there has been no research on optimal visual information control that suppresses motion sickness in vehicles by utilizing mathematical models. We, therefore, propose a method for generating optimal visual information to suppress motion sickness caused by vehicle motion by utilizing a motion sickness model with vestibular and visual inputs. To confirm the effectiveness of the proposed method, we investigated changes in the motion sickness experienced by the participants according to the visual information displayed on the head-mounted display. The experimental results suggested that the proposed method mitigates the motion sickness of the participants.
[ { "created": "Mon, 29 May 2023 00:27:25 GMT", "version": "v1" } ]
2024-06-25
[ [ "Tamura", "Yujiro", "" ], [ "Wada", "Takahiro", "" ], [ "Liu", "Hailong", "" ] ]
With the advancements in automated driving, there is concern that motion sickness will increase as non-driving-related tasks increase. Therefore, techniques to reduce motion sickness have drawn much attention. Research studies have attempted to estimate motion sickness using computational models for controlling it. Among them, a computational model for estimating motion sickness incidence (MSI) with visual information as input, based on subjective vertical conflict theories, was developed. In addition, some studies attempt to mitigate motion sickness by controlling visual information. In particular, it has been confirmed that motion sickness is suppressed by matching head movement and visual information. However, there has been no research on optimal visual information control that suppresses motion sickness in vehicles by utilizing mathematical models. We, therefore, propose a method for generating optimal visual information to suppress motion sickness caused by vehicle motion by utilizing a motion sickness model with vestibular and visual inputs. To confirm the effectiveness of the proposed method, we investigated changes in the motion sickness experienced by the participants according to the visual information displayed on the head-mounted display. The experimental results suggested that the proposed method mitigates the motion sickness of the participants.
2305.09973
Abhranil Chatterjee
Abhranil Chatterjee, Sumanta Ghosh, Rohit Gurjar, Roshan Raj
Border Complexity of Symbolic Determinant under Rank One Restriction
null
null
null
null
cs.CC
http://creativecommons.org/licenses/by-nc-nd/4.0/
VBP is the class of polynomial families that can be computed by the determinant of a symbolic matrix of the form $A_0 + \sum_{i=1}^n A_ix_i$ where the size of each $A_i$ is polynomial in the number of variables (equivalently, computable by polynomial-sized algebraic branching programs (ABP)). A major open problem in geometric complexity theory (GCT) is to determine whether VBP is closed under approximation. The power of approximation is well understood for some restricted models of computation, e.g., the class of depth-two circuits, read-once oblivious ABPs (ROABP), monotone ABPs, depth-three circuits of bounded top fan-in, and width-two ABPs. The former three classes are known to be closed under approximation [Bl\"{a}ser, Ikenmeyer, Mahajan, Pandey, and Saurabh (2020)], whereas the approximative closure of the last one captures the whole class of polynomial families computable by polynomial-sized formulas [Bringmann, Ikenmeyer, and Zuiddam (2017)]. In this work, we consider the subclass of VBP computed by the determinant of a symbolic matrix of the form $A_0 + \sum_{i=1}^n A_ix_i$ where for each $1\leq i \leq n$, $A_i$ is of rank one. It has been studied extensively [Edmonds (1968), Edmonds (1979)] and efficient identity testing algorithms are known [Lov\"{a}sz (1989), Gurjar and Thierauf (2020)]. We show that this class is closed under approximation. In the language of algebraic geometry, we show that the set obtained by taking coordinatewise products of pairs of points from (the Pl\"{u}cker embedding of) a Grassmannian variety is closed.
[ { "created": "Wed, 17 May 2023 06:12:36 GMT", "version": "v1" } ]
2023-05-18
[ [ "Chatterjee", "Abhranil", "" ], [ "Ghosh", "Sumanta", "" ], [ "Gurjar", "Rohit", "" ], [ "Raj", "Roshan", "" ] ]
VBP is the class of polynomial families that can be computed by the determinant of a symbolic matrix of the form $A_0 + \sum_{i=1}^n A_ix_i$ where the size of each $A_i$ is polynomial in the number of variables (equivalently, computable by polynomial-sized algebraic branching programs (ABP)). A major open problem in geometric complexity theory (GCT) is to determine whether VBP is closed under approximation. The power of approximation is well understood for some restricted models of computation, e.g., the class of depth-two circuits, read-once oblivious ABPs (ROABP), monotone ABPs, depth-three circuits of bounded top fan-in, and width-two ABPs. The former three classes are known to be closed under approximation [Bl\"{a}ser, Ikenmeyer, Mahajan, Pandey, and Saurabh (2020)], whereas the approximative closure of the last one captures the whole class of polynomial families computable by polynomial-sized formulas [Bringmann, Ikenmeyer, and Zuiddam (2017)]. In this work, we consider the subclass of VBP computed by the determinant of a symbolic matrix of the form $A_0 + \sum_{i=1}^n A_ix_i$ where for each $1\leq i \leq n$, $A_i$ is of rank one. It has been studied extensively [Edmonds (1968), Edmonds (1979)] and efficient identity testing algorithms are known [Lov\"{a}sz (1989), Gurjar and Thierauf (2020)]. We show that this class is closed under approximation. In the language of algebraic geometry, we show that the set obtained by taking coordinatewise products of pairs of points from (the Pl\"{u}cker embedding of) a Grassmannian variety is closed.
1605.07905
Thomas Riedl
Thomas J. Riedl, Todd P. Coleman and Andrew C. Singer
Timing Channel: Achievable Rate in the Finite Block-Length Regime
Full technical report on the work originally presented at the Information Theory Workshop (ITW) in 2011
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The exponential server timing channel is known to be the simplest, and in some sense canonical, queuing timing channel. The capacity of this infinite-memory channel is known. Here, we discuss practical finite-length restrictions on the codewords and attempt to understand the maximal rate that can be achieved for a target error probability. By using Markov chain analysis, we prove a lower bound on the maximal channel coding rate achievable at blocklength $n$ and error probability $\epsilon$. The bound is approximated by $C- n^{-1/2} \sigma Q^{-1}(\epsilon)$ where $Q$ denotes the Q-function and $\sigma^2$ is the asymptotic variance of the underlying Markov chain. A closed form expression for $\sigma^2$ is given.
[ { "created": "Wed, 25 May 2016 14:45:19 GMT", "version": "v1" } ]
2016-05-26
[ [ "Riedl", "Thomas J.", "" ], [ "Coleman", "Todd P.", "" ], [ "Singer", "Andrew C.", "" ] ]
The exponential server timing channel is known to be the simplest, and in some sense canonical, queuing timing channel. The capacity of this infinite-memory channel is known. Here, we discuss practical finite-length restrictions on the codewords and attempt to understand the maximal rate that can be achieved for a target error probability. By using Markov chain analysis, we prove a lower bound on the maximal channel coding rate achievable at blocklength $n$ and error probability $\epsilon$. The bound is approximated by $C- n^{-1/2} \sigma Q^{-1}(\epsilon)$ where $Q$ denotes the Q-function and $\sigma^2$ is the asymptotic variance of the underlying Markov chain. A closed form expression for $\sigma^2$ is given.
0706.0869
Edward Aboufadel
Edward Aboufadel, Timothy Armstrong, Elizabeth Smietana
Position Coding
14 pages, 7 figures
null
null
null
cs.IT math.CO math.IT
null
A position coding pattern is an array of symbols in which subarrays of a certain fixed size appear at most once. So, each subarray uniquely identifies a location in the larger array, which means there is a bijection of some sort from this set of subarrays to a set of coordinates. The key to the Fly Pentop Computer paper and other examples of position codes is a method to read the subarray and then convert it to coordinates. Position coding makes use of ideas from discrete mathematics and number theory. In this paper, we will describe the underlying mathematics of two position codes, one being the Anoto code that is the basis of "Fly paper". Then, we will present two new codes, one of which uses binary wavelets as part of the bijection.
[ { "created": "Wed, 6 Jun 2007 17:09:21 GMT", "version": "v1" } ]
2007-07-13
[ [ "Aboufadel", "Edward", "" ], [ "Armstrong", "Timothy", "" ], [ "Smietana", "Elizabeth", "" ] ]
A position coding pattern is an array of symbols in which subarrays of a certain fixed size appear at most once. So, each subarray uniquely identifies a location in the larger array, which means there is a bijection of some sort from this set of subarrays to a set of coordinates. The key to the Fly Pentop Computer paper and other examples of position codes is a method to read the subarray and then convert it to coordinates. Position coding makes use of ideas from discrete mathematics and number theory. In this paper, we will describe the underlying mathematics of two position codes, one being the Anoto code that is the basis of "Fly paper". Then, we will present two new codes, one of which uses binary wavelets as part of the bijection.
2401.17602
Yuelyu Ji
Yuelyu Ji, Zeshui Yu and Yanshan Wang
Assertion Detection Large Language Model In-context Learning LoRA Fine-tuning
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this study, we aim to address the task of assertion detection when extracting medical concepts from clinical notes, a key process in clinical natural language processing (NLP). Assertion detection in clinical NLP usually involves identifying assertion types for medical concepts in the clinical text, namely certainty (whether the medical concept is positive, negated, possible, or hypothetical), temporality (whether the medical concept is for the present or the past history), and experiencer (whether the medical concept is described for the patient or a family member). These assertion types are essential for healthcare professionals to quickly and clearly understand the context of medical conditions from unstructured clinical texts, directly influencing the quality and outcomes of patient care. Although widely used, traditional methods, particularly rule-based NLP systems and machine learning or deep learning models, demand intensive manual efforts to create patterns and tend to overlook less common assertion types, leading to an incomplete understanding of the context. To address this challenge, our research introduces a novel methodology that utilizes Large Language Models (LLMs) pre-trained on a vast array of medical data for assertion detection. We enhanced the current method with advanced reasoning techniques, including Tree of Thought (ToT), Chain of Thought (CoT), and Self-Consistency (SC), and refined it further with Low-Rank Adaptation (LoRA) fine-tuning. We first evaluated the model on the i2b2 2010 assertion dataset. Our method achieved a micro-averaged F-1 of 0.89, a 0.11 improvement over previous works. To further assess the generalizability of our approach, we extended our evaluation to a local dataset that focused on sleep concept extraction. Our approach achieved an F-1 of 0.74, which is 0.31 higher than the previous method.
[ { "created": "Wed, 31 Jan 2024 05:11:00 GMT", "version": "v1" } ]
2024-02-01
[ [ "Ji", "Yuelyu", "" ], [ "Yu", "Zeshui", "" ], [ "Wang", "Yanshan", "" ] ]
In this study, we aim to address the task of assertion detection when extracting medical concepts from clinical notes, a key process in clinical natural language processing (NLP). Assertion detection in clinical NLP usually involves identifying assertion types for medical concepts in the clinical text, namely certainty (whether the medical concept is positive, negated, possible, or hypothetical), temporality (whether the medical concept is for the present or the past history), and experiencer (whether the medical concept is described for the patient or a family member). These assertion types are essential for healthcare professionals to quickly and clearly understand the context of medical conditions from unstructured clinical texts, directly influencing the quality and outcomes of patient care. Although widely used, traditional methods, particularly rule-based NLP systems and machine learning or deep learning models, demand intensive manual efforts to create patterns and tend to overlook less common assertion types, leading to an incomplete understanding of the context. To address this challenge, our research introduces a novel methodology that utilizes Large Language Models (LLMs) pre-trained on a vast array of medical data for assertion detection. We enhanced the current method with advanced reasoning techniques, including Tree of Thought (ToT), Chain of Thought (CoT), and Self-Consistency (SC), and refined it further with Low-Rank Adaptation (LoRA) fine-tuning. We first evaluated the model on the i2b2 2010 assertion dataset. Our method achieved a micro-averaged F-1 of 0.89, a 0.11 improvement over previous works. To further assess the generalizability of our approach, we extended our evaluation to a local dataset that focused on sleep concept extraction. Our approach achieved an F-1 of 0.74, which is 0.31 higher than the previous method.
2010.08886
Kinjal Shah
Sourabh Kulkarni, Kinjal Divesh Shah, Nimar Arora, Xiaoyan Wang, Yucen Lily Li, Nazanin Khosravani Tehrani, Michael Tingley, David Noursi, Narjes Torabi, Sepehr Akhavan Masouleh, Eric Lippert, and Erik Meijer
PPL Bench: Evaluation Framework For Probabilistic Programming Languages
6 pages, PROBPROG 2020
null
null
null
cs.PL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce PPL Bench, a new benchmark for evaluating Probabilistic Programming Languages (PPLs) on a variety of statistical models. The benchmark includes data generation and evaluation code for a number of models as well as implementations in some common PPLs. All of the benchmark code and PPL implementations are available on Github. We welcome contributions of new models and PPLs, as well as improvements in existing PPL implementations. The purpose of the benchmark is two-fold. First, we want researchers as well as conference reviewers to be able to evaluate improvements in PPLs in a standardized setting. Second, we want end users to be able to pick the PPL that is most suited for their modeling application. In particular, we are interested in evaluating the accuracy and speed of convergence of the inferred posterior. Each PPL only needs to provide posterior samples given a model and observation data. The framework automatically computes and plots growth in predictive log-likelihood on held-out data in addition to reporting other common metrics such as effective sample size and $\hat{r}$.
[ { "created": "Sat, 17 Oct 2020 23:12:23 GMT", "version": "v1" } ]
2020-10-20
[ [ "Kulkarni", "Sourabh", "" ], [ "Shah", "Kinjal Divesh", "" ], [ "Arora", "Nimar", "" ], [ "Wang", "Xiaoyan", "" ], [ "Li", "Yucen Lily", "" ], [ "Tehrani", "Nazanin Khosravani", "" ], [ "Tingley", "Michael", "" ], [ "Noursi", "David", "" ], [ "Torabi", "Narjes", "" ], [ "Masouleh", "Sepehr Akhavan", "" ], [ "Lippert", "Eric", "" ], [ "Meijer", "Erik", "" ] ]
We introduce PPL Bench, a new benchmark for evaluating Probabilistic Programming Languages (PPLs) on a variety of statistical models. The benchmark includes data generation and evaluation code for a number of models as well as implementations in some common PPLs. All of the benchmark code and PPL implementations are available on Github. We welcome contributions of new models and PPLs, as well as improvements in existing PPL implementations. The purpose of the benchmark is two-fold. First, we want researchers as well as conference reviewers to be able to evaluate improvements in PPLs in a standardized setting. Second, we want end users to be able to pick the PPL that is most suited for their modeling application. In particular, we are interested in evaluating the accuracy and speed of convergence of the inferred posterior. Each PPL only needs to provide posterior samples given a model and observation data. The framework automatically computes and plots growth in predictive log-likelihood on held-out data in addition to reporting other common metrics such as effective sample size and $\hat{r}$.
2405.10143
Debangshu Banerjee
Debangshu Banerjee, Gagandeep Singh
Relational DNN Verification With Cross Executional Bound Refinement
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We focus on verifying relational properties defined over deep neural networks (DNNs), such as robustness against universal adversarial perturbations (UAP), certified worst-case hamming distance for binary string classifications, etc. Precise verification of these properties requires reasoning about multiple executions of the same DNN. However, most of the existing works in DNN verification only handle properties defined over single executions and as a result are imprecise for relational properties. Though a few recent works on relational DNN verification capture linear dependencies between the inputs of multiple executions, they do not leverage dependencies between the outputs of hidden layers, producing imprecise results. We develop a scalable relational verifier RACoon that utilizes cross-execution dependencies at all layers of the DNN, gaining substantial precision over SOTA baselines on a wide range of datasets, networks, and relational properties.
[ { "created": "Thu, 16 May 2024 14:35:50 GMT", "version": "v1" } ]
2024-05-17
[ [ "Banerjee", "Debangshu", "" ], [ "Singh", "Gagandeep", "" ] ]
We focus on verifying relational properties defined over deep neural networks (DNNs), such as robustness against universal adversarial perturbations (UAP), certified worst-case hamming distance for binary string classifications, etc. Precise verification of these properties requires reasoning about multiple executions of the same DNN. However, most of the existing works in DNN verification only handle properties defined over single executions and as a result are imprecise for relational properties. Though a few recent works on relational DNN verification capture linear dependencies between the inputs of multiple executions, they do not leverage dependencies between the outputs of hidden layers, producing imprecise results. We develop a scalable relational verifier RACoon that utilizes cross-execution dependencies at all layers of the DNN, gaining substantial precision over SOTA baselines on a wide range of datasets, networks, and relational properties.
1809.00654
Ravi Shankar Mr
Ravi Shankar, Yamini Chandrakar, Radhika Sinha, Ritesh Kumar Mishra
PEP Analysis of Selective Decode and Forward Protocol over Keyhole Fading
MICRO 2017
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a closed-form upper bound formulation for the average pairwise-error probability (PEP) of the selective decode and forward (SDF) cooperation protocol under a keyhole (pinhole) channel condition. We have employed an orthogonal space-time block-code (OSTBC) scheme in conjunction with multi-antenna (MIMO) technology. We have used a moment generating function (MGF) based approach for deriving the upper bound of the PEP. The PEP expression provides information regarding the performance of the wireless system with respect to the channel conditions. We have included simulation results which confirm the analytical results of our proposed upper bound. Simulation results show that, due to the keyhole effect, the performance of the wireless system degrades.
[ { "created": "Mon, 3 Sep 2018 16:55:38 GMT", "version": "v1" }, { "created": "Mon, 10 Sep 2018 14:43:01 GMT", "version": "v2" } ]
2018-09-11
[ [ "Shankar", "Ravi", "" ], [ "Chandrakar", "Yamini", "" ], [ "Sinha", "Radhika", "" ], [ "Mishra", "Ritesh Kumar", "" ] ]
We provide a closed-form upper bound formulation for the average pairwise-error probability (PEP) of the selective decode and forward (SDF) cooperation protocol under a keyhole (pinhole) channel condition. We have employed an orthogonal space-time block-code (OSTBC) scheme in conjunction with multi-antenna (MIMO) technology. We have used a moment generating function (MGF) based approach for deriving the upper bound of the PEP. The PEP expression provides information regarding the performance of the wireless system with respect to the channel conditions. We have included simulation results which confirm the analytical results of our proposed upper bound. Simulation results show that, due to the keyhole effect, the performance of the wireless system degrades.
1702.04642
Zhibin Niu
Dawei Cheng, Zhibin Niu, Yi Tu, Liqing Zhang
Prediction defaults for networked-guarantee loans
6 pages,7 figures
2018 24th International Conference on Pattern Recognition (ICPR)
10.1109/ICPR.2018.8545474
null
cs.CE cs.SI q-fin.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Networked-guarantee loans may raise systemic-risk concerns for the government and banks in China. The prediction of default of enterprise loans is a typical extremely imbalanced prediction problem, and the guarantee network makes this problem more difficult to solve. Since a guaranteed loan is a debt obligation promise, if one enterprise in the guarantee network falls into a financial crisis, the debt risk may spread like a virus across the guarantee network, and may even lead to a systemic financial crisis. In this paper, we propose an imbalanced network risk diffusion model to forecast enterprise default risk in the near future. A positive weighted k-nearest neighbors (p-wkNN) algorithm is developed for the stand-alone case -- when there is no default contagion; then a data-driven default diffusion model is integrated to further improve the prediction accuracy. We perform an empirical study on a real-world three-year loan record from a major commercial bank. The results show that our proposed method outperforms conventional credit risk methods in terms of AUC. In summary, our quantitative risk evaluation model shows promising prediction performance on real-world data, which could be useful to both regulators and stakeholders.
[ { "created": "Wed, 15 Feb 2017 15:00:12 GMT", "version": "v1" }, { "created": "Wed, 13 May 2020 10:17:11 GMT", "version": "v2" }, { "created": "Fri, 15 May 2020 07:45:23 GMT", "version": "v3" }, { "created": "Mon, 18 May 2020 08:18:12 GMT", "version": "v4" }, { "created": "Sat, 6 Jun 2020 12:35:41 GMT", "version": "v5" } ]
2020-06-09
[ [ "Cheng", "Dawei", "" ], [ "Niu", "Zhibin", "" ], [ "Tu", "Yi", "" ], [ "Zhang", "Liqing", "" ] ]
Networked-guarantee loans may raise systemic-risk concerns for the government and banks in China. The prediction of default of enterprise loans is a typical extremely imbalanced prediction problem, and the guarantee network makes this problem more difficult to solve. Since a guaranteed loan is a debt obligation promise, if one enterprise in the guarantee network falls into a financial crisis, the debt risk may spread like a virus across the guarantee network, and may even lead to a systemic financial crisis. In this paper, we propose an imbalanced network risk diffusion model to forecast enterprise default risk in the near future. A positive weighted k-nearest neighbors (p-wkNN) algorithm is developed for the stand-alone case -- when there is no default contagion; then a data-driven default diffusion model is integrated to further improve the prediction accuracy. We perform an empirical study on a real-world three-year loan record from a major commercial bank. The results show that our proposed method outperforms conventional credit risk methods in terms of AUC. In summary, our quantitative risk evaluation model shows promising prediction performance on real-world data, which could be useful to both regulators and stakeholders.
2312.02599
Chuan Huang
Chuan Huang, Gustaf Hendeby, Hassen Fourati, Christophe Prieur, and Isaac Skog
MAINS: A Magnetic Field Aided Inertial Navigation System for Indoor Positioning
fix a missing reference
null
null
null
cs.RO eess.SP
http://creativecommons.org/licenses/by/4.0/
A Magnetic field Aided Inertial Navigation System (MAINS) for indoor navigation is proposed in this paper. MAINS leverages an array of magnetometers to measure spatial variations in the magnetic field, which are then used to estimate the displacement and orientation changes of the system, thereby aiding the inertial navigation system (INS). Experiments show that MAINS significantly outperforms the stand-alone INS, demonstrating a remarkable two orders of magnitude reduction in position error. Furthermore, when compared to the state-of-the-art magnetic-field-aided navigation approach, the proposed method exhibits slightly improved horizontal position accuracy. On the other hand, it has noticeably larger vertical error on datasets with large magnetic field variations. However, one of the main advantages of MAINS compared to the state-of-the-art is that it enables flexible sensor configurations. The experimental results show that the position error after 2 minutes of navigation in most cases is less than 3 meters when using an array of 30 magnetometers. Thus, the proposed navigation solution has the potential to solve one of the key challenges faced with current magnetic-field simultaneous localization and mapping (SLAM) solutions: the very limited allowable length of the exploration phase during which unvisited areas are mapped.
[ { "created": "Tue, 5 Dec 2023 09:18:12 GMT", "version": "v1" }, { "created": "Mon, 18 Mar 2024 13:44:22 GMT", "version": "v2" }, { "created": "Tue, 23 Apr 2024 18:22:46 GMT", "version": "v3" } ]
2024-04-25
[ [ "Huang", "Chuan", "" ], [ "Hendeby", "Gustaf", "" ], [ "Fourati", "Hassen", "" ], [ "Prieur", "Christophe", "" ], [ "Skog", "Isaac", "" ] ]
A Magnetic field Aided Inertial Navigation System (MAINS) for indoor navigation is proposed in this paper. MAINS leverages an array of magnetometers to measure spatial variations in the magnetic field, which are then used to estimate the displacement and orientation changes of the system, thereby aiding the inertial navigation system (INS). Experiments show that MAINS significantly outperforms the stand-alone INS, demonstrating a remarkable two orders of magnitude reduction in position error. Furthermore, when compared to the state-of-the-art magnetic-field-aided navigation approach, the proposed method exhibits slightly improved horizontal position accuracy. On the other hand, it has noticeably larger vertical error on datasets with large magnetic field variations. However, one of the main advantages of MAINS compared to the state-of-the-art is that it enables flexible sensor configurations. The experimental results show that the position error after 2 minutes of navigation in most cases is less than 3 meters when using an array of 30 magnetometers. Thus, the proposed navigation solution has the potential to solve one of the key challenges faced with current magnetic-field simultaneous localization and mapping (SLAM) solutions: the very limited allowable length of the exploration phase during which unvisited areas are mapped.
2403.19113
Vipula Rawte
Vipula Rawte, S.M Towhidul Islam Tonmoy, Krishnav Rajbangshi, Shravani Nag, Aman Chadha, Amit P. Sheth, Amitava Das
FACTOID: FACtual enTailment fOr hallucInation Detection
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
The widespread adoption of Large Language Models (LLMs) has facilitated numerous benefits. However, hallucination is a significant concern. In response, Retrieval Augmented Generation (RAG) has emerged as a highly promising paradigm to improve LLM outputs by grounding them in factual information. RAG relies on textual entailment (TE) or similar methods to check if the text produced by LLMs is supported or contradicted, compared to retrieved documents. This paper argues that conventional TE methods are inadequate for spotting hallucinations in content generated by LLMs. For instance, consider a prompt about the ``USA's stance on the Ukraine war''. The AI-generated text states, ``...U.S. President Barack Obama says the U.S. will not put troops in Ukraine...'' However, during the war the U.S. president is Joe Biden, which contradicts factual reality. Moreover, current TE systems are unable to accurately annotate the given text and identify the exact portion that is contradicted. To address this, we introduce a new type of TE called ``Factual Entailment (FE)'', which aims to detect factual inaccuracies in content generated by LLMs while also highlighting the specific text segment that contradicts reality. We present FACTOID (FACTual enTAILment for hallucInation Detection), a benchmark dataset for FE. We propose a multi-task learning (MTL) framework for FE, incorporating state-of-the-art (SoTA) long text embeddings such as e5-mistral-7b-instruct, along with GPT-3, SpanBERT, and RoFormer. The proposed MTL architecture for FE achieves an avg. 40\% improvement in accuracy on the FACTOID benchmark compared to SoTA TE methods. As FE automatically detects hallucinations, we assessed 15 modern LLMs and ranked them using our proposed Auto Hallucination Vulnerability Index (HVI_auto). This index quantifies and offers a comparative scale to evaluate and rank LLMs according to their hallucinations.
[ { "created": "Thu, 28 Mar 2024 03:09:42 GMT", "version": "v1" } ]
2024-03-29
[ [ "Rawte", "Vipula", "" ], [ "Tonmoy", "S. M Towhidul Islam", "" ], [ "Rajbangshi", "Krishnav", "" ], [ "Nag", "Shravani", "" ], [ "Chadha", "Aman", "" ], [ "Sheth", "Amit P.", "" ], [ "Das", "Amitava", "" ] ]
The widespread adoption of Large Language Models (LLMs) has facilitated numerous benefits. However, hallucination is a significant concern. In response, Retrieval Augmented Generation (RAG) has emerged as a highly promising paradigm to improve LLM outputs by grounding them in factual information. RAG relies on textual entailment (TE) or similar methods to check if the text produced by LLMs is supported or contradicted, compared to retrieved documents. This paper argues that conventional TE methods are inadequate for spotting hallucinations in content generated by LLMs. For instance, consider a prompt about the ``USA's stance on the Ukraine war''. The AI-generated text states, ``...U.S. President Barack Obama says the U.S. will not put troops in Ukraine...'' However, during the war the U.S. president is Joe Biden, which contradicts factual reality. Moreover, current TE systems are unable to accurately annotate the given text and identify the exact portion that is contradicted. To address this, we introduce a new type of TE called ``Factual Entailment (FE)'', which aims to detect factual inaccuracies in content generated by LLMs while also highlighting the specific text segment that contradicts reality. We present FACTOID (FACTual enTAILment for hallucInation Detection), a benchmark dataset for FE. We propose a multi-task learning (MTL) framework for FE, incorporating state-of-the-art (SoTA) long text embeddings such as e5-mistral-7b-instruct, along with GPT-3, SpanBERT, and RoFormer. The proposed MTL architecture for FE achieves an avg. 40\% improvement in accuracy on the FACTOID benchmark compared to SoTA TE methods. As FE automatically detects hallucinations, we assessed 15 modern LLMs and ranked them using our proposed Auto Hallucination Vulnerability Index (HVI_auto). This index quantifies and offers a comparative scale to evaluate and rank LLMs according to their hallucinations.
2305.04357
Fabio Massimo Zennaro
Fabio Massimo Zennaro, Paolo Turrini, Theodoros Damoulas
Quantifying Consistency and Information Loss for Causal Abstraction Learning
9 pages, 9 pages appendix, 2 figures, IJCAI 2023
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structural causal models provide a formalism to express causal relations between variables of interest. Models and variables can represent a system at different levels of abstraction, whereby relations may be coarsened and refined according to the need of a modeller. However, switching between different levels of abstraction requires evaluating a trade-off between the consistency and the information loss among different models. In this paper we introduce a family of interventional measures that an agent may use to evaluate such a trade-off. We consider four measures suited for different tasks, analyze their properties, and propose algorithms to evaluate and learn causal abstractions. Finally, we illustrate the flexibility of our setup by empirically showing how different measures and algorithmic choices may lead to different abstractions.
[ { "created": "Sun, 7 May 2023 19:10:28 GMT", "version": "v1" } ]
2023-05-09
[ [ "Zennaro", "Fabio Massimo", "" ], [ "Turrini", "Paolo", "" ], [ "Damoulas", "Theodoros", "" ] ]
Structural causal models provide a formalism to express causal relations between variables of interest. Models and variables can represent a system at different levels of abstraction, whereby relations may be coarsened and refined according to the need of a modeller. However, switching between different levels of abstraction requires evaluating a trade-off between the consistency and the information loss among different models. In this paper we introduce a family of interventional measures that an agent may use to evaluate such a trade-off. We consider four measures suited for different tasks, analyze their properties, and propose algorithms to evaluate and learn causal abstractions. Finally, we illustrate the flexibility of our setup by empirically showing how different measures and algorithmic choices may lead to different abstractions.
1212.5288
Mahdy Nabaee
Mahdy Nabaee and Fabrice Labeau
Quantized Network Coding for Correlated Sources
Submitted for IEEE Transactions on Signal Processing
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-adaptive joint source network coding of correlated sources is discussed in this paper. By studying the information flow in the network, we propose quantized network coding as an alternative to packet forwarding. This technique has both network coding and distributed source coding advantages, simultaneously. Quantized network coding is a combination of random linear network coding in the (infinite) field of real numbers and quantization to cope with the limited capacity of links. With the aid of the results in the literature of compressed sensing, we discuss the theoretical and practical feasibility of quantized network coding in lossless networks. We show that, due to the nature of the field it operates on, quantized network coding can provide good quality decoding at a sink node with the reception of a reduced number of packets. Specifically, we discuss the required conditions on local network coding coefficients by using the restricted isometry property and suggest a design which yields appropriate linear measurements. Finally, our simulation results show the achieved gain in terms of delivery delay, compared to conventional routing-based packet forwarding.
[ { "created": "Thu, 20 Dec 2012 22:28:43 GMT", "version": "v1" } ]
2012-12-24
[ [ "Nabaee", "Mahdy", "" ], [ "Labeau", "Fabrice", "" ] ]
Non-adaptive joint source network coding of correlated sources is discussed in this paper. By studying the information flow in the network, we propose quantized network coding as an alternative to packet forwarding. This technique has both network coding and distributed source coding advantages, simultaneously. Quantized network coding is a combination of random linear network coding in the (infinite) field of real numbers and quantization to cope with the limited capacity of links. With the aid of the results in the literature of compressed sensing, we discuss the theoretical and practical feasibility of quantized network coding in lossless networks. We show that, due to the nature of the field it operates on, quantized network coding can provide good quality decoding at a sink node with the reception of a reduced number of packets. Specifically, we discuss the required conditions on local network coding coefficients by using the restricted isometry property and suggest a design which yields appropriate linear measurements. Finally, our simulation results show the achieved gain in terms of delivery delay, compared to conventional routing-based packet forwarding.
2109.06736
Christian Hansen
Christian Hansen
Sequential Modelling with Applications to Music Recommendation, Fact-Checking, and Speed Reading
PhD Thesis, University of Copenhagen, Faculty of Science
null
null
null
cs.IR cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequential modelling entails making sense of sequential data, which naturally occurs in a wide array of domains. One example is systems that interact with users, log user actions and behaviour, and make recommendations of items of potential interest to users on the basis of their previous interactions. In such cases, the sequential order of user interactions is often indicative of what the user is interested in next. Similarly, for systems that automatically infer the semantics of text, capturing the sequential order of words in a sentence is essential, as even a slight re-ordering could significantly alter its original meaning. This thesis makes methodological contributions and new investigations of sequential modelling for the specific application areas of systems that recommend music tracks to listeners and systems that process text semantics in order to automatically fact-check claims, or "speed read" text for efficient further classification. (Rest of abstract omitted due to arXiv abstract limit)
[ { "created": "Sat, 11 Sep 2021 08:05:48 GMT", "version": "v1" } ]
2021-09-15
[ [ "Hansen", "Christian", "" ] ]
Sequential modelling entails making sense of sequential data, which naturally occurs in a wide array of domains. One example is systems that interact with users, log user actions and behaviour, and make recommendations of items of potential interest to users on the basis of their previous interactions. In such cases, the sequential order of user interactions is often indicative of what the user is interested in next. Similarly, for systems that automatically infer the semantics of text, capturing the sequential order of words in a sentence is essential, as even a slight re-ordering could significantly alter its original meaning. This thesis makes methodological contributions and new investigations of sequential modelling for the specific application areas of systems that recommend music tracks to listeners and systems that process text semantics in order to automatically fact-check claims, or "speed read" text for efficient further classification. (Rest of abstract omitted due to arXiv abstract limit)
1901.02393
Maryam Negahbani
Suman K. Bera, Deeparnab Chakrabarty, Nicolas J. Flores, Maryam Negahbani
Fair Algorithms for Clustering
null
null
null
null
cs.DS cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of finding low-cost Fair Clusterings in data where each data point may belong to many protected groups. Our work significantly generalizes the seminal work of Chierichetti et al. (NIPS 2017) as follows. - We allow the user to specify the parameters that define fair representation. More precisely, these parameters define the maximum over- and minimum under-representation of any group in any cluster. - Our clustering algorithm works on any $\ell_p$-norm objective (e.g. $k$-means, $k$-median, and $k$-center). Indeed, our algorithm transforms any vanilla clustering solution into a fair one incurring only a slight loss in quality. - Our algorithm also allows individuals to lie in multiple protected groups. In other words, we do not need the protected groups to partition the data and we can maintain fairness across different groups simultaneously. Our experiments show that on established data sets, our algorithm performs much better in practice than what our theoretical results suggest.
[ { "created": "Tue, 8 Jan 2019 16:39:16 GMT", "version": "v1" }, { "created": "Mon, 17 Jun 2019 14:01:05 GMT", "version": "v2" } ]
2019-06-18
[ [ "Bera", "Suman K.", "" ], [ "Chakrabarty", "Deeparnab", "" ], [ "Flores", "Nicolas J.", "" ], [ "Negahbani", "Maryam", "" ] ]
We study the problem of finding low-cost Fair Clusterings in data where each data point may belong to many protected groups. Our work significantly generalizes the seminal work of Chierichetti et al. (NIPS 2017) as follows. - We allow the user to specify the parameters that define fair representation. More precisely, these parameters define the maximum over- and minimum under-representation of any group in any cluster. - Our clustering algorithm works on any $\ell_p$-norm objective (e.g. $k$-means, $k$-median, and $k$-center). Indeed, our algorithm transforms any vanilla clustering solution into a fair one incurring only a slight loss in quality. - Our algorithm also allows individuals to lie in multiple protected groups. In other words, we do not need the protected groups to partition the data and we can maintain fairness across different groups simultaneously. Our experiments show that on established data sets, our algorithm performs much better in practice than what our theoretical results suggest.
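The fair-representation constraint described in this abstract (per-cluster, per-group upper and lower bounds, with points allowed in multiple groups) can be checked with a short sketch. All names here (`is_fair`, `alpha`, `beta`) and the exact bound convention are illustrative assumptions, not taken from the paper's code:

```python
# Hedged sketch of the fairness check implied by the abstract: in every
# cluster, each protected group's fraction must lie between a user-specified
# lower bound beta[g] and upper bound alpha[g]. Points may belong to many
# groups, so group counts within a cluster need not sum to the cluster size.
from collections import defaultdict

def is_fair(assignment, groups, alpha, beta):
    """assignment: cluster id per point.
    groups: set of protected-group labels per point (may be several).
    alpha / beta: dicts group -> max / min allowed fraction per cluster."""
    cluster_sizes = defaultdict(int)
    counts = defaultdict(int)  # (cluster, group) -> number of members
    for c, gs in zip(assignment, groups):
        cluster_sizes[c] += 1
        for g in gs:
            counts[(c, g)] += 1
    for c, size in cluster_sizes.items():
        for g in set(alpha) | set(beta):
            frac = counts[(c, g)] / size
            if frac > alpha.get(g, 1.0) or frac < beta.get(g, 0.0):
                return False
    return True
```

For example, two clusters that each contain one point of group `a` and one of group `b` satisfy bounds of at most 1/2 and at least 1/4 per group, while two single-group clusters violate the same upper bound.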
1503.01868
Yao Wang
Wenfei Cao, Yao Wang, Jian Sun, Deyu Meng, Can Yang, Andrzej Cichocki, Zongben Xu
Total Variation Regularized Tensor RPCA for Background Subtraction from Compressive Measurements
To appear in IEEE TIP
null
10.1109/TIP.2016.2579262
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background subtraction has been a fundamental and widely studied task in video analysis, with a wide range of applications in video surveillance, teleconferencing and 3D modeling. Recently, motivated by compressive imaging, background subtraction from compressive measurements (BSCM) is becoming an active research task in video surveillance. In this paper, we propose a novel tensor-based robust PCA (TenRPCA) approach for BSCM by decomposing video frames into backgrounds with spatio-temporal correlations and foregrounds with spatio-temporal continuity in a tensor framework. In this approach, we use 3D total variation (TV) to enhance the spatio-temporal continuity of foregrounds, and Tucker decomposition to model the spatio-temporal correlations of video background. Based on this idea, we design a basic tensor RPCA model over the video frames, dubbed the holistic TenRPCA model (H-TenRPCA). To characterize the correlations among the groups of similar 3D patches of video background, we further design a patch-group-based tensor RPCA model (PG-TenRPCA) by joint tensor Tucker decompositions of 3D patch groups for modeling the video background. Efficient algorithms using the alternating direction method of multipliers (ADMM) are developed to solve the proposed models. Extensive experiments on simulated and real-world videos demonstrate the superiority of the proposed approaches over the existing state-of-the-art approaches.
[ { "created": "Fri, 6 Mar 2015 08:00:43 GMT", "version": "v1" }, { "created": "Mon, 9 Mar 2015 03:20:34 GMT", "version": "v2" }, { "created": "Mon, 10 Aug 2015 14:50:48 GMT", "version": "v3" }, { "created": "Sun, 5 Jun 2016 17:46:14 GMT", "version": "v4" } ]
2016-08-24
[ [ "Cao", "Wenfei", "" ], [ "Wang", "Yao", "" ], [ "Sun", "Jian", "" ], [ "Meng", "Deyu", "" ], [ "Yang", "Can", "" ], [ "Cichocki", "Andrzej", "" ], [ "Xu", "Zongben", "" ] ]
Background subtraction has been a fundamental and widely studied task in video analysis, with a wide range of applications in video surveillance, teleconferencing and 3D modeling. Recently, motivated by compressive imaging, background subtraction from compressive measurements (BSCM) is becoming an active research task in video surveillance. In this paper, we propose a novel tensor-based robust PCA (TenRPCA) approach for BSCM by decomposing video frames into backgrounds with spatio-temporal correlations and foregrounds with spatio-temporal continuity in a tensor framework. In this approach, we use 3D total variation (TV) to enhance the spatio-temporal continuity of foregrounds, and Tucker decomposition to model the spatio-temporal correlations of video background. Based on this idea, we design a basic tensor RPCA model over the video frames, dubbed the holistic TenRPCA model (H-TenRPCA). To characterize the correlations among the groups of similar 3D patches of video background, we further design a patch-group-based tensor RPCA model (PG-TenRPCA) by joint tensor Tucker decompositions of 3D patch groups for modeling the video background. Efficient algorithms using the alternating direction method of multipliers (ADMM) are developed to solve the proposed models. Extensive experiments on simulated and real-world videos demonstrate the superiority of the proposed approaches over the existing state-of-the-art approaches.
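The 3D total variation term used here to encourage spatio-temporal continuity of the foreground can be illustrated with a minimal anisotropic-TV sketch (forward differences along the two spatial axes and the temporal axis). The function name and the anisotropic choice are my assumptions; the paper may use a different discretization:

```python
# Hedged sketch of an anisotropic 3D total variation penalty on a
# (height, width, time) foreground tensor: the sum of absolute forward
# differences along each of the three axes.
import numpy as np

def tv3d(f):
    return (np.abs(np.diff(f, axis=0)).sum()    # vertical spatial differences
            + np.abs(np.diff(f, axis=1)).sum()  # horizontal spatial differences
            + np.abs(np.diff(f, axis=2)).sum()) # temporal differences
```

A spatio-temporally constant tensor has zero TV, while a tensor with a single abrupt jump pays for every crossing of the jump, which is why minimizing this term favors piecewise-smooth, contiguous foreground regions.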
2105.11541
Qing Ping
Tao Tu, Qing Ping, Govind Thattai, Gokhan Tur, Prem Natarajan
Learning Better Visual Dialog Agents with Pretrained Visual-Linguistic Representation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
GuessWhat?! is a two-player visual dialog guessing game where player A asks a sequence of yes/no questions (Questioner) and makes a final guess (Guesser) about a target object in an image, based on answers from player B (Oracle). Based on this dialog history between the Questioner and the Oracle, a Guesser makes a final guess of the target object. The previous baseline Oracle model encodes no visual information, so it cannot fully understand complex questions about color, shape, relationships, and so on. Most existing work on the Guesser encodes the dialog history as a whole and trains the Guesser models from scratch on the GuessWhat?! dataset. This is problematic since language encoders tend to forget long-term history and the GuessWhat?! data is sparse in terms of learning visual grounding of objects. Previous work on the Questioner introduces a state tracking mechanism into the model, but it is learned as a soft intermediate without any prior vision-linguistic insights. To bridge these gaps, in this paper we propose Vilbert-based Oracle, Guesser, and Questioner models, all built on top of the pretrained vision-linguistic model Vilbert. We introduce a two-way background/target fusion mechanism into Vilbert-Oracle to account for both intra- and inter-object questions. We propose a unified framework for Vilbert-Guesser and Vilbert-Questioner, where a state estimator is introduced to best utilize Vilbert's power on single-turn referring expression comprehension. Experimental results show that our proposed models significantly outperform state-of-the-art models, by 7%, 10%, and 12% for Oracle, Guesser, and end-to-end Questioner, respectively.
[ { "created": "Mon, 24 May 2021 21:09:20 GMT", "version": "v1" } ]
2021-05-26
[ [ "Tu", "Tao", "" ], [ "Ping", "Qing", "" ], [ "Thattai", "Govind", "" ], [ "Tur", "Gokhan", "" ], [ "Natarajan", "Prem", "" ] ]
GuessWhat?! is a two-player visual dialog guessing game where player A asks a sequence of yes/no questions (Questioner) and makes a final guess (Guesser) about a target object in an image, based on answers from player B (Oracle). Based on this dialog history between the Questioner and the Oracle, a Guesser makes a final guess of the target object. The previous baseline Oracle model encodes no visual information, so it cannot fully understand complex questions about color, shape, relationships, and so on. Most existing work on the Guesser encodes the dialog history as a whole and trains the Guesser models from scratch on the GuessWhat?! dataset. This is problematic since language encoders tend to forget long-term history and the GuessWhat?! data is sparse in terms of learning visual grounding of objects. Previous work on the Questioner introduces a state tracking mechanism into the model, but it is learned as a soft intermediate without any prior vision-linguistic insights. To bridge these gaps, in this paper we propose Vilbert-based Oracle, Guesser, and Questioner models, all built on top of the pretrained vision-linguistic model Vilbert. We introduce a two-way background/target fusion mechanism into Vilbert-Oracle to account for both intra- and inter-object questions. We propose a unified framework for Vilbert-Guesser and Vilbert-Questioner, where a state estimator is introduced to best utilize Vilbert's power on single-turn referring expression comprehension. Experimental results show that our proposed models significantly outperform state-of-the-art models, by 7%, 10%, and 12% for Oracle, Guesser, and end-to-end Questioner, respectively.
2406.10935
Ashish Kumar
Ashish Kumar, Daneul Kim, Jaesik Park, Laxmidhar Behera
Pick-or-Mix: Dynamic Channel Sampling for ConvNets
Published in Computer Vision and Pattern Recognition (CVPR 2024)
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Channel pruning approaches for convolutional neural networks (ConvNets) deactivate the channels, statically or dynamically, and require special implementation. In addition, channel squeezing in representative ConvNets is carried out via 1x1 convolutions, which dominate a large portion of computations and network parameters. Given these challenges, we propose an effective multi-purpose module for dynamic channel sampling, namely Pick-or-Mix (PiX), which does not require special implementation. PiX divides a set of channels into subsets and then picks from them, where the picking decision is dynamically made per pixel based on the input activations. We plug PiX into prominent ConvNet architectures and verify its multi-purpose utilities. After replacing 1x1 channel squeezing layers in ResNet with PiX, the network becomes 25% faster without losing accuracy. We show that PiX allows ConvNets to learn better data representation than widely adopted approaches to enhance networks' representation power (e.g., SE, CBAM, AFF, SKNet, and DWP). We also show that PiX achieves state-of-the-art performance on network downscaling and dynamic channel pruning applications.
[ { "created": "Sun, 16 Jun 2024 13:33:09 GMT", "version": "v1" } ]
2024-06-18
[ [ "Kumar", "Ashish", "" ], [ "Kim", "Daneul", "" ], [ "Park", "Jaesik", "" ], [ "Behera", "Laxmidhar", "" ] ]
Channel pruning approaches for convolutional neural networks (ConvNets) deactivate the channels, statically or dynamically, and require special implementation. In addition, channel squeezing in representative ConvNets is carried out via 1x1 convolutions, which dominate a large portion of computations and network parameters. Given these challenges, we propose an effective multi-purpose module for dynamic channel sampling, namely Pick-or-Mix (PiX), which does not require special implementation. PiX divides a set of channels into subsets and then picks from them, where the picking decision is dynamically made per pixel based on the input activations. We plug PiX into prominent ConvNet architectures and verify its multi-purpose utilities. After replacing 1x1 channel squeezing layers in ResNet with PiX, the network becomes 25% faster without losing accuracy. We show that PiX allows ConvNets to learn better data representation than widely adopted approaches to enhance networks' representation power (e.g., SE, CBAM, AFF, SKNet, and DWP). We also show that PiX achieves state-of-the-art performance on network downscaling and dynamic channel pruning applications.
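The channel-subset "pick or mix" idea can be sketched in NumPy as a drop-in replacement for a 1x1 squeezing layer: split C channels into C/r subsets and, per pixel, either pick from the subset or mix it, gated by the activations. The specific gating rule (sigmoid of the subset mean against a threshold) and all names here are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of Pick-or-Mix-style dynamic channel sampling on a (C, H, W)
# activation tensor: channels are split into C // r subsets of size r, and at
# each pixel the subset is either "picked" from (max) or "mixed" (mean),
# depending on an activation-derived per-pixel probability.
import numpy as np

def pix(x, r, tau=0.5):
    C, H, W = x.shape
    xs = x.reshape(C // r, r, H, W)               # group channels into subsets
    p = 1.0 / (1.0 + np.exp(-xs.mean(axis=1)))    # per-pixel gate (assumed form)
    pick = xs.max(axis=1)                         # "pick": one channel dominates
    mix = xs.mean(axis=1)                         # "mix": average the subset
    return np.where(p > tau, pick, mix)           # output shape (C // r, H, W)
```

Like a 1x1 squeezing convolution, the output has C/r channels, but the reduction is a parameter-free per-pixel selection rather than a learned dense projection, which is consistent with the speedup the abstract reports.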
1807.01012
Weinan Chen
Weinan Chen and Lei Zhu and Yisheng Guan and C. Ronald Kube and Hong Zhang
Submap-based Pose-graph Visual SLAM: A Robust Visual Exploration and Localization System
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For VSLAM (Visual Simultaneous Localization and Mapping), localization is a challenging task, especially in difficult situations such as textureless frames and motion blur. To build a robust exploration and localization system in a given space or environment, a submap-based VSLAM system is proposed in this paper. Our system uses a submap back-end and a visual front-end. The main advantage of our system is its robustness with respect to tracking failure, a common problem in current VSLAM algorithms. The robustness of our system is compared with the state-of-the-art in terms of average tracking percentage. The precision of our system is also evaluated in terms of ATE (absolute trajectory error) RMSE (root mean square error), compared with the state-of-the-art. The ability of our system to solve the `kidnapped' problem is demonstrated. Our system can improve the robustness of visual localization in challenging situations.
[ { "created": "Tue, 3 Jul 2018 08:17:37 GMT", "version": "v1" } ]
2018-07-04
[ [ "Chen", "Weinan", "" ], [ "Zhu", "Lei", "" ], [ "Guan", "Yisheng", "" ], [ "Kube", "C. Ronald", "" ], [ "Zhang", "Hong", "" ] ]
For VSLAM (Visual Simultaneous Localization and Mapping), localization is a challenging task, especially in difficult situations such as textureless frames and motion blur. To build a robust exploration and localization system in a given space or environment, a submap-based VSLAM system is proposed in this paper. Our system uses a submap back-end and a visual front-end. The main advantage of our system is its robustness with respect to tracking failure, a common problem in current VSLAM algorithms. The robustness of our system is compared with the state-of-the-art in terms of average tracking percentage. The precision of our system is also evaluated in terms of ATE (absolute trajectory error) RMSE (root mean square error), compared with the state-of-the-art. The ability of our system to solve the `kidnapped' problem is demonstrated. Our system can improve the robustness of visual localization in challenging situations.
2112.11944
Jacob Armstrong
J. Armstrong, D. Clifton
Continual learning of longitudinal health records
15 pages, 5 figures
2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
10.1109/BHI56158.2022.9926878
9926878
cs.LG
http://creativecommons.org/licenses/by/4.0/
Continual learning denotes machine learning methods which can adapt to new environments while retaining and reusing knowledge gained from past experiences. Such methods address two issues encountered by models in non-stationary environments: ungeneralisability to new data, and the catastrophic forgetting of previous knowledge when retrained. This is a pervasive problem in clinical settings where patient data exhibits covariate shift not only between populations, but also continuously over time. However, while continual learning methods have seen nascent success in the imaging domain, they have been little applied to the multivariate sequential data characteristic of critical care patient recordings. Here we evaluate a variety of continual learning methods on longitudinal ICU data in a series of representative healthcare scenarios. We find that while several methods mitigate short-term forgetting, domain shift remains a challenging problem over large series of tasks, with only replay-based methods achieving stable long-term performance. Code for reproducing all experiments can be found at https://github.com/iacobo/continual
[ { "created": "Wed, 22 Dec 2021 15:08:45 GMT", "version": "v1" } ]
2023-03-28
[ [ "Armstrong", "J.", "" ], [ "Clifton", "D.", "" ] ]
Continual learning denotes machine learning methods which can adapt to new environments while retaining and reusing knowledge gained from past experiences. Such methods address two issues encountered by models in non-stationary environments: ungeneralisability to new data, and the catastrophic forgetting of previous knowledge when retrained. This is a pervasive problem in clinical settings where patient data exhibits covariate shift not only between populations, but also continuously over time. However, while continual learning methods have seen nascent success in the imaging domain, they have been little applied to the multivariate sequential data characteristic of critical care patient recordings. Here we evaluate a variety of continual learning methods on longitudinal ICU data in a series of representative healthcare scenarios. We find that while several methods mitigate short-term forgetting, domain shift remains a challenging problem over large series of tasks, with only replay-based methods achieving stable long-term performance. Code for reproducing all experiments can be found at https://github.com/iacobo/continual
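The replay-based family that this abstract finds most stable typically rehearses a small, fixed-size memory of past examples alongside new data. A common building block is a reservoir-sampled buffer, sketched below; the class and method names are illustrative, not taken from the linked repository:

```python
# Hedged sketch of a reservoir-sampled replay buffer: after n items have
# streamed past, each is retained in the fixed-capacity memory with equal
# probability, so rehearsal batches approximate the full task history.
import random

class ReplayBuffer:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)        # fill phase
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:           # replace with prob capacity/seen
                self.buffer[j] = item

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

During training on a new task, each gradient step would mix a mini-batch of current data with `buffer.sample(k)` of past data, which is what keeps long-term performance stable at the cost of storing raw examples.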
2207.00151
Yiming Huo
Yiming Huo
Space Broadband Access: The Race Has Just Begun
8 pages, 3 figures, 2 tables. Accepted by IEEE Magazine (https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=2)
null
10.1109/MC.2022.3160472
null
cs.NI cs.ET cs.SY eess.SP eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have witnessed an exponential growth of the commercial space industry, including rocket launch, satellite network deployment, private space travel, and even extraterrestrial colonization. Several trends are predicted in this unprecedented transition to an era of space-enabled broadband access.
[ { "created": "Fri, 6 May 2022 19:27:08 GMT", "version": "v1" } ]
2022-07-04
[ [ "Huo", "Yiming", "" ] ]
Recent years have witnessed an exponential growth of the commercial space industry, including rocket launch, satellite network deployment, private space travel, and even extraterrestrial colonization. Several trends are predicted in this unprecedented transition to an era of space-enabled broadband access.