Dataset schema (column, dtype, observed range):

    id              stringlengths    9 to 10
    submitter       stringlengths    1 to 64
    authors         stringlengths    4 to 20.7k
    title           stringlengths    4 to 246
    comments        stringlengths    1 to 523
    journal-ref     stringlengths    4 to 404
    doi             stringlengths    11 to 153
    report-no       stringlengths    2 to 254
    categories      stringlengths    5 to 98
    license         stringclasses    9 values
    orig_abstract   stringlengths    14 to 3.35k
    versions        listlengths      1 to 60
    update_date     stringlengths    10 to 10
    authors_parsed  listlengths      1 to 1.35k
    abstract        stringlengths    11 to 3.34k
id: 2303.06823
submitter: Atul Dhingra
authors: Atul Dhingra, Gaurav Sood
title: Instate: Predicting the State of Residence From Last Name
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: India has twenty-two official languages. Serving such a diverse language base is a challenge for survey statisticians, call center operators, software developers, and other such service providers. To help provide better services to different language communities via better localization, we introduce a new machine learning model that predicts the language(s) that the user can speak from their name. Using nearly 438M records spanning 33 Indian states and 1.13M unique last names from the Indian Electoral Rolls Corpus (?), we build a character-level transformer-based machine-learning model that predicts the state of residence based on the last name. The model has a top-3 accuracy of 85.3% on unseen names. We map the states to languages using the Indian census to infer languages understood by the respondent. We provide open-source software that implements the method discussed in the paper.
versions: [ { "created": "Mon, 13 Mar 2023 02:49:50 GMT", "version": "v1" } ]
update_date: 2023-03-14
authors_parsed: [ [ "Dhingra", "Atul", "" ], [ "Sood", "Gaurav", "" ] ]

id: 1103.3933
submitter: Tuvi Etzion
authors: Tuvi Etzion
title: Product Constructions for Perfect Lee Codes
comments: submitted to IEEE Transactions on Information Theory
journal-ref: null
doi: 10.1109/TIT.2011.2161133
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: A well known conjecture of Golomb and Welch is that the only nontrivial perfect codes in the Lee and Manhattan metrics have length two or minimum distance three. This problem and related topics were subject for extensive research in the last forty years. In this paper two product constructions for perfect Lee codes and diameter perfect Lee codes are presented. These constructions yield a large number of nonlinear perfect codes and nonlinear diameter perfect codes in the Lee and Manhattan metrics. A short survey and other related problems on perfect codes in the Lee and the Manhattan metrics are also discussed.
versions: [ { "created": "Mon, 21 Mar 2011 07:53:53 GMT", "version": "v1" }, { "created": "Wed, 22 Jun 2011 09:47:33 GMT", "version": "v2" } ]
update_date: 2016-11-17
authors_parsed: [ [ "Etzion", "Tuvi", "" ] ]

id: 1808.04405
submitter: Srayan Datta
authors: Srayan Datta and Eytan Adar
title: Extracting Inter-community Conflicts in Reddit
comments: 21 pages, 7 figures
journal-ref: null
doi: null
report-no: null
categories: cs.SI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Anti-social behaviors in social media can happen both at user and community levels. While a great deal of attention is on the individual as an 'aggressor,' the banning of entire Reddit subcommunities (i.e., subreddits) demonstrates that this is a multi-layer concern. Existing research on inter-community conflict has largely focused on specific subcommunities or ideological opponents. However, antagonistic behaviors may be more pervasive and integrate into the broader network. In this work, we study the landscape of conflicts among subreddits by deriving higher-level (community) behaviors from the way individuals are sanctioned and rewarded. By constructing a conflict network, we characterize different patterns in subreddit-to-subreddit conflicts as well as communities of 'co-targeted' subreddits. By analyzing the dynamics of these interactions, we also observe that the conflict focus shifts over time.
versions: [ { "created": "Mon, 13 Aug 2018 18:56:27 GMT", "version": "v1" } ]
update_date: 2018-08-15
authors_parsed: [ [ "Datta", "Srayan", "" ], [ "Adar", "Eytan", "" ] ]

id: 2401.03256
submitter: Subhajit Sahu
authors: Subhajit Sahu
title: An Incrementally Expanding Approach for Updating PageRank on Dynamic Graphs
comments: 11 pages, 14 figures, 1 table
journal-ref: null
doi: null
report-no: null
categories: cs.DC cs.PF
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract: PageRank is a popular centrality metric that assigns importance to the vertices of a graph based on its neighbors and their score. Efficient parallel algorithms for updating PageRank on dynamic graphs is crucial for various applications, especially as dataset sizes have reached substantial scales. This technical report presents our Dynamic Frontier approach. Given a batch update of edge deletion and insertions, it progressively identifies affected vertices that are likely to change their ranks with minimal overhead. On a server equipped with a 64-core AMD EPYC-7742 processor, our Dynamic Frontier PageRank outperforms Static, Naive-dynamic, and Dynamic Traversal PageRank by 7.8x, 2.9x, and 3.9x respectively - on uniformly random batch updates of size 10^-7 |E| to 10^-3 |E|. In addition, our approach improves performance at an average rate of 1.8x for every doubling of threads.
versions: [ { "created": "Sat, 6 Jan 2024 16:45:49 GMT", "version": "v1" }, { "created": "Tue, 9 Jan 2024 06:56:23 GMT", "version": "v2" }, { "created": "Fri, 26 Jan 2024 09:37:59 GMT", "version": "v3" } ]
update_date: 2024-01-29
authors_parsed: [ [ "Sahu", "Subhajit", "" ] ]

id: 1807.02098
submitter: Somdip Dey Mr.
authors: Somdip Dey, Grigorios Kalliatakis, Sangeet Saha, Amit Kumar Singh, Shoaib Ehsan, Klaus McDonald-Maier
title: MAT-CNN-SOPC: Motionless Analysis of Traffic Using Convolutional Neural Networks on System-On-a-Programmable-Chip
comments: 6 pages, 3 figures, 2 tables
journal-ref: 2018 NASA/ESA Conference on Adaptive Hardware and Systems (AHS 2018)
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Intelligent Transportation Systems (ITS) have become an important pillar in modern "smart city" framework which demands intelligent involvement of machines. Traffic load recognition can be categorized as an important and challenging issue for such systems. Recently, Convolutional Neural Network (CNN) models have drawn considerable amount of interest in many areas such as weather classification, human rights violation detection through images, due to its accurate prediction capabilities. This work tackles real-life traffic load recognition problem on System-On-a-Programmable-Chip (SOPC) platform and coin it as MAT-CNN-SOPC, which uses an intelligent re-training mechanism of the CNN with known environments. The proposed methodology is capable of enhancing the efficacy of the approach by 2.44x in comparison to the state-of-art and proven through experimental analysis. We have also introduced a mathematical equation, which is capable of quantifying the suitability of using different CNN models over the other for a particular application based implementation.
versions: [ { "created": "Thu, 5 Jul 2018 17:35:33 GMT", "version": "v1" }, { "created": "Tue, 14 Aug 2018 23:31:16 GMT", "version": "v2" } ]
update_date: 2018-08-16
authors_parsed: [ [ "Dey", "Somdip", "" ], [ "Kalliatakis", "Grigorios", "" ], [ "Saha", "Sangeet", "" ], [ "Singh", "Amit Kumar", "" ], [ "Ehsan", "Shoaib", "" ], [ "McDonald-Maier", "Klaus", "" ] ]

id: 2002.05886
submitter: Jadab Kumar Pal Dr
authors: Jimut Bahan Pal
title: How to cluster nearest unique nodes from different classes using JJCluster in Wisp application?
comments: A new type of clustering algorithm is built which helps to find the best place for any location by giving a set of preferences to the application. Source code can be found here: https://github.com/Jimut123/wisp
journal-ref: null
doi: null
report-no: null
categories: cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The work of finding the best place according to user preference is a tedious task. It needs manual research and lot of intuitive process to find the best location according to some earlier knowledge about the place. It is mainly about accessing publicly available spatial data, applying a simple algorithm to summarize the data according to given preferences, and visualizing the result on a map. We introduced JJCluster to eliminate the rigorous way of researching about a place and visualizing the location in real time. This algorithm successfully finds the heart of a city when used in Wisp application. The main purpose of designing Wisp application is used for finding the perfect location for a trip to unknown place which is nearest to a set of preferences. We also discussed the various optimization algorithms that are pioneer of today's dynamic programming and the need for visualization to find patterns when the data is cluttered. Yet, this general clustering algorithm can be used in other areas where we can explore every possible preference to maximize its utility.
versions: [ { "created": "Fri, 14 Feb 2020 06:38:01 GMT", "version": "v1" }, { "created": "Mon, 17 Feb 2020 08:42:56 GMT", "version": "v2" } ]
update_date: 2020-02-18
authors_parsed: [ [ "Pal", "Jimut Bahan", "" ] ]

id: 1405.2652
submitter: Ronald Ortner
authors: Ronald Ortner, Odalric-Ambrym Maillard, Daniil Ryabko
title: Selecting Near-Optimal Approximate State Representations in Reinforcement Learning
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We consider a reinforcement learning setting introduced in (Maillard et al., NIPS 2011) where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over known regret bounds in this setting, and more importantly generalize to the case where the models given to the learner do not contain a true model resulting in an MDP representation but only approximations of it. We also give improved error bounds for state aggregation.
versions: [ { "created": "Mon, 12 May 2014 07:45:54 GMT", "version": "v1" }, { "created": "Wed, 14 May 2014 12:43:36 GMT", "version": "v2" }, { "created": "Wed, 9 Jul 2014 14:40:20 GMT", "version": "v3" }, { "created": "Mon, 21 Jul 2014 11:52:37 GMT", "version": "v4" }, { "created": "Tue, 12 Aug 2014 12:19:55 GMT", "version": "v5" }, { "created": "Mon, 15 Sep 2014 08:32:45 GMT", "version": "v6" } ]
update_date: 2014-09-16
authors_parsed: [ [ "Ortner", "Ronald", "" ], [ "Maillard", "Odalric-Ambrym", "" ], [ "Ryabko", "Daniil", "" ] ]

id: 2009.10373
submitter: Michele Linardi
authors: Michele Linardi and Themis Palpanas
title: Scalable Data Series Subsequence Matching with ULISSE
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Data series similarity search is an important operation and at the core of several analysis tasks and applications related to data series collections. Despite the fact that data series indexes enable fast similarity search, all existing indexes can only answer queries of a single length (fixed at index construction time), which is a severe limitation. In this work, we propose ULISSE, the first data series index structure designed for answering similarity search queries of variable length (within some range). Our contribution is two-fold. First, we introduce a novel representation technique, which effectively and succinctly summarizes multiple sequences of different length. Based on the proposed index, we describe efficient algorithms for approximate and exact similarity search, combining disk based index visits and in-memory sequential scans. Our approach supports non Z-normalized and Z-normalized sequences, and can be used with no changes with both Euclidean Distance and Dynamic Time Warping, for answering both k-NN and epsilon-range queries. We experimentally evaluate our approach using several synthetic and real datasets. The results show that ULISSE is several times, and up to orders of magnitude more efficient in terms of both space and time cost, when compared to competing approaches. (Paper published in VLDBJ 2020)
versions: [ { "created": "Tue, 22 Sep 2020 08:04:20 GMT", "version": "v1" } ]
update_date: 2020-09-23
authors_parsed: [ [ "Linardi", "Michele", "" ], [ "Palpanas", "Themis", "" ] ]

id: 2403.00315
submitter: Andr\'es P\'aez
authors: Andr\'es P\'aez
title: Axe the X in XAI: A Plea for Understandable AI
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.LG
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term "explanation" in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors' claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also provide a more general argument as to why the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation. It would be more fruitful to use the label "understandable AI" to avoid the confusion that surrounds the goal and purposes of XAI. In the second half of the chapter, I argue for a pragmatic conception of understanding that is better suited to play the central role attributed to explanation in XAI. Following Kuorikoski & Ylikoski (2015), the conditions of satisfaction for understanding an ML system are fleshed out in terms of an agent's success in using the system, in drawing correct inferences from it.
versions: [ { "created": "Fri, 1 Mar 2024 06:28:53 GMT", "version": "v1" } ]
update_date: 2024-03-04
authors_parsed: [ [ "Páez", "Andrés", "" ] ]

id: 1712.00975
submitter: Dat Thanh Tran
authors: Dat Thanh Tran, Alexandros Iosifidis, Juho Kanniainen, Moncef Gabbouj
title: Temporal Attention augmented Bilinear Network for Financial Time-Series Data Analysis
comments: 12 pages, 4 figures, 3 tables
journal-ref: null
doi: 10.1109/TNNLS.2018.2869225
report-no: null
categories: cs.CE cs.LG q-fin.CP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Financial time-series forecasting has long been a challenging problem because of the inherently noisy and stochastic nature of the market. In the High-Frequency Trading (HFT), forecasting for trading purposes is even a more challenging task since an automated inference system is required to be both accurate and fast. In this paper, we propose a neural network layer architecture that incorporates the idea of bilinear projection as well as an attention mechanism that enables the layer to detect and focus on crucial temporal information. The resulting network is highly interpretable, given its ability to highlight the importance and contribution of each temporal instance, thus allowing further analysis on the time instances of interest. Our experiments in a large-scale Limit Order Book (LOB) dataset show that a two-hidden-layer network utilizing our proposed layer outperforms by a large margin all existing state-of-the-art results coming from much deeper architectures while requiring far fewer computations.
versions: [ { "created": "Mon, 4 Dec 2017 09:41:24 GMT", "version": "v1" } ]
update_date: 2019-06-11
authors_parsed: [ [ "Tran", "Dat Thanh", "" ], [ "Iosifidis", "Alexandros", "" ], [ "Kanniainen", "Juho", "" ], [ "Gabbouj", "Moncef", "" ] ]

id: 1804.03416
submitter: Katarzyna Biesialska
authors: Katarzyna Biesialska, Xavier Franch, Victor Munt\'es-Mulero
title: Protocol and Tools for Conducting Agile Software Engineering Research in an Industrial-Academic Setting: A Preliminary Study
comments: Accepted to CESI 2018 - International Workshop on Conducting Empirical Studies in Industry (in conjunction with ICSE 2018)
journal-ref: 2018 IEEE/ACM 6th International Workshop on Conducting Empirical Studies in Industry (CESI), Gothenburg, 2018, pp. 29-32
doi: 10.1145/3193965.3193970
report-no: null
categories: cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Conducting empirical research in software engineering industry is a process, and as such, it should be generalizable. The aim of this paper is to discuss how academic researchers may address some of the challenges they encounter during conducting empirical research in the software industry by means of a systematic and structured approach. The protocol developed in this paper should serve as a practical guide for researchers and help them with conducting empirical research in this complex environment.
versions: [ { "created": "Tue, 10 Apr 2018 09:31:08 GMT", "version": "v1" } ]
update_date: 2020-03-17
authors_parsed: [ [ "Biesialska", "Katarzyna", "" ], [ "Franch", "Xavier", "" ], [ "Muntés-Mulero", "Victor", "" ] ]

id: 2003.04117
submitter: M. G. Sarwar Murshed
authors: Rashik Shadman, M.G. Sarwar Murshed, Edward Verenich, Alvaro Velasquez, Faraz Hussain
title: The Utility of Feature Reuse: Transfer Learning in Data-Starved Regimes
comments: 5 pages, 3 figure, conference
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The use of transfer learning with deep neural networks has increasingly become widespread for deploying well-tested computer vision systems to newer domains, especially those with limited datasets. We describe a transfer learning use case for a domain with a data-starved regime, having fewer than 100 labeled target samples. We evaluate the effectiveness of convolutional feature extraction and fine-tuning of overparameterized models with respect to the size of target training data, as well as their generalization performance on data with covariate shift, or out-of-distribution (OOD) data. Our experiments demonstrate that both overparameterization and feature reuse contribute to the successful application of transfer learning in training image classifiers in data-starved regimes. We provide visual explanations to support our findings and conclude that transfer learning enhances the performance of CNN architectures in data-starved regimes.
versions: [ { "created": "Sat, 29 Feb 2020 18:48:58 GMT", "version": "v1" }, { "created": "Thu, 28 Dec 2023 15:53:41 GMT", "version": "v2" } ]
update_date: 2023-12-29
authors_parsed: [ [ "Shadman", "Rashik", "" ], [ "Murshed", "M. G. Sarwar", "" ], [ "Verenich", "Edward", "" ], [ "Velasquez", "Alvaro", "" ], [ "Hussain", "Faraz", "" ] ]

id: 2007.14049
submitter: Stephan Lukasczyk
authors: Stephan Lukasczyk and Florian Kroi{\ss} and Gordon Fraser
title: Automated Unit Test Generation for Python
comments: 15 pages, to be published in Proceedings of the 12th Symposium on Search-Based Software Engineering (SSBSE 2020)
journal-ref: null
doi: 10.1007/978-3-030-59762-7_2
report-no: null
categories: cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Automated unit test generation is an established research field, and mature test generation tools exist for statically typed programming languages such as Java. It is, however, substantially more difficult to automatically generate supportive tests for dynamically typed programming languages such as Python, due to the lack of type information and the dynamic nature of the language. In this paper, we describe a foray into the problem of unit test generation for dynamically typed languages. We introduce Pynguin, an automated unit test generation framework for Python. Using Pynguin, we aim to empirically shed light on two central questions: (1) Do well-established search-based test generation methods, previously evaluated only on statically typed languages, generalise to dynamically typed languages? (2) What is the influence of incomplete type information and dynamic typing on the problem of automated test generation? Our experiments confirm that evolutionary algorithms can outperform random test generation also in the context of Python, and can even alleviate the problem of absent type information to some degree. However, our results demonstrate that dynamic typing nevertheless poses a fundamental issue for test generation, suggesting future work on integrating type inference.
versions: [ { "created": "Tue, 28 Jul 2020 08:12:23 GMT", "version": "v1" } ]
update_date: 2020-10-07
authors_parsed: [ [ "Lukasczyk", "Stephan", "" ], [ "Kroiß", "Florian", "" ], [ "Fraser", "Gordon", "" ] ]

id: 2205.12324
submitter: Ferenc Ill\'es
authors: Ferenc Ill\'es
title: Linearly representable games and pseudo-polynomial calculation of the Shapley value
comments: 17 pages
journal-ref: null
doi: null
report-no: null
categories: cs.GT
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: We introduce the notion of linearly representable games. Broadly speaking, these are TU games that can be described by as many parameters as the number of players, like weighted voting games, airport games, or bankruptcy games. We show that the Shapley value calculation is pseudo-polynomial for linearly representable games. This is a generalization of many classical and recent results in the literature. Our method naturally turns into a strictly polynomial algorithm when the parameters are polynomial in the number of players.
versions: [ { "created": "Tue, 24 May 2022 19:11:12 GMT", "version": "v1" } ]
update_date: 2022-05-26
authors_parsed: [ [ "Illés", "Ferenc", "" ] ]

We introduce the notion of linearly representable games. Broadly speaking, these are TU games that can be described by as many parameters as the number of players, like weighted voting games, airport games, or bankruptcy games. We show that the Shapley value calculation is pseudo-polynomial for linearly representable games. This is a generalization of many classical and recent results in the literature. Our method naturally turns into a strictly polynomial algorithm when the parameters are polynomial in the number of players.
1910.13095
Mengjing Chen
Hua Liu and Mengjing Chen and Xiaolong Wang and Zihe Wang
A Game-theoretical Approach to Analyze Film Release Time
11 pages
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Film release dates play an important part in box office revenues because of the facts of obvious seasonality demand in the film industry and severe competition among films shown at the same time. In this paper, we study how film studios choose release time for movies they produce to maximize their box offices. We first formalize this problem as an attraction competition game where players (film studios) consider both potential profits and competitors' choices when deciding the release time. Then we prove that there always exists a pure Nash equilibrium and give the sufficient condition of the uniqueness of the Nash equilibrium. Our model can be generalized to an extensive game and we compute the subgame-perfect equilibrium for homogeneous players. For the case that one film studio could have multiple movies to release, we prove that finding a player's best response is NP-hard and it does not guarantee the existence of a pure Nash equilibrium. Experiments are provided to support the soundness of our model. In the final state, most of film studios, accounting for 84 percent of the market, would not change their release time. The behaviors of film studios imply they are following some strategies to reach a Nash equilibrium.
[ { "created": "Tue, 29 Oct 2019 05:53:20 GMT", "version": "v1" }, { "created": "Tue, 1 Sep 2020 02:27:19 GMT", "version": "v2" }, { "created": "Mon, 4 Jan 2021 05:46:42 GMT", "version": "v3" } ]
2021-01-05
[ [ "Liu", "Hua", "" ], [ "Chen", "Mengjing", "" ], [ "Wang", "Xiaolong", "" ], [ "Wang", "Zihe", "" ] ]
Film release dates play an important part in box office revenues because of the obvious seasonal demand in the film industry and the severe competition among films shown at the same time. In this paper, we study how film studios choose release times for the movies they produce to maximize their box office revenues. We first formalize this problem as an attraction competition game where players (film studios) consider both potential profits and competitors' choices when deciding the release time. Then we prove that there always exists a pure Nash equilibrium and give a sufficient condition for the uniqueness of the Nash equilibrium. Our model can be generalized to an extensive game, and we compute the subgame-perfect equilibrium for homogeneous players. For the case where one film studio has multiple movies to release, we prove that finding a player's best response is NP-hard and that the existence of a pure Nash equilibrium is not guaranteed. Experiments are provided to support the soundness of our model. In the final state, most film studios, accounting for 84 percent of the market, would not change their release time. The behaviors of film studios imply they are following some strategies to reach a Nash equilibrium.
2407.05061
Monika Wysocza\'nska
Monika Wysocza\'nska, Antonin Vobecky, Amaia Cardiel, Tomasz Trzci\'nski, Renaud Marlet, Andrei Bursuc, Oriane Sim\'eoni
A Study of Test-time Contrastive Concepts for Open-world, Open-vocabulary Semantic Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent VLMs, pre-trained on large amounts of image-text pairs to align both modalities, have opened the way to open-vocabulary semantic segmentation. Given an arbitrary set of textual queries, image regions are assigned the closest query in feature space. However, the usual setup expects the user to list all possible visual concepts that may occur in the image, typically all classes of benchmark datasets, that act as negatives to each other. We consider here the more challenging scenario of segmenting a single concept, given a textual prompt and nothing else. To achieve good results, besides contrasting with the generic 'background' text, we study different ways to generate query-specific test-time contrastive textual concepts, which leverage either the distribution of text in the VLM's training set or crafted LLM prompts. We show the relevance of our approach using a new, specific metric.
[ { "created": "Sat, 6 Jul 2024 12:18:43 GMT", "version": "v1" } ]
2024-07-09
[ [ "Wysoczańska", "Monika", "" ], [ "Vobecky", "Antonin", "" ], [ "Cardiel", "Amaia", "" ], [ "Trzciński", "Tomasz", "" ], [ "Marlet", "Renaud", "" ], [ "Bursuc", "Andrei", "" ], [ "Siméoni", "Oriane", "" ] ]
Recent VLMs, pre-trained on large amounts of image-text pairs to align both modalities, have opened the way to open-vocabulary semantic segmentation. Given an arbitrary set of textual queries, image regions are assigned the closest query in feature space. However, the usual setup expects the user to list all possible visual concepts that may occur in the image, typically all classes of benchmark datasets, that act as negatives to each other. We consider here the more challenging scenario of segmenting a single concept, given a textual prompt and nothing else. To achieve good results, besides contrasting with the generic 'background' text, we study different ways to generate query-specific test-time contrastive textual concepts, which leverage either the distribution of text in the VLM's training set or crafted LLM prompts. We show the relevance of our approach using a new, specific metric.
1911.06147
Sergio Gast\'on Burdisso
Sergio G. Burdisso, Marcelo Errecalde, Manuel Montes-y-G\'omez
t-SS3: a text classifier with dynamic n-grams for early risk detection over text streams
Highlights: (*) A classifier that is able to dynamically learn and recognize important word n-grams. (*) A novel text classifier having the ability to visually explain its rationale. (*) Support for incremental learning and text classification over streams. (*) Efficient model for addressing early risk detection problems
Pattern Recognition Letters, Elsevier, 2020
10.1016/j.patrec.2020.07.001
null
cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recently introduced classifier, called SS3, has shown to be well suited to deal with early risk detection (ERD) problems on text streams. It obtained state-of-the-art performance on early depression and anorexia detection on Reddit in the CLEF's eRisk open tasks. SS3 was created to deal with ERD problems naturally since: it supports incremental training and classification over text streams, and it can visually explain its rationale. However, SS3 processes the input using a bag-of-word model lacking the ability to recognize important word sequences. This aspect could negatively affect the classification performance and also reduces the descriptiveness of visual explanations. In the standard document classification field, it is very common to use word n-grams to try to overcome some of these limitations. Unfortunately, when working with text streams, using n-grams is not trivial since the system must learn and recognize which n-grams are important "on the fly". This paper introduces t-SS3, an extension of SS3 that allows it to recognize useful patterns over text streams dynamically. We evaluated our model in the eRisk 2017 and 2018 tasks on early depression and anorexia detection. Experimental results suggest that t-SS3 is able to improve both current results and the richness of visual explanations.
[ { "created": "Mon, 11 Nov 2019 22:06:40 GMT", "version": "v1" }, { "created": "Wed, 6 May 2020 23:04:03 GMT", "version": "v2" } ]
2020-07-14
[ [ "Burdisso", "Sergio G.", "" ], [ "Errecalde", "Marcelo", "" ], [ "Montes-y-Gómez", "Manuel", "" ] ]
A recently introduced classifier, called SS3, has been shown to be well suited to deal with early risk detection (ERD) problems on text streams. It obtained state-of-the-art performance on early depression and anorexia detection on Reddit in the CLEF's eRisk open tasks. SS3 was created to deal with ERD problems naturally since it supports incremental training and classification over text streams and it can visually explain its rationale. However, SS3 processes the input using a bag-of-words model, lacking the ability to recognize important word sequences. This aspect could negatively affect the classification performance and also reduces the descriptiveness of visual explanations. In the standard document classification field, it is very common to use word n-grams to try to overcome some of these limitations. Unfortunately, when working with text streams, using n-grams is not trivial since the system must learn and recognize which n-grams are important "on the fly". This paper introduces t-SS3, an extension of SS3 that allows it to recognize useful patterns over text streams dynamically. We evaluated our model in the eRisk 2017 and 2018 tasks on early depression and anorexia detection. Experimental results suggest that t-SS3 is able to improve both current results and the richness of visual explanations.
2204.03770
Jazlyn Hellman
Jazlyn Hellman, Jiahao Chen, Md. Sami Uddin, Jinghui Cheng, Jin L.C. Guo
Characterizing User Behaviors in Open-Source Software User Forums: An Empirical Study
15th International Conference on Cooperative and Human Aspects of Software Engineering
null
10.1145/3528579.3529178
null
cs.SE cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
User forums of Open Source Software (OSS) enable end-users to collaboratively discuss problems concerning the OSS applications. Despite decades of research on OSS, we know very little about how end-users engage with OSS communities on these forums, in particular, the challenges that hinder their continuous and meaningful participation in the OSS community. Many previous works are developer-centric and overlook the importance of end-user forums. As a result, end-users' expectations are seldom reflected in OSS development. To better understand user behaviors in OSS user forums, we carried out an empirical study analyzing about 1.3 million posts from user forums of four popular OSS applications: Zotero, Audacity, VLC, and RStudio. Through analyzing the contribution patterns of three common user types (end-users, developers, and organizers), we observed that end-users not only initiated most of the threads (above 96% of threads in three projects, 86% in the other), but also acted as the significant contributors for responding to other users' posts, even though they tended to lack confidence in their activities as indicated by psycho-linguistic analyses. Moreover, we found end-users more open, reflecting a more positive emotion in communication than organizers and developers in the forums. Our work contributes new knowledge about end-users' activities and behaviors in OSS user forums that the vital OSS stakeholders can leverage to improve end-user engagement in the OSS development process.
[ { "created": "Thu, 7 Apr 2022 22:56:18 GMT", "version": "v1" } ]
2022-04-11
[ [ "Hellman", "Jazlyn", "" ], [ "Chen", "Jiahao", "" ], [ "Uddin", "Md. Sami", "" ], [ "Cheng", "Jinghui", "" ], [ "Guo", "Jin L. C.", "" ] ]
User forums of Open Source Software (OSS) enable end-users to collaboratively discuss problems concerning the OSS applications. Despite decades of research on OSS, we know very little about how end-users engage with OSS communities on these forums, in particular, the challenges that hinder their continuous and meaningful participation in the OSS community. Many previous works are developer-centric and overlook the importance of end-user forums. As a result, end-users' expectations are seldom reflected in OSS development. To better understand user behaviors in OSS user forums, we carried out an empirical study analyzing about 1.3 million posts from user forums of four popular OSS applications: Zotero, Audacity, VLC, and RStudio. Through analyzing the contribution patterns of three common user types (end-users, developers, and organizers), we observed that end-users not only initiated most of the threads (above 96% of threads in three projects, 86% in the other), but also acted as significant contributors in responding to other users' posts, even though they tended to lack confidence in their activities as indicated by psycho-linguistic analyses. Moreover, we found end-users to be more open, expressing more positive emotion in communication than organizers and developers in the forums. Our work contributes new knowledge about end-users' activities and behaviors in OSS user forums that the vital OSS stakeholders can leverage to improve end-user engagement in the OSS development process.
2110.08649
Avishek Bose
Avishek Joey Bose, Marcus Brubaker, and Ivan Kobyzev
Equivariant Finite Normalizing Flows
Preprint
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative modeling seeks to uncover the underlying factors that give rise to observed data that can often be modeled as the natural symmetries that manifest themselves through invariances and equivariances to certain transformation laws. However, current approaches to representing these symmetries are couched in the formalism of continuous normalizing flows that require the construction of equivariant vector fields -- inhibiting their simple application to conventional higher dimensional generative modelling domains like natural images. In this paper, we focus on building equivariant normalizing flows using discrete layers. We first theoretically prove the existence of an equivariant map for compact groups whose actions are on compact spaces. We further introduce three new equivariant flows: $G$-Residual Flows, $G$-Coupling Flows, and $G$-Inverse Autoregressive Flows that elevate classical Residual, Coupling, and Inverse Autoregressive Flows with equivariant maps to a prescribed group $G$. Our construction of $G$-Residual Flows are also universal, in the sense that we prove an $G$-equivariant diffeomorphism can be exactly mapped by a $G$-residual flow. Finally, we complement our theoretical insights with demonstrative experiments -- for the first time -- on image datasets like CIFAR-10 and show $G$-Equivariant Finite Normalizing flows lead to increased data efficiency, faster convergence, and improved likelihood estimates.
[ { "created": "Sat, 16 Oct 2021 20:16:00 GMT", "version": "v1" }, { "created": "Fri, 12 Aug 2022 19:12:11 GMT", "version": "v2" } ]
2022-08-16
[ [ "Bose", "Avishek Joey", "" ], [ "Brubaker", "Marcus", "" ], [ "Kobyzev", "Ivan", "" ] ]
Generative modeling seeks to uncover the underlying factors that give rise to observed data, which can often be modeled as the natural symmetries that manifest themselves through invariances and equivariances to certain transformation laws. However, current approaches to representing these symmetries are couched in the formalism of continuous normalizing flows that require the construction of equivariant vector fields -- inhibiting their simple application to conventional higher dimensional generative modelling domains like natural images. In this paper, we focus on building equivariant normalizing flows using discrete layers. We first theoretically prove the existence of an equivariant map for compact groups whose actions are on compact spaces. We further introduce three new equivariant flows: $G$-Residual Flows, $G$-Coupling Flows, and $G$-Inverse Autoregressive Flows that elevate classical Residual, Coupling, and Inverse Autoregressive Flows with equivariant maps to a prescribed group $G$. Our construction of $G$-Residual Flows is also universal, in the sense that we prove a $G$-equivariant diffeomorphism can be exactly mapped by a $G$-residual flow. Finally, we complement our theoretical insights with demonstrative experiments -- for the first time -- on image datasets like CIFAR-10 and show that $G$-equivariant finite normalizing flows lead to increased data efficiency, faster convergence, and improved likelihood estimates.
2210.12539
Tanya Shreedhar
Tanya Shreedhar, Sanjit K. Kaul and Roy D. Yates
ACP+: An Age Control Protocol for the Internet
Under submission. arXiv admin note: text overlap with arXiv:2103.07797, arXiv:1811.03353
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present ACP+, an age control protocol, which is a transport layer protocol that regulates the rate at which update packets from a source are sent over the Internet to a monitor. The source would like to keep the average age of sensed information at the monitor to a minimum, given the network conditions. Extensive experimentation helps us shed light on age control over the current Internet and its implications for sources sending updates over a shared wireless access to monitors in the cloud. We also show that many congestion control algorithms proposed over the years for the Transmission Control Protocol (TCP), including hybrid approaches that achieve higher throughputs at lower delays than traditional loss-based congestion control, are unsuitable for age control.
[ { "created": "Sat, 22 Oct 2022 20:01:22 GMT", "version": "v1" } ]
2022-10-25
[ [ "Shreedhar", "Tanya", "" ], [ "Kaul", "Sanjit K.", "" ], [ "Yates", "Roy D.", "" ] ]
We present ACP+, an age control protocol, which is a transport layer protocol that regulates the rate at which update packets from a source are sent over the Internet to a monitor. The source would like to keep the average age of sensed information at the monitor to a minimum, given the network conditions. Extensive experimentation helps us shed light on age control over the current Internet and its implications for sources sending updates over a shared wireless access to monitors in the cloud. We also show that many congestion control algorithms proposed over the years for the Transmission Control Protocol (TCP), including hybrid approaches that achieve higher throughputs at lower delays than traditional loss-based congestion control, are unsuitable for age control.
2406.08816
Manish Kumar Singh
Manish Kumar Singh, Rajeev Yasarla, Hong Cai, Mingu Lee, Fatih Porikli
ToSA: Token Selective Attention for Efficient Vision Transformers
Accepted at CVPRW 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a novel token selective attention approach, ToSA, which can identify tokens that need to be attended as well as those that can skip a transformer layer. More specifically, a token selector parses the current attention maps and predicts the attention maps for the next layer, which are then used to select the important tokens that should participate in the attention operation. The remaining tokens simply bypass the next layer and are concatenated with the attended ones to re-form a complete set of tokens. In this way, we reduce the quadratic computation and memory costs as fewer tokens participate in self-attention while maintaining the features for all the image patches throughout the network, which allows it to be used for dense prediction tasks. Our experiments show that by applying ToSA, we can significantly reduce computation costs while maintaining accuracy on the ImageNet classification benchmark. Furthermore, we evaluate on the dense prediction task of monocular depth estimation on NYU Depth V2, and show that we can achieve similar depth prediction accuracy using a considerably lighter backbone with ToSA.
[ { "created": "Thu, 13 Jun 2024 05:17:21 GMT", "version": "v1" } ]
2024-06-14
[ [ "Singh", "Manish Kumar", "" ], [ "Yasarla", "Rajeev", "" ], [ "Cai", "Hong", "" ], [ "Lee", "Mingu", "" ], [ "Porikli", "Fatih", "" ] ]
In this paper, we propose a novel token selective attention approach, ToSA, which can identify tokens that need to be attended as well as those that can skip a transformer layer. More specifically, a token selector parses the current attention maps and predicts the attention maps for the next layer, which are then used to select the important tokens that should participate in the attention operation. The remaining tokens simply bypass the next layer and are concatenated with the attended ones to re-form a complete set of tokens. In this way, we reduce the quadratic computation and memory costs as fewer tokens participate in self-attention while maintaining the features for all the image patches throughout the network, which allows it to be used for dense prediction tasks. Our experiments show that by applying ToSA, we can significantly reduce computation costs while maintaining accuracy on the ImageNet classification benchmark. Furthermore, we evaluate on the dense prediction task of monocular depth estimation on NYU Depth V2, and show that we can achieve similar depth prediction accuracy using a considerably lighter backbone with ToSA.
2406.03253
Sreeram Vennam
Akshit Sinha, Sreeram Vennam, Charu Sharma, Ponnurangam Kumaraguru
Generating Explanations for Cellular Neural Networks
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent advancements in graph learning contributed to explaining predictions generated by Graph Neural Networks. However, existing methodologies often fall short when applied to real-world datasets. We introduce HOGE, a framework to capture higher-order structures using cell complexes, which excel at modeling higher-order relationships. In the real world, higher-order structures are ubiquitous like in molecules or social networks, thus our work significantly enhances the practical applicability of graph explanations. HOGE produces clearer and more accurate explanations compared to prior methods. Our method can be integrated with all existing graph explainers, ensuring seamless integration into current frameworks. We evaluate on GraphXAI benchmark datasets, HOGE achieves improved or comparable performance with minimal computational overhead. Ablation studies show that the performance gain observed can be attributed to the higher-order structures that come from introducing cell complexes.
[ { "created": "Wed, 5 Jun 2024 13:31:30 GMT", "version": "v1" }, { "created": "Thu, 6 Jun 2024 01:42:52 GMT", "version": "v2" }, { "created": "Wed, 24 Jul 2024 18:22:22 GMT", "version": "v3" } ]
2024-07-26
[ [ "Sinha", "Akshit", "" ], [ "Vennam", "Sreeram", "" ], [ "Sharma", "Charu", "" ], [ "Kumaraguru", "Ponnurangam", "" ] ]
Recent advancements in graph learning have contributed to explaining predictions generated by Graph Neural Networks. However, existing methodologies often fall short when applied to real-world datasets. We introduce HOGE, a framework to capture higher-order structures using cell complexes, which excel at modeling higher-order relationships. In the real world, higher-order structures are ubiquitous, as in molecules or social networks, thus our work significantly enhances the practical applicability of graph explanations. HOGE produces clearer and more accurate explanations compared to prior methods. Our method can be integrated with all existing graph explainers, ensuring seamless integration into current frameworks. We evaluate on GraphXAI benchmark datasets, where HOGE achieves improved or comparable performance with minimal computational overhead. Ablation studies show that the performance gain observed can be attributed to the higher-order structures that come from introducing cell complexes.
2012.15355
Peng Xu
Peng Xu, Dhruv Kumar, Wei Yang, Wenjie Zi, Keyi Tang, Chenyang Huang, Jackie Chi Kit Cheung, Simon J.D. Prince, Yanshuai Cao
Optimizing Deeper Transformers on Small Datasets
Accepted at ACL 2021 main conference
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
It is a common belief that training deep transformers from scratch requires large datasets. Consequently, for small datasets, people usually use shallow and simple additional layers on top of pre-trained models during fine-tuning. This work shows that this does not always need to be the case: with proper initialization and optimization, the benefits of very deep transformers can carry over to challenging tasks with small datasets, including Text-to-SQL semantic parsing and logical reading comprehension. In particular, we successfully train $48$ layers of transformers, comprising $24$ fine-tuned layers from pre-trained RoBERTa and $24$ relation-aware layers trained from scratch. With fewer training steps and no task-specific pre-training, we obtain the state-of-the-art performance on the challenging cross-domain Text-to-SQL parsing benchmark Spider. We achieve this by deriving a novel Data-dependent Transformer Fixed-update initialization scheme (DT-Fixup), inspired by the prior T-Fixup work. Further error analysis shows that increasing depth can help improve generalization on small datasets for hard cases that require reasoning and structural understanding.
[ { "created": "Wed, 30 Dec 2020 22:53:49 GMT", "version": "v1" }, { "created": "Wed, 19 May 2021 17:12:23 GMT", "version": "v2" }, { "created": "Thu, 27 May 2021 16:53:14 GMT", "version": "v3" }, { "created": "Mon, 31 May 2021 16:45:47 GMT", "version": "v4" } ]
2021-06-01
[ [ "Xu", "Peng", "" ], [ "Kumar", "Dhruv", "" ], [ "Yang", "Wei", "" ], [ "Zi", "Wenjie", "" ], [ "Tang", "Keyi", "" ], [ "Huang", "Chenyang", "" ], [ "Cheung", "Jackie Chi Kit", "" ], [ "Prince", "Simon J. D.", "" ], [ "Cao", "Yanshuai", "" ] ]
It is a common belief that training deep transformers from scratch requires large datasets. Consequently, for small datasets, people usually use shallow and simple additional layers on top of pre-trained models during fine-tuning. This work shows that this does not always need to be the case: with proper initialization and optimization, the benefits of very deep transformers can carry over to challenging tasks with small datasets, including Text-to-SQL semantic parsing and logical reading comprehension. In particular, we successfully train $48$ layers of transformers, comprising $24$ fine-tuned layers from pre-trained RoBERTa and $24$ relation-aware layers trained from scratch. With fewer training steps and no task-specific pre-training, we obtain the state-of-the-art performance on the challenging cross-domain Text-to-SQL parsing benchmark Spider. We achieve this by deriving a novel Data-dependent Transformer Fixed-update initialization scheme (DT-Fixup), inspired by the prior T-Fixup work. Further error analysis shows that increasing depth can help improve generalization on small datasets for hard cases that require reasoning and structural understanding.
1905.07512
Daniel Gordon
Daniel Gordon, Abhishek Kadian, Devi Parikh, Judy Hoffman, Dhruv Batra
SplitNet: Sim2Sim and Task2Task Transfer for Embodied Visual Navigation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose SplitNet, a method for decoupling visual perception and policy learning. By incorporating auxiliary tasks and selective learning of portions of the model, we explicitly decompose the learning objectives for visual navigation into perceiving the world and acting on that perception. We show dramatic improvements over baseline models on transferring between simulators, an encouraging step towards Sim2Real. Additionally, SplitNet generalizes better to unseen environments from the same simulator and transfers faster and more effectively to novel embodied navigation tasks. Further, given only a small sample from a target domain, SplitNet can match the performance of traditional end-to-end pipelines which receive the entire dataset. Code is available https://github.com/facebookresearch/splitnet
[ { "created": "Sat, 18 May 2019 00:57:19 GMT", "version": "v1" }, { "created": "Tue, 21 May 2019 00:50:54 GMT", "version": "v2" }, { "created": "Wed, 23 Oct 2019 19:07:16 GMT", "version": "v3" } ]
2019-10-25
[ [ "Gordon", "Daniel", "" ], [ "Kadian", "Abhishek", "" ], [ "Parikh", "Devi", "" ], [ "Hoffman", "Judy", "" ], [ "Batra", "Dhruv", "" ] ]
We propose SplitNet, a method for decoupling visual perception and policy learning. By incorporating auxiliary tasks and selective learning of portions of the model, we explicitly decompose the learning objectives for visual navigation into perceiving the world and acting on that perception. We show dramatic improvements over baseline models on transferring between simulators, an encouraging step towards Sim2Real. Additionally, SplitNet generalizes better to unseen environments from the same simulator and transfers faster and more effectively to novel embodied navigation tasks. Further, given only a small sample from a target domain, SplitNet can match the performance of traditional end-to-end pipelines which receive the entire dataset. Code is available at https://github.com/facebookresearch/splitnet
1705.04981
Bing Li
Bing Li, Ning Chen, Yang Xu, Ulf Schlichtmann
On Timing Model Extraction and Hierarchical Statistical Timing Analysis
null
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 32(3), 367-380, March 2013
10.1109/TCAD.2012.2228305
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the challenges to apply Statistical Static Timing Analysis (SSTA) in hierarchical design flow, where modules supplied by IP vendors are used to hide design details for IP protection and to reduce the complexity of design and verification. For the three basic circuit types, combinational, flip-flop-based and latch-controlled, we propose methods to extract timing models which contain interfacing as well as compressed internal constraints. Using these compact timing models the runtime of full-chip timing analysis can be reduced, while circuit details from IP vendors are not exposed. We also propose a method to reconstruct the correlation between modules during full-chip timing analysis. This correlation can not be incorporated into timing models because it depends on the layout of the corresponding modules in the chip. In addition, we investigate how to apply the extracted timing models with the reconstructed correlation to evaluate the performance of the complete design. Experiments demonstrate that using the extracted timing models and reconstructed correlation full-chip timing analysis can be several times faster than applying the flattened circuit directly, while the accuracy of statistical timing analysis is still well maintained.
[ { "created": "Sun, 14 May 2017 15:49:14 GMT", "version": "v1" } ]
2017-05-16
[ [ "Li", "Bing", "" ], [ "Chen", "Ning", "" ], [ "Xu", "Yang", "" ], [ "Schlichtmann", "Ulf", "" ] ]
In this paper, we investigate the challenges of applying Statistical Static Timing Analysis (SSTA) in a hierarchical design flow, where modules supplied by IP vendors are used to hide design details for IP protection and to reduce the complexity of design and verification. For the three basic circuit types, combinational, flip-flop-based and latch-controlled, we propose methods to extract timing models which contain interfacing as well as compressed internal constraints. Using these compact timing models, the runtime of full-chip timing analysis can be reduced, while circuit details from IP vendors are not exposed. We also propose a method to reconstruct the correlation between modules during full-chip timing analysis. This correlation cannot be incorporated into the timing models because it depends on the layout of the corresponding modules in the chip. In addition, we investigate how to apply the extracted timing models with the reconstructed correlation to evaluate the performance of the complete design. Experiments demonstrate that using the extracted timing models and reconstructed correlation, full-chip timing analysis can be several times faster than applying the flattened circuit directly, while the accuracy of statistical timing analysis is still well maintained.
1706.08436
Juan Garcia Torres
Juan Garcia-Torres, Diana Caro-Prieto
Image Processing in Floriculture Using a Robotic Mobile Platform
4 Pages, Paper made at Fundacion Universitaria Agraria de Colombia
null
null
null
cs.CY cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Colombia has a privileged geographical location which makes it a cornerstone and equidistant point to all regional markets. The country has great ecological diversity and is one of the largest suppliers of flowers for the US. Colombian flower companies have made innovations in the marketing process, using methods to reach all conditions for final consumers. This article develops a monitoring system for floriculture industries. The system was implemented on a robotic platform. This device has the ability to be programmed in different programming languages. The robot takes the necessary environment information from its camera. The algorithm of the monitoring system was developed with the image processing toolbox in Matlab. The implemented algorithm acquires images through the camera and performs preprocessing: noise filtering, color enhancement, and dimension adjustment to increase processing speed. Then, the image is segmented by color, and morphological operations (erosion and dilation) are applied to the binarized image to extract relevant features such as centroid, perimeter, and area. The data obtained from the image processing helps the robot automatically identify targets, orient itself, and move towards them. The results also generate a quality diagnostic for each scanned object.
[ { "created": "Mon, 26 Jun 2017 15:21:44 GMT", "version": "v1" } ]
2017-06-27
[ [ "Garcia-Torres", "Juan", "" ], [ "Caro-Prieto", "Diana", "" ] ]
Colombia has a privileged geographical location which makes it a cornerstone and equidistant point to all regional markets. The country has great ecological diversity and is one of the largest suppliers of flowers for the US. Colombian flower companies have made innovations in the marketing process, using methods to reach all conditions for final consumers. This article develops a monitoring system for floriculture industries. The system was implemented on a robotic platform. This device has the ability to be programmed in different programming languages. The robot takes the necessary environment information from its camera. The algorithm of the monitoring system was developed with the image processing toolbox in Matlab. The implemented algorithm acquires images through the camera and performs preprocessing: noise filtering, color enhancement, and dimension adjustment to increase processing speed. Then, the image is segmented by color, and morphological operations (erosion and dilation) are applied to the binarized image to extract relevant features such as centroid, perimeter, and area. The data obtained from the image processing helps the robot automatically identify targets, orient itself, and move towards them. The results also generate a quality diagnostic for each scanned object.
2003.06518
Jie Ying Wu
Jie Ying Wu, Peter Kazanzides, Mathias Unberath
Leveraging Vision and Kinematics Data to Improve Realism of Biomechanic Soft-tissue Simulation for Robotic Surgery
12 pages, 4 figures, to be published in IJCARS IPCAI special edition 2020
null
10.1007/s11548-020-02139-6
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: Surgical simulations play an increasingly important role in surgeon education and in developing algorithms that enable robots to perform surgical subtasks. To model anatomy, Finite Element Method (FEM) simulations have been held as the gold standard for calculating accurate soft-tissue deformation. Unfortunately, their accuracy is highly dependent on the simulation parameters, which can be difficult to obtain. Methods: In this work, we investigate how live data acquired during any robotic endoscopic surgical procedure may be used to correct for inaccurate FEM simulation results. Since FEMs are calculated from initial parameters and cannot directly incorporate observations, we propose to add a correction factor that accounts for the discrepancy between simulation and observations. We train a network to predict this correction factor. Results: To evaluate our method, we use an open-source da Vinci Surgical System to probe a soft-tissue phantom and replay the interaction in simulation. We train the network to correct for the difference between the predicted mesh position and the measured point cloud. This results in a 15-30% improvement in the mean distance, demonstrating the effectiveness of our approach across a large range of simulation parameters. Conclusion: We show a first step towards a framework that synergistically combines the benefits of model-based simulation and real-time observations. It corrects discrepancies between simulation and the scene that result from inaccurate modeling parameters. This can provide a more accurate simulation environment for surgeons and better data with which to train algorithms.
[ { "created": "Sat, 14 Mar 2020 00:16:08 GMT", "version": "v1" } ]
2020-03-27
[ [ "Wu", "Jie Ying", "" ], [ "Kazanzides", "Peter", "" ], [ "Unberath", "Mathias", "" ] ]
Purpose: Surgical simulations play an increasingly important role in surgeon education and in developing algorithms that enable robots to perform surgical subtasks. To model anatomy, Finite Element Method (FEM) simulations have been held as the gold standard for calculating accurate soft-tissue deformation. Unfortunately, their accuracy is highly dependent on the simulation parameters, which can be difficult to obtain. Methods: In this work, we investigate how live data acquired during any robotic endoscopic surgical procedure may be used to correct for inaccurate FEM simulation results. Since FEMs are calculated from initial parameters and cannot directly incorporate observations, we propose to add a correction factor that accounts for the discrepancy between simulation and observations. We train a network to predict this correction factor. Results: To evaluate our method, we use an open-source da Vinci Surgical System to probe a soft-tissue phantom and replay the interaction in simulation. We train the network to correct for the difference between the predicted mesh position and the measured point cloud. This results in a 15-30% improvement in the mean distance, demonstrating the effectiveness of our approach across a large range of simulation parameters. Conclusion: We show a first step towards a framework that synergistically combines the benefits of model-based simulation and real-time observations. It corrects discrepancies between simulation and the scene that result from inaccurate modeling parameters. This can provide a more accurate simulation environment for surgeons and better data with which to train algorithms.
2402.14154
Yiqiao Jin
Yiqiao Jin, Minje Choi, Gaurav Verma, Jindong Wang, Srijan Kumar
MM-Soc: Benchmarking Multimodal Large Language Models in Social Media Platforms
In Proceedings of ACL 2024
null
null
null
cs.CL cs.CV cs.CY
http://creativecommons.org/licenses/by/4.0/
Social media platforms are hubs for multimodal information exchange, encompassing text, images, and videos, making it challenging for machines to comprehend the information or emotions associated with interactions in online spaces. Multimodal Large Language Models (MLLMs) have emerged as a promising solution to these challenges, yet they struggle to accurately interpret human emotions and complex content such as misinformation. This paper introduces MM-Soc, a comprehensive benchmark designed to evaluate MLLMs' understanding of multimodal social media content. MM-Soc compiles prominent multimodal datasets and incorporates a novel large-scale YouTube tagging dataset, targeting a range of tasks including misinformation detection, hate speech detection, and social context generation. Through our exhaustive evaluation on ten size-variants of four open-source MLLMs, we have identified significant performance disparities, highlighting the need for advancements in models' social understanding capabilities. Our analysis reveals that, in a zero-shot setting, various types of MLLMs generally exhibit difficulties in handling social media tasks. However, MLLMs demonstrate performance improvements post fine-tuning, suggesting potential pathways for improvement. Our code and data are available at https://github.com/claws-lab/MMSoc.git.
[ { "created": "Wed, 21 Feb 2024 22:27:40 GMT", "version": "v1" }, { "created": "Wed, 24 Jul 2024 15:19:20 GMT", "version": "v2" } ]
2024-07-25
[ [ "Jin", "Yiqiao", "" ], [ "Choi", "Minje", "" ], [ "Verma", "Gaurav", "" ], [ "Wang", "Jindong", "" ], [ "Kumar", "Srijan", "" ] ]
Social media platforms are hubs for multimodal information exchange, encompassing text, images, and videos, making it challenging for machines to comprehend the information or emotions associated with interactions in online spaces. Multimodal Large Language Models (MLLMs) have emerged as a promising solution to these challenges, yet they struggle to accurately interpret human emotions and complex content such as misinformation. This paper introduces MM-Soc, a comprehensive benchmark designed to evaluate MLLMs' understanding of multimodal social media content. MM-Soc compiles prominent multimodal datasets and incorporates a novel large-scale YouTube tagging dataset, targeting a range of tasks including misinformation detection, hate speech detection, and social context generation. Through our exhaustive evaluation on ten size-variants of four open-source MLLMs, we have identified significant performance disparities, highlighting the need for advancements in models' social understanding capabilities. Our analysis reveals that, in a zero-shot setting, various types of MLLMs generally exhibit difficulties in handling social media tasks. However, MLLMs demonstrate performance improvements post fine-tuning, suggesting potential pathways for improvement. Our code and data are available at https://github.com/claws-lab/MMSoc.git.
2404.05128
Nazifa Khan
Nazifa Azam Khan, Mikolaj Cieslak, Ian McQuillan
Importance of realism in procedurally-generated synthetic images for deep learning: case studies in maize and canola
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial neural networks are often used to identify features of crop plants. However, training their models requires many annotated images, which can be expensive and time-consuming to acquire. Procedural models of plants, such as those developed with Lindenmayer systems (L-systems), can be created to produce visually realistic simulations, and hence images of plant simulations, where annotations are implicitly known. These synthetic images can either augment or completely replace real images in training neural networks for phenotyping tasks. In this paper, we systematically vary the amounts of real and synthetic images used for training in both maize and canola to better understand situations where synthetic images generated from L-systems can help prediction on real images. This work also explores the degree to which realism in the synthetic images improves prediction. We have five different variants of a procedural canola model (these variants were created by tuning the realism while using calibration), and the deep learning results show how drastically prediction improves as the canola synthetic images are made more realistic. Furthermore, we see how neural network predictions can be used to help calibrate L-systems themselves, creating a feedback loop.
[ { "created": "Mon, 8 Apr 2024 01:08:41 GMT", "version": "v1" }, { "created": "Wed, 15 May 2024 16:55:03 GMT", "version": "v2" } ]
2024-05-16
[ [ "Khan", "Nazifa Azam", "" ], [ "Cieslak", "Mikolaj", "" ], [ "McQuillan", "Ian", "" ] ]
Artificial neural networks are often used to identify features of crop plants. However, training their models requires many annotated images, which can be expensive and time-consuming to acquire. Procedural models of plants, such as those developed with Lindenmayer systems (L-systems), can be created to produce visually realistic simulations, and hence images of plant simulations, where annotations are implicitly known. These synthetic images can either augment or completely replace real images in training neural networks for phenotyping tasks. In this paper, we systematically vary the amounts of real and synthetic images used for training in both maize and canola to better understand situations where synthetic images generated from L-systems can help prediction on real images. This work also explores the degree to which realism in the synthetic images improves prediction. We have five different variants of a procedural canola model (these variants were created by tuning the realism while using calibration), and the deep learning results show how drastically prediction improves as the canola synthetic images are made more realistic. Furthermore, we see how neural network predictions can be used to help calibrate L-systems themselves, creating a feedback loop.
1602.02509
Wouter Bokslag
Wouter Bokslag, Manon de Vries
Evaluating e-voting: theory and practice
19 pages
null
null
null
cs.CY cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
In the Netherlands as well as many other countries, the use of electronic voting solutions is a recurrent topic of discussion. While electronic voting certainly has advantages over paper voting, there are also important risks involved. This paper presents an analysis of benefits and risks of electronic voting, and shows the relevance of these issues by means of three case studies of real-world implementations. Additionally, techniques that may be employed to improve upon many of the current systems are presented. We conclude that the advantages of e-voting do not outweigh the disadvantages, as the resulting reduced verifiability and transparency seem hard to overcome.
[ { "created": "Mon, 8 Feb 2016 09:47:04 GMT", "version": "v1" } ]
2016-02-09
[ [ "Bokslag", "Wouter", "" ], [ "de Vries", "Manon", "" ] ]
In the Netherlands as well as many other countries, the use of electronic voting solutions is a recurrent topic of discussion. While electronic voting certainly has advantages over paper voting, there are also important risks involved. This paper presents an analysis of benefits and risks of electronic voting, and shows the relevance of these issues by means of three case studies of real-world implementations. Additionally, techniques that may be employed to improve upon many of the current systems are presented. We conclude that the advantages of e-voting do not outweigh the disadvantages, as the resulting reduced verifiability and transparency seem hard to overcome.
2010.11522
Jiaoyan Chen
Ziheng Zhang and Jiaoyan Chen and Xi Chen and Hualuo Liu and Yuejia Xiang and Bo Liu and Yefeng Zheng
An Industry Evaluation of Embedding-based Entity Alignment
accepted by COLING 2020
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embedding-based entity alignment has been widely investigated in recent years, but most proposed methods still rely on an ideal supervised learning setting with a large number of unbiased seed mappings for training and validation, which significantly limits their usage. In this study, we evaluate those state-of-the-art methods in an industrial context, where the impact of seed mappings with different sizes and different biases is explored. Besides the popular benchmarks from DBpedia and Wikidata, we contribute and evaluate a new industrial benchmark that is extracted from two heterogeneous knowledge graphs (KGs) under deployment for medical applications. The experimental results enable the analysis of the advantages and disadvantages of these alignment methods and the further discussion of suitable strategies for their industrial deployment.
[ { "created": "Thu, 22 Oct 2020 08:33:58 GMT", "version": "v1" }, { "created": "Sat, 7 Nov 2020 12:25:10 GMT", "version": "v2" } ]
2020-11-10
[ [ "Zhang", "Ziheng", "" ], [ "Chen", "Jiaoyan", "" ], [ "Chen", "Xi", "" ], [ "Liu", "Hualuo", "" ], [ "Xiang", "Yuejia", "" ], [ "Liu", "Bo", "" ], [ "Zheng", "Yefeng", "" ] ]
Embedding-based entity alignment has been widely investigated in recent years, but most proposed methods still rely on an ideal supervised learning setting with a large number of unbiased seed mappings for training and validation, which significantly limits their usage. In this study, we evaluate those state-of-the-art methods in an industrial context, where the impact of seed mappings with different sizes and different biases is explored. Besides the popular benchmarks from DBpedia and Wikidata, we contribute and evaluate a new industrial benchmark that is extracted from two heterogeneous knowledge graphs (KGs) under deployment for medical applications. The experimental results enable the analysis of the advantages and disadvantages of these alignment methods and the further discussion of suitable strategies for their industrial deployment.
2106.07296
Rafel Palliser Sans
Rafel Palliser-Sans
RRULES: An improvement of the RULES rule-based classifier
6 pages, 2 algorithms
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
RRULES is presented as an improvement and optimization over RULES, a simple inductive learning algorithm for extracting IF-THEN rules from a set of training examples. RRULES optimizes the algorithm by implementing a more effective mechanism to detect irrelevant rules, while checking the stopping conditions more often. This results in a more compact rule set containing more general rules, which prevents overfitting the training set and yields a higher test accuracy. Moreover, the results show that RRULES outperforms the original algorithm by reducing the coverage rate up to a factor of 7 while running two to three times faster consistently over several datasets.
[ { "created": "Mon, 14 Jun 2021 10:42:12 GMT", "version": "v1" } ]
2021-06-15
[ [ "Palliser-Sans", "Rafel", "" ] ]
RRULES is presented as an improvement and optimization over RULES, a simple inductive learning algorithm for extracting IF-THEN rules from a set of training examples. RRULES optimizes the algorithm by implementing a more effective mechanism to detect irrelevant rules, while checking the stopping conditions more often. This results in a more compact rule set containing more general rules, which prevents overfitting the training set and yields a higher test accuracy. Moreover, the results show that RRULES outperforms the original algorithm by reducing the coverage rate up to a factor of 7 while running two to three times faster consistently over several datasets.
1804.05966
Kyle Kloster
Eric Horton, Kyle Kloster, Blair D. Sullivan
Subgraph centrality and walk-regularity
23 pages, 2 figures, links to software repository
null
null
null
cs.SI physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Matrix-based centrality measures have enjoyed significant popularity in network analysis, in no small part due to our ability to rigorously analyze their behavior as parameters vary. Recent work has considered the relationship between subgraph centrality, which is defined using the matrix exponential $f(x) = \exp(x)$, and the walk structure of a network. In a walk-regular graph, the number of closed walks of each length must be the same for all nodes, implying uniform $f$-subgraph centralities for any $f$ (or maximum $f$-$\textit{walk entropy}$). We consider when non--walk-regular graphs can achieve maximum entropy, calling such graphs $\textit{entropic}$. For parameterized measures, we are also interested in which values of the parameter witness this uniformity. To date, only one entropic graph has been identified, with only two witnessing parameter values, raising the question of how many such graphs and parameters exist. We resolve these questions by constructing infinite families of entropic graphs, as well as a family of witnessing parameters with a limit point at zero.
[ { "created": "Mon, 16 Apr 2018 22:56:46 GMT", "version": "v1" }, { "created": "Mon, 29 Oct 2018 15:25:14 GMT", "version": "v2" }, { "created": "Mon, 4 Feb 2019 22:27:40 GMT", "version": "v3" } ]
2019-02-06
[ [ "Horton", "Eric", "" ], [ "Kloster", "Kyle", "" ], [ "Sullivan", "Blair D.", "" ] ]
Matrix-based centrality measures have enjoyed significant popularity in network analysis, in no small part due to our ability to rigorously analyze their behavior as parameters vary. Recent work has considered the relationship between subgraph centrality, which is defined using the matrix exponential $f(x) = \exp(x)$, and the walk structure of a network. In a walk-regular graph, the number of closed walks of each length must be the same for all nodes, implying uniform $f$-subgraph centralities for any $f$ (or maximum $f$-$\textit{walk entropy}$). We consider when non--walk-regular graphs can achieve maximum entropy, calling such graphs $\textit{entropic}$. For parameterized measures, we are also interested in which values of the parameter witness this uniformity. To date, only one entropic graph has been identified, with only two witnessing parameter values, raising the question of how many such graphs and parameters exist. We resolve these questions by constructing infinite families of entropic graphs, as well as a family of witnessing parameters with a limit point at zero.
1103.0087
Ephzibah Ep
E.P.Ephzibah
Cost effective approach on feature selection using genetic algorithms and fuzzy logic for diabetes diagnosis
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A way to enhance the performance of a model that combines genetic algorithms and fuzzy logic for feature selection and classification is proposed. Early diagnosis of any disease at a lower cost is preferable. Diabetes is one such disease. Diabetes has become the fourth leading cause of death in developed countries, and there is substantial evidence that it is reaching epidemic proportions in many developing and newly industrialized nations. In medical diagnosis, patterns consist of observable symptoms along with the results of diagnostic tests. These tests have various associated costs and risks. In the automated design of pattern classification, the proposed system solves the feature subset selection problem: the task of identifying and selecting a useful subset of pattern-representing features from a larger set of features. Using a fuzzy rule-based classification system, the proposed approach is shown to improve the classification accuracy.
[ { "created": "Tue, 1 Mar 2011 06:10:18 GMT", "version": "v1" } ]
2011-03-02
[ [ "Ephzibah", "E. P.", "" ] ]
A way to enhance the performance of a model that combines genetic algorithms and fuzzy logic for feature selection and classification is proposed. Early diagnosis of any disease at a lower cost is preferable. Diabetes is one such disease. Diabetes has become the fourth leading cause of death in developed countries, and there is substantial evidence that it is reaching epidemic proportions in many developing and newly industrialized nations. In medical diagnosis, patterns consist of observable symptoms along with the results of diagnostic tests. These tests have various associated costs and risks. In the automated design of pattern classification, the proposed system solves the feature subset selection problem: the task of identifying and selecting a useful subset of pattern-representing features from a larger set of features. Using a fuzzy rule-based classification system, the proposed approach is shown to improve the classification accuracy.
1308.6807
Joohwan Kim
Joohwan Kim and R. Srikant
Achieving the Optimal Streaming Capacity and Delay Using Random Regular Digraphs in P2P Networks
null
null
null
null
cs.NI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In earlier work, we showed that it is possible to achieve $O(\log N)$ streaming delay with high probability in a peer-to-peer network, where each peer has as few as four neighbors, while achieving any arbitrary fraction of the maximum possible streaming rate. However, the constant in the $O(\log N)$ delay term becomes rather large as we get closer to the maximum streaming rate. In this paper, we design an alternative pairing and chunk dissemination algorithm that allows us to transmit at the maximum streaming rate while ensuring that all but a negligible fraction of the peers receive the data stream with $O(\log N)$ delay with high probability. The result is established by examining the properties of the graph formed by the union of two or more random 1-regular digraphs, i.e., directed graphs in which each node has an incoming and an outgoing node degree both equal to one.
[ { "created": "Fri, 30 Aug 2013 17:49:55 GMT", "version": "v1" } ]
2013-09-02
[ [ "Kim", "Joohwan", "" ], [ "Srikant", "R.", "" ] ]
In earlier work, we showed that it is possible to achieve $O(\log N)$ streaming delay with high probability in a peer-to-peer network, where each peer has as few as four neighbors, while achieving any arbitrary fraction of the maximum possible streaming rate. However, the constant in the $O(\log N)$ delay term becomes rather large as we get closer to the maximum streaming rate. In this paper, we design an alternative pairing and chunk dissemination algorithm that allows us to transmit at the maximum streaming rate while ensuring that all but a negligible fraction of the peers receive the data stream with $O(\log N)$ delay with high probability. The result is established by examining the properties of the graph formed by the union of two or more random 1-regular digraphs, i.e., directed graphs in which each node has an incoming and an outgoing node degree both equal to one.
2301.00621
Dor Tsur
Dor Tsur, Ziv Aharoni, Ziv Goldfeld and Haim Permuter
Data-Driven Optimization of Directed Information over Discrete Alphabets
null
null
null
null
cs.IT cs.LG math.IT
http://creativecommons.org/licenses/by/4.0/
Directed information (DI) is a fundamental measure for the study and analysis of sequential stochastic models. In particular, when optimized over input distributions it characterizes the capacity of general communication channels. However, analytic computation of DI is typically intractable and existing optimization techniques over discrete input alphabets require knowledge of the channel model, which renders them inapplicable when only samples are available. To overcome these limitations, we propose a novel estimation-optimization framework for DI over discrete input spaces. We formulate DI optimization as a Markov decision process and leverage reinforcement learning techniques to optimize a deep generative model of the input process probability mass function (PMF). Combining this optimizer with the recently developed DI neural estimator, we obtain an end-to-end estimation-optimization algorithm which is applied to estimating the (feedforward and feedback) capacity of various discrete channels with memory. Furthermore, we demonstrate how to use the optimized PMF model to (i) obtain theoretical bounds on the feedback capacity of unifilar finite-state channels; and (ii) perform probabilistic shaping of constellations in the peak power-constrained additive white Gaussian noise channel.
[ { "created": "Mon, 2 Jan 2023 12:25:40 GMT", "version": "v1" } ]
2023-01-03
[ [ "Tsur", "Dor", "" ], [ "Aharoni", "Ziv", "" ], [ "Goldfeld", "Ziv", "" ], [ "Permuter", "Haim", "" ] ]
Directed information (DI) is a fundamental measure for the study and analysis of sequential stochastic models. In particular, when optimized over input distributions it characterizes the capacity of general communication channels. However, analytic computation of DI is typically intractable and existing optimization techniques over discrete input alphabets require knowledge of the channel model, which renders them inapplicable when only samples are available. To overcome these limitations, we propose a novel estimation-optimization framework for DI over discrete input spaces. We formulate DI optimization as a Markov decision process and leverage reinforcement learning techniques to optimize a deep generative model of the input process probability mass function (PMF). Combining this optimizer with the recently developed DI neural estimator, we obtain an end-to-end estimation-optimization algorithm which is applied to estimating the (feedforward and feedback) capacity of various discrete channels with memory. Furthermore, we demonstrate how to use the optimized PMF model to (i) obtain theoretical bounds on the feedback capacity of unifilar finite-state channels; and (ii) perform probabilistic shaping of constellations in the peak power-constrained additive white Gaussian noise channel.
1908.08044
Jian Yao
Jian Yao and Ahmad Al-Dahle
Coarse-to-fine Optimization for Speech Enhancement
null
Interspeech 2019
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose the coarse-to-fine optimization for the task of speech enhancement. Cosine similarity loss [1] has proven to be an effective metric to measure similarity of speech signals. However, due to the large variance of the enhanced speech with even the same cosine similarity loss in high dimensional space, a deep neural network learnt with this loss might not be able to predict enhanced speech with good quality. Our coarse-to-fine strategy optimizes the cosine similarity loss for different granularities so that more constraints are added to the prediction from high dimension to relatively low dimension. In this way, the enhanced speech will better resemble the clean speech. Experimental results show the effectiveness of our proposed coarse-to-fine optimization in both discriminative models and generative models. Moreover, we apply the coarse-to-fine strategy to the adversarial loss in generative adversarial network (GAN) and propose dynamic perceptual loss, which dynamically computes the adversarial loss from coarse resolution to fine resolution. Dynamic perceptual loss further improves the accuracy and achieves state-of-the-art results compared with other generative models.
[ { "created": "Wed, 21 Aug 2019 17:51:29 GMT", "version": "v1" } ]
2019-08-23
[ [ "Yao", "Jian", "" ], [ "Al-Dahle", "Ahmad", "" ] ]
In this paper, we propose the coarse-to-fine optimization for the task of speech enhancement. Cosine similarity loss [1] has proven to be an effective metric to measure similarity of speech signals. However, due to the large variance of the enhanced speech with even the same cosine similarity loss in high dimensional space, a deep neural network learnt with this loss might not be able to predict enhanced speech with good quality. Our coarse-to-fine strategy optimizes the cosine similarity loss for different granularities so that more constraints are added to the prediction from high dimension to relatively low dimension. In this way, the enhanced speech will better resemble the clean speech. Experimental results show the effectiveness of our proposed coarse-to-fine optimization in both discriminative models and generative models. Moreover, we apply the coarse-to-fine strategy to the adversarial loss in generative adversarial network (GAN) and propose dynamic perceptual loss, which dynamically computes the adversarial loss from coarse resolution to fine resolution. Dynamic perceptual loss further improves the accuracy and achieves state-of-the-art results compared with other generative models.
2111.04308
Cedric Cook
Cedric Cook
Learning Context-Aware Representations of Subtrees
36 pages, 12 Figures. This work was carried out as a Master Thesis at Klarna Bank AB, under supervision of Stefan Magureanu
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
This thesis tackles the problem of learning efficient representations of complex, structured data with a natural application to web page and element classification. We hypothesise that the context around the element inside the web page is of high value to the problem and is currently under exploited. This thesis aims to solve the problem of classifying web elements as subtrees of a DOM tree by also considering their context. To achieve this, first we discuss current expert knowledge systems that work on structures, such as Tree-LSTM. Then, we propose context-aware extensions to this model. We show that the new model achieves an average F1-score of 0.7973 on a multi-class web classification task. This model generates better representations for various subtrees and may be used for applications such element classification, state estimators in reinforcement learning over the Web and more.
[ { "created": "Mon, 8 Nov 2021 07:43:14 GMT", "version": "v1" } ]
2021-11-09
[ [ "Cook", "Cedric", "" ] ]
This thesis tackles the problem of learning efficient representations of complex, structured data, with a natural application to web page and element classification. We hypothesise that the context around an element inside a web page is of high value to the problem and is currently underexploited. This thesis aims to solve the problem of classifying web elements as subtrees of a DOM tree by also considering their context. To achieve this, we first discuss current expert knowledge systems that work on structures, such as Tree-LSTM. Then, we propose context-aware extensions to this model. We show that the new model achieves an average F1-score of 0.7973 on a multi-class web classification task. This model generates better representations for various subtrees and may be used for applications such as element classification, state estimators in reinforcement learning over the Web, and more.
2402.17358
Xinyu Lu
Xinyu Lu, Bowen Yu, Yaojie Lu, Hongyu Lin, Haiyang Yu, Le Sun, Xianpei Han, Yongbin Li
SoFA: Shielded On-the-fly Alignment via Priority Rule Following
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The alignment problem in Large Language Models (LLMs) involves adapting them to the broad spectrum of human values. This requirement challenges existing alignment methods due to diversity of preferences and regulatory standards. This paper introduces a novel alignment paradigm, priority rule following, which defines rules as the primary control mechanism in each dialog, prioritizing them over user instructions. Our preliminary analysis reveals that even the advanced LLMs, such as GPT-4, exhibit shortcomings in understanding and prioritizing the rules. Therefore, we present PriorityDistill, a semi-automated approach for distilling priority following signals from LLM simulations to ensure robust rule integration and adherence. Our experiments show that this method not only effectively minimizes misalignments utilizing only one general rule but also adapts smoothly to various unseen rules, ensuring they are shielded from hijacking and that the model responds appropriately.
[ { "created": "Tue, 27 Feb 2024 09:52:27 GMT", "version": "v1" } ]
2024-02-28
[ [ "Lu", "Xinyu", "" ], [ "Yu", "Bowen", "" ], [ "Lu", "Yaojie", "" ], [ "Lin", "Hongyu", "" ], [ "Yu", "Haiyang", "" ], [ "Sun", "Le", "" ], [ "Han", "Xianpei", "" ], [ "Li", "Yongbin", "" ] ]
The alignment problem in Large Language Models (LLMs) involves adapting them to the broad spectrum of human values. This requirement challenges existing alignment methods due to the diversity of preferences and regulatory standards. This paper introduces a novel alignment paradigm, priority rule following, which defines rules as the primary control mechanism in each dialog, prioritizing them over user instructions. Our preliminary analysis reveals that even advanced LLMs, such as GPT-4, exhibit shortcomings in understanding and prioritizing the rules. Therefore, we present PriorityDistill, a semi-automated approach for distilling priority-following signals from LLM simulations to ensure robust rule integration and adherence. Our experiments show that this method not only effectively minimizes misalignments using only one general rule but also adapts smoothly to various unseen rules, ensuring they are shielded from hijacking and that the model responds appropriately.
2006.01080
Alina Karakanta
Alina Karakanta, Matteo Negri, Marco Turchi
Is 42 the Answer to Everything in Subtitling-oriented Speech Translation?
Accepted at IWSLT 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Subtitling is becoming increasingly important for disseminating information, given the enormous amounts of audiovisual content becoming available daily. Although Neural Machine Translation (NMT) can speed up the process of translating audiovisual content, large manual effort is still required for transcribing the source language, and for spotting and segmenting the text into proper subtitles. Creating proper subtitles in terms of timing and segmentation highly depends on information present in the audio (utterance duration, natural pauses). In this work, we explore two methods for applying Speech Translation (ST) to subtitling: a) a direct end-to-end and b) a classical cascade approach. We discuss the benefit of having access to the source language speech for improving the conformity of the generated subtitles to the spatial and temporal subtitling constraints and show that length is not the answer to everything in the case of subtitling-oriented ST.
[ { "created": "Mon, 1 Jun 2020 17:02:28 GMT", "version": "v1" } ]
2020-06-02
[ [ "Karakanta", "Alina", "" ], [ "Negri", "Matteo", "" ], [ "Turchi", "Marco", "" ] ]
Subtitling is becoming increasingly important for disseminating information, given the enormous amounts of audiovisual content becoming available daily. Although Neural Machine Translation (NMT) can speed up the process of translating audiovisual content, large manual effort is still required for transcribing the source language, and for spotting and segmenting the text into proper subtitles. Creating proper subtitles in terms of timing and segmentation highly depends on information present in the audio (utterance duration, natural pauses). In this work, we explore two methods for applying Speech Translation (ST) to subtitling: a) a direct end-to-end and b) a classical cascade approach. We discuss the benefit of having access to the source language speech for improving the conformity of the generated subtitles to the spatial and temporal subtitling constraints and show that length is not the answer to everything in the case of subtitling-oriented ST.
1606.03507
Tahir Hameed
Tahir Hameed and Bobby Swar
Social value and information quality in online health information search
Research-in-progress ISBN# 978-0-646-95337-3 Presented at the Australasian Conference on Information Systems 2015 (arXiv:1605.01032)
null
null
ACIS/2015/196
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper extends and validates a model of value-driven online healthcare information search in online shared contexts. Perceived value is an important factor behind users' decisions concerning search, consumption and reuse of products and services. The role of utilitarian, hedonic and epistemic value of information in user satisfaction and intention to repeat online search is well recognized, but little support has been found for social value affecting user satisfaction critical for such decisions. Therefore, a value-based model of online healthcare information search was extended adding detailed information quality measures. Our survey data collected from 143 college going students from more than 10 countries studying in South Korea demonstrated two novel results. At first, unlike existing studies, strong support was found for perceived social value affecting the user satisfaction. As a second substantial finding, the significance of social value could only be appreciated by using comprehensive constructs of information quality than simpler measures. Therefore, this study shows for social and shared HIS, users consider access, representation and context quality of healthcare information to be more important from social perspective than intrinsic (or content) quality. Therefore, developers and healthcare organizations interested in traction of their online healthcare information delivery systems (HIS), especially those using internet and social media networks, should focus on enhancing information quality of their systems accordingly.
[ { "created": "Sat, 11 Jun 2016 00:12:51 GMT", "version": "v1" } ]
2016-06-14
[ [ "Hameed", "Tahir", "" ], [ "Swar", "Bobby", "" ] ]
This paper extends and validates a model of value-driven online healthcare information search in online shared contexts. Perceived value is an important factor behind users' decisions concerning the search, consumption, and reuse of products and services. The role of the utilitarian, hedonic, and epistemic value of information in user satisfaction and the intention to repeat online search is well recognized, but little support has been found for social value affecting the user satisfaction critical for such decisions. Therefore, a value-based model of online healthcare information search was extended by adding detailed information quality measures. Our survey data, collected from 143 college-going students from more than 10 countries studying in South Korea, demonstrated two novel results. First, unlike existing studies, strong support was found for perceived social value affecting user satisfaction. Second, the significance of social value could only be appreciated by using comprehensive constructs of information quality rather than simpler measures. This study therefore shows that, for social and shared HIS, users consider the access, representation, and context quality of healthcare information to be more important from a social perspective than intrinsic (or content) quality. Developers and healthcare organizations interested in traction for their online healthcare information delivery systems (HIS), especially those using the internet and social media networks, should focus on enhancing the information quality of their systems accordingly.
2103.04942
Ioannis Exarchos
Ioannis Exarchos, Karen Wang, Brian H. Do, Fabio Stroppa, Margaret M. Coad, Allison M. Okamura, and C. Karen Liu
Task-Specific Design Optimization and Fabrication for Inflated-Beam Soft Robots with Growable Discrete Joints
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Soft robot serial chain manipulators with the capability for growth, stiffness control, and discrete joints have the potential to approach the dexterity of traditional robot arms, while improving safety, lowering cost, and providing an increased workspace, with potential application in home environments. This paper presents an approach for design optimization of such robots to reach specified targets while minimizing the number of discrete joints and thus construction and actuation costs. We define a maximum number of allowable joints, as well as hardware constraints imposed by the materials and actuation available for soft growing robots, and we formulate and solve an optimization problem to output a planar robot design, i.e., the total number of potential joints and their locations along the robot body, which reaches all the desired targets, avoids known obstacles, and maximizes the workspace. We demonstrate a process to rapidly construct the resulting soft growing robot design. Finally, we use our algorithm to evaluate the ability of this design to reach new targets and demonstrate the algorithm's utility as a design tool to explore robot capabilities given various constraints and objectives.
[ { "created": "Mon, 8 Mar 2021 18:00:27 GMT", "version": "v1" }, { "created": "Wed, 22 Sep 2021 17:10:42 GMT", "version": "v2" } ]
2021-09-23
[ [ "Exarchos", "Ioannis", "" ], [ "Wang", "Karen", "" ], [ "Do", "Brian H.", "" ], [ "Stroppa", "Fabio", "" ], [ "Coad", "Margaret M.", "" ], [ "Okamura", "Allison M.", "" ], [ "Liu", "C. Karen", "" ] ]
Soft robot serial chain manipulators with the capability for growth, stiffness control, and discrete joints have the potential to approach the dexterity of traditional robot arms, while improving safety, lowering cost, and providing an increased workspace, with potential application in home environments. This paper presents an approach for design optimization of such robots to reach specified targets while minimizing the number of discrete joints and thus construction and actuation costs. We define a maximum number of allowable joints, as well as hardware constraints imposed by the materials and actuation available for soft growing robots, and we formulate and solve an optimization problem to output a planar robot design, i.e., the total number of potential joints and their locations along the robot body, which reaches all the desired targets, avoids known obstacles, and maximizes the workspace. We demonstrate a process to rapidly construct the resulting soft growing robot design. Finally, we use our algorithm to evaluate the ability of this design to reach new targets and demonstrate the algorithm's utility as a design tool to explore robot capabilities given various constraints and objectives.
1711.06964
Amitabha Roy
Amitabha Roy, Subramanya R. Dulloor
Cyclone: High Availability for Persistent Key Value Stores
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Persistent key value stores are an important component of many distributed data serving solutions with innovations targeted at taking advantage of growing flash speeds. Unfortunately their performance is hampered by the need to maintain and replicate a write ahead log to guarantee availability in the face of machine and storage failures. Cyclone is a replicated log plug-in for key value stores that systematically addresses various sources of this bottleneck. It uses a small amount of non-volatile memory directly addressable by the CPU - such as in the form of NVDIMMs or Intel 3DXPoint - to remove block oriented IO devices such as SSDs from the critical path for appending to the log. This enables it to address network overheads using an implementation of the RAFT consensus protocol that is designed around a userspace network stack to relieve the CPU of the burden of data copies. Finally, it provides a way to efficiently map the commutativity in key-value store APIs to the parallelism available in commodity NICs. Cyclone is able to replicate millions of small updates per second using only commodity 10 gigabit ethernet adapters. As a practical application, we use it to improve the performance (and availability) of RocksDB, a popular persistent key value store by an order of magnitude when compared to its own write ahead log without replication.
[ { "created": "Sun, 19 Nov 2017 04:07:34 GMT", "version": "v1" } ]
2017-11-21
[ [ "Roy", "Amitabha", "" ], [ "Dulloor", "Subramanya R.", "" ] ]
Persistent key-value stores are an important component of many distributed data serving solutions, with innovations targeted at taking advantage of growing flash speeds. Unfortunately, their performance is hampered by the need to maintain and replicate a write-ahead log to guarantee availability in the face of machine and storage failures. Cyclone is a replicated log plug-in for key-value stores that systematically addresses various sources of this bottleneck. It uses a small amount of non-volatile memory directly addressable by the CPU, such as NVDIMMs or Intel 3D XPoint, to remove block-oriented IO devices such as SSDs from the critical path for appending to the log. This enables it to address network overheads using an implementation of the Raft consensus protocol that is designed around a userspace network stack to relieve the CPU of the burden of data copies. Finally, it provides a way to efficiently map the commutativity in key-value store APIs to the parallelism available in commodity NICs. Cyclone is able to replicate millions of small updates per second using only commodity 10-gigabit Ethernet adapters. As a practical application, we use it to improve the performance (and availability) of RocksDB, a popular persistent key-value store, by an order of magnitude compared to its own write-ahead log without replication.
1405.0320
Jan Verschelde
Danko Adrovic and Jan Verschelde
Computing all Affine Solution Sets of Binomial Systems
4 page extended abstract accepted by EACA 2014, a conference on Computer Algebra and its Applications
null
null
null
cs.SC cs.MS math.AG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To compute solutions of sparse polynomial systems efficiently we have to exploit the structure of their Newton polytopes. While the application of polyhedral methods naturally excludes solutions with zero components, an irreducible decomposition of a variety is typically understood in affine space, including also those components with zero coordinates. For the problem of computing solution sets in the intersection of some coordinate planes, the direct application of a polyhedral method fails, because the original facial structure of the Newton polytopes may alter completely when selected variables become zero. Our new proposed method enumerates all factors contributing to a generalized permanent and toric solutions as a special case of this enumeration. For benchmark problems such as the adjacent 2-by-2 minors of a general matrix, our methods scale much better than the witness set representations of numerical algebraic geometry.
[ { "created": "Thu, 1 May 2014 23:00:24 GMT", "version": "v1" } ]
2014-05-05
[ [ "Adrovic", "Danko", "" ], [ "Verschelde", "Jan", "" ] ]
To compute solutions of sparse polynomial systems efficiently we have to exploit the structure of their Newton polytopes. While the application of polyhedral methods naturally excludes solutions with zero components, an irreducible decomposition of a variety is typically understood in affine space, including also those components with zero coordinates. For the problem of computing solution sets in the intersection of some coordinate planes, the direct application of a polyhedral method fails, because the original facial structure of the Newton polytopes may alter completely when selected variables become zero. Our new proposed method enumerates all factors contributing to a generalized permanent and toric solutions as a special case of this enumeration. For benchmark problems such as the adjacent 2-by-2 minors of a general matrix, our methods scale much better than the witness set representations of numerical algebraic geometry.
1710.04578
Dongyao Chen
Dongyao Chen, Kyong-Tak Cho, Kang G. Shin
Mobile IMUs Reveal Driver's Identity From Vehicle Turns
15 pages, 15 figures
null
null
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As vehicle maneuver data becomes abundant for assisted or autonomous driving, their implication of privacy invasion/leakage has become an increasing concern. In particular, the surface for fingerprinting a driver will expand significantly if the driver's identity can be linked with the data collected from his mobile or wearable devices which are widely deployed worldwide and have increasing sensing capabilities. In line with this trend, this paper investigates a fast emerging driving data source that has driver's privacy implications. We first show that such privacy threats can be materialized via any mobile device with IMUs (e.g., gyroscope and accelerometer). We then present Dri-Fi (Driver Fingerprint), a driving data analytic engine that can fingerprint the driver with vehicle turn(s). Dri-Fi achieves this based on IMUs data taken only during the vehicle's turn(s). Such an approach expands the attack surface significantly compared to existing driver fingerprinting schemes. From this data, Dri-Fi extracts three new features --- acceleration along the end-of-turn axis, its deviation, and the deviation of the yaw rate --- and exploits them to identify the driver. Our extensive evaluation shows that an adversary equipped with Dri-Fi can correctly fingerprint the driver within just one turn with 74.1%, 83.5%, and 90.8% accuracy across 12, 8, and 5 drivers --- typical of an immediate family or close-friends circle --- respectively. Moreover, with measurements on more than one turn, the adversary can achieve up to 95.3%, 95.4%, and 96.6% accuracy across 12, 8, and 5 drivers, respectively.
[ { "created": "Thu, 12 Oct 2017 15:49:35 GMT", "version": "v1" } ]
2017-10-13
[ [ "Chen", "Dongyao", "" ], [ "Cho", "Kyong-Tak", "" ], [ "Shin", "Kang G.", "" ] ]
As vehicle maneuver data becomes abundant for assisted or autonomous driving, its implications for privacy invasion and leakage have become an increasing concern. In particular, the surface for fingerprinting a driver expands significantly if the driver's identity can be linked with data collected from his or her mobile or wearable devices, which are widely deployed worldwide and have increasing sensing capabilities. In line with this trend, this paper investigates a fast-emerging driving data source with implications for drivers' privacy. We first show that such privacy threats can be materialized via any mobile device with IMUs (e.g., gyroscope and accelerometer). We then present Dri-Fi (Driver Fingerprint), a driving data analytic engine that can fingerprint the driver from vehicle turn(s). Dri-Fi achieves this based on IMU data taken only during the vehicle's turn(s). Such an approach expands the attack surface significantly compared to existing driver fingerprinting schemes. From this data, Dri-Fi extracts three new features --- acceleration along the end-of-turn axis, its deviation, and the deviation of the yaw rate --- and exploits them to identify the driver. Our extensive evaluation shows that an adversary equipped with Dri-Fi can correctly fingerprint the driver within just one turn with 74.1%, 83.5%, and 90.8% accuracy across 12, 8, and 5 drivers --- typical of an immediate family or close-friends circle --- respectively. Moreover, with measurements on more than one turn, the adversary can achieve up to 95.3%, 95.4%, and 96.6% accuracy across 12, 8, and 5 drivers, respectively.
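The three features named in the abstract above (acceleration along the end-of-turn axis, its deviation, and the yaw-rate deviation) can be illustrated with a short NumPy sketch. Everything here beyond the feature names is a hypothetical assumption, in particular estimating the end-of-turn axis from the last few accelerometer samples; the paper's actual extraction is not specified in the abstract:

```python
import numpy as np

def turn_features(acc_xyz, yaw_rate):
    # acc_xyz: (N, 3) accelerometer samples over one turn (hypothetical input)
    # yaw_rate: (N,) gyroscope yaw-rate samples over the same turn
    # Approximate the end-of-turn axis by the mean acceleration direction
    # over the last few samples of the turn (an assumption for illustration).
    axis = acc_xyz[-10:].mean(axis=0)
    axis = axis / (np.linalg.norm(axis) + 1e-8)
    along = acc_xyz @ axis  # acceleration projected onto that axis
    # Feature vector: mean projected acceleration, its deviation,
    # and the deviation of the yaw rate.
    return np.array([along.mean(), along.std(), yaw_rate.std()])
```

A classifier over many drivers would then be trained on such per-turn feature vectors; the point of the sketch is only that all three features are cheap statistics of IMU data from a single turn.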
2305.17581
Mher Safaryan
Mher Safaryan and Alexandra Peste and Dan Alistarh
Knowledge Distillation Performs Partial Variance Reduction
15+22 pages, NeurIPS 2023
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge distillation is a popular approach for enhancing the performance of ''student'' models, with lower representational capacity, by taking advantage of more powerful ''teacher'' models. Despite its apparent simplicity and widespread use, the underlying mechanics behind knowledge distillation (KD) are still not fully understood. In this work, we shed new light on the inner workings of this method, by examining it from an optimization perspective. We show that, in the context of linear and deep linear models, KD can be interpreted as a novel type of stochastic variance reduction mechanism. We provide a detailed convergence analysis of the resulting dynamics, which hold under standard assumptions for both strongly-convex and non-convex losses, showing that KD acts as a form of partial variance reduction, which can reduce the stochastic gradient noise, but may not eliminate it completely, depending on the properties of the ''teacher'' model. Our analysis puts further emphasis on the need for careful parametrization of KD, in particular w.r.t. the weighting of the distillation loss, and is validated empirically on both linear models and deep neural networks.
[ { "created": "Sat, 27 May 2023 21:25:55 GMT", "version": "v1" }, { "created": "Fri, 8 Dec 2023 22:08:09 GMT", "version": "v2" } ]
2023-12-12
[ [ "Safaryan", "Mher", "" ], [ "Peste", "Alexandra", "" ], [ "Alistarh", "Dan", "" ] ]
Knowledge distillation is a popular approach for enhancing the performance of ''student'' models, with lower representational capacity, by taking advantage of more powerful ''teacher'' models. Despite its apparent simplicity and widespread use, the underlying mechanics behind knowledge distillation (KD) are still not fully understood. In this work, we shed new light on the inner workings of this method, by examining it from an optimization perspective. We show that, in the context of linear and deep linear models, KD can be interpreted as a novel type of stochastic variance reduction mechanism. We provide a detailed convergence analysis of the resulting dynamics, which hold under standard assumptions for both strongly-convex and non-convex losses, showing that KD acts as a form of partial variance reduction, which can reduce the stochastic gradient noise, but may not eliminate it completely, depending on the properties of the ''teacher'' model. Our analysis puts further emphasis on the need for careful parametrization of KD, in particular w.r.t. the weighting of the distillation loss, and is validated empirically on both linear models and deep neural networks.
2111.02120
Raheel Qader
Melissa Ailem, Jinghsu Liu, Raheel Qader
Lingua Custodia's participation at the WMT 2021 Machine Translation using Terminologies shared task
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper describes Lingua Custodia's submission to the WMT21 shared task on machine translation using terminologies. We consider three directions, namely English to French, Russian, and Chinese. We rely on a Transformer-based architecture as a building block, and we explore a method which introduces two main changes to the standard procedure to handle terminologies. The first one consists in augmenting the training data in such a way as to encourage the model to learn a copy behavior when it encounters terminology constraint terms. The second change is constraint token masking, whose purpose is to ease copy behavior learning and to improve model generalization. Empirical results show that our method satisfies most terminology constraints while maintaining high translation quality.
[ { "created": "Wed, 3 Nov 2021 10:36:32 GMT", "version": "v1" } ]
2021-11-04
[ [ "Ailem", "Melissa", "" ], [ "Liu", "Jinghsu", "" ], [ "Qader", "Raheel", "" ] ]
This paper describes Lingua Custodia's submission to the WMT21 shared task on machine translation using terminologies. We consider three directions, namely English to French, Russian, and Chinese. We rely on a Transformer-based architecture as a building block, and we explore a method which introduces two main changes to the standard procedure to handle terminologies. The first one consists in augmenting the training data in such a way as to encourage the model to learn a copy behavior when it encounters terminology constraint terms. The second change is constraint token masking, whose purpose is to ease copy behavior learning and to improve model generalization. Empirical results show that our method satisfies most terminology constraints while maintaining high translation quality.
1404.4560
Lane A. Hemaspaandra
Edith Hemaspaandra and Lane A. Hemaspaandra and Henning Schnoor
A Control Dichotomy for Pure Scoring Rules
A shorter version of this paper will appear in the proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2014)
null
null
null
cs.GT cs.CC cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scoring systems are an extremely important class of election systems. A length-$m$ (so-called) scoring vector applies only to $m$-candidate elections. To handle general elections, one must use a family of vectors, one per length. The most elegant approach to making sure such families are "family-like" is the recently introduced notion of (polynomial-time uniform) pure scoring rules [Betzler and Dorn 2010], where each scoring vector is obtained from its precursor by adding one new coefficient. We obtain the first dichotomy theorem for pure scoring rules for a control problem. In particular, for constructive control by adding voters (CCAV), we show that CCAV is solvable in polynomial time for $k$-approval with $k \leq 3$, $k$-veto with $k \leq 2$, every pure scoring rule in which only the two top-rated candidates gain nonzero scores, and a particular rule that is a "hybrid" of 1-approval and 1-veto. For all other pure scoring rules, CCAV is NP-complete. We also investigate the descriptive richness of different models for defining pure scoring rules, proving how more rule-generation time gives more rules, proving that rationals give more rules than do the natural numbers, and proving that some restrictions previously thought to be "w.l.o.g." in fact do lose generality.
[ { "created": "Thu, 17 Apr 2014 15:38:54 GMT", "version": "v1" } ]
2014-04-18
[ [ "Hemaspaandra", "Edith", "" ], [ "Hemaspaandra", "Lane A.", "" ], [ "Schnoor", "Henning", "" ] ]
Scoring systems are an extremely important class of election systems. A length-$m$ (so-called) scoring vector applies only to $m$-candidate elections. To handle general elections, one must use a family of vectors, one per length. The most elegant approach to making sure such families are "family-like" is the recently introduced notion of (polynomial-time uniform) pure scoring rules [Betzler and Dorn 2010], where each scoring vector is obtained from its precursor by adding one new coefficient. We obtain the first dichotomy theorem for pure scoring rules for a control problem. In particular, for constructive control by adding voters (CCAV), we show that CCAV is solvable in polynomial time for $k$-approval with $k \leq 3$, $k$-veto with $k \leq 2$, every pure scoring rule in which only the two top-rated candidates gain nonzero scores, and a particular rule that is a "hybrid" of 1-approval and 1-veto. For all other pure scoring rules, CCAV is NP-complete. We also investigate the descriptive richness of different models for defining pure scoring rules, proving how more rule-generation time gives more rules, proving that rationals give more rules than do the natural numbers, and proving that some restrictions previously thought to be "w.l.o.g." in fact do lose generality.
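The $k$-approval and $k$-veto rules central to the dichotomy in the abstract above are simple to state in code: $k$-approval gives one point to each of a ballot's top $k$ candidates, and $k$-veto gives one point to every candidate except the bottom $k$. A minimal sketch (the ballot representation as best-first lists is an assumption for illustration):

```python
from collections import Counter

def k_approval_scores(ballots, k):
    # Each ballot ranks candidates best-first; the top k each get 1 point.
    scores = Counter()
    for ballot in ballots:
        for cand in ballot[:k]:
            scores[cand] += 1
    return scores

def k_veto_scores(ballots, k):
    # Scoring vector (1, ..., 1, 0, ..., 0): only the last k candidates
    # on each ballot get nothing.
    scores = Counter()
    for ballot in ballots:
        for cand in ballot[:len(ballot) - k]:
            scores[cand] += 1
    return scores
```

Note that on $m$-candidate ballots, $k$-veto coincides with $(m-k)$-approval, which is why both families are described by pure scoring vectors of 0s and 1s.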
0704.3904
Fabien Mathieu
Anh-Tuan Gai (INRIA Rocquencourt), Dmitry Lebedev (FT R&D), Fabien Mathieu (FT R&D), Fabien De Montgolfier (LIAFA), Julien Reynier (LIENS), Laurent Viennot (INRIA Rocquencourt)
Acyclic Preference Systems in P2P Networks
null
null
null
null
cs.DS cs.GT
null
In this work we study preference systems natural for the Peer-to-Peer paradigm. Most of them fall in three categories: global, symmetric and complementary. All these systems share an acyclicity property. As a consequence, they admit a stable (or Pareto efficient) configuration, where no participant can collaborate with better partners than their current ones. We analyze the representation of the such preference systems and show that any acyclic system can be represented with a symmetric mark matrix. This gives a method to merge acyclic preference systems and retain the acyclicity. We also consider such properties of the corresponding collaboration graph, as clustering coefficient and diameter. In particular, studying the example of preferences based on real latency measurements, we observe that its stable configuration is a small-world graph.
[ { "created": "Mon, 30 Apr 2007 09:26:39 GMT", "version": "v1" }, { "created": "Wed, 2 May 2007 13:07:31 GMT", "version": "v2" } ]
2007-05-23
[ [ "Gai", "Anh-Tuan", "", "INRIA Rocquencourt" ], [ "Lebedev", "Dmitry", "", "FT R&D" ], [ "Mathieu", "Fabien", "", "FT R&D" ], [ "De Montgolfier", "Fabien", "", "LIAFA" ], [ "Reynier", "Julien", "", "LIENS" ], [ "Viennot", "Laurent", "", "INRIA Rocquencourt" ] ]
In this work we study preference systems natural to the peer-to-peer paradigm. Most of them fall into three categories: global, symmetric, and complementary. All these systems share an acyclicity property. As a consequence, they admit a stable (or Pareto-efficient) configuration, in which no participant can collaborate with better partners than their current ones. We analyze the representation of such preference systems and show that any acyclic system can be represented with a symmetric mark matrix. This gives a method to merge acyclic preference systems while retaining acyclicity. We also consider properties of the corresponding collaboration graph, such as the clustering coefficient and diameter. In particular, studying the example of preferences based on real latency measurements, we observe that its stable configuration is a small-world graph.
2101.09849
Roozbeh Yousefzadeh
Roozbeh Yousefzadeh
Deep Learning Generalization and the Convex Hull of Training Sets
null
null
null
null
cs.LG math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the generalization of deep learning models in relation to the convex hull of their training sets. A trained image classifier basically partitions its domain via decision boundaries and assigns a class to each of those partitions. The location of decision boundaries inside the convex hull of training set can be investigated in relation to the training samples. However, our analysis shows that in standard image classification datasets, all testing images are considerably outside that convex hull, in the pixel space, in the wavelet space, and in the internal representations learned by deep networks. Therefore, the performance of a trained model partially depends on how its decision boundaries are extended outside the convex hull of its training data. From this perspective, which has not been studied before, over-parameterization of deep learning models may be considered a necessity for shaping the extension of decision boundaries. At the same time, over-parameterization should be accompanied by a specific training regime, in order to yield a model that not only fits the training set, but also its decision boundaries extend desirably outside the convex hull. To illustrate this, we investigate the decision boundaries of a neural network, with various degrees of parameters, inside and outside the convex hull of its training set. Moreover, we use a polynomial decision boundary to study the necessity of over-parameterization and the influence of training regime in shaping its extensions outside the convex hull of training set.
[ { "created": "Mon, 25 Jan 2021 01:54:02 GMT", "version": "v1" } ]
2021-01-26
[ [ "Yousefzadeh", "Roozbeh", "" ] ]
We study the generalization of deep learning models in relation to the convex hull of their training sets. A trained image classifier basically partitions its domain via decision boundaries and assigns a class to each of those partitions. The location of decision boundaries inside the convex hull of training set can be investigated in relation to the training samples. However, our analysis shows that in standard image classification datasets, all testing images are considerably outside that convex hull, in the pixel space, in the wavelet space, and in the internal representations learned by deep networks. Therefore, the performance of a trained model partially depends on how its decision boundaries are extended outside the convex hull of its training data. From this perspective, which has not been studied before, over-parameterization of deep learning models may be considered a necessity for shaping the extension of decision boundaries. At the same time, over-parameterization should be accompanied by a specific training regime, in order to yield a model that not only fits the training set, but also its decision boundaries extend desirably outside the convex hull. To illustrate this, we investigate the decision boundaries of a neural network, with various degrees of parameters, inside and outside the convex hull of its training set. Moreover, we use a polynomial decision boundary to study the necessity of over-parameterization and the influence of training regime in shaping its extensions outside the convex hull of training set.
1701.03826
Yu Zhang
Yu Zhang, Kanat Tangwongsan and Srikanta Tirthapura
Streaming k-Means Clustering with Fast Queries
null
null
null
null
cs.DS cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present methods for k-means clustering on a stream with a focus on providing fast responses to clustering queries. Compared to the current state-of-the-art, our methods provide substantial improvement in the query time for cluster centers while retaining the desirable properties of provably small approximation error and low space usage. Our algorithms rely on a novel idea of "coreset caching" that systematically reuses coresets (summaries of data) computed for recent queries in answering the current clustering query. We present both theoretical analysis and detailed experiments demonstrating their correctness and efficiency.
[ { "created": "Fri, 13 Jan 2017 20:21:08 GMT", "version": "v1" }, { "created": "Thu, 6 Dec 2018 20:28:18 GMT", "version": "v2" } ]
2018-12-10
[ [ "Zhang", "Yu", "" ], [ "Tangwongsan", "Kanat", "" ], [ "Tirthapura", "Srikanta", "" ] ]
We present methods for k-means clustering on a stream with a focus on providing fast responses to clustering queries. Compared to the current state-of-the-art, our methods provide substantial improvement in the query time for cluster centers while retaining the desirable properties of provably small approximation error and low space usage. Our algorithms rely on a novel idea of "coreset caching" that systematically reuses coresets (summaries of data) computed for recent queries in answering the current clustering query. We present both theoretical analysis and detailed experiments demonstrating their correctness and efficiency.
2212.04264
Kaan Ak\c{s}it
Ahmet G\"uzel, Jeanne Beyazian, Praneeth Chakravarthula and Kaan Ak\c{s}it
ChromaCorrect: Prescription Correction in Virtual Reality Headsets through Perceptual Guidance
12 pages, 9 figures, 1 table, 1 listing
null
null
null
cs.HC cs.GR cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
A large portion of today's world population suffers from vision impairments and wears prescription eyeglasses. However, eyeglasses cause additional bulk and discomfort when used with augmented and virtual reality headsets, thereby negatively impacting the viewer's visual experience. In this work, we remedy the usage of prescription eyeglasses in Virtual Reality (VR) headsets by shifting the optical complexity completely into software and propose a prescription-aware rendering approach for providing sharper and more immersive VR imagery. To this end, we develop a differentiable display and visual perception model encapsulating display-specific parameters, the color and visual acuity of the human visual system, and the user-specific refractive errors. Using this differentiable visual perception model, we optimize the rendered imagery in the display using stochastic gradient-descent solvers. This way, we provide sharper images, without prescription glasses, for a person with vision impairments. We evaluate our approach on various displays, including desktops and VR headsets, and show significant quality and contrast improvements for users with vision impairments.
[ { "created": "Thu, 8 Dec 2022 13:30:17 GMT", "version": "v1" } ]
2022-12-09
[ [ "Güzel", "Ahmet", "" ], [ "Beyazian", "Jeanne", "" ], [ "Chakravarthula", "Praneeth", "" ], [ "Akşit", "Kaan", "" ] ]
A large portion of today's world population suffers from vision impairments and wears prescription eyeglasses. However, eyeglasses cause additional bulk and discomfort when used with augmented and virtual reality headsets, thereby negatively impacting the viewer's visual experience. In this work, we remedy the usage of prescription eyeglasses in Virtual Reality (VR) headsets by shifting the optical complexity completely into software and propose a prescription-aware rendering approach for providing sharper and more immersive VR imagery. To this end, we develop a differentiable display and visual perception model encapsulating display-specific parameters, the color and visual acuity of the human visual system, and the user-specific refractive errors. Using this differentiable visual perception model, we optimize the rendered imagery in the display using stochastic gradient-descent solvers. This way, we provide sharper images, without prescription glasses, for a person with vision impairments. We evaluate our approach on various displays, including desktops and VR headsets, and show significant quality and contrast improvements for users with vision impairments.
2105.06186
EPTCS
Nicolas Behr (Universit\'e Paris Cit\'e, IRIF, CNRS), Joachim Kock (Universitat Aut\`onoma de Barcelona & Centre de Recerca Matem\`atica)
Tracelet Hopf Algebras and Decomposition Spaces (Extended Abstract)
In Proceedings ACT 2021, arXiv:2211.01102
EPTCS 372, 2022, pp. 323-337
10.4204/EPTCS.372.23
null
cs.LO math.CT
http://creativecommons.org/licenses/by/4.0/
Tracelets are the intrinsic carriers of causal information in categorical rewriting systems. In this work, we assemble tracelets into a symmetric monoidal decomposition space, inducing a cocommutative Hopf algebra of tracelets. This Hopf algebra captures important combinatorial and algebraic aspects of rewriting theory, and is motivated by applications of its representation theory to stochastic rewriting systems such as chemical reaction networks.
[ { "created": "Thu, 13 May 2021 10:59:16 GMT", "version": "v1" }, { "created": "Tue, 13 Jul 2021 17:36:49 GMT", "version": "v2" }, { "created": "Thu, 3 Nov 2022 14:20:47 GMT", "version": "v3" } ]
2022-11-04
[ [ "Behr", "Nicolas", "", "Université Paris Cité, IRIF, CNRS" ], [ "Kock", "Joachim", "", "Universitat Autònoma de Barcelona & Centre de Recerca Matemàtica" ] ]
Tracelets are the intrinsic carriers of causal information in categorical rewriting systems. In this work, we assemble tracelets into a symmetric monoidal decomposition space, inducing a cocommutative Hopf algebra of tracelets. This Hopf algebra captures important combinatorial and algebraic aspects of rewriting theory, and is motivated by applications of its representation theory to stochastic rewriting systems such as chemical reaction networks.
1909.10869
Sam Thompson
Dominik D. Freydenberger and Sam M. Thompson
Dynamic Complexity of Document Spanners
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The present paper investigates the dynamic complexity of document spanners, a formal framework for information extraction introduced by Fagin, Kimelfeld, Reiss, and Vansummeren (JACM 2015). We first look at the class of regular spanners and prove that any regular spanner can be maintained in the dynamic complexity class DynPROP. This result follows from work done previously on the dynamic complexity of formal languages by Gelade, Marquardt, and Schwentick (TOCL 2012). To investigate core spanners we use SpLog, a concatenation logic that exactly captures core spanners. We show that the dynamic complexity class DynCQ is more expressive than SpLog and therefore can maintain any core spanner. This result is then extended to show that DynFO can maintain any generalized core spanner and that DynFO is at least as powerful as SpLog with negation.
[ { "created": "Tue, 24 Sep 2019 13:13:37 GMT", "version": "v1" }, { "created": "Thu, 9 Jan 2020 14:01:43 GMT", "version": "v2" } ]
2020-01-10
[ [ "Freydenberger", "Dominik D.", "" ], [ "Thompson", "Sam M.", "" ] ]
The present paper investigates the dynamic complexity of document spanners, a formal framework for information extraction introduced by Fagin, Kimelfeld, Reiss, and Vansummeren (JACM 2015). We first look at the class of regular spanners and prove that any regular spanner can be maintained in the dynamic complexity class DynPROP. This result follows from work done previously on the dynamic complexity of formal languages by Gelade, Marquardt, and Schwentick (TOCL 2012). To investigate core spanners we use SpLog, a concatenation logic that exactly captures core spanners. We show that the dynamic complexity class DynCQ is more expressive than SpLog and therefore can maintain any core spanner. This result is then extended to show that DynFO can maintain any generalized core spanner and that DynFO is at least as powerful as SpLog with negation.
2006.07563
Huthaifa I. Ashqar
Huthaifa I. Ashqar, Mohammed Elhenawy, and Hesham A.Rakha
Modeling bike counts in a bike-sharing system considering the effect of weather conditions
Published in Case Studies on Transport Policy (Volume 7, Issue 2, June 2019, Pages 261-268)
Case studies on transport policy 7, no. 2 (2019): 261-268
10.1016/j.cstp.2019.02.011
null
cs.CY cs.LG physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper develops a method that quantifies the effect of weather conditions on the prediction of bike station counts in the San Francisco Bay Area Bike Share System. The Random Forest technique was used to rank the predictors that were then used to develop a regression model using a guided forward step-wise regression approach. The Bayesian Information Criterion was used in the development and comparison of the various prediction models. We demonstrated that the proposed approach is promising for quantifying the effect of various features on a large BSS and on each station in cases of large networks with big data. The results show that the time-of-the-day, temperature, and humidity level (which has not been studied before) are significant count predictors. It also shows that weather variables are geographic-location dependent and thus should be quantified before using them in modeling. Further, findings show that the number of available bikes at station i at time t-1 and time-of-the-day were the most significant variables in estimating the bike counts at station i.
[ { "created": "Sat, 13 Jun 2020 05:32:32 GMT", "version": "v1" } ]
2020-06-16
[ [ "Ashqar", "Huthaifa I.", "" ], [ "Elhenawy", "Mohammed", "" ], [ "Rakha", "Hesham A.", "" ] ]
The paper develops a method that quantifies the effect of weather conditions on the prediction of bike station counts in the San Francisco Bay Area Bike Share System. The Random Forest technique was used to rank the predictors that were then used to develop a regression model using a guided forward step-wise regression approach. The Bayesian Information Criterion was used in the development and comparison of the various prediction models. We demonstrated that the proposed approach is promising for quantifying the effect of various features on a large BSS and on each station in cases of large networks with big data. The results show that the time-of-the-day, temperature, and humidity level (which has not been studied before) are significant count predictors. It also shows that weather variables are geographic-location dependent and thus should be quantified before using them in modeling. Further, findings show that the number of available bikes at station i at time t-1 and time-of-the-day were the most significant variables in estimating the bike counts at station i.
1910.05895
Julian Salazar
Toan Q. Nguyen and Julian Salazar
Transformers without Tears: Improving the Normalization of Self-Attention
Accepted to IWSLT 2019 (oral); code is available at https://github.com/tnq177/transformers_without_tears
null
10.5281/zenodo.3525484
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PreNorm) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose $\ell_2$ normalization with a single scale parameter (ScaleNorm) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FixNorm). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT'15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT'14 English-German), ScaleNorm and FixNorm remain competitive but PreNorm degrades performance.
[ { "created": "Mon, 14 Oct 2019 02:23:43 GMT", "version": "v1" }, { "created": "Mon, 30 Dec 2019 03:53:04 GMT", "version": "v2" } ]
2020-01-01
[ [ "Nguyen", "Toan Q.", "" ], [ "Salazar", "Julian", "" ] ]
We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PreNorm) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose $\ell_2$ normalization with a single scale parameter (ScaleNorm) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FixNorm). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT'15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT'14 English-German), ScaleNorm and FixNorm remain competitive but PreNorm degrades performance.
2204.11041
Meng Xing
Meng Xing, Zhiyong Feng, Yong Su and Changjae Oh
Learning by Erasing: Conditional Entropy based Transferable Out-Of-Distribution Detection
update new experimental results
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Out-of-distribution (OOD) detection is essential to handle the distribution shifts between training and test scenarios. For a new in-distribution (ID) dataset, existing methods require retraining to capture the dataset-specific feature representation or data distribution. In this paper, we propose a deep generative model (DGM) based transferable OOD detection method, which does not require retraining on a new ID dataset. We design an image erasing strategy to equip each ID dataset with an exclusive conditional entropy distribution, which determines the discrepancy of the DGM's posterior uncertainty distribution on different ID datasets. Owing to the powerful representation capacity of convolutional neural networks, the proposed model trained on a complex dataset can capture the above discrepancy between ID datasets without retraining and thus achieve transferable OOD detection. We validate the proposed method on five datasets and verify that ours achieves comparable performance to the state-of-the-art group-based OOD detection methods that need to be retrained to deploy on new ID datasets. Our code is available at https://github.com/oOHCIOo/CETOOD.
[ { "created": "Sat, 23 Apr 2022 10:19:58 GMT", "version": "v1" }, { "created": "Mon, 7 Aug 2023 22:47:07 GMT", "version": "v2" }, { "created": "Wed, 27 Mar 2024 14:29:27 GMT", "version": "v3" } ]
2024-03-28
[ [ "Xing", "Meng", "" ], [ "Feng", "Zhiyong", "" ], [ "Su", "Yong", "" ], [ "Oh", "Changjae", "" ] ]
Out-of-distribution (OOD) detection is essential to handle the distribution shifts between training and test scenarios. For a new in-distribution (ID) dataset, existing methods require retraining to capture the dataset-specific feature representation or data distribution. In this paper, we propose a deep generative model (DGM) based transferable OOD detection method, which does not require retraining on a new ID dataset. We design an image erasing strategy to equip each ID dataset with an exclusive conditional entropy distribution, which determines the discrepancy of the DGM's posterior uncertainty distribution on different ID datasets. Owing to the powerful representation capacity of convolutional neural networks, the proposed model trained on a complex dataset can capture the above discrepancy between ID datasets without retraining and thus achieve transferable OOD detection. We validate the proposed method on five datasets and verify that ours achieves comparable performance to the state-of-the-art group-based OOD detection methods that need to be retrained to deploy on new ID datasets. Our code is available at https://github.com/oOHCIOo/CETOOD.
2211.07343
Pengyu Cheng
Pengyu Cheng, Ruineng Li
Replacing Language Model for Style Transfer
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce replacing language model (RLM), a sequence-to-sequence language modeling framework for text style transfer (TST). Our method autoregressively replaces each token of the source sentence with a text span that has a similar meaning but in the target style. The new span is generated via a non-autoregressive masked language model, which can better preserve the local-contextual meaning of the replaced token. This RLM generation scheme combines the flexibility of autoregressive models and the accuracy of non-autoregressive models, which bridges the gap between sentence-level and word-level style transfer methods. To control the generation style more precisely, we conduct a token-level style-content disentanglement on the hidden representations of RLM. Empirical results on real-world text datasets demonstrate the effectiveness of RLM compared with other TST baselines. The code is at https://github.com/Linear95/RLM.
[ { "created": "Mon, 14 Nov 2022 13:35:55 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2024 12:51:09 GMT", "version": "v2" } ]
2024-02-29
[ [ "Cheng", "Pengyu", "" ], [ "Li", "Ruineng", "" ] ]
We introduce replacing language model (RLM), a sequence-to-sequence language modeling framework for text style transfer (TST). Our method autoregressively replaces each token of the source sentence with a text span that has a similar meaning but in the target style. The new span is generated via a non-autoregressive masked language model, which can better preserve the local-contextual meaning of the replaced token. This RLM generation scheme combines the flexibility of autoregressive models and the accuracy of non-autoregressive models, which bridges the gap between sentence-level and word-level style transfer methods. To control the generation style more precisely, we conduct a token-level style-content disentanglement on the hidden representations of RLM. Empirical results on real-world text datasets demonstrate the effectiveness of RLM compared with other TST baselines. The code is at https://github.com/Linear95/RLM.
2009.10689
Vasiliy Gurianov
Vasyliy I. Gurianov
Simulation model of spacetime with the Minkowski metric
8 pages, 5 figures, 2 tables
null
null
null
cs.CE cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a simulation model of spacetime as a discrete model of physical space. The model is based on the ideas of Stephen Wolfram and uses non-numerical modelling. The simulation model is described as an ontology. We use object-oriented simulation (OOS), but the model is also suitable for agent-based simulation (ABS). We use UML2 SP (UML Scientific Profile), an object-oriented simulation language used in scientific fields. This paper describes several experiments that demonstrate time dilation and dynamic relativistic effects. The reproducibility of experimental results can be verified. We provide a link to the repository in this paper. The model is implemented in Python.
[ { "created": "Tue, 22 Sep 2020 17:03:38 GMT", "version": "v1" } ]
2020-09-23
[ [ "Gurianov", "Vasyliy I.", "" ] ]
In this paper, we propose a simulation model of spacetime as a discrete model of physical space. The model is based on the ideas of Stephen Wolfram and uses non-numerical modelling. The simulation model is described as an ontology. We use object-oriented simulation (OOS), but the model is also suitable for agent-based simulation (ABS). We use UML2 SP (UML Scientific Profile), an object-oriented simulation language used in scientific fields. This paper describes several experiments that demonstrate time dilation and dynamic relativistic effects. The reproducibility of experimental results can be verified. We provide a link to the repository in this paper. The model is implemented in Python.
1510.04747
Yang Shi
Animashree Anandkumar, Prateek Jain, Yang Shi, U. N. Niranjan
Tensor vs Matrix Methods: Robust Tensor Decomposition under Block Sparse Perturbations
null
null
null
null
cs.LG cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust tensor CP decomposition involves decomposing a tensor into low rank and sparse components. We propose a novel non-convex iterative algorithm with guaranteed recovery. It alternates between low-rank CP decomposition through gradient ascent (a variant of the tensor power method), and hard thresholding of the residual. We prove convergence to the globally optimal solution under natural incoherence conditions on the low rank component, and bounded level of sparse perturbations. We compare our method with natural baselines which apply robust matrix PCA either to the {\em flattened} tensor, or to the matrix slices of the tensor. Our method can provably handle a far greater level of perturbation when the sparse tensor is block-structured. This naturally occurs in many applications such as the activity detection task in videos. Our experiments validate these findings. Thus, we establish that tensor methods can tolerate a higher level of gross corruptions compared to matrix methods.
[ { "created": "Thu, 15 Oct 2015 23:40:13 GMT", "version": "v1" }, { "created": "Wed, 21 Oct 2015 00:53:13 GMT", "version": "v2" }, { "created": "Thu, 5 Nov 2015 05:02:03 GMT", "version": "v3" }, { "created": "Sat, 14 Nov 2015 21:54:08 GMT", "version": "v4" }, { "created": "Sun, 27 Dec 2015 03:06:51 GMT", "version": "v5" }, { "created": "Fri, 22 Jan 2016 22:41:21 GMT", "version": "v6" }, { "created": "Wed, 27 Apr 2016 05:19:21 GMT", "version": "v7" } ]
2016-04-28
[ [ "Anandkumar", "Animashree", "" ], [ "Jain", "Prateek", "" ], [ "Shi", "Yang", "" ], [ "Niranjan", "U. N.", "" ] ]
Robust tensor CP decomposition involves decomposing a tensor into low rank and sparse components. We propose a novel non-convex iterative algorithm with guaranteed recovery. It alternates between low-rank CP decomposition through gradient ascent (a variant of the tensor power method), and hard thresholding of the residual. We prove convergence to the globally optimal solution under natural incoherence conditions on the low rank component, and bounded level of sparse perturbations. We compare our method with natural baselines which apply robust matrix PCA either to the {\em flattened} tensor, or to the matrix slices of the tensor. Our method can provably handle a far greater level of perturbation when the sparse tensor is block-structured. This naturally occurs in many applications such as the activity detection task in videos. Our experiments validate these findings. Thus, we establish that tensor methods can tolerate a higher level of gross corruptions compared to matrix methods.
1906.02123
Hongming Zhang
Hongming Zhang, Hantian Ding, Yangqiu Song
SP-10K: A Large-scale Evaluation Set for Selectional Preference Acquisition
Accepted by ACL 2019
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selectional Preference (SP) is a commonly observed language phenomenon that has proved to be useful in many natural language processing tasks. To provide a better evaluation method for SP models, we introduce SP-10K, a large-scale evaluation set that provides human ratings for the plausibility of 10,000 SP pairs over five SP relations, covering the 2,500 most frequent verbs, nouns, and adjectives in American English. Three representative SP acquisition methods based on pseudo-disambiguation are evaluated with SP-10K. To demonstrate the importance of our dataset, we investigate the relationship between SP-10K and the commonsense knowledge in ConceptNet5 and show the potential of using SP to represent the commonsense knowledge. We also use the Winograd Schema Challenge to prove that the proposed new SP relations are essential for the hard pronoun coreference resolution problem.
[ { "created": "Tue, 14 May 2019 08:32:39 GMT", "version": "v1" } ]
2019-06-06
[ [ "Zhang", "Hongming", "" ], [ "Ding", "Hantian", "" ], [ "Song", "Yangqiu", "" ] ]
Selectional Preference (SP) is a commonly observed language phenomenon that has proved to be useful in many natural language processing tasks. To provide a better evaluation method for SP models, we introduce SP-10K, a large-scale evaluation set that provides human ratings for the plausibility of 10,000 SP pairs over five SP relations, covering the 2,500 most frequent verbs, nouns, and adjectives in American English. Three representative SP acquisition methods based on pseudo-disambiguation are evaluated with SP-10K. To demonstrate the importance of our dataset, we investigate the relationship between SP-10K and the commonsense knowledge in ConceptNet5 and show the potential of using SP to represent the commonsense knowledge. We also use the Winograd Schema Challenge to prove that the proposed new SP relations are essential for the hard pronoun coreference resolution problem.
2405.14567
Adibvafa Fallahpour
Adibvafa Fallahpour, Mahshid Alinoori, Arash Afkanpour, Amrit Krishnan
EHRMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records
17 Pages, 4 Figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Transformers have significantly advanced the modeling of Electronic Health Records (EHR), yet their deployment in real-world healthcare is limited by several key challenges. Firstly, the quadratic computational cost and insufficient context length of these models pose significant obstacles for hospitals in processing the extensive medical histories typical in EHR data. Additionally, existing models employ separate finetuning for each clinical task, complicating maintenance in healthcare environments. Moreover, these models focus exclusively on either clinical prediction or EHR forecasting, lacking the flexibility to perform well across both. To overcome these limitations, we introduce EHRMamba, a robust foundation model built on the Mamba architecture. EHRMamba can process sequences up to four times longer than previous models due to its linear computational cost. We also introduce a novel approach to Multitask Prompted Finetuning (MTF) for EHR data, which enables EHRMamba to simultaneously learn multiple clinical tasks in a single finetuning phase, significantly enhancing deployment and cross-task generalization. Furthermore, our model leverages the HL7 FHIR data standard to simplify integration into existing hospital systems. Alongside EHRMamba, we open-source Odyssey, a toolkit designed to support the development and deployment of EHR foundation models, with an emphasis on data standardization and interpretability. Our evaluations on the MIMIC-IV dataset demonstrate that EHRMamba advances state-of-the-art performance across 6 major clinical tasks and excels in EHR forecasting, marking a significant leap forward in the field.
[ { "created": "Thu, 23 May 2024 13:43:29 GMT", "version": "v1" }, { "created": "Fri, 24 May 2024 02:22:21 GMT", "version": "v2" } ]
2024-05-27
[ [ "Fallahpour", "Adibvafa", "" ], [ "Alinoori", "Mahshid", "" ], [ "Afkanpour", "Arash", "" ], [ "Krishnan", "Amrit", "" ] ]
Transformers have significantly advanced the modeling of Electronic Health Records (EHR), yet their deployment in real-world healthcare is limited by several key challenges. Firstly, the quadratic computational cost and insufficient context length of these models pose significant obstacles for hospitals in processing the extensive medical histories typical in EHR data. Additionally, existing models employ separate finetuning for each clinical task, complicating maintenance in healthcare environments. Moreover, these models focus exclusively on either clinical prediction or EHR forecasting, lacking the flexibility to perform well across both. To overcome these limitations, we introduce EHRMamba, a robust foundation model built on the Mamba architecture. EHRMamba can process sequences up to four times longer than previous models due to its linear computational cost. We also introduce a novel approach to Multitask Prompted Finetuning (MTF) for EHR data, which enables EHRMamba to simultaneously learn multiple clinical tasks in a single finetuning phase, significantly enhancing deployment and cross-task generalization. Furthermore, our model leverages the HL7 FHIR data standard to simplify integration into existing hospital systems. Alongside EHRMamba, we open-source Odyssey, a toolkit designed to support the development and deployment of EHR foundation models, with an emphasis on data standardization and interpretability. Our evaluations on the MIMIC-IV dataset demonstrate that EHRMamba advances state-of-the-art performance across 6 major clinical tasks and excels in EHR forecasting, marking a significant leap forward in the field.
2006.02887
Francesco Dagnino
Francesco Dagnino
Foundations of regular coinduction
null
Logical Methods in Computer Science, Volume 17, Issue 4 (October 1, 2021) lmcs:6553
10.46298/lmcs-17(4:2)2021
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Inference systems are a widespread framework used to define possibly recursive predicates by means of inference rules. They allow both inductive and coinductive interpretations that are fairly well-studied. In this paper, we consider a middle way interpretation, called regular, which combines advantages of both approaches: it allows non-well-founded reasoning while being finite. We show that the natural proof-theoretic definition of the regular interpretation, based on regular trees, coincides with a rational fixed point. Then, we provide an equivalent inductive characterization, which leads to an algorithm which looks for a regular derivation of a judgment. Relying on these results, we define proof techniques for regular reasoning: the regular coinduction principle, to prove completeness, and an inductive technique to prove soundness, based on the inductive characterization of the regular interpretation. Finally, we show the regular approach can be smoothly extended to inference systems with corules, a recently introduced, generalised framework, which allows one to refine the coinductive interpretation, proving that also this flexible regular interpretation admits an equivalent inductive characterisation.
[ { "created": "Thu, 4 Jun 2020 14:33:39 GMT", "version": "v1" }, { "created": "Thu, 11 Jun 2020 12:45:07 GMT", "version": "v2" }, { "created": "Wed, 5 May 2021 15:30:23 GMT", "version": "v3" }, { "created": "Thu, 6 May 2021 10:45:38 GMT", "version": "v4" }, { "created": "Thu, 20 May 2021 14:52:28 GMT", "version": "v5" }, { "created": "Thu, 30 Sep 2021 11:58:29 GMT", "version": "v6" } ]
2023-06-22
[ [ "Dagnino", "Francesco", "" ] ]
Inference systems are a widespread framework used to define possibly recursive predicates by means of inference rules. They allow both inductive and coinductive interpretations that are fairly well-studied. In this paper, we consider a middle way interpretation, called regular, which combines advantages of both approaches: it allows non-well-founded reasoning while being finite. We show that the natural proof-theoretic definition of the regular interpretation, based on regular trees, coincides with a rational fixed point. Then, we provide an equivalent inductive characterization, which leads to an algorithm which looks for a regular derivation of a judgment. Relying on these results, we define proof techniques for regular reasoning: the regular coinduction principle, to prove completeness, and an inductive technique to prove soundness, based on the inductive characterization of the regular interpretation. Finally, we show the regular approach can be smoothly extended to inference systems with corules, a recently introduced, generalised framework, which allows one to refine the coinductive interpretation, proving that also this flexible regular interpretation admits an equivalent inductive characterisation.
2405.13075
Shunyang Zhang
S. Zhang, S. Wang, H. Miao, H. Chen, C. Fan, J. Zhang
Score-CDM: Score-Weighted Convolutional Diffusion Model for Multivariate Time Series Imputation
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multivariate time series (MTS) data are usually incomplete in real scenarios, and imputing the incomplete MTS is practically important to facilitate various time series mining tasks. Recently, diffusion model-based MTS imputation methods have achieved promising results by utilizing CNN or attention mechanisms for temporal feature learning. However, it is hard to adaptively trade off the diverse effects of local and global temporal features by simply combining CNN and attention. To address this issue, we propose a Score-weighted Convolutional Diffusion Model (Score-CDM for short), whose backbone consists of a Score-weighted Convolution Module (SCM) and an Adaptive Reception Module (ARM). SCM adopts a score map to capture the global temporal features in the time domain, while ARM uses a Spectral2Time Window Block (S2TWB) to convolve the local time series data in the spectral domain. Benefiting from the time convolution properties of Fast Fourier Transformation, ARM can adaptively change the receptive field of the score map, and thus effectively balance the local and global temporal features. We conduct extensive evaluations on three real MTS datasets of different domains, and the results verify the effectiveness of the proposed Score-CDM.
[ { "created": "Tue, 21 May 2024 02:00:55 GMT", "version": "v1" } ]
2024-05-24
[ [ "Zhang", "S.", "" ], [ "Wang", "S.", "" ], [ "Miao", "H.", "" ], [ "Chen", "H.", "" ], [ "Fan", "C.", "" ], [ "Zhang", "J.", "" ] ]
Multivariate time series (MTS) data are usually incomplete in real scenarios, and imputing the incomplete MTS is practically important to facilitate various time series mining tasks. Recently, diffusion model-based MTS imputation methods have achieved promising results by utilizing CNN or attention mechanisms for temporal feature learning. However, it is hard to adaptively trade off the diverse effects of local and global temporal features by simply combining CNN and attention. To address this issue, we propose a Score-weighted Convolutional Diffusion Model (Score-CDM for short), whose backbone consists of a Score-weighted Convolution Module (SCM) and an Adaptive Reception Module (ARM). SCM adopts a score map to capture the global temporal features in the time domain, while ARM uses a Spectral2Time Window Block (S2TWB) to convolve the local time series data in the spectral domain. Benefiting from the time convolution properties of Fast Fourier Transformation, ARM can adaptively change the receptive field of the score map, and thus effectively balance the local and global temporal features. We conduct extensive evaluations on three real MTS datasets of different domains, and the results verify the effectiveness of the proposed Score-CDM.
2202.06649
Zhensu Sun
Zhensu Sun, Li Li, Yan Liu, Xiaoning Du, Li Li
On the Importance of Building High-quality Training Datasets for Neural Code Search
11 pages, accepted to ICSE 2022
null
null
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
The performance of neural code search is significantly influenced by the quality of the training data from which the neural models are derived. A large corpus of high-quality query and code pairs is required to establish a precise mapping from the natural language to the programming language. Due to the limited availability, most widely-used code search datasets are established with compromises, such as using code comments as a replacement for queries. Our empirical study on a famous code search dataset reveals that over one-third of its queries contain noise that makes them deviate from natural user queries. Models trained on noisy data face severe performance degradation when applied in real-world scenarios. Improving the dataset quality and making the queries of its samples semantically identical to real user queries is critical for the practical usability of neural code search. In this paper, we propose a data cleaning framework consisting of two subsequent filters: a rule-based syntactic filter and a model-based semantic filter. This is the first framework that applies semantic query cleaning to code search datasets. Experimentally, we evaluated the effectiveness of our framework on two widely-used code search models and three manually-annotated code retrieval benchmarks. Training the popular DeepCS model with the filtered dataset from our framework improves its performance by 19.2% MRR and 21.3% Answer@1, on average with the three validation benchmarks.
[ { "created": "Mon, 14 Feb 2022 12:02:41 GMT", "version": "v1" } ]
2022-02-15
[ [ "Sun", "Zhensu", "" ], [ "Li", "Li", "" ], [ "Liu", "Yan", "" ], [ "Du", "Xiaoning", "" ], [ "Li", "Li", "" ] ]
The performance of neural code search is significantly influenced by the quality of the training data from which the neural models are derived. A large corpus of high-quality query and code pairs is required to establish a precise mapping from the natural language to the programming language. Due to the limited availability, most widely-used code search datasets are established with compromises, such as using code comments as a replacement for queries. Our empirical study on a famous code search dataset reveals that over one-third of its queries contain noise that makes them deviate from natural user queries. Models trained on noisy data face severe performance degradation when applied in real-world scenarios. Improving the dataset quality and making the queries of its samples semantically identical to real user queries is critical for the practical usability of neural code search. In this paper, we propose a data cleaning framework consisting of two subsequent filters: a rule-based syntactic filter and a model-based semantic filter. This is the first framework that applies semantic query cleaning to code search datasets. Experimentally, we evaluated the effectiveness of our framework on two widely-used code search models and three manually-annotated code retrieval benchmarks. Training the popular DeepCS model with the filtered dataset from our framework improves its performance by 19.2% MRR and 21.3% Answer@1, on average with the three validation benchmarks.
1705.03427
Laurent Massoulie
Laurent Massouli\'e and R\'emi Varloot
Rapid Mixing of Local Graph Dynamics
null
null
null
null
cs.DC math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph dynamics arise naturally in many contexts. For instance in peer-to-peer networks, a participating peer may replace an existing connection with one neighbour by a new connection with a neighbour's neighbour. Several such local rewiring rules have been proposed to ensure that peer-to-peer networks achieve good connectivity properties (e.g. high expansion) in equilibrium. However it has remained an open question whether there existed such rules that also led to fast convergence to equilibrium. In this work we provide an affirmative answer: We exhibit a local rewiring rule that converges to equilibrium after each participating node has undergone only a number of rewirings that is poly-logarithmic in the system size. The proof involves consideration of the whole isoperimetric profile of the graph, and may be of independent interest.
[ { "created": "Tue, 9 May 2017 16:56:19 GMT", "version": "v1" } ]
2017-05-10
[ [ "Massoulié", "Laurent", "" ], [ "Varloot", "Rémi", "" ] ]
Graph dynamics arise naturally in many contexts. For instance in peer-to-peer networks, a participating peer may replace an existing connection with one neighbour by a new connection with a neighbour's neighbour. Several such local rewiring rules have been proposed to ensure that peer-to-peer networks achieve good connectivity properties (e.g. high expansion) in equilibrium. However it has remained an open question whether there existed such rules that also led to fast convergence to equilibrium. In this work we provide an affirmative answer: We exhibit a local rewiring rule that converges to equilibrium after each participating node has undergone only a number of rewirings that is poly-logarithmic in the system size. The proof involves consideration of the whole isoperimetric profile of the graph, and may be of independent interest.
2402.19262
Advait Gadhikar
Advait Gadhikar and Rebekka Burkholz
Masks, Signs, And Learning Rate Rewinding
Accepted for publishing at ICLR 2024
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Learning Rate Rewinding (LRR) has been established as a strong variant of Iterative Magnitude Pruning (IMP) to find lottery tickets in deep overparameterized neural networks. While both iterative pruning schemes couple structure and parameter learning, understanding how LRR excels in both aspects can bring us closer to the design of more flexible deep learning algorithms that can optimize diverse sets of sparse architectures. To this end, we conduct experiments that disentangle the effect of mask learning and parameter optimization and how both benefit from overparameterization. The ability of LRR to flip parameter signs early and stay robust to sign perturbations seems to make it not only more effective in mask identification but also in optimizing diverse sets of masks, including random ones. In support of this hypothesis, we prove in a simplified single hidden neuron setting that LRR succeeds in more cases than IMP, as it can escape initially problematic sign configurations.
[ { "created": "Thu, 29 Feb 2024 15:32:02 GMT", "version": "v1" } ]
2024-03-01
[ [ "Gadhikar", "Advait", "" ], [ "Burkholz", "Rebekka", "" ] ]
Learning Rate Rewinding (LRR) has been established as a strong variant of Iterative Magnitude Pruning (IMP) to find lottery tickets in deep overparameterized neural networks. While both iterative pruning schemes couple structure and parameter learning, understanding how LRR excels in both aspects can bring us closer to the design of more flexible deep learning algorithms that can optimize diverse sets of sparse architectures. To this end, we conduct experiments that disentangle the effect of mask learning and parameter optimization and how both benefit from overparameterization. The ability of LRR to flip parameter signs early and stay robust to sign perturbations seems to make it not only more effective in mask identification but also in optimizing diverse sets of masks, including random ones. In support of this hypothesis, we prove in a simplified single hidden neuron setting that LRR succeeds in more cases than IMP, as it can escape initially problematic sign configurations.
2306.12829
Klaus Schoeffmann
Natalia Math\'a, Klaus Schoeffmann, Konstantin Schekotihin, Stephanie Sarny, Doris Putzgruber-Adamitsch, Yosuf El-Shabrawi
Relevance-Based Compression of Cataract Surgery Videos
11 pages, 5 figures, 3 tables
null
null
null
cs.MM
http://creativecommons.org/licenses/by/4.0/
In the last decade, the need for storing videos from cataract surgery has increased significantly. Hospitals continue to improve their imaging and recording devices (e.g., microscopes and cameras used in microscopic surgery, such as ophthalmology) to enhance their post-surgical processing efficiency. The video recordings enable many use-cases after the actual surgery, for example, teaching, documentation, and forensics. However, videos recorded from operations are typically stored in the internal archive without any domain-specific compression, leading to massive storage space consumption. In this work, we propose a relevance-based compression scheme for videos from cataract surgery, which is based on content specifics of particular cataract surgery phases. We evaluate our compression scheme with three state-of-the-art video codecs, namely H.264/AVC, H.265/HEVC, and AV1, and ask medical experts to evaluate the visual quality of encoded videos. Our results show significant savings, in particular up to 95.94% when using H.264/AVC, up to 98.71% when using H.265/HEVC, and up to 98.82% when using AV1.
[ { "created": "Thu, 22 Jun 2023 12:04:37 GMT", "version": "v1" } ]
2023-06-23
[ [ "Mathá", "Natalia", "" ], [ "Schoeffmann", "Klaus", "" ], [ "Schekotihin", "Konstantin", "" ], [ "Sarny", "Stephanie", "" ], [ "Putzgruber-Adamitsch", "Doris", "" ], [ "El-Shabrawi", "Yosuf", "" ] ]
In the last decade, the need for storing videos from cataract surgery has increased significantly. Hospitals continue to improve their imaging and recording devices (e.g., microscopes and cameras used in microscopic surgery, such as ophthalmology) to enhance their post-surgical processing efficiency. The video recordings enable many use-cases after the actual surgery, for example, teaching, documentation, and forensics. However, videos recorded from operations are typically stored in the internal archive without any domain-specific compression, leading to massive storage space consumption. In this work, we propose a relevance-based compression scheme for videos from cataract surgery, which is based on content specifics of particular cataract surgery phases. We evaluate our compression scheme with three state-of-the-art video codecs, namely H.264/AVC, H.265/HEVC, and AV1, and ask medical experts to evaluate the visual quality of encoded videos. Our results show significant savings, in particular up to 95.94% when using H.264/AVC, up to 98.71% when using H.265/HEVC, and up to 98.82% when using AV1.
1812.11826
Cihan Tugrul Cicek
Cihan Tugrul Cicek and Hakan Gultekin and Bulent Tavli and Halim Yanikomeroglu
UAV Base Station Location Optimization for Next Generation Wireless Networks: Overview and Future Research Directions
1st IEEE Unmanned Vehicle Systems (UVS) Conference - Oman Feb.2019
null
10.1109/UVS.2019.8658363
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned aerial vehicle-mounted base stations (UAV-BSs) are expected to become one of the significant components of the Next Generation Wireless Networks (NGWNs). Rapid deployment, mobility, higher chances of unobstructed propagation path, and flexibility features of UAV-BSs have attracted significant attention. Despite the potentially high gains brought by UAV-BSs in NGWNs, they also introduce many challenges. Optimal location assignment to UAV-BSs is arguably the most widely investigated problem in the literature on UAV-BSs in NGWNs. This paper presents a comprehensive survey of the literature on the location optimization of UAV-BSs in NGWNs. A generic optimization framework through a universal Mixed Integer Non-Linear Programming (MINLP) formulation is constructed and the specifications of its constituents are elaborated. The generic problem is classified into a novel taxonomy. Due to the highly challenging nature of the optimization problem, a range of solutions are adopted in the literature, which are also covered under the aforementioned classification. Furthermore, future research directions on UAV-BS location optimization in 5G and beyond non-terrestrial aerial communication systems are discussed.
[ { "created": "Mon, 3 Dec 2018 19:22:03 GMT", "version": "v1" } ]
2020-08-27
[ [ "Cicek", "Cihan Tugrul", "" ], [ "Gultekin", "Hakan", "" ], [ "Tavli", "Bulent", "" ], [ "Yanikomeroglu", "Halim", "" ] ]
Unmanned aerial vehicle-mounted base stations (UAV-BSs) are expected to become one of the significant components of the Next Generation Wireless Networks (NGWNs). Rapid deployment, mobility, higher chances of unobstructed propagation path, and flexibility features of UAV-BSs have attracted significant attention. Despite the potentially high gains brought by UAV-BSs in NGWNs, they also introduce many challenges. Optimal location assignment to UAV-BSs is arguably the most widely investigated problem in the literature on UAV-BSs in NGWNs. This paper presents a comprehensive survey of the literature on the location optimization of UAV-BSs in NGWNs. A generic optimization framework through a universal Mixed Integer Non-Linear Programming (MINLP) formulation is constructed and the specifications of its constituents are elaborated. The generic problem is classified into a novel taxonomy. Due to the highly challenging nature of the optimization problem, a range of solutions are adopted in the literature, which are also covered under the aforementioned classification. Furthermore, future research directions on UAV-BS location optimization in 5G and beyond non-terrestrial aerial communication systems are discussed.
1904.12777
Abdullah Almethen
Abdullah Almethen, Othon Michail, Igor Potapov
Pushing Lines Helps: Efficient Universal Centralised Transformations for Programmable Matter
40 pages, 27 figures
null
null
null
cs.DS cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study a discrete system of entities residing on a two-dimensional square grid. Each entity is modelled as a node occupying a distinct cell of the grid. The set of all $n$ nodes forms initially a connected shape $A$. Entities are equipped with a linear-strength pushing mechanism that can push a whole line of entities, from 1 to $n$, in parallel in a single time-step. A target connected shape $B$ is also provided and the goal is to \emph{transform} $A$ into $B$ via a sequence of line movements. Existing models based on local movement of individual nodes, such as rotating or sliding a single node, can be shown to be special cases of the present model, therefore their (inefficient, $\Theta(n^2)$) \emph{universal transformations} carry over. Our main goal is to investigate whether the parallelism inherent in this new type of movement can be exploited for efficient, i.e., sub-quadratic worst-case, transformations. As a first step towards this, we restrict attention solely to centralised transformations and leave the distributed case as a direction for future research. Our results are positive. By focusing on the apparently hard instance of transforming a diagonal $A$ into a straight line $B$, we first obtain transformations of time $O(n\sqrt{n})$ without and with preserving the connectivity of the shape throughout the transformation. Then, we further improve by providing two $O(n\log n)$-time transformations for this problem. By building upon these ideas, we first manage to develop an $O(n\sqrt{n})$-time universal transformation. Our main result is then an $ O(n \log n) $-time universal transformation. We leave as an interesting open problem a suspected $\Omega(n\log n)$-time lower bound.
[ { "created": "Mon, 29 Apr 2019 15:38:41 GMT", "version": "v1" } ]
2019-04-30
[ [ "Almethen", "Abdullah", "" ], [ "Michail", "Othon", "" ], [ "Potapov", "Igor", "" ] ]
In this paper, we study a discrete system of entities residing on a two-dimensional square grid. Each entity is modelled as a node occupying a distinct cell of the grid. The set of all $n$ nodes forms initially a connected shape $A$. Entities are equipped with a linear-strength pushing mechanism that can push a whole line of entities, from 1 to $n$, in parallel in a single time-step. A target connected shape $B$ is also provided and the goal is to \emph{transform} $A$ into $B$ via a sequence of line movements. Existing models based on local movement of individual nodes, such as rotating or sliding a single node, can be shown to be special cases of the present model, therefore their (inefficient, $\Theta(n^2)$) \emph{universal transformations} carry over. Our main goal is to investigate whether the parallelism inherent in this new type of movement can be exploited for efficient, i.e., sub-quadratic worst-case, transformations. As a first step towards this, we restrict attention solely to centralised transformations and leave the distributed case as a direction for future research. Our results are positive. By focusing on the apparently hard instance of transforming a diagonal $A$ into a straight line $B$, we first obtain transformations of time $O(n\sqrt{n})$ without and with preserving the connectivity of the shape throughout the transformation. Then, we further improve by providing two $O(n\log n)$-time transformations for this problem. By building upon these ideas, we first manage to develop an $O(n\sqrt{n})$-time universal transformation. Our main result is then an $ O(n \log n) $-time universal transformation. We leave as an interesting open problem a suspected $\Omega(n\log n)$-time lower bound.
2309.07694
Shentong Mo
Shentong Mo, Miao Xin
Tree of Uncertain Thoughts Reasoning for Large Language Models
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While the recently introduced Tree of Thoughts (ToT) has heralded advancements in allowing Large Language Models (LLMs) to reason through foresight and backtracking for global decision-making, it has overlooked the inherent local uncertainties in intermediate decision points or "thoughts". These local uncertainties, intrinsic to LLMs given their potential for diverse responses, remain a significant concern in the reasoning process. Addressing this pivotal gap, we introduce the Tree of Uncertain Thoughts (TouT) - a reasoning framework tailored for LLMs. Our TouT effectively leverages Monte Carlo Dropout to quantify uncertainty scores associated with LLMs' diverse local responses at these intermediate steps. By marrying this local uncertainty quantification with global search algorithms, TouT enhances the model's precision in response generation. We substantiate our approach with rigorous experiments on two demanding planning tasks: Game of 24 and Mini Crosswords. The empirical evidence underscores TouT's superiority over both ToT and chain-of-thought prompting methods.
[ { "created": "Thu, 14 Sep 2023 13:14:51 GMT", "version": "v1" } ]
2023-09-15
[ [ "Mo", "Shentong", "" ], [ "Xin", "Miao", "" ] ]
While the recently introduced Tree of Thoughts (ToT) has heralded advancements in allowing Large Language Models (LLMs) to reason through foresight and backtracking for global decision-making, it has overlooked the inherent local uncertainties in intermediate decision points or "thoughts". These local uncertainties, intrinsic to LLMs given their potential for diverse responses, remain a significant concern in the reasoning process. Addressing this pivotal gap, we introduce the Tree of Uncertain Thoughts (TouT) - a reasoning framework tailored for LLMs. Our TouT effectively leverages Monte Carlo Dropout to quantify uncertainty scores associated with LLMs' diverse local responses at these intermediate steps. By marrying this local uncertainty quantification with global search algorithms, TouT enhances the model's precision in response generation. We substantiate our approach with rigorous experiments on two demanding planning tasks: Game of 24 and Mini Crosswords. The empirical evidence underscores TouT's superiority over both ToT and chain-of-thought prompting methods.
1503.08909
George Toderici
Joe Yue-Hei Ng and Matthew Hausknecht and Sudheendra Vijayanarasimhan and Oriol Vinyals and Rajat Monga and George Toderici
Beyond Short Snippets: Deep Networks for Video Classification
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 dataset with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 72.8%).
[ { "created": "Tue, 31 Mar 2015 04:34:12 GMT", "version": "v1" }, { "created": "Mon, 13 Apr 2015 19:44:25 GMT", "version": "v2" } ]
2015-04-14
[ [ "Ng", "Joe Yue-Hei", "" ], [ "Hausknecht", "Matthew", "" ], [ "Vijayanarasimhan", "Sudheendra", "" ], [ "Vinyals", "Oriol", "" ], [ "Monga", "Rajat", "" ], [ "Toderici", "George", "" ] ]
Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 dataset with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 72.8%).
1112.6160
Katharine Turner
Katharine Turner
Cone fields and topological sampling in manifolds with bounded curvature
20 pages, 3 figures
null
null
null
cs.CG math.AT math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Often noisy point clouds are given as an approximation of a particular compact set of interest. A finite point cloud is a compact set. This paper proves a reconstruction theorem which gives a sufficient condition, as a bound on the Hausdorff distance between two compact sets, for when certain offsets of these two sets are homotopic in terms of the absence of {\mu}-critical points in an annular region. Since an offset of a set deformation retracts to the set itself provided that there are no critical points of the distance function nearby, we can use this theorem to show when the offset of a point cloud is homotopy equivalent to the set it is sampled from. The ambient space can be any Riemannian manifold but we focus on ambient manifolds which have nowhere negative curvature. In the process, we prove stability theorems for {\mu}-critical points when the ambient space is a manifold.
[ { "created": "Wed, 28 Dec 2011 18:10:52 GMT", "version": "v1" }, { "created": "Sat, 3 Aug 2013 05:05:22 GMT", "version": "v2" } ]
2013-08-06
[ [ "Turner", "Katharine", "" ] ]
Often noisy point clouds are given as an approximation of a particular compact set of interest. A finite point cloud is a compact set. This paper proves a reconstruction theorem which gives a sufficient condition, as a bound on the Hausdorff distance between two compact sets, for when certain offsets of these two sets are homotopic in terms of the absence of {\mu}-critical points in an annular region. Since an offset of a set deformation retracts to the set itself provided that there are no critical points of the distance function nearby, we can use this theorem to show when the offset of a point cloud is homotopy equivalent to the set it is sampled from. The ambient space can be any Riemannian manifold but we focus on ambient manifolds which have nowhere negative curvature. In the process, we prove stability theorems for {\mu}-critical points when the ambient space is a manifold.
1501.00696
Pauli Miettinen
Saskia Metzler and Pauli Miettinen
Clustering Boolean Tensors
null
null
10.1007/s10618-015-0420-3
null
cs.NA cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tensor factorizations are computationally hard problems, and in particular, are often significantly harder than their matrix counterparts. In the case of Boolean tensor factorizations -- where the input tensor and all the factors are required to be binary and we use Boolean algebra -- much of that hardness comes from the possibility of overlapping components. Yet, in many applications we are perfectly happy to partition at least one of the modes. In this paper we investigate what consequences this partitioning has on the computational complexity of the Boolean tensor factorizations and present a new algorithm for the resulting clustering problem. This algorithm can alternatively be seen as a particularly regularized clustering algorithm that can handle extremely high-dimensional observations. We analyse our algorithms with the goal of maximizing the similarity and argue that this is more meaningful than minimizing the dissimilarity. As a by-product we obtain a PTAS and an efficient 0.828-approximation algorithm for rank-1 binary factorizations. Our algorithm for Boolean tensor clustering achieves high scalability, high similarity, and good generalization to unseen data with both synthetic and real-world data sets.
[ { "created": "Sun, 4 Jan 2015 17:01:03 GMT", "version": "v1" } ]
2016-09-19
[ [ "Metzler", "Saskia", "" ], [ "Miettinen", "Pauli", "" ] ]
Tensor factorizations are computationally hard problems, and in particular, are often significantly harder than their matrix counterparts. In the case of Boolean tensor factorizations -- where the input tensor and all the factors are required to be binary and we use Boolean algebra -- much of that hardness comes from the possibility of overlapping components. Yet, in many applications we are perfectly happy to partition at least one of the modes. In this paper we investigate what consequences this partitioning has for the computational complexity of Boolean tensor factorizations and present a new algorithm for the resulting clustering problem. This algorithm can alternatively be seen as a particularly regularized clustering algorithm that can handle extremely high-dimensional observations. We analyse our algorithms with the goal of maximizing the similarity and argue that this is more meaningful than minimizing the dissimilarity. As a by-product we obtain a PTAS and an efficient 0.828-approximation algorithm for rank-1 binary factorizations. Our algorithm for Boolean tensor clustering achieves high scalability, high similarity, and good generalization to unseen data with both synthetic and real-world data sets.
2211.09198
Christian Rieck
Javier Garcia, Michael Yannuzzi, Peter Kramer, Christian Rieck, S\'andor P. Fekete, Aaron T. Becker
Reconfiguration of a 2D Structure Using Spatio-Temporal Planning and Load Transferring
seven pages, eight figures, one table; revised version; to appear in the proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA 2024)
null
null
null
cs.RO cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present progress on the problem of reconfiguring a 2D arrangement of building material by a cooperative group of robots. These robots must avoid collisions, deadlocks, and are subjected to the constraint of maintaining connectivity of the structure. We develop two reconfiguration methods, one based on spatio-temporal planning, and one based on target swapping, to increase building efficiency. The first method can significantly reduce planning times compared to other multi-robot planners. The second method helps to reduce the amount of time robots spend waiting for paths to be cleared, and the overall distance traveled by the robots.
[ { "created": "Wed, 16 Nov 2022 20:32:53 GMT", "version": "v1" }, { "created": "Thu, 7 Mar 2024 14:08:20 GMT", "version": "v2" } ]
2024-03-08
[ [ "Garcia", "Javier", "" ], [ "Yannuzzi", "Michael", "" ], [ "Kramer", "Peter", "" ], [ "Rieck", "Christian", "" ], [ "Fekete", "Sándor P.", "" ], [ "Becker", "Aaron T.", "" ] ]
We present progress on the problem of reconfiguring a 2D arrangement of building material by a cooperative group of robots. These robots must avoid collisions, deadlocks, and are subjected to the constraint of maintaining connectivity of the structure. We develop two reconfiguration methods, one based on spatio-temporal planning, and one based on target swapping, to increase building efficiency. The first method can significantly reduce planning times compared to other multi-robot planners. The second method helps to reduce the amount of time robots spend waiting for paths to be cleared, and the overall distance traveled by the robots.
1903.04448
Judith Fan
Judith Fan, Robert Hawkins, Mike Wu, Noah Goodman
Pragmatic inference and visual abstraction enable contextual flexibility during visual communication
29 pages; 5 figures; submitted draft of manuscript
null
10.1007/s42113-019-00058-7
null
cs.OH cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual modes of communication are ubiquitous in modern life --- from maps to data plots to political cartoons. Here we investigate drawing, the most basic form of visual communication. Participants were paired in an online environment to play a drawing-based reference game. On each trial, both participants were shown the same four objects, but in different locations. The sketcher's goal was to draw one of these objects so that the viewer could select it from the array. On `close' trials, objects belonged to the same basic-level category, whereas on `far' trials objects belonged to different categories. We found that people exploited shared information to efficiently communicate about the target object: on far trials, sketchers achieved high recognition accuracy while applying fewer strokes, using less ink, and spending less time on their drawings than on close trials. We hypothesized that humans succeed in this task by recruiting two core faculties: visual abstraction, the ability to perceive the correspondence between an object and a drawing of it; and pragmatic inference, the ability to judge what information would help a viewer distinguish the target from distractors. To evaluate this hypothesis, we developed a computational model of the sketcher that embodied both faculties, instantiated as a deep convolutional neural network nested within a probabilistic program. We found that this model fit human data well and outperformed lesioned variants. Together, this work provides the first algorithmically explicit theory of how visual perception and social cognition jointly support contextual flexibility in visual communication.
[ { "created": "Mon, 11 Mar 2019 17:18:16 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2019 01:06:20 GMT", "version": "v2" } ]
2019-12-17
[ [ "Fan", "Judith", "" ], [ "Hawkins", "Robert", "" ], [ "Wu", "Mike", "" ], [ "Goodman", "Noah", "" ] ]
Visual modes of communication are ubiquitous in modern life --- from maps to data plots to political cartoons. Here we investigate drawing, the most basic form of visual communication. Participants were paired in an online environment to play a drawing-based reference game. On each trial, both participants were shown the same four objects, but in different locations. The sketcher's goal was to draw one of these objects so that the viewer could select it from the array. On `close' trials, objects belonged to the same basic-level category, whereas on `far' trials objects belonged to different categories. We found that people exploited shared information to efficiently communicate about the target object: on far trials, sketchers achieved high recognition accuracy while applying fewer strokes, using less ink, and spending less time on their drawings than on close trials. We hypothesized that humans succeed in this task by recruiting two core faculties: visual abstraction, the ability to perceive the correspondence between an object and a drawing of it; and pragmatic inference, the ability to judge what information would help a viewer distinguish the target from distractors. To evaluate this hypothesis, we developed a computational model of the sketcher that embodied both faculties, instantiated as a deep convolutional neural network nested within a probabilistic program. We found that this model fit human data well and outperformed lesioned variants. Together, this work provides the first algorithmically explicit theory of how visual perception and social cognition jointly support contextual flexibility in visual communication.
2208.01182
Yun-Wei Chu
Yun-Wei Chu, Seyyedali Hosseinalipour, Elizabeth Tenorio, Laura Cruz, Kerrie Douglas, Andrew Lan, Christopher Brinton
Mitigating Biases in Student Performance Prediction via Attention-Based Personalized Federated Learning
10 pages, CIKM 2022
null
null
null
cs.LG cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Traditional learning-based approaches to student modeling generalize poorly to underrepresented student groups due to biases in data availability. In this paper, we propose a methodology for predicting student performance from their online learning activities that optimizes inference accuracy over different demographic groups such as race and gender. Building upon recent foundations in federated learning, in our approach, personalized models for individual student subgroups are derived from a global model aggregated across all student models via meta-gradient updates that account for subgroup heterogeneity. To learn better representations of student activity, we augment our approach with a self-supervised behavioral pretraining methodology that leverages multiple modalities of student behavior (e.g., visits to lecture videos and participation on forums), and include a neural network attention mechanism in the model aggregation stage. Through experiments on three real-world datasets from online courses, we demonstrate that our approach obtains substantial improvements over existing student modeling baselines in predicting student learning outcomes for all subgroups. Visual analysis of the resulting student embeddings confirm that our personalization methodology indeed identifies different activity patterns within different subgroups, consistent with its stronger inference ability compared with the baselines.
[ { "created": "Tue, 2 Aug 2022 00:22:20 GMT", "version": "v1" } ]
2022-08-03
[ [ "Chu", "Yun-Wei", "" ], [ "Hosseinalipour", "Seyyedali", "" ], [ "Tenorio", "Elizabeth", "" ], [ "Cruz", "Laura", "" ], [ "Douglas", "Kerrie", "" ], [ "Lan", "Andrew", "" ], [ "Brinton", "Christopher", "" ] ]
Traditional learning-based approaches to student modeling generalize poorly to underrepresented student groups due to biases in data availability. In this paper, we propose a methodology for predicting student performance from their online learning activities that optimizes inference accuracy over different demographic groups such as race and gender. Building upon recent foundations in federated learning, in our approach, personalized models for individual student subgroups are derived from a global model aggregated across all student models via meta-gradient updates that account for subgroup heterogeneity. To learn better representations of student activity, we augment our approach with a self-supervised behavioral pretraining methodology that leverages multiple modalities of student behavior (e.g., visits to lecture videos and participation on forums), and include a neural network attention mechanism in the model aggregation stage. Through experiments on three real-world datasets from online courses, we demonstrate that our approach obtains substantial improvements over existing student modeling baselines in predicting student learning outcomes for all subgroups. Visual analysis of the resulting student embeddings confirms that our personalization methodology indeed identifies different activity patterns within different subgroups, consistent with its stronger inference ability compared with the baselines.
0810.2390
Simone Faro
Simone Faro and Thierry Lecroq
Efficient Pattern Matching on Binary Strings
12 pages
null
null
null
cs.DS cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The binary string matching problem consists in finding all the occurrences of a pattern in a text where both strings are built on a binary alphabet. This is an interesting problem in computer science, since binary data are omnipresent in telecom and computer network applications. Moreover the problem finds applications also in the field of image processing and in pattern matching on compressed texts. Recently it has been shown that adaptations of classical exact string matching algorithms are not very efficient on binary data. In this paper we present two efficient algorithms for the problem adapted to completely avoid any reference to bits allowing to process pattern and text byte by byte. Experimental results show that the new algorithms outperform existing solutions in most cases.
[ { "created": "Tue, 14 Oct 2008 08:44:27 GMT", "version": "v1" }, { "created": "Wed, 15 Oct 2008 12:15:24 GMT", "version": "v2" } ]
2008-10-15
[ [ "Faro", "Simone", "" ], [ "Lecroq", "Thierry", "" ] ]
The binary string matching problem consists in finding all the occurrences of a pattern in a text where both strings are built on a binary alphabet. This is an interesting problem in computer science, since binary data are omnipresent in telecom and computer network applications. Moreover the problem finds applications also in the field of image processing and in pattern matching on compressed texts. Recently it has been shown that adaptations of classical exact string matching algorithms are not very efficient on binary data. In this paper we present two efficient algorithms for the problem that completely avoid any reference to bits, allowing pattern and text to be processed byte by byte. Experimental results show that the new algorithms outperform existing solutions in most cases.
2304.01834
Ntumba Elie Nsampi
Ntumba Elie Nsampi, Adarsh Djeacoumar, Hans-Peter Seidel, Tobias Ritschel, Thomas Leimk\"uhler
Neural Field Convolutions by Repeated Differentiation
null
null
10.1145/3618340
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
Neural fields are evolving towards a general-purpose continuous representation for visual computing. Yet, despite their numerous appealing properties, they are hardly amenable to signal processing. As a remedy, we present a method to perform general continuous convolutions with general continuous signals such as neural fields. Observing that piecewise polynomial kernels reduce to a sparse set of Dirac deltas after repeated differentiation, we leverage convolution identities and train a repeated integral field to efficiently execute large-scale convolutions. We demonstrate our approach on a variety of data modalities and spatially-varying kernels.
[ { "created": "Tue, 4 Apr 2023 14:39:44 GMT", "version": "v1" }, { "created": "Tue, 7 Nov 2023 09:18:32 GMT", "version": "v2" }, { "created": "Sun, 10 Mar 2024 19:35:43 GMT", "version": "v3" }, { "created": "Thu, 4 Apr 2024 18:01:47 GMT", "version": "v4" } ]
2024-04-08
[ [ "Nsampi", "Ntumba Elie", "" ], [ "Djeacoumar", "Adarsh", "" ], [ "Seidel", "Hans-Peter", "" ], [ "Ritschel", "Tobias", "" ], [ "Leimkühler", "Thomas", "" ] ]
Neural fields are evolving towards a general-purpose continuous representation for visual computing. Yet, despite their numerous appealing properties, they are hardly amenable to signal processing. As a remedy, we present a method to perform general continuous convolutions with general continuous signals such as neural fields. Observing that piecewise polynomial kernels reduce to a sparse set of Dirac deltas after repeated differentiation, we leverage convolution identities and train a repeated integral field to efficiently execute large-scale convolutions. We demonstrate our approach on a variety of data modalities and spatially-varying kernels.
1401.3357
Jean Gregoire
Jean Gregoire and Emilio Frazzoli and Arnaud de La Fortelle and Tichakorn Wongpiromsarn
Back-pressure traffic signal control with unknown routing rates
accepted for presentation at IFAC 2014, 6 pages. arXiv admin note: text overlap with arXiv:1309.6484
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The control of a network of signalized intersections is considered. Previous works proposed a feedback control belonging to the family of the so-called back-pressure controls that ensures provably maximum stability given pre-specified routing probabilities. However, this optimal back-pressure controller (BP*) requires routing rates and a measure of the number of vehicles queuing at a node for each possible routing decision. It is an idealistic assumption for our application since vehicles (going straight, turning left/right) are all gathered in the same lane apart from the proximity of the intersection and cameras can only give estimations of the aggregated queue length. In this paper, we present a back-pressure traffic signal controller (BP) that does not require routing rates, it requires only aggregated queue lengths estimation (without direction information) and loop detectors at the stop line for each possible direction. A theoretical result on the Lyapunov drift in heavy load conditions under BP control is provided and tends to indicate that BP should have good stability properties. Simulations confirm this and show that BP stabilizes the queuing network in a significant part of the capacity region.
[ { "created": "Mon, 9 Dec 2013 10:41:10 GMT", "version": "v1" }, { "created": "Mon, 31 Mar 2014 13:48:02 GMT", "version": "v2" } ]
2014-04-01
[ [ "Gregoire", "Jean", "" ], [ "Frazzoli", "Emilio", "" ], [ "de La Fortelle", "Arnaud", "" ], [ "Wongpiromsarn", "Tichakorn", "" ] ]
The control of a network of signalized intersections is considered. Previous works proposed a feedback control belonging to the family of the so-called back-pressure controls that ensures provably maximum stability given pre-specified routing probabilities. However, this optimal back-pressure controller (BP*) requires routing rates and a measure of the number of vehicles queuing at a node for each possible routing decision. This is an idealistic assumption for our application since vehicles (going straight, turning left/right) are all gathered in the same lane apart from the proximity of the intersection and cameras can only give estimates of the aggregated queue length. In this paper, we present a back-pressure traffic signal controller (BP) that does not require routing rates; it requires only aggregated queue length estimates (without direction information) and loop detectors at the stop line for each possible direction. A theoretical result on the Lyapunov drift in heavy load conditions under BP control is provided and tends to indicate that BP should have good stability properties. Simulations confirm this and show that BP stabilizes the queuing network in a significant part of the capacity region.
1611.07541
Harold Gabow
Harold N. Gabow
Data Structures for Weighted Matching and Extensions to $b$-matching and $f$-factors
This paper is a combination of two conference papers: A preliminary version of the data structures part appeared in Proc. 1st Annual ACM-SIAM Symp. on Disc. Algorithms (SODA), 1990. A preliminary version of the extensions part, based on reduction to matching, appeared in Proc. 15th Annual ACM Symp. on Theory of Comp. (STOC), 1983
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper shows the weighted matching problem on general graphs can be solved in time $O(n(m + n\log n))$ for $n$ and $m$ the number of vertices and edges, respectively. This was previously known only for bipartite graphs. The crux is a data structure for blossom creation. It uses a dynamic nearest-common-ancestor algorithm to simplify blossom steps, so they involve only back edges rather than arbitrary nontree edges. The rest of the paper presents direct extensions of Edmonds' blossom algorithm to weighted $b$-matching and $f$-factors. Again the time bound is the one previously known for bipartite graphs: for $b$-matching the time is $O(\min\{b(V),n\log n\}(m + n\log n))$ and for $f$-factors the time is $O(\min\{f(V),m\log n\}( m + n\log n) )$, where $b(V)$ and $f(V)$ denote the sum of all degree constraints. Several immediate applications of the $f$-factor algorithm are given: The generalized shortest path structure of \cite{GS13}, i.e., the analog of the shortest path tree for conservative undirected graphs, is shown to be a version of the blossom structure for $f$-factors. This structure is found in time $O(|N|(m+n\log n))$ for $N$ the set of negative edges ($0<|N|<n$). A shortest $T$-join is found in time $O(n(m+n\log n))$, or $O(|T|(m+n\log n))$ when all costs are nonnegative. These bounds are all slight improvements of previously known ones, and are simply achieved by proper initialization of the $f$-factor algorithm.
[ { "created": "Tue, 22 Nov 2016 21:22:33 GMT", "version": "v1" } ]
2016-11-24
[ [ "Gabow", "Harold N.", "" ] ]
This paper shows the weighted matching problem on general graphs can be solved in time $O(n(m + n\log n))$ for $n$ and $m$ the number of vertices and edges, respectively. This was previously known only for bipartite graphs. The crux is a data structure for blossom creation. It uses a dynamic nearest-common-ancestor algorithm to simplify blossom steps, so they involve only back edges rather than arbitrary nontree edges. The rest of the paper presents direct extensions of Edmonds' blossom algorithm to weighted $b$-matching and $f$-factors. Again the time bound is the one previously known for bipartite graphs: for $b$-matching the time is $O(\min\{b(V),n\log n\}(m + n\log n))$ and for $f$-factors the time is $O(\min\{f(V),m\log n\}( m + n\log n) )$, where $b(V)$ and $f(V)$ denote the sum of all degree constraints. Several immediate applications of the $f$-factor algorithm are given: The generalized shortest path structure of \cite{GS13}, i.e., the analog of the shortest path tree for conservative undirected graphs, is shown to be a version of the blossom structure for $f$-factors. This structure is found in time $O(|N|(m+n\log n))$ for $N$ the set of negative edges ($0<|N|<n$). A shortest $T$-join is found in time $O(n(m+n\log n))$, or $O(|T|(m+n\log n))$ when all costs are nonnegative. These bounds are all slight improvements of previously known ones, and are simply achieved by proper initialization of the $f$-factor algorithm.
2407.15599
Philip Whittington
Philip Whittington
Online String Attractors
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
In today's data-centric world, fast and effective compression of data is paramount. To measure success towards the second goal, Kempa and Prezza [STOC2018] introduce the string attractor, a combinatorial object unifying dictionary-based compression. Given a string $T \in \Sigma^n$, a string attractor ($k$-attractor) is a set of positions $\Gamma\subseteq [1,n]$, such that every distinct substring (of length at most $k$) has at least one occurrence that contains one of the selected positions. String attractors are shown to be approximated by and thus measure the quality of many important dictionary compression algorithms such as Lempel-Ziv 77, the Burrows-Wheeler transform, straight line programs, and macro schemes. In order to handle massive amounts of data, compression often has to be achieved in a streaming fashion. Thus, practically applied compression algorithms, such as Lempel-Ziv 77, have been extensively studied in an online setting. To the best of our knowledge, there has been no such work, and therefore are no theoretical underpinnings, for the string attractor problem. We introduce a natural online variant of both the $k$-attractor and the string attractor problem. First, we show that the Lempel-Ziv factorization corresponds to the best online algorithm for this problem, resulting in an upper bound of $\mathcal{O}(\log(n))$ on the competitive ratio. On the other hand, there are families of sparse strings which have constant-size optimal attractors, e.g., prefixes of the infinite Sturmian words and Thue-Morse words, which are created by iterative application of a morphism. We consider the most famous of these Sturmian words, the Fibonacci word, and show that any online algorithm has a cost growing with the length of the word, for a matching lower bound of $\Omega(\log(n))$. For the online $k$-attractor problem, we show tight (strict) $k$-competitiveness.
[ { "created": "Mon, 22 Jul 2024 12:44:25 GMT", "version": "v1" } ]
2024-07-23
[ [ "Whittington", "Philip", "" ] ]
In today's data-centric world, fast and effective compression of data is paramount. To measure success towards the second goal, Kempa and Prezza [STOC2018] introduce the string attractor, a combinatorial object unifying dictionary-based compression. Given a string $T \in \Sigma^n$, a string attractor ($k$-attractor) is a set of positions $\Gamma\subseteq [1,n]$, such that every distinct substring (of length at most $k$) has at least one occurrence that contains one of the selected positions. String attractors are shown to be approximated by and thus measure the quality of many important dictionary compression algorithms such as Lempel-Ziv 77, the Burrows-Wheeler transform, straight line programs, and macro schemes. In order to handle massive amounts of data, compression often has to be achieved in a streaming fashion. Thus, practically applied compression algorithms, such as Lempel-Ziv 77, have been extensively studied in an online setting. To the best of our knowledge, there has been no such work, and therefore there are no theoretical underpinnings, for the string attractor problem. We introduce a natural online variant of both the $k$-attractor and the string attractor problem. First, we show that the Lempel-Ziv factorization corresponds to the best online algorithm for this problem, resulting in an upper bound of $\mathcal{O}(\log(n))$ on the competitive ratio. On the other hand, there are families of sparse strings which have constant-size optimal attractors, e.g., prefixes of the infinite Sturmian words and Thue-Morse words, which are created by iterative application of a morphism. We consider the most famous of these Sturmian words, the Fibonacci word, and show that any online algorithm has a cost growing with the length of the word, for a matching lower bound of $\Omega(\log(n))$. For the online $k$-attractor problem, we show tight (strict) $k$-competitiveness.
1906.06675
Sadaf Salehkalaibar
Sadaf Salehkalaibar, Mohammad Hossein Yassaee and Vincent Y. F. Tan
Covert Communication Over a Compound Channel
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider fundamental communication limits over a compound channel. Covert communication in the information-theoretic context has been primarily concerned with fundamental limits when the transmitter wishes to communicate to legitimate receiver(s) while ensuring that the communication is not detected by an adversary. This paper, however, considers an alternative, and no less important, setting in which the object to be masked is the state of the compound channel. Such a communication model has applications in the prevention of malicious parties seeking to jam the communication signal when, for example, the signal-to-noise ratio of a wireless channel is found to be low. Our main contribution is the establishment of bounds on the throughput-key region when the covertness constraint is defined in terms of the total variation distance. In addition, for the scenario in which the key length is infinite, we provide a sufficient condition for when the bounds coincide for the scaling of the throughput, which follows the square-root law. Numerical examples, including that of a Gaussian channel, are provided to illustrate our results.
[ { "created": "Sun, 16 Jun 2019 12:59:07 GMT", "version": "v1" } ]
2019-06-18
[ [ "Salehkalaibar", "Sadaf", "" ], [ "Yassaee", "Mohammad Hossein", "" ], [ "Tan", "Vincent Y. F.", "" ] ]
In this paper, we consider fundamental communication limits over a compound channel. Covert communication in the information-theoretic context has been primarily concerned with fundamental limits when the transmitter wishes to communicate to legitimate receiver(s) while ensuring that the communication is not detected by an adversary. This paper, however, considers an alternative, and no less important, setting in which the object to be masked is the state of the compound channel. Such a communication model has applications in the prevention of malicious parties seeking to jam the communication signal when, for example, the signal-to-noise ratio of a wireless channel is found to be low. Our main contribution is the establishment of bounds on the throughput-key region when the covertness constraint is defined in terms of the total variation distance. In addition, for the scenario in which the key length is infinite, we provide a sufficient condition for when the bounds coincide for the scaling of the throughput, which follows the square-root law. Numerical examples, including that of a Gaussian channel, are provided to illustrate our results.
1905.11381
Jirong Yi
Jirong Yi, Hui Xie, Leixin Zhou, Xiaodong Wu, Weiyu Xu, Raghuraman Mudumbai
Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks
44 Pages, 2 Theorems, 35 Figures, 29 Tables. arXiv admin note: substantial text overlap with arXiv:1901.09413
null
null
null
cs.CR cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep-learning based classification algorithms have been shown to be susceptible to adversarial attacks: minor changes to the input of classifiers can dramatically change their outputs, while being imperceptible to humans. In this paper, we present a simple hypothesis about a feature compression property of artificial intelligence (AI) classifiers and present theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations. Drawing on ideas from information and coding theory, we propose a general class of defenses for detecting classifier errors caused by abnormally small input perturbations. We further show theoretical guarantees for the performance of this detection method. We present experimental results with (a) a voice recognition system, and (b) a digit recognition system using the MNIST database, to demonstrate the effectiveness of the proposed defense methods. The ideas in this paper are motivated by a simple analogy between AI classifiers and the standard Shannon model of a communication system.
[ { "created": "Sat, 25 May 2019 21:57:51 GMT", "version": "v1" } ]
2019-05-29
[ [ "Yi", "Jirong", "" ], [ "Xie", "Hui", "" ], [ "Zhou", "Leixin", "" ], [ "Wu", "Xiaodong", "" ], [ "Xu", "Weiyu", "" ], [ "Mudumbai", "Raghuraman", "" ] ]
Deep-learning based classification algorithms have been shown to be susceptible to adversarial attacks: minor changes to the input of classifiers can dramatically change their outputs, while being imperceptible to humans. In this paper, we present a simple hypothesis about a feature compression property of artificial intelligence (AI) classifiers and present theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations. Drawing on ideas from information and coding theory, we propose a general class of defenses for detecting classifier errors caused by abnormally small input perturbations. We further show theoretical guarantees for the performance of this detection method. We present experimental results with (a) a voice recognition system, and (b) a digit recognition system using the MNIST database, to demonstrate the effectiveness of the proposed defense methods. The ideas in this paper are motivated by a simple analogy between AI classifiers and the standard Shannon model of a communication system.
2303.07682
Haobin Tang
Haobin Tang, Xulong Zhang, Jianzong Wang, Ning Cheng, Jing Xiao
QI-TTS: Questioning Intonation Control for Emotional Speech Synthesis
Accepted by ICASSP 2023
null
null
null
cs.SD cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent expressive text to speech (TTS) models focus on synthesizing emotional speech, but some fine-grained styles such as intonation are neglected. In this paper, we propose QI-TTS which aims to better transfer and control intonation to further deliver the speaker's questioning intention while transferring emotion from reference speech. We propose a multi-style extractor to extract style embedding from two different levels. While the sentence level represents emotion, the final syllable level represents intonation. For fine-grained intonation control, we use relative attributes to represent intonation intensity at the syllable level.Experiments have validated the effectiveness of QI-TTS for improving intonation expressiveness in emotional speech synthesis.
[ { "created": "Tue, 14 Mar 2023 07:53:19 GMT", "version": "v1" } ]
2023-03-15
[ [ "Tang", "Haobin", "" ], [ "Zhang", "Xulong", "" ], [ "Wang", "Jianzong", "" ], [ "Cheng", "Ning", "" ], [ "Xiao", "Jing", "" ] ]
Recent expressive text to speech (TTS) models focus on synthesizing emotional speech, but some fine-grained styles such as intonation are neglected. In this paper, we propose QI-TTS which aims to better transfer and control intonation to further deliver the speaker's questioning intention while transferring emotion from reference speech. We propose a multi-style extractor to extract style embedding from two different levels. While the sentence level represents emotion, the final syllable level represents intonation. For fine-grained intonation control, we use relative attributes to represent intonation intensity at the syllable level. Experiments have validated the effectiveness of QI-TTS for improving intonation expressiveness in emotional speech synthesis.
2201.12581
Xiaoyang Li
Xiaoyang Li, Fan Liu, Ziqin Zhou, Guangxu Zhu, Shuai Wang, Kaibin Huang, and Yi Gong
Integrated Sensing, Communication, and Computation Over-the-Air: MIMO Beamforming Design
This paper has been submitted to IEEE for possible publication
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To support the unprecedented growth of the Internet of Things (IoT) applications, tremendous data need to be collected by the IoT devices and delivered to the server for further computation. By utilizing the same signals for both radar sensing and data communication, the integrated sensing and communication (ISAC) technique has broken the barriers between data collection and delivery in the physical layer. By exploiting the analog-wave addition in a multi-access channel, over-the-air computation (AirComp) enables function computation via transmissions in the physical layer. The promising performance of ISAC and AirComp motivates the current work on developing a framework called integrated sensing, communication, and computation over-the-air (ISCCO). The performance metrics of radar sensing and AirComp are evaluated by the mean squared errors of the estimated target response matrix and the received computation results, respectively. The design challenge of MIMO ISCCO lies in the joint optimization of beamformers for sensing, communication, and computation at both the IoT devices and the server, which results in a non-convex problem. To solve this problem, an algorithmic solution based on the technique of semidefinite relaxation is proposed. The use case of target location estimation based on ISCCO is demonstrated in simulation to show the performance superiority.
[ { "created": "Sat, 29 Jan 2022 12:58:11 GMT", "version": "v1" }, { "created": "Tue, 22 Feb 2022 12:09:16 GMT", "version": "v2" } ]
2022-02-23
[ [ "Li", "Xiaoyang", "" ], [ "Liu", "Fan", "" ], [ "Zhou", "Ziqin", "" ], [ "Zhu", "Guangxu", "" ], [ "Wang", "Shuai", "" ], [ "Huang", "Kaibin", "" ], [ "Gong", "Yi", "" ] ]
To support the unprecedented growth of the Internet of Things (IoT) applications, tremendous data need to be collected by the IoT devices and delivered to the server for further computation. By utilizing the same signals for both radar sensing and data communication, the integrated sensing and communication (ISAC) technique has broken the barriers between data collection and delivery in the physical layer. By exploiting the analog-wave addition in a multi-access channel, over-the-air computation (AirComp) enables function computation via transmissions in the physical layer. The promising performance of ISAC and AirComp motivates the current work on developing a framework called integrated sensing, communication, and computation over-the-air (ISCCO). The performance metrics of radar sensing and AirComp are evaluated by the mean squared errors of the estimated target response matrix and the received computation results, respectively. The design challenge of MIMO ISCCO lies in the joint optimization of beamformers for sensing, communication, and computation at both the IoT devices and the server, which results in a non-convex problem. To solve this problem, an algorithmic solution based on the technique of semidefinite relaxation is proposed. The use case of target location estimation based on ISCCO is demonstrated in simulation to show the performance superiority.
2109.05771
Dev Yashpal Sheth
Ananya B. Sai, Tanay Dixit, Dev Yashpal Sheth, Sreyas Mohan, Mitesh M. Khapra
Perturbation CheckLists for Evaluating NLG Evaluation Metrics
Accepted at EMNLP 2021. See https://iitmnlp.github.io/EvalEval/ for our templates and code
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Natural Language Generation (NLG) evaluation is a multifaceted task requiring assessment of multiple desirable criteria, e.g., fluency, coherency, coverage, relevance, adequacy, overall quality, etc. Across existing datasets for 6 NLG tasks, we observe that the human evaluation scores on these multiple criteria are often not correlated. For example, there is a very low correlation between human scores on fluency and data coverage for the task of structured data to text generation. This suggests that the current recipe of proposing new automatic evaluation metrics for NLG by showing that they correlate well with scores assigned by humans for a single criterion (overall quality) alone is inadequate. Indeed, our extensive study involving 25 automatic evaluation metrics across 6 different tasks and 18 different evaluation criteria shows that there is no single metric which correlates well with human scores on all desirable criteria, for most NLG tasks. Given this situation, we propose CheckLists for better design and evaluation of automatic metrics. We design templates which target a specific criterion (e.g., coverage) and perturb the output such that the quality gets affected only along this specific criterion (e.g., the coverage drops). We show that existing evaluation metrics are not robust against even such simple perturbations and disagree with scores assigned by humans to the perturbed output. The proposed templates thus allow for a fine-grained assessment of automatic evaluation metrics exposing their limitations and will facilitate better design, analysis and evaluation of such metrics.
[ { "created": "Mon, 13 Sep 2021 08:26:26 GMT", "version": "v1" } ]
2021-09-14
[ [ "Sai", "Ananya B.", "" ], [ "Dixit", "Tanay", "" ], [ "Sheth", "Dev Yashpal", "" ], [ "Mohan", "Sreyas", "" ], [ "Khapra", "Mitesh M.", "" ] ]
Natural Language Generation (NLG) evaluation is a multifaceted task requiring assessment of multiple desirable criteria, e.g., fluency, coherency, coverage, relevance, adequacy, overall quality, etc. Across existing datasets for 6 NLG tasks, we observe that the human evaluation scores on these multiple criteria are often not correlated. For example, there is a very low correlation between human scores on fluency and data coverage for the task of structured data to text generation. This suggests that the current recipe of proposing new automatic evaluation metrics for NLG by showing that they correlate well with scores assigned by humans for a single criterion (overall quality) alone is inadequate. Indeed, our extensive study involving 25 automatic evaluation metrics across 6 different tasks and 18 different evaluation criteria shows that there is no single metric which correlates well with human scores on all desirable criteria, for most NLG tasks. Given this situation, we propose CheckLists for better design and evaluation of automatic metrics. We design templates which target a specific criterion (e.g., coverage) and perturb the output such that the quality gets affected only along this specific criterion (e.g., the coverage drops). We show that existing evaluation metrics are not robust against even such simple perturbations and disagree with scores assigned by humans to the perturbed output. The proposed templates thus allow for a fine-grained assessment of automatic evaluation metrics exposing their limitations and will facilitate better design, analysis and evaluation of such metrics.
2104.14802
Xinjian Zhang
Xinjian Zhang, Yi Xu, Su Yang, Longwen Gao, Huyang Sun
Dance Generation with Style Embedding: Learning and Transferring Latent Representations of Dance Styles
Submit to IJCAI-21
null
null
null
cs.MM cs.GR cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Choreography refers to creation of dance steps and motions for dances according to the latent knowledge in human mind, where the created dance motions are in general style-specific and consistent. So far, such latent style-specific knowledge about dance styles cannot be represented explicitly in human language and has not yet been learned in previous works on music-to-dance generation tasks. In this paper, we propose a novel music-to-dance synthesis framework with controllable style embeddings. These embeddings are learned representations of style-consistent kinematic abstraction of reference dance clips, which act as controllable factors to impose style constraints on dance generation in a latent manner. Thus, the dance styles can be transferred to dance motions by merely modifying the style embeddings. To support this study, we build a large music-to-dance dataset. The qualitative and quantitative evaluations demonstrate the advantage of our proposed framework, as well as the ability of synthesizing diverse styles of dances from identical music via style embeddings.
[ { "created": "Fri, 30 Apr 2021 07:36:49 GMT", "version": "v1" } ]
2021-05-03
[ [ "Zhang", "Xinjian", "" ], [ "Xu", "Yi", "" ], [ "Yang", "Su", "" ], [ "Gao", "Longwen", "" ], [ "Sun", "Huyang", "" ] ]
Choreography refers to creation of dance steps and motions for dances according to the latent knowledge in human mind, where the created dance motions are in general style-specific and consistent. So far, such latent style-specific knowledge about dance styles cannot be represented explicitly in human language and has not yet been learned in previous works on music-to-dance generation tasks. In this paper, we propose a novel music-to-dance synthesis framework with controllable style embeddings. These embeddings are learned representations of style-consistent kinematic abstraction of reference dance clips, which act as controllable factors to impose style constraints on dance generation in a latent manner. Thus, the dance styles can be transferred to dance motions by merely modifying the style embeddings. To support this study, we build a large music-to-dance dataset. The qualitative and quantitative evaluations demonstrate the advantage of our proposed framework, as well as the ability of synthesizing diverse styles of dances from identical music via style embeddings.
2402.15492
Xuyang Li
Xuyang Li, Hamed Bolandi, Mahdi Masmoudi, Talal Salem, Nizar Lajnef, Vishnu Naresh Boddeti
Mechanics-Informed Autoencoder Enables Automated Detection and Localization of Unforeseen Structural Damage
null
null
null
null
cs.LG eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
Structural health monitoring (SHM) ensures the safety and longevity of structures like buildings and bridges. As the volume and scale of structures and the impact of their failure continue to grow, there is a dire need for SHM techniques that are scalable, inexpensive, can operate passively without human intervention, and are customized for each mechanical structure without the need for complex baseline models. We present MIDAS, a novel "deploy-and-forget" approach for automated detection and localization of damage in structures. It is a synergistic integration of entirely passive measurements from inexpensive sensors, data compression, and a mechanics-informed autoencoder. Once deployed, MIDAS continuously learns and adapts a bespoke baseline model for each structure, learning from its undamaged state's response characteristics. After learning from just 3 hours of data, it can autonomously detect and localize different types of unforeseen damage. Results from numerical simulations and experiments indicate that incorporating the mechanical characteristics into the autoencoder allows for up to a 35% improvement in the detection and localization of minor damage over a standard autoencoder. Our approach holds significant promise for reducing human intervention and inspection costs while enabling proactive and preventive maintenance strategies. This will extend the lifespan, reliability, and sustainability of civil infrastructures.
[ { "created": "Fri, 23 Feb 2024 18:31:02 GMT", "version": "v1" }, { "created": "Thu, 18 Jul 2024 06:29:37 GMT", "version": "v2" } ]
2024-07-19
[ [ "Li", "Xuyang", "" ], [ "Bolandi", "Hamed", "" ], [ "Masmoudi", "Mahdi", "" ], [ "Salem", "Talal", "" ], [ "Lajnef", "Nizar", "" ], [ "Boddeti", "Vishnu Naresh", "" ] ]
Structural health monitoring (SHM) ensures the safety and longevity of structures like buildings and bridges. As the volume and scale of structures and the impact of their failure continue to grow, there is a dire need for SHM techniques that are scalable, inexpensive, can operate passively without human intervention, and are customized for each mechanical structure without the need for complex baseline models. We present MIDAS, a novel "deploy-and-forget" approach for automated detection and localization of damage in structures. It is a synergistic integration of entirely passive measurements from inexpensive sensors, data compression, and a mechanics-informed autoencoder. Once deployed, MIDAS continuously learns and adapts a bespoke baseline model for each structure, learning from its undamaged state's response characteristics. After learning from just 3 hours of data, it can autonomously detect and localize different types of unforeseen damage. Results from numerical simulations and experiments indicate that incorporating the mechanical characteristics into the autoencoder allows for up to a 35% improvement in the detection and localization of minor damage over a standard autoencoder. Our approach holds significant promise for reducing human intervention and inspection costs while enabling proactive and preventive maintenance strategies. This will extend the lifespan, reliability, and sustainability of civil infrastructures.
2212.05885
Haosu Zhou
Haosu Zhou, and Nan Li
Image-based Artificial Intelligence empowered surrogate model and shape morpher for real-time blank shape optimisation in the hot stamping process
32 pages, 11 figures
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
As the complexity of modern manufacturing technologies increases, traditional trial-and-error design, which requires iterative and expensive simulations, becomes unreliable and time-consuming. This difficulty is especially significant for the design of hot-stamped safety-critical components, such as ultra-high-strength-steel (UHSS) B-pillars. To reduce design costs and ensure manufacturability, scalar-based Artificial-Intelligence-empowered surrogate modelling (SAISM) has been investigated and implemented, which can allow real-time manufacturability-constrained structural design optimisation. However, SAISM suffers from low accuracy and generalisability, and usually requires a high volume of training samples. To solve this problem, an image-based Artificial-intelligence-empowered surrogate modelling (IAISM) approach is developed in this research, in combination with an auto-decoder-based blank shape generator. The IAISM, which is based on a Mask-Res-SE-U-Net architecture, is trained to predict the full thinning field of the as-formed component given an arbitrary blank shape. Excellent prediction performance of IAISM is achieved with only 256 training samples, which indicates the small-data learning nature of engineering AI tasks using structured data representations. The trained auto-decoder, trained Mask-Res-SE-U-Net, and Adam optimiser are integrated to conduct blank optimisation by modifying the latent vector. The optimiser can rapidly find blank shapes that satisfy manufacturability criteria. As a high-accuracy and generalisable surrogate modelling and optimisation tool, the proposed pipeline is promising to be integrated into a full-chain digital twin to conduct real-time, multi-objective design optimisation.
[ { "created": "Thu, 1 Dec 2022 20:17:48 GMT", "version": "v1" } ]
2022-12-13
[ [ "Zhou", "Haosu", "" ], [ "Li", "Nan", "" ] ]
As the complexity of modern manufacturing technologies increases, traditional trial-and-error design, which requires iterative and expensive simulations, becomes unreliable and time-consuming. This difficulty is especially significant for the design of hot-stamped safety-critical components, such as ultra-high-strength-steel (UHSS) B-pillars. To reduce design costs and ensure manufacturability, scalar-based Artificial-Intelligence-empowered surrogate modelling (SAISM) has been investigated and implemented, which can allow real-time manufacturability-constrained structural design optimisation. However, SAISM suffers from low accuracy and generalisability, and usually requires a high volume of training samples. To solve this problem, an image-based Artificial-intelligence-empowered surrogate modelling (IAISM) approach is developed in this research, in combination with an auto-decoder-based blank shape generator. The IAISM, which is based on a Mask-Res-SE-U-Net architecture, is trained to predict the full thinning field of the as-formed component given an arbitrary blank shape. Excellent prediction performance of IAISM is achieved with only 256 training samples, which indicates the small-data learning nature of engineering AI tasks using structured data representations. The trained auto-decoder, trained Mask-Res-SE-U-Net, and Adam optimiser are integrated to conduct blank optimisation by modifying the latent vector. The optimiser can rapidly find blank shapes that satisfy manufacturability criteria. As a high-accuracy and generalisable surrogate modelling and optimisation tool, the proposed pipeline is promising to be integrated into a full-chain digital twin to conduct real-time, multi-objective design optimisation.
2101.10492
Dayu Zhu
Dayu Zhu, Wenshan Cai
Fast Non-line-of-sight Imaging with Two-step Deep Remapping
null
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Conventional imaging only records photons directly sent from the object to the detector, while non-line-of-sight (NLOS) imaging takes the indirect light into account. Most NLOS solutions employ a transient scanning process, followed by a physics-based algorithm to reconstruct the NLOS scenes. However, the transient detection requires sophisticated apparatus, with long scanning time and low robustness to the ambient environment, and the reconstruction algorithms are typically time-consuming and computationally expensive. Here we propose a new NLOS solution to address the above defects, with innovations on both equipment and algorithm. We apply inexpensive commercial Lidar for detection, with much higher scanning speed and better compatibility to real-world imaging. Our reconstruction framework is deep learning based, with a generative two-step remapping strategy to guarantee high reconstruction fidelity. The overall detection and reconstruction process allows for millisecond responses, with reconstruction precision of millimeter level. We have experimentally tested the proposed solution on both synthetic and real objects, and further demonstrated our method to be applicable to full-color NLOS imaging.
[ { "created": "Tue, 26 Jan 2021 00:08:54 GMT", "version": "v1" }, { "created": "Fri, 26 Mar 2021 01:23:18 GMT", "version": "v2" } ]
2021-03-29
[ [ "Zhu", "Dayu", "" ], [ "Cai", "Wenshan", "" ] ]
Conventional imaging only records photons directly sent from the object to the detector, while non-line-of-sight (NLOS) imaging takes the indirect light into account. Most NLOS solutions employ a transient scanning process, followed by a physics-based algorithm to reconstruct the NLOS scenes. However, the transient detection requires sophisticated apparatus, with long scanning time and low robustness to the ambient environment, and the reconstruction algorithms are typically time-consuming and computationally expensive. Here we propose a new NLOS solution to address the above defects, with innovations on both equipment and algorithm. We apply inexpensive commercial Lidar for detection, with much higher scanning speed and better compatibility to real-world imaging. Our reconstruction framework is deep learning based, with a generative two-step remapping strategy to guarantee high reconstruction fidelity. The overall detection and reconstruction process allows for millisecond responses, with reconstruction precision of millimeter level. We have experimentally tested the proposed solution on both synthetic and real objects, and further demonstrated our method to be applicable to full-color NLOS imaging.
2302.09124
Vishnu Nair
Vishnu Nair, Hanxiu 'Hazel' Zhu, Brian A. Smith
ImageAssist: Tools for Enhancing Touchscreen-Based Image Exploration Systems for Blind and Low Vision Users
null
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), April 2023
10.1145/3544548.3581302
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blind and low vision (BLV) users often rely on alt text to understand what a digital image is showing. However, recent research has investigated how touch-based image exploration on touchscreens can supplement alt text. Touchscreen-based image exploration systems allow BLV users to deeply understand images while granting a strong sense of agency. Yet, prior work has found that these systems require a lot of effort to use, and little work has been done to explore these systems' bottlenecks on a deeper level and propose solutions to these issues. To address this, we present ImageAssist, a set of three tools that assist BLV users through the process of exploring images by touch -- scaffolding the exploration process. We perform a series of studies with BLV users to design and evaluate ImageAssist, and our findings reveal several implications for image exploration tools for BLV users.
[ { "created": "Fri, 17 Feb 2023 20:16:28 GMT", "version": "v1" } ]
2023-02-21
[ [ "Nair", "Vishnu", "" ], [ "Zhu", "Hanxiu 'Hazel'", "" ], [ "Smith", "Brian A.", "" ] ]
Blind and low vision (BLV) users often rely on alt text to understand what a digital image is showing. However, recent research has investigated how touch-based image exploration on touchscreens can supplement alt text. Touchscreen-based image exploration systems allow BLV users to deeply understand images while granting a strong sense of agency. Yet, prior work has found that these systems require a lot of effort to use, and little work has been done to explore these systems' bottlenecks on a deeper level and propose solutions to these issues. To address this, we present ImageAssist, a set of three tools that assist BLV users through the process of exploring images by touch -- scaffolding the exploration process. We perform a series of studies with BLV users to design and evaluate ImageAssist, and our findings reveal several implications for image exploration tools for BLV users.
1609.00048
Joel Tropp
Joel A. Tropp, Alp Yurtsever, Madeleine Udell, Volkan Cevher
Practical sketching algorithms for low-rank matrix approximation
null
SIAM J. Matrix Analysis and Applications, Vol. 38, num. 4, pp. 1454-1485, Dec. 2017
10.1137/17M1111590
null
cs.NA cs.DS math.NA stat.CO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
[ { "created": "Wed, 31 Aug 2016 21:30:26 GMT", "version": "v1" }, { "created": "Tue, 2 Jan 2018 18:13:40 GMT", "version": "v2" } ]
2018-01-03
[ [ "Tropp", "Joel A.", "" ], [ "Yurtsever", "Alp", "" ], [ "Udell", "Madeleine", "" ], [ "Cevher", "Volkan", "" ] ]
This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
1709.06764
Andres Milioto
Andres Milioto, Philipp Lottes, Cyrill Stachniss
Real-time Semantic Segmentation of Crop and Weed for Precision Agriculture Robots Leveraging Background Knowledge in CNNs
Accepted for publication at IEEE International Conference on Robotics and Automation 2018 (ICRA 2018)
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Precision farming robots, which target to reduce the amount of herbicides that need to be brought out in the fields, must have the ability to identify crops and weeds in real time to trigger weeding actions. In this paper, we address the problem of CNN-based semantic segmentation of crop fields separating sugar beet plants, weeds, and background solely based on RGB data. We propose a CNN that exploits existing vegetation indexes and provides a classification in real time. Furthermore, it can be effectively re-trained to so far unseen fields with a comparably small amount of training data. We implemented and thoroughly evaluated our system on a real agricultural robot operating in different fields in Germany and Switzerland. The results show that our system generalizes well, can operate at around 20Hz, and is suitable for online operation in the fields.
[ { "created": "Wed, 20 Sep 2017 08:24:18 GMT", "version": "v1" }, { "created": "Fri, 2 Mar 2018 15:46:48 GMT", "version": "v2" } ]
2018-03-05
[ [ "Milioto", "Andres", "" ], [ "Lottes", "Philipp", "" ], [ "Stachniss", "Cyrill", "" ] ]
Precision farming robots, which target to reduce the amount of herbicides that need to be brought out in the fields, must have the ability to identify crops and weeds in real time to trigger weeding actions. In this paper, we address the problem of CNN-based semantic segmentation of crop fields separating sugar beet plants, weeds, and background solely based on RGB data. We propose a CNN that exploits existing vegetation indexes and provides a classification in real time. Furthermore, it can be effectively re-trained to so far unseen fields with a comparably small amount of training data. We implemented and thoroughly evaluated our system on a real agricultural robot operating in different fields in Germany and Switzerland. The results show that our system generalizes well, can operate at around 20Hz, and is suitable for online operation in the fields.
2404.17276
Charalampos Symeonidis
Charalampos Symeonidis and Nikos Nikolaidis
Efficient Deterministic Renewable Energy Forecasting Guided by Multiple-Location Weather Data
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Electricity generated from renewable energy sources has been established as an efficient remedy for both energy shortages and the environmental pollution stemming from conventional energy production methods. Solar and wind power are two of the most dominant renewable energy sources. The accurate forecasting of the energy generation of those sources facilitates their integration into electric grids, by minimizing the negative impact of uncertainty regarding their management and operation. This paper proposes a novel methodology for deterministic wind and solar energy generation forecasting for multiple generation sites, utilizing multi-location weather forecasts. The method employs a U-shaped Temporal Convolutional Auto-Encoder (UTCAE) architecture for temporal processing of weather-related and energy-related time-series across each site. The Multi-sized Kernels convolutional Spatio-Temporal Attention (MKST-Attention), inspired by the multi-head scaled-dot product attention mechanism, is also proposed aiming to efficiently transfer temporal patterns from weather data to energy data, without a priori knowledge of the locations of the power stations and the locations of provided weather data. The conducted experimental evaluation on a day-ahead solar and wind energy forecasting scenario on five datasets demonstrated that the proposed method achieves top results, outperforming all competitive time-series forecasting state-of-the-art methods.
[ { "created": "Fri, 26 Apr 2024 09:30:55 GMT", "version": "v1" } ]
2024-04-29
[ [ "Symeonidis", "Charalampos", "" ], [ "Nikolaidis", "Nikos", "" ] ]
Electricity generated from renewable energy sources has been established as an efficient remedy for both energy shortages and the environmental pollution stemming from conventional energy production methods. Solar and wind power are two of the most dominant renewable energy sources. The accurate forecasting of the energy generation of those sources facilitates their integration into electric grids, by minimizing the negative impact of uncertainty regarding their management and operation. This paper proposes a novel methodology for deterministic wind and solar energy generation forecasting for multiple generation sites, utilizing multi-location weather forecasts. The method employs a U-shaped Temporal Convolutional Auto-Encoder (UTCAE) architecture for temporal processing of weather-related and energy-related time-series across each site. The Multi-sized Kernels convolutional Spatio-Temporal Attention (MKST-Attention), inspired by the multi-head scaled-dot product attention mechanism, is also proposed aiming to efficiently transfer temporal patterns from weather data to energy data, without a priori knowledge of the locations of the power stations and the locations of provided weather data. The conducted experimental evaluation on a day-ahead solar and wind energy forecasting scenario on five datasets demonstrated that the proposed method achieves top results, outperforming all competitive time-series forecasting state-of-the-art methods.
2111.11647
Young-Gyu Yoon
Junmo Cho, Dong-Hwan Lee, Young-Gyu Yoon
Inducing Functions through Reinforcement Learning without Task Specification
14 pages
null
null
null
cs.AI cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report a bio-inspired framework for training a neural network through reinforcement learning to induce high level functions within the network. Based on the interpretation that animals have gained their cognitive functions such as object recognition - without ever being specifically trained for - as a result of maximizing their fitness to the environment, we place our agent in an environment where developing certain functions may facilitate decision making. The experimental results show that high level functions, such as image classification and hidden variable estimation, can be naturally and simultaneously induced without any pre-training or specifying them.
[ { "created": "Tue, 23 Nov 2021 04:42:02 GMT", "version": "v1" } ]
2021-11-24
[ [ "Cho", "Junmo", "" ], [ "Lee", "Dong-Hwan", "" ], [ "Yoon", "Young-Gyu", "" ] ]
We report a bio-inspired framework for training a neural network through reinforcement learning to induce high-level functions within the network. Based on the interpretation that animals have gained cognitive functions such as object recognition, without ever being specifically trained for them, as a result of maximizing their fitness to the environment, we place our agent in an environment where developing certain functions may facilitate decision making. The experimental results show that high-level functions, such as image classification and hidden variable estimation, can be naturally and simultaneously induced without any pre-training or explicitly specifying them.
2208.09596
Keyu Wen
Qingrong Cheng, Keyu Wen, Xiaodong Gu
Vision-Language Matching for Text-to-Image Synthesis via Generative Adversarial Networks
14 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-to-image synthesis aims to generate a photo-realistic and semantically consistent image from a specific text description. The images synthesized by off-the-shelf models usually contain limited components compared with the corresponding image and text description, which decreases the image quality and the textual-visual consistency. To address this issue, we propose a novel Vision-Language Matching strategy for text-to-image synthesis, named VLMGAN*, which introduces a dual vision-language matching mechanism to strengthen the image quality and semantic consistency. The dual vision-language matching mechanism considers textual-visual matching between the generated image and the corresponding text description, and visual-visual consistency constraints between the synthesized image and the real image. Given a specific text description, VLMGAN* first encodes it into textual features and then feeds them to a dual vision-language matching-based generative model to synthesize a photo-realistic and textually semantically consistent image. Besides, the popular evaluation metrics for text-to-image synthesis are borrowed from simple image generation, and mainly evaluate the reality and diversity of the synthesized images. Therefore, we introduce a metric named Vision-Language Matching Score (VLMS) to evaluate the performance of text-to-image synthesis, which considers both the image quality and the semantic consistency between the synthesized image and the description. The proposed dual multi-level vision-language matching strategy can be applied to other text-to-image synthesis methods. We implement this strategy on two popular baselines, which are marked with ${\text{VLMGAN}_{+\text{AttnGAN}}}$ and ${\text{VLMGAN}_{+\text{DFGAN}}}$. The experimental results on two widely-used datasets show that the model achieves significant improvements over other state-of-the-art methods.
[ { "created": "Sat, 20 Aug 2022 03:34:04 GMT", "version": "v1" } ]
2022-08-23
[ [ "Cheng", "Qingrong", "" ], [ "Wen", "Keyu", "" ], [ "Gu", "Xiaodong", "" ] ]
Text-to-image synthesis aims to generate a photo-realistic and semantically consistent image from a specific text description. The images synthesized by off-the-shelf models usually contain limited components compared with the corresponding image and text description, which decreases the image quality and the textual-visual consistency. To address this issue, we propose a novel Vision-Language Matching strategy for text-to-image synthesis, named VLMGAN*, which introduces a dual vision-language matching mechanism to strengthen the image quality and semantic consistency. The dual vision-language matching mechanism considers textual-visual matching between the generated image and the corresponding text description, and visual-visual consistency constraints between the synthesized image and the real image. Given a specific text description, VLMGAN* first encodes it into textual features and then feeds them to a dual vision-language matching-based generative model to synthesize a photo-realistic and textually semantically consistent image. Besides, the popular evaluation metrics for text-to-image synthesis are borrowed from simple image generation, and mainly evaluate the reality and diversity of the synthesized images. Therefore, we introduce a metric named Vision-Language Matching Score (VLMS) to evaluate the performance of text-to-image synthesis, which considers both the image quality and the semantic consistency between the synthesized image and the description. The proposed dual multi-level vision-language matching strategy can be applied to other text-to-image synthesis methods. We implement this strategy on two popular baselines, which are marked with ${\text{VLMGAN}_{+\text{AttnGAN}}}$ and ${\text{VLMGAN}_{+\text{DFGAN}}}$. The experimental results on two widely-used datasets show that the model achieves significant improvements over other state-of-the-art methods.
1805.07457
Jyh-Jing Hwang
Jyh-Jing Hwang, Tsung-Wei Ke, Jianbo Shi, Stella X. Yu
Adversarial Structure Matching for Structured Prediction Tasks
In CVPR 2019. Webpage & Code: https://jyhjinghwang.github.io/projects/asm.html
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pixel-wise losses, e.g., cross-entropy or L2, have been widely used in structured prediction tasks as a spatial extension of generic image classification or regression. However, their i.i.d. assumption neglects the structural regularity present in natural images. Various attempts have been made to incorporate structural reasoning, mostly through structure priors in a cooperative way where co-occurring patterns are encouraged. We, on the other hand, approach this problem from an opposing angle and propose a new framework, Adversarial Structure Matching (ASM), for training such structured prediction networks via an adversarial process, in which we train a structure analyzer that provides the supervisory signals, the ASM loss. The structure analyzer is trained to maximize the ASM loss, or to emphasize recurring multi-scale hard negative structural mistakes among co-occurring patterns. On the contrary, the structured prediction network is trained to reduce those mistakes and is thus enabled to distinguish fine-grained structures. As a result, training structured prediction networks using ASM reduces contextual confusion among objects and improves boundary localization. We demonstrate that our ASM outperforms the pixel-wise i.i.d. loss and the structural prior GAN loss on three different structured prediction tasks: semantic segmentation, monocular depth estimation, and surface normal prediction.
[ { "created": "Fri, 18 May 2018 22:03:58 GMT", "version": "v1" }, { "created": "Mon, 21 Oct 2019 16:46:07 GMT", "version": "v2" } ]
2019-10-22
[ [ "Hwang", "Jyh-Jing", "" ], [ "Ke", "Tsung-Wei", "" ], [ "Shi", "Jianbo", "" ], [ "Yu", "Stella X.", "" ] ]
Pixel-wise losses, e.g., cross-entropy or L2, have been widely used in structured prediction tasks as a spatial extension of generic image classification or regression. However, their i.i.d. assumption neglects the structural regularity present in natural images. Various attempts have been made to incorporate structural reasoning, mostly through structure priors in a cooperative way where co-occurring patterns are encouraged. We, on the other hand, approach this problem from an opposing angle and propose a new framework, Adversarial Structure Matching (ASM), for training such structured prediction networks via an adversarial process, in which we train a structure analyzer that provides the supervisory signals, the ASM loss. The structure analyzer is trained to maximize the ASM loss, or to emphasize recurring multi-scale hard negative structural mistakes among co-occurring patterns. On the contrary, the structured prediction network is trained to reduce those mistakes and is thus enabled to distinguish fine-grained structures. As a result, training structured prediction networks using ASM reduces contextual confusion among objects and improves boundary localization. We demonstrate that our ASM outperforms the pixel-wise i.i.d. loss and the structural prior GAN loss on three different structured prediction tasks: semantic segmentation, monocular depth estimation, and surface normal prediction.
1112.0974
Jan Lellmann
Jan Lellmann, Frank Lenzen, and Christoph Schn\"orr
Optimality Bounds for a Variational Relaxation of the Image Partitioning Problem
null
null
null
null
cs.CV math.CO math.FA math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a variational convex relaxation of a class of optimal partitioning and multiclass labeling problems, which has recently proven quite successful and can be seen as a continuous analogue of Linear Programming (LP) relaxation methods for finite-dimensional problems. While for the latter case several optimality bounds are known, to our knowledge no such bounds exist in the continuous setting. We provide such a bound by analyzing a probabilistic rounding method, showing that it is possible to obtain an integral solution of the original partitioning problem from a solution of the relaxed problem with an a priori upper bound on the objective, ensuring the quality of the result from the viewpoint of optimization. The approach has a natural interpretation as an approximate, multiclass variant of the celebrated coarea formula.
[ { "created": "Mon, 5 Dec 2011 16:05:32 GMT", "version": "v1" } ]
2011-12-06
[ [ "Lellmann", "Jan", "" ], [ "Lenzen", "Frank", "" ], [ "Schnörr", "Christoph", "" ] ]
We consider a variational convex relaxation of a class of optimal partitioning and multiclass labeling problems, which has recently proven quite successful and can be seen as a continuous analogue of Linear Programming (LP) relaxation methods for finite-dimensional problems. While for the latter case several optimality bounds are known, to our knowledge no such bounds exist in the continuous setting. We provide such a bound by analyzing a probabilistic rounding method, showing that it is possible to obtain an integral solution of the original partitioning problem from a solution of the relaxed problem with an a priori upper bound on the objective, ensuring the quality of the result from the viewpoint of optimization. The approach has a natural interpretation as an approximate, multiclass variant of the celebrated coarea formula.
2203.09231
Marcos Faundez-Zanuy
Marcos Faundez-Zanuy, Daniel Rodr\'iguez-Porcheron
Speaker recognition using residual signal of linear and nonlinear prediction models
4 pages, published in 5th International Conference on spoken language processing. Vol.2 pp.121-124. ICSLP 1998. ISBN 1-876346-17-5
5th International Conference on spoken language processing. Vol.2 pp.121-124. ICSLP 1998. ISBN 1-876346-17-5
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper discusses the usefulness of the residual signal for speaker recognition. It is shown that the combination of a measure defined over LPCC coefficients and a measure defined over the energy of the residual signal gives rise to an improvement over the classical method, which considers only the LPCC coefficients. If the residual signal is obtained from a linear prediction analysis, the improvement is 2.63% (the error rate drops from 6.31% to 3.68%), and if it is computed through a nonlinear predictive model based on neural nets, the improvement is 3.68%.
[ { "created": "Thu, 17 Mar 2022 10:36:58 GMT", "version": "v1" } ]
2022-03-18
[ [ "Faundez-Zanuy", "Marcos", "" ], [ "Rodríguez-Porcheron", "Daniel", "" ] ]
This paper discusses the usefulness of the residual signal for speaker recognition. It is shown that the combination of a measure defined over LPCC coefficients and a measure defined over the energy of the residual signal gives rise to an improvement over the classical method, which considers only the LPCC coefficients. If the residual signal is obtained from a linear prediction analysis, the improvement is 2.63% (the error rate drops from 6.31% to 3.68%), and if it is computed through a nonlinear predictive model based on neural nets, the improvement is 3.68%.