Dataset schema (column: type, length/size range):

  id:             string, length 9-10
  submitter:      string, length 1-64
  authors:        string, length 4-20.7k
  title:          string, length 4-246
  comments:       string, length 1-523
  journal-ref:    string, length 4-404
  doi:            string, length 11-153
  report-no:      string, length 2-254
  categories:     string, length 5-98
  license:        string, 9 distinct values
  orig_abstract:  string, length 14-3.35k
  versions:       list, 1-60 items
  update_date:    string, length 10
  authors_parsed: list, 1-1.35k items
  abstract:       string, length 11-3.34k
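The header above is the dataset's column schema. As a minimal sketch (plain Python; the `SCHEMA` mapping and helper names are ours, and the sample record is abridged from the 1004.4729 entry below), one way to validate a parsed record against that schema, treating any field as nullable since several records carry null values:

```python
# Validate one arXiv-metadata record against the declared column schema.
# Field names come from the schema header; everything else is illustrative.

# Column name -> expected Python type after JSON parsing.
SCHEMA = {
    "id": str, "submitter": str, "authors": str, "title": str,
    "comments": str, "journal-ref": str, "doi": str, "report-no": str,
    "categories": str, "license": str, "orig_abstract": str,
    "versions": list, "update_date": str, "authors_parsed": list,
    "abstract": str,
}

def validate(record: dict) -> list:
    """Return (field, problem) pairs; an empty list means the record conforms.
    None is accepted for any field, since many records have null values."""
    problems = []
    for field, expected in SCHEMA.items():
        if field not in record:
            problems.append((field, "missing"))
        elif record[field] is not None and not isinstance(record[field], expected):
            problems.append((field, "expected " + expected.__name__))
    return problems

# Abridged sample record (abstract text truncated for illustration).
record = {
    "id": "1004.4729", "submitter": "Venkatesan Chakaravarthy",
    "authors": "Venkatesan T. Chakaravarthy and Vinayaka Pandit and Yogish Sabharwal",
    "title": "On the Complexity of the $k$-Anonymization Problem",
    "comments": "9 pages, 2 figures", "journal-ref": None, "doi": None,
    "report-no": None, "categories": "cs.CC cs.DB",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "orig_abstract": "We study the problem of anonymizing tables ...",
    "versions": [{"created": "Tue, 27 Apr 2010 07:46:35 GMT", "version": "v1"}],
    "update_date": "2010-04-28",
    "authors_parsed": [["Chakaravarthy", "Venkatesan T.", ""]],
    "abstract": "We study the problem of anonymizing tables ...",
}

print(validate(record))  # → []
```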
cs/0408021
Florentin Smarandache
Florentin Smarandache, Jean Dezert
An Algorithm for Quasi-Associative and Quasi-Markovian Rules of Combination in Information Fusion
9 pages
International Journal of Applied Mathematics & Statistics, Vol. 22, No. S11 (Special Issue on Soft Computing), 33-42, 2011
null
null
cs.AI
null
In this paper one proposes a simple algorithm of combining the fusion rules, those rules which first use the conjunctive rule and then the transfer of conflicting mass to the non-empty sets, in such a way that they gain the property of associativity and fulfill the Markovian requirement for dynamic fusion. Also, a new rule, SDL-improved, is presented.
[ { "created": "Sun, 8 Aug 2004 19:41:23 GMT", "version": "v1" }, { "created": "Sat, 14 Aug 2004 16:59:51 GMT", "version": "v2" } ]
2010-09-14
[ [ "Smarandache", "Florentin", "" ], [ "Dezert", "Jean", "" ] ]
In this paper we propose a simple algorithm for combining fusion rules, i.e., rules that first apply the conjunctive rule and then transfer the conflicting mass to the non-empty sets, so that they gain the property of associativity and fulfill the Markovian requirement for dynamic fusion. A new rule, SDL-improved, is also presented.
1907.05364
Felix Batsch
Felix Batsch, Alireza Daneshkhah, Madeline Cheah, Stratis Kanarachos, Anthony Baxendale
Performance Boundary Identification for the Evaluation of Automated Vehicles using Gaussian Process Classification
6 pages, 5 figures, accepted at 2019 IEEE Intelligent Transportation Systems Conference - ITSC 2019, Auckland, New Zealand, October 2019
null
null
null
cs.LG cs.RO stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safety is an essential aspect in the facilitation of automated vehicle deployment. Current testing practices are not enough, and going beyond them leads to infeasible testing requirements, such as needing to drive billions of kilometres on public roads. Automated vehicles are exposed to an indefinite number of scenarios. Handling of the most challenging scenarios should be tested, which leads to the question of how such corner cases can be determined. We propose an approach to identify the performance boundary, where these corner cases are located, using Gaussian Process Classification. We also demonstrate the classification on an exemplary traffic jam approach scenario, showing that it is feasible and would lead to more efficient testing practices.
[ { "created": "Thu, 11 Jul 2019 16:35:59 GMT", "version": "v1" } ]
2019-07-12
[ [ "Batsch", "Felix", "" ], [ "Daneshkhah", "Alireza", "" ], [ "Cheah", "Madeline", "" ], [ "Kanarachos", "Stratis", "" ], [ "Baxendale", "Anthony", "" ] ]
Safety is an essential aspect of automated vehicle deployment. Current testing practices are insufficient, and extending them leads to infeasible requirements, such as needing to drive billions of kilometres on public roads. Automated vehicles are exposed to an indefinite number of scenarios; handling of the most challenging scenarios should be tested, which raises the question of how such corner cases can be determined. We propose an approach to identify the performance boundary, where these corner cases are located, using Gaussian Process Classification. We also demonstrate the classification on an exemplary traffic-jam approach scenario, showing that it is feasible and would lead to more efficient testing practices.
1209.1734
Vishnuvardhan Mannava M.E
Vishnuvardhan Mannava and T. Ramesh
Load Distribution Composite Design Pattern for Genetic Algorithm-Based Autonomic Computing Systems
International Journal on Soft Computing (IJSC), 15 pages, 11 figures
Vishnuvardhan, Mannava., & Ramesh, T. (2012). Load Distribution Composite Design Pattern for Genetic Algorithm-Based Autonomic Computing Systems. International Journal on Soft Computing (IJSC), 3(3), 85-99
10.5121/ijsc
null
cs.SE cs.DC cs.NE
http://creativecommons.org/licenses/by-nc-sa/3.0/
Current autonomic computing systems are ad hoc solutions that are designed and implemented from the scratch. When designing software, in most cases two or more patterns are to be composed to solve a bigger problem. A composite design patterns shows a synergy that makes the composition more than just the sum of its parts which leads to ready-made software architectures. As far as we know, there are no studies on composition of design patterns for autonomic computing domain. In this paper we propose pattern-oriented software architecture for self-optimization in autonomic computing system using design patterns composition and multi objective evolutionary algorithms that software designers and/or programmers can exploit to drive their work. Main objective of the system is to reduce the load in the server by distributing the population to clients. We used Case Based Reasoning, Database Access, and Master Slave design patterns. We evaluate the effectiveness of our architecture with and without design patterns compositions. The use of composite design patterns in the architecture and quantitative measurements are presented. A simple UML class diagram is used to describe the architecture.
[ { "created": "Sat, 8 Sep 2012 17:39:46 GMT", "version": "v1" } ]
2012-09-11
[ [ "Mannava", "Vishnuvardhan", "" ], [ "Ramesh", "T.", "" ] ]
Current autonomic computing systems are ad hoc solutions designed and implemented from scratch. When designing software, two or more patterns must often be composed to solve a larger problem. A composite design pattern exhibits a synergy that makes the composition more than the sum of its parts, leading to ready-made software architectures. As far as we know, there are no studies on the composition of design patterns for the autonomic computing domain. In this paper we propose a pattern-oriented software architecture for self-optimization in autonomic computing systems, using design-pattern composition and multi-objective evolutionary algorithms, which software designers and programmers can exploit to drive their work. The main objective of the system is to reduce the load on the server by distributing the population to clients. We use the Case Based Reasoning, Database Access, and Master-Slave design patterns. We evaluate the effectiveness of our architecture with and without design-pattern composition. The use of composite design patterns in the architecture and quantitative measurements are presented. A simple UML class diagram describes the architecture.
2310.15021
Zhiyuan Fan
Zhiyuan Fan, Shizhu He
Efficient Data Learning for Open Information Extraction with Pre-trained Language Models
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Open Information Extraction (OpenIE) is a fundamental yet challenging task in Natural Language Processing, which involves extracting all triples (subject, predicate, object) from a given sentence. While labeling-based methods have their merits, generation-based techniques offer unique advantages, such as the ability to generate tokens not present in the original sentence. However, these generation-based methods often require a significant amount of training data to learn the task form of OpenIE and substantial training time to overcome slow model convergence due to the order penalty. In this paper, we introduce a novel framework, OK-IE, that ingeniously transforms the task form of OpenIE into the pre-training task form of the T5 model, thereby reducing the need for extensive training data. Furthermore, we introduce an innovative concept of Anchor to control the sequence of model outputs, effectively eliminating the impact of order penalty on model convergence and significantly reducing training time. Experimental results indicate that, compared to previous SOTA methods, OK-IE requires only 1/100 of the training data (900 instances) and 1/120 of the training time (3 minutes) to achieve comparable results.
[ { "created": "Mon, 23 Oct 2023 15:19:24 GMT", "version": "v1" }, { "created": "Wed, 26 Jun 2024 08:23:10 GMT", "version": "v2" } ]
2024-06-27
[ [ "Fan", "Zhiyuan", "" ], [ "He", "Shizhu", "" ] ]
Open Information Extraction (OpenIE) is a fundamental yet challenging task in Natural Language Processing that involves extracting all triples (subject, predicate, object) from a given sentence. While labeling-based methods have their merits, generation-based techniques offer unique advantages, such as the ability to generate tokens not present in the original sentence. However, these generation-based methods often require a significant amount of training data to learn the task format of OpenIE and substantial training time to overcome slow model convergence due to the order penalty. In this paper, we introduce a novel framework, OK-IE, that transforms the task format of OpenIE into the pre-training task format of the T5 model, thereby reducing the need for extensive training data. Furthermore, we introduce the concept of an Anchor to control the sequence of model outputs, effectively eliminating the impact of the order penalty on model convergence and significantly reducing training time. Experimental results indicate that, compared to previous SOTA methods, OK-IE requires only 1/100 of the training data (900 instances) and 1/120 of the training time (3 minutes) to achieve comparable results.
1004.4729
Venkatesan Chakaravarthy
Venkatesan T. Chakaravarthy and Vinayaka Pandit and Yogish Sabharwal
On the Complexity of the $k$-Anonymization Problem
9 pages, 2 figures
null
null
null
cs.CC cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of anonymizing tables containing personal information before releasing them for public use. One of the formulations considered in this context is the $k$-anonymization problem: given a table, suppress a minimum number of cells so that in the transformed table, each row is identical to atleast $k-1$ other rows. The problem is known to be NP-hard and MAXSNP-hard; but in the known reductions, the number of columns in the constructed tables is arbitrarily large. However, in practical settings the number of columns is much smaller. So, we study the complexity of the practical setting in which the number of columns $m$ is small. We show that the problem is NP-hard, even when the number of columns $m$ is a constant ($m=3$). We also prove MAXSNP-hardness for this restricted version and derive that the problem cannot be approximated within a factor of (6238/6237). Our reduction uses alphabets $\Sigma$ of arbitrarily large size. A natural question is whether the problem remains NP-hard when both $m$ and $|\Sigma|$ are small. We prove that the $k$-anonymization problem is in $P$ when both $m$ and $|\Sigma|$ are constants.
[ { "created": "Tue, 27 Apr 2010 07:46:35 GMT", "version": "v1" } ]
2010-04-28
[ [ "Chakaravarthy", "Venkatesan T.", "" ], [ "Pandit", "Vinayaka", "" ], [ "Sabharwal", "Yogish", "" ] ]
We study the problem of anonymizing tables containing personal information before releasing them for public use. One formulation considered in this context is the $k$-anonymization problem: given a table, suppress a minimum number of cells so that in the transformed table, each row is identical to at least $k-1$ other rows. The problem is known to be NP-hard and MAXSNP-hard, but in the known reductions the number of columns in the constructed tables is arbitrarily large, whereas in practical settings the number of columns is much smaller. We therefore study the complexity of the practical setting in which the number of columns $m$ is small. We show that the problem is NP-hard even when the number of columns $m$ is a constant ($m=3$). We also prove MAXSNP-hardness for this restricted version and derive that the problem cannot be approximated within a factor of $6238/6237$. Our reduction uses alphabets $\Sigma$ of arbitrarily large size. A natural question is whether the problem remains NP-hard when both $m$ and $|\Sigma|$ are small. We prove that the $k$-anonymization problem is in $P$ when both $m$ and $|\Sigma|$ are constants.
2302.01724
Qingpeng Cai
Qingpeng Cai, Shuchang Liu, Xueliang Wang, Tianyou Zuo, Wentao Xie, Bin Yang, Dong Zheng, Peng Jiang, Kun Gai
Reinforcing User Retention in a Billion Scale Short Video Recommender System
null
The Web Conference 2023 Industry Track
null
null
cs.LG cs.IR
http://creativecommons.org/publicdomain/zero/1.0/
Recently, short video platforms have achieved rapid user growth by recommending interesting content to users. The objective of the recommendation is to optimize user retention, thereby driving the growth of DAU (Daily Active Users). Retention is a long-term feedback after multiple interactions of users and the system, and it is hard to decompose retention reward to each item or a list of items. Thus traditional point-wise and list-wise models are not able to optimize retention. In this paper, we choose reinforcement learning methods to optimize the retention as they are designed to maximize the long-term performance. We formulate the problem as an infinite-horizon request-based Markov Decision Process, and our objective is to minimize the accumulated time interval of multiple sessions, which is equal to improving the app open frequency and user retention. However, current reinforcement learning algorithms can not be directly applied in this setting due to uncertainty, bias, and long delay time incurred by the properties of user retention. We propose a novel method, dubbed RLUR, to address the aforementioned challenges. Both offline and live experiments show that RLUR can significantly improve user retention. RLUR has been fully launched in Kuaishou app for a long time, and achieves consistent performance improvement on user retention and DAU.
[ { "created": "Fri, 3 Feb 2023 13:25:43 GMT", "version": "v1" }, { "created": "Tue, 7 Feb 2023 04:12:02 GMT", "version": "v2" }, { "created": "Sun, 12 Feb 2023 07:02:19 GMT", "version": "v3" } ]
2023-02-14
[ [ "Cai", "Qingpeng", "" ], [ "Liu", "Shuchang", "" ], [ "Wang", "Xueliang", "" ], [ "Zuo", "Tianyou", "" ], [ "Xie", "Wentao", "" ], [ "Yang", "Bin", "" ], [ "Zheng", "Dong", "" ], [ "Jiang", "Peng", "" ], [ "Gai", "Kun", "" ] ]
Recently, short video platforms have achieved rapid user growth by recommending interesting content to users. The objective of the recommendation is to optimize user retention, thereby driving the growth of DAU (Daily Active Users). Retention is a long-term feedback signal accumulated over multiple interactions between users and the system, and it is hard to decompose the retention reward over individual items or lists of items. Traditional point-wise and list-wise models are therefore unable to optimize retention. In this paper, we choose reinforcement learning methods to optimize retention, as they are designed to maximize long-term performance. We formulate the problem as an infinite-horizon request-based Markov Decision Process, where the objective is to minimize the accumulated time interval between sessions, which is equivalent to improving the app open frequency and user retention. However, current reinforcement learning algorithms cannot be directly applied in this setting due to the uncertainty, bias, and long delay times incurred by the properties of user retention. We propose a novel method, dubbed RLUR, to address these challenges. Both offline and live experiments show that RLUR significantly improves user retention. RLUR has been fully deployed in the Kuaishou app for a long time and achieves consistent performance improvements on user retention and DAU.
2209.08554
Murad Tukan
Murad Tukan, Loay Mualem, Alaa Maalouf
Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Pruning is one of the predominant approaches for compressing deep neural networks (DNNs). Lately, coresets (provable data summarizations) were leveraged for pruning DNNs, adding the advantage of theoretical guarantees on the trade-off between the compression rate and the approximation error. However, coresets in this domain were either data-dependent or generated under restrictive assumptions on both the model's weights and inputs. In real-world scenarios, such assumptions are rarely satisfied, limiting the applicability of coresets. To this end, we suggest a novel and robust framework for computing such coresets under mild assumptions on the model's weights and without any assumption on the training data. The idea is to compute the importance of each neuron in each layer with respect to the output of the following layer. This is achieved by a combination of L\"{o}wner ellipsoid and Caratheodory theorem. Our method is simultaneously data-independent, applicable to various networks and datasets (due to the simplified assumptions), and theoretically supported. Experimental results show that our method outperforms existing coreset based neural pruning approaches across a wide range of networks and datasets. For example, our method achieved a $62\%$ compression rate on ResNet50 on ImageNet with $1.09\%$ drop in accuracy.
[ { "created": "Sun, 18 Sep 2022 12:45:26 GMT", "version": "v1" } ]
2022-09-20
[ [ "Tukan", "Murad", "" ], [ "Mualem", "Loay", "" ], [ "Maalouf", "Alaa", "" ] ]
Pruning is one of the predominant approaches for compressing deep neural networks (DNNs). Lately, coresets (provable data summarizations) have been leveraged for pruning DNNs, adding the advantage of theoretical guarantees on the trade-off between the compression rate and the approximation error. However, coresets in this domain were either data-dependent or generated under restrictive assumptions on both the model's weights and inputs. In real-world scenarios, such assumptions are rarely satisfied, limiting the applicability of coresets. To this end, we suggest a novel and robust framework for computing such coresets under mild assumptions on the model's weights and without any assumption on the training data. The idea is to compute the importance of each neuron in each layer with respect to the output of the following layer. This is achieved by a combination of the L\"{o}wner ellipsoid and Carath\'eodory's theorem. Our method is simultaneously data-independent, applicable to various networks and datasets (due to the simplified assumptions), and theoretically supported. Experimental results show that our method outperforms existing coreset-based neural pruning approaches across a wide range of networks and datasets. For example, our method achieved a $62\%$ compression rate on ResNet50 on ImageNet with a $1.09\%$ drop in accuracy.
1903.08752
Nirupam Gupta
Nirupam Gupta and Nitin H. Vaidya
Byzantine Fault Tolerant Distributed Linear Regression
Manuscript revised by adding a new improved filtering technique and a convergence analysis with noise
null
null
null
cs.LG cs.DC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers the problem of Byzantine fault tolerance in distributed linear regression in a multi-agent system. However, the proposed algorithms are given for a more general class of distributed optimization problems, of which distributed linear regression is a special case. The system comprises of a server and multiple agents, where each agent is holding a certain number of data points and responses that satisfy a linear relationship (could be noisy). The objective of the server is to determine this relationship, given that some of the agents in the system (up to a known number) are Byzantine faulty (aka. actively adversarial). We show that the server can achieve this objective, in a deterministic manner, by robustifying the original distributed gradient descent method using norm based filters, namely 'norm filtering' and 'norm-cap filtering', incurring an additional log-linear computation cost in each iteration. The proposed algorithms improve upon the existing methods on three levels: i) no assumptions are required on the probability distribution of data points, ii) system can be partially asynchronous, and iii) the computational overhead (in order to handle Byzantine faulty agents) is log-linear in number of agents and linear in dimension of data points. The proposed algorithms differ from each other in the assumptions made for their correctness, and the gradient filter they use.
[ { "created": "Wed, 20 Mar 2019 21:37:42 GMT", "version": "v1" }, { "created": "Thu, 4 Apr 2019 15:05:46 GMT", "version": "v2" } ]
2019-04-05
[ [ "Gupta", "Nirupam", "" ], [ "Vaidya", "Nitin H.", "" ] ]
This paper considers the problem of Byzantine fault tolerance in distributed linear regression in a multi-agent system. The proposed algorithms, however, are given for a more general class of distributed optimization problems, of which distributed linear regression is a special case. The system comprises a server and multiple agents, where each agent holds a certain number of data points and responses that satisfy a (possibly noisy) linear relationship. The objective of the server is to determine this relationship, given that some of the agents in the system (up to a known number) are Byzantine faulty (i.e., actively adversarial). We show that the server can achieve this objective, in a deterministic manner, by robustifying the original distributed gradient descent method using norm-based filters, namely 'norm filtering' and 'norm-cap filtering', incurring an additional log-linear computation cost in each iteration. The proposed algorithms improve upon existing methods on three levels: (i) no assumptions are required on the probability distribution of the data points, (ii) the system can be partially asynchronous, and (iii) the computational overhead for handling Byzantine faulty agents is log-linear in the number of agents and linear in the dimension of the data points. The proposed algorithms differ from each other in the assumptions made for their correctness and in the gradient filter they use.
1302.0962
Veenu Mangat
Savinderjit Kaur (Department of Information Technology, UIET, PU, Chandigarh, India), Veenu Mangat (Department of Information Technology, UIET, PU, Chandigarh, India)
Improved Accuracy of PSO and DE using Normalization: an Application to Stock Price Prediction
null
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 3, No. 9, 2012
null
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data Mining is being actively applied to stock market since 1980s. It has been used to predict stock prices, stock indexes, for portfolio management, trend detection and for developing recommender systems. The various algorithms which have been used for the same include ANN, SVM, ARIMA, GARCH etc. Different hybrid models have been developed by combining these algorithms with other algorithms like roughest, fuzzy logic, GA, PSO, DE, ACO etc. to improve the efficiency. This paper proposes DE-SVM model (Differential EvolutionSupport vector Machine) for stock price prediction. DE has been used to select best free parameters combination for SVM to improve results. The paper also compares the results of prediction with the outputs of SVM alone and PSO-SVM model (Particle Swarm Optimization). The effect of normalization of data on the accuracy of prediction has also been studied.
[ { "created": "Tue, 5 Feb 2013 09:01:13 GMT", "version": "v1" } ]
2013-02-06
[ [ "Kaur", "Savinderjit", "", "Department of Information Technology, UIET, PU,\n Chandigarh, India" ], [ "Mangat", "Veenu", "", "Department of Information Technology, UIET,\n PU, Chandigarh, India" ] ]
Data mining has been actively applied to the stock market since the 1980s. It has been used to predict stock prices and stock indexes, for portfolio management, for trend detection, and for developing recommender systems. Algorithms used for this purpose include ANN, SVM, ARIMA, and GARCH. Various hybrid models have been developed by combining these algorithms with others, such as rough sets, fuzzy logic, GA, PSO, DE, and ACO, to improve efficiency. This paper proposes a DE-SVM (Differential Evolution-Support Vector Machine) model for stock price prediction. DE is used to select the best combination of free parameters for SVM to improve results. The paper also compares the prediction results with the outputs of SVM alone and of a PSO-SVM (Particle Swarm Optimization) model. The effect of normalizing the data on prediction accuracy has also been studied.
1802.00047
Rui Zhang
Alexander Shapiro, Yao Xie, Rui Zhang
Matrix completion with deterministic pattern - a geometric perspective
null
null
10.1109/TSP.2018.2885494
null
cs.LG math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the matrix completion problem with a deterministic pattern of observed entries. In this setting, we aim to answer the question: under what condition there will be (at least locally) unique solution to the matrix completion problem, i.e., the underlying true matrix is identifiable. We answer the question from a certain point of view and outline a geometric perspective. We give an algebraically verifiable sufficient condition, which we call the well-posedness condition, for the local uniqueness of MRMC solutions. We argue that this condition is necessary for local stability of MRMC solutions, and we show that the condition is generic using the characteristic rank. We also argue that the low-rank approximation approaches are more stable than MRMC and further propose a sequential statistical testing procedure to determine the "true" rank from observed entries. Finally, we provide numerical examples aimed at verifying validity of the presented theory.
[ { "created": "Wed, 31 Jan 2018 20:03:07 GMT", "version": "v1" }, { "created": "Fri, 2 Feb 2018 02:29:26 GMT", "version": "v2" }, { "created": "Fri, 16 Feb 2018 03:25:36 GMT", "version": "v3" }, { "created": "Wed, 29 Aug 2018 19:03:53 GMT", "version": "v4" } ]
2019-01-30
[ [ "Shapiro", "Alexander", "" ], [ "Xie", "Yao", "" ], [ "Zhang", "Rui", "" ] ]
We consider the matrix completion problem with a deterministic pattern of observed entries. In this setting, we aim to answer the question: under what conditions is there an (at least locally) unique solution to the matrix completion problem, i.e., when is the underlying true matrix identifiable? We answer the question from a certain point of view and outline a geometric perspective. We give an algebraically verifiable sufficient condition, which we call the well-posedness condition, for the local uniqueness of minimum rank matrix completion (MRMC) solutions. We argue that this condition is necessary for local stability of MRMC solutions, and we show that the condition is generic using the characteristic rank. We also argue that low-rank approximation approaches are more stable than MRMC, and we further propose a sequential statistical testing procedure to determine the "true" rank from observed entries. Finally, we provide numerical examples aimed at verifying the validity of the presented theory.
2209.00796
Ling Yang
Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, Ming-Hsuan Yang
Diffusion Models: A Comprehensive Survey of Methods and Applications
54 pages, 18 figures, citing 368 (up-to-date) papers, project: https://github.com/YangLing0818/Diffusion-Models-Papers-Survey-Taxonomy, accepted by ACM Computing Surveys
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion models have emerged as a powerful new family of deep generative models with record-breaking performance in many applications, including image synthesis, video generation, and molecule design. In this survey, we provide an overview of the rapidly expanding body of work on diffusion models, categorizing the research into three key areas: efficient sampling, improved likelihood estimation, and handling data with special structures. We also discuss the potential for combining diffusion models with other generative models for enhanced results. We further review the wide-ranging applications of diffusion models in fields spanning from computer vision, natural language generation, temporal data modeling, to interdisciplinary applications in other scientific disciplines. This survey aims to provide a contextualized, in-depth look at the state of diffusion models, identifying the key areas of focus and pointing to potential areas for further exploration. Github: https://github.com/YangLing0818/Diffusion-Models-Papers-Survey-Taxonomy.
[ { "created": "Fri, 2 Sep 2022 02:59:10 GMT", "version": "v1" }, { "created": "Tue, 6 Sep 2022 02:20:10 GMT", "version": "v2" }, { "created": "Wed, 7 Sep 2022 07:55:59 GMT", "version": "v3" }, { "created": "Fri, 9 Sep 2022 03:35:30 GMT", "version": "v4" }, { "created": "Mon, 12 Sep 2022 08:10:10 GMT", "version": "v5" }, { "created": "Thu, 15 Sep 2022 03:43:06 GMT", "version": "v6" }, { "created": "Mon, 3 Oct 2022 06:52:52 GMT", "version": "v7" }, { "created": "Mon, 17 Oct 2022 06:47:57 GMT", "version": "v8" }, { "created": "Mon, 24 Oct 2022 01:54:03 GMT", "version": "v9" }, { "created": "Thu, 23 Mar 2023 08:25:32 GMT", "version": "v10" }, { "created": "Wed, 11 Oct 2023 01:33:17 GMT", "version": "v11" }, { "created": "Tue, 6 Feb 2024 10:43:20 GMT", "version": "v12" }, { "created": "Mon, 24 Jun 2024 01:00:54 GMT", "version": "v13" } ]
2024-06-25
[ [ "Yang", "Ling", "" ], [ "Zhang", "Zhilong", "" ], [ "Song", "Yang", "" ], [ "Hong", "Shenda", "" ], [ "Xu", "Runsheng", "" ], [ "Zhao", "Yue", "" ], [ "Zhang", "Wentao", "" ], [ "Cui", "Bin", "" ], [ "Yang", "Ming-Hsuan", "" ] ]
Diffusion models have emerged as a powerful new family of deep generative models with record-breaking performance in many applications, including image synthesis, video generation, and molecule design. In this survey, we provide an overview of the rapidly expanding body of work on diffusion models, categorizing the research into three key areas: efficient sampling, improved likelihood estimation, and handling data with special structures. We also discuss the potential for combining diffusion models with other generative models for enhanced results. We further review the wide-ranging applications of diffusion models in fields spanning from computer vision, natural language generation, temporal data modeling, to interdisciplinary applications in other scientific disciplines. This survey aims to provide a contextualized, in-depth look at the state of diffusion models, identifying the key areas of focus and pointing to potential areas for further exploration. Github: https://github.com/YangLing0818/Diffusion-Models-Papers-Survey-Taxonomy.
1803.03571
Rion Brattig Correia
Rion Brattig Correia and Luciana P. de Ara\'ujo and Mauro M. Mattos and Luis M. Rocha
City-wide Analysis of Electronic Health Records Reveals Gender and Age Biases in the Administration of Known Drug-Drug Interactions
null
npj Digit. Med. 2, 74 (2019)
10.1038/s41746-019-0141-x
null
cs.SI cs.CY cs.IR q-bio.QM stat.ML
http://creativecommons.org/licenses/by/4.0/
The occurrence of drug-drug-interactions (DDI) from multiple drug dispensations is a serious problem, both for individuals and health-care systems, since patients with complications due to DDI are likely to reenter the system at a costlier level. We present a large-scale longitudinal study (18 months) of the DDI phenomenon at the primary- and secondary-care level using electronic health records (EHR) from the city of Blumenau in Southern Brazil (pop. $\approx 340,000$). We found that 181 distinct drug pairs known to interact were dispensed concomitantly to 12\% of the patients in the city's public health-care system. Further, 4\% of the patients were dispensed drug pairs that are likely to result in major adverse drug reactions (ADR)---with costs estimated to be much larger than previously reported in smaller studies. The large-scale analysis reveals that women have a 60\% increased risk of DDI as compared to men; the increase becomes 90\% when considering only DDI known to lead to major ADR. Furthermore, DDI risk increases substantially with age; patients aged 70-79 years have a 34\% risk of DDI when they are dispensed two or more drugs concomitantly. Interestingly, a statistical null model demonstrates that age- and female-specific risks from increased polypharmacy fail by far to explain the observed DDI risks in those populations, suggesting unknown social or biological causes. We also provide a network visualization of drugs and demographic factors that characterize the DDI phenomenon and demonstrate that accurate DDI prediction can be included in healthcare and public-health management, to reduce DDI-related ADR and costs.
[ { "created": "Fri, 9 Mar 2018 15:45:12 GMT", "version": "v1" }, { "created": "Fri, 7 Dec 2018 21:57:21 GMT", "version": "v2" }, { "created": "Wed, 20 Feb 2019 15:29:36 GMT", "version": "v3" }, { "created": "Thu, 2 Jan 2020 14:08:43 GMT", "version": "v4" } ]
2020-01-03
[ [ "Correia", "Rion Brattig", "" ], [ "de Araújo", "Luciana P.", "" ], [ "Mattos", "Mauro M.", "" ], [ "Rocha", "Luis M.", "" ] ]
The occurrence of drug-drug interactions (DDI) from multiple drug dispensations is a serious problem, both for individuals and health-care systems, since patients with complications due to DDI are likely to reenter the system at a costlier level. We present a large-scale longitudinal study (18 months) of the DDI phenomenon at the primary- and secondary-care level using electronic health records (EHR) from the city of Blumenau in Southern Brazil (pop. $\approx 340,000$). We found that 181 distinct drug pairs known to interact were dispensed concomitantly to 12\% of the patients in the city's public health-care system. Further, 4\% of the patients were dispensed drug pairs that are likely to result in major adverse drug reactions (ADR)---with costs estimated to be much larger than previously reported in smaller studies. The large-scale analysis reveals that women have a 60\% increased risk of DDI as compared to men; the increase becomes 90\% when considering only DDI known to lead to major ADR. Furthermore, DDI risk increases substantially with age; patients aged 70-79 years have a 34\% risk of DDI when they are dispensed two or more drugs concomitantly. Interestingly, a statistical null model demonstrates that age- and female-specific risks from increased polypharmacy fail by far to explain the observed DDI risks in those populations, suggesting unknown social or biological causes. We also provide a network visualization of drugs and demographic factors that characterize the DDI phenomenon and demonstrate that accurate DDI prediction can be included in healthcare and public-health management, to reduce DDI-related ADR and costs.
2010.15090
Vishal Sunder
Vishal Sunder and Eric Fosler-Lussier
Handling Class Imbalance in Low-Resource Dialogue Systems by Combining Few-Shot Classification and Interpolation
5 pages, 4 figures, 3 tables
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Utterance classification performance in low-resource dialogue systems is constrained by an inevitably high degree of data imbalance in class labels. We present a new end-to-end pairwise learning framework that is designed specifically to tackle this phenomenon by inducing a few-shot classification capability in the utterance representations and augmenting data through an interpolation of utterance representations. Our approach is a general purpose training methodology, agnostic to the neural architecture used for encoding utterances. We show significant improvements in macro-F1 score over standard cross-entropy training for three different neural architectures, demonstrating improvements on a Virtual Patient dialogue dataset as well as a low-resourced emulation of the Switchboard dialogue act classification dataset.
[ { "created": "Wed, 28 Oct 2020 17:05:24 GMT", "version": "v1" } ]
2020-10-29
[ [ "Sunder", "Vishal", "" ], [ "Fosler-Lussier", "Eric", "" ] ]
Utterance classification performance in low-resource dialogue systems is constrained by an inevitably high degree of data imbalance in class labels. We present a new end-to-end pairwise learning framework that is designed specifically to tackle this phenomenon by inducing a few-shot classification capability in the utterance representations and augmenting data through an interpolation of utterance representations. Our approach is a general purpose training methodology, agnostic to the neural architecture used for encoding utterances. We show significant improvements in macro-F1 score over standard cross-entropy training for three different neural architectures, demonstrating improvements on a Virtual Patient dialogue dataset as well as a low-resourced emulation of the Switchboard dialogue act classification dataset.
2203.07920
Jingxiong Gao
Zichen Xu, Yunxiao Du, Kanqi Zhang, Jiacheng Huang, Jie Liu, Jingxiong Gao, Christopher Stewart
Cost-effective BlackWater Raft on Highly Unreliable Nodes at Scale Out
18 pages, 26 figures, We are already revising the Camera Ready version of IEEE Transaction on Cloud Computing
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Raft algorithm maintains strong consistency across data replicas in the Cloud. This algorithm divides nodes into leaders and followers, to satisfy read/write requests spanning geo-diverse sites. As workload increases, Raft should provide scale-out performance in proportion. However, traditional scale-out techniques encounter bottlenecks in Raft, and when the provisioned sites exhaust local resources, the performance loss grows exponentially. To provide scalability in Raft, this paper proposes a cost-effective mechanism for elastic auto-scaling in Raft, called BlackWater-Raft or BW-Raft. BW-Raft extends the original Raft with the following abstractions: (1) secretary nodes that take over expensive log synchronization operations from the leader, relaxing the performance constraints on locks; (2) massive low-cost observer nodes that handle reads only, improving throughput for typical data-intensive services. These abstractions are stateless, allowing elastic scale-out on unreliable yet cheap spot instances. In theory, we demonstrate that BW-Raft can maintain Raft's strong consistency guarantees when scaling out, handling a 50X increase in the number of nodes compared to the original Raft. We have prototyped BW-Raft on key-value services and evaluated it against many state-of-the-art systems on Amazon EC2 and Alibaba Cloud. Our results show that within the same budget, BW-Raft's resource footprint increments are 5-7X smaller than Multi-Raft's, and 2X better than the original Raft's. Using spot instances, BW-Raft can reduce costs by 84.5\% compared to Multi-Raft. In real-world experiments, BW-Raft improves the goodput of the 95th-percentile SLO by 9.4X, thus serving as an alternative for services scaling out with strong consistency.
[ { "created": "Tue, 15 Mar 2022 14:00:17 GMT", "version": "v1" } ]
2022-03-16
[ [ "Xu", "Zichen", "" ], [ "Du", "Yunxiao", "" ], [ "Zhang", "Kanqi", "" ], [ "Huang", "Jiacheng", "" ], [ "Liu", "Jie", "" ], [ "Gao", "Jingxiong", "" ], [ "Stewart", "Christopher", "" ] ]
The Raft algorithm maintains strong consistency across data replicas in the Cloud. This algorithm divides nodes into leaders and followers, to satisfy read/write requests spanning geo-diverse sites. As workload increases, Raft should provide scale-out performance in proportion. However, traditional scale-out techniques encounter bottlenecks in Raft, and when the provisioned sites exhaust local resources, the performance loss grows exponentially. To provide scalability in Raft, this paper proposes a cost-effective mechanism for elastic auto-scaling in Raft, called BlackWater-Raft or BW-Raft. BW-Raft extends the original Raft with the following abstractions: (1) secretary nodes that take over expensive log synchronization operations from the leader, relaxing the performance constraints on locks; (2) massive low-cost observer nodes that handle reads only, improving throughput for typical data-intensive services. These abstractions are stateless, allowing elastic scale-out on unreliable yet cheap spot instances. In theory, we demonstrate that BW-Raft can maintain Raft's strong consistency guarantees when scaling out, handling a 50X increase in the number of nodes compared to the original Raft. We have prototyped BW-Raft on key-value services and evaluated it against many state-of-the-art systems on Amazon EC2 and Alibaba Cloud. Our results show that within the same budget, BW-Raft's resource footprint increments are 5-7X smaller than Multi-Raft's, and 2X better than the original Raft's. Using spot instances, BW-Raft can reduce costs by 84.5\% compared to Multi-Raft. In real-world experiments, BW-Raft improves the goodput of the 95th-percentile SLO by 9.4X, thus serving as an alternative for services scaling out with strong consistency.
1804.01307
Anjany Kumar Sekuboyina
Anjany Sekuboyina, Markus Rempfler, Jan Kuka\v{c}ka, Giles Tetteh, Alexander Valentinitsch, Jan S. Kirschke, and Bjoern H. Menze
Btrfly Net: Vertebrae Labelling with Energy-based Adversarial Learning of Local Spine Prior
Published as conference paper in Medical Image Computing and Computer Assisted Intervention - MICCAI 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust localisation and identification of vertebrae is essential for automated spine analysis. The contribution of this work to the task is two-fold: (1) Inspired by the human expert, we hypothesise that a sagittal and coronal reformation of the spine contain sufficient information for labelling the vertebrae. Thereby, we propose a butterfly-shaped network architecture (termed Btrfly Net) that efficiently combines the information across reformations. (2) Underpinning the Btrfly net, we present an energy-based adversarial training regime that encodes local spine structure as an anatomical prior into the network, thereby enabling it to achieve state-of-the-art performance in all standard metrics on a benchmark dataset of 302 scans without any post-processing during inference.
[ { "created": "Wed, 4 Apr 2018 09:00:33 GMT", "version": "v1" }, { "created": "Tue, 13 Nov 2018 16:20:52 GMT", "version": "v2" } ]
2018-11-14
[ [ "Sekuboyina", "Anjany", "" ], [ "Rempfler", "Markus", "" ], [ "Kukačka", "Jan", "" ], [ "Tetteh", "Giles", "" ], [ "Valentinitsch", "Alexander", "" ], [ "Kirschke", "Jan S.", "" ], [ "Menze", "Bjoern H.", "" ] ]
Robust localisation and identification of vertebrae is essential for automated spine analysis. The contribution of this work to the task is two-fold: (1) Inspired by the human expert, we hypothesise that a sagittal and coronal reformation of the spine contain sufficient information for labelling the vertebrae. Thereby, we propose a butterfly-shaped network architecture (termed Btrfly Net) that efficiently combines the information across reformations. (2) Underpinning the Btrfly net, we present an energy-based adversarial training regime that encodes local spine structure as an anatomical prior into the network, thereby enabling it to achieve state-of-the-art performance in all standard metrics on a benchmark dataset of 302 scans without any post-processing during inference.
1909.08866
Dimitri Gominski
Dimitri Gominski (LaSTIG), Martyna Poreba (LaSTIG), Val\'erie Gouet-Brunet (LaSTIG), Liming Chen (LaSTIG)
Challenging deep image descriptors for retrieval in heterogeneous iconographic collections
SUMAC '19, 2019
null
10.1145/3347317.3357246
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article proposes to study the behavior of recent and efficient state-of-the-art deep-learning based image descriptors for content-based image retrieval, facing a panel of complex variations appearing in heterogeneous image datasets, in particular in cultural collections that may involve multi-source, multi-date and multi-view contents.
[ { "created": "Thu, 19 Sep 2019 08:54:51 GMT", "version": "v1" } ]
2019-09-20
[ [ "Gominski", "Dimitri", "", "LaSTIG" ], [ "Poreba", "Martyna", "", "LaSTIG" ], [ "Gouet-Brunet", "Valérie", "", "LaSTIG" ], [ "Chen", "Liming", "", "LaSTIG" ] ]
This article proposes to study the behavior of recent and efficient state-of-the-art deep-learning based image descriptors for content-based image retrieval, facing a panel of complex variations appearing in heterogeneous image datasets, in particular in cultural collections that may involve multi-source, multi-date and multi-view contents.
1511.08507
Steffen Wendzel
Steffen Wendzel, Carolin Palmer
Creativity in Mind: Evaluating and Maintaining Advances in Network Steganographic Research
to appear in Journal of Universal Computer Science (J.UCS)
Journal of Universal Computer Science, Vol. 21(12), 2015
10.3217/jucs-021-12-1684
null
cs.MM cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The research discipline of network steganography deals with the hiding of information within network transmissions, e.g. to transfer illicit information in networks with Internet censorship. The last decades of research on network steganography led to more than a hundred techniques for hiding data in network transmissions. However, previous research has shown that most of these hiding techniques are either based on the same idea or introduce limited novelty, enabling the application of existing countermeasures. In this paper, we provide a link between the field of creativity and network steganographic research. We propose a framework and a metric to help evaluate the creativity bound to a given hiding technique. This way, we support two sides of the scientific peer review process as both authors and reviewers can use our framework to analyze the novelty and applicability of hiding techniques. At the same time, we contribute to a uniform terminology in network steganography.
[ { "created": "Thu, 26 Nov 2015 21:07:05 GMT", "version": "v1" } ]
2016-11-22
[ [ "Wendzel", "Steffen", "" ], [ "Palmer", "Carolin", "" ] ]
The research discipline of network steganography deals with the hiding of information within network transmissions, e.g. to transfer illicit information in networks with Internet censorship. The last decades of research on network steganography led to more than a hundred techniques for hiding data in network transmissions. However, previous research has shown that most of these hiding techniques are either based on the same idea or introduce limited novelty, enabling the application of existing countermeasures. In this paper, we provide a link between the field of creativity and network steganographic research. We propose a framework and a metric to help evaluate the creativity bound to a given hiding technique. This way, we support two sides of the scientific peer review process as both authors and reviewers can use our framework to analyze the novelty and applicability of hiding techniques. At the same time, we contribute to a uniform terminology in network steganography.
2109.09526
Stavros Shiaeles Dr
Stavros Shiaeles, Nicholas Kolokotronis, Emanuele Bellini
IoT Vulnerability Data Crawling and Analysis
2019 IEEE World Congress on Services (SERVICES)
null
10.1109/SERVICES.2019.00028
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Internet of Things (IoT) is a whole new ecosystem comprised of heterogeneous connected devices - i.e. computers, laptops, smart-phones and tablets as well as embedded devices and sensors - that communicate to deliver capabilities making our living, cities, transport, energy, and many other areas more intelligent. The main concerns raised by the IoT ecosystem are the devices' poor support for patching/updating and their poor on-board computational power. A number of issues stem from this: inherent vulnerabilities and the inability to detect and defend against external attacks. Also, due to the nature of their operation, the devices tend to be rather open to communication, which makes attacks easy to spread once they reach a network. The aim of this research is to investigate whether it is possible to extract useful results regarding attack trends and to predict them, before it is too late, by crawling the Deep/Dark and Surface web. The results of this work show that it is possible to identify the trends and act proactively in order to protect the IoT ecosystem.
[ { "created": "Mon, 20 Sep 2021 13:20:51 GMT", "version": "v1" } ]
2021-09-21
[ [ "Shiaeles", "Stavros", "" ], [ "Kolokotronis", "Nicholas", "" ], [ "Bellini", "Emanuele", "" ] ]
Internet of Things (IoT) is a whole new ecosystem comprised of heterogeneous connected devices - i.e. computers, laptops, smart-phones and tablets as well as embedded devices and sensors - that communicate to deliver capabilities making our living, cities, transport, energy, and many other areas more intelligent. The main concerns raised by the IoT ecosystem are the devices' poor support for patching/updating and their poor on-board computational power. A number of issues stem from this: inherent vulnerabilities and the inability to detect and defend against external attacks. Also, due to the nature of their operation, the devices tend to be rather open to communication, which makes attacks easy to spread once they reach a network. The aim of this research is to investigate whether it is possible to extract useful results regarding attack trends and to predict them, before it is too late, by crawling the Deep/Dark and Surface web. The results of this work show that it is possible to identify the trends and act proactively in order to protect the IoT ecosystem.
1912.04822
Jocelyn Sunseri
Jocelyn Sunseri and David Ryan Koes
libmolgrid: GPU Accelerated Molecular Gridding for Deep Learning Applications
null
null
null
null
cs.LG physics.chem-ph q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are many ways to represent a molecule as input to a machine learning model and each is associated with loss and retention of certain kinds of information. In the interest of preserving three-dimensional spatial information, including bond angles and torsions, we have developed libmolgrid, a general-purpose library for representing three-dimensional molecules using multidimensional arrays. This library also provides functionality for composing batches of data suited to machine learning workflows, including data augmentation, class balancing, and example stratification according to a regression variable or data subgroup, and it further supports temporal and spatial recurrences over that data to facilitate work with recurrent neural networks, dynamical data, and size extensive modeling. It was designed for seamless integration with popular deep learning frameworks, including Caffe, PyTorch, and Keras, providing good performance by leveraging graphical processing units (GPUs) for computationally-intensive tasks and efficient memory usage through the use of memory views over preallocated buffers. libmolgrid is a free and open source project that is actively supported, serving the growing need in the molecular modeling community for tools that streamline the process of data ingestion, representation construction, and principled machine learning model development.
[ { "created": "Tue, 10 Dec 2019 17:03:56 GMT", "version": "v1" } ]
2019-12-11
[ [ "Sunseri", "Jocelyn", "" ], [ "Koes", "David Ryan", "" ] ]
There are many ways to represent a molecule as input to a machine learning model and each is associated with loss and retention of certain kinds of information. In the interest of preserving three-dimensional spatial information, including bond angles and torsions, we have developed libmolgrid, a general-purpose library for representing three-dimensional molecules using multidimensional arrays. This library also provides functionality for composing batches of data suited to machine learning workflows, including data augmentation, class balancing, and example stratification according to a regression variable or data subgroup, and it further supports temporal and spatial recurrences over that data to facilitate work with recurrent neural networks, dynamical data, and size extensive modeling. It was designed for seamless integration with popular deep learning frameworks, including Caffe, PyTorch, and Keras, providing good performance by leveraging graphical processing units (GPUs) for computationally-intensive tasks and efficient memory usage through the use of memory views over preallocated buffers. libmolgrid is a free and open source project that is actively supported, serving the growing need in the molecular modeling community for tools that streamline the process of data ingestion, representation construction, and principled machine learning model development.
1603.00831
Anton Milan
Anton Milan, Laura Leal-Taixe, Ian Reid, Stefan Roth, Konrad Schindler
MOT16: A Benchmark for Multi-Object Tracking
arXiv admin note: substantial text overlap with arXiv:1504.01942
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standardized benchmarks are crucial for the majority of computer vision applications. Although leaderboards and ranking tables should not be over-claimed, benchmarks often provide the most objective measure of performance and are therefore important guides for research. Recently, a new benchmark for Multiple Object Tracking, MOTChallenge, was launched with the goal of collecting existing and new data and creating a framework for the standardized evaluation of multiple object tracking methods. The first release of the benchmark focuses on multiple people tracking, since pedestrians are by far the most studied object in the tracking community. This paper accompanies a new release of the MOTChallenge benchmark. Unlike the initial release, all videos of MOT16 have been carefully annotated following a consistent protocol. Moreover, it not only offers a significant increase in the number of labeled boxes, but also provides multiple object classes beside pedestrians and the level of visibility for every single object of interest.
[ { "created": "Wed, 2 Mar 2016 19:07:56 GMT", "version": "v1" }, { "created": "Tue, 3 May 2016 23:55:38 GMT", "version": "v2" } ]
2016-05-05
[ [ "Milan", "Anton", "" ], [ "Leal-Taixe", "Laura", "" ], [ "Reid", "Ian", "" ], [ "Roth", "Stefan", "" ], [ "Schindler", "Konrad", "" ] ]
Standardized benchmarks are crucial for the majority of computer vision applications. Although leaderboards and ranking tables should not be over-claimed, benchmarks often provide the most objective measure of performance and are therefore important guides for research. Recently, a new benchmark for Multiple Object Tracking, MOTChallenge, was launched with the goal of collecting existing and new data and creating a framework for the standardized evaluation of multiple object tracking methods. The first release of the benchmark focuses on multiple people tracking, since pedestrians are by far the most studied object in the tracking community. This paper accompanies a new release of the MOTChallenge benchmark. Unlike the initial release, all videos of MOT16 have been carefully annotated following a consistent protocol. Moreover, it not only offers a significant increase in the number of labeled boxes, but also provides multiple object classes beside pedestrians and the level of visibility for every single object of interest.
1910.01973
Martin Johannes Kraemer
Martin J Kraemer, William Seymour, Reuben Binns, Max Van Kleek, Ivan Flechais
Informing The Future of Data Protection in Smart Homes
Proceedings of the CHI 2019 Workshop on New Directions for the IoT: Automate, Share, Build, and Care, (arXiv:1906.06089)
null
null
IOTD/2019/12
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent changes to data protection regulation, particularly in Europe, are changing the design landscape for smart devices, requiring new design techniques to ensure that devices are able to adequately protect users' data. A particularly interesting space in which to explore and address these challenges is the smart home, which presents a multitude of difficult social and technical problems in an intimate and highly private context. This position paper outlines the motivation and research approach of a new project aiming to inform the future of data protection by design and by default in smart homes through a combination of ethnography and speculative design.
[ { "created": "Mon, 17 Jun 2019 09:57:41 GMT", "version": "v1" } ]
2019-10-07
[ [ "Kraemer", "Martin J", "" ], [ "Seymour", "William", "" ], [ "Binns", "Reuben", "" ], [ "Van Kleek", "Max", "" ], [ "Flechais", "Ivan", "" ] ]
Recent changes to data protection regulation, particularly in Europe, are changing the design landscape for smart devices, requiring new design techniques to ensure that devices are able to adequately protect users' data. A particularly interesting space in which to explore and address these challenges is the smart home, which presents a multitude of difficult social and technical problems in an intimate and highly private context. This position paper outlines the motivation and research approach of a new project aiming to inform the future of data protection by design and by default in smart homes through a combination of ethnography and speculative design.
1602.00877
Jonathan Scarlett
Jonathan Scarlett and Volkan Cevher
Partial Recovery Bounds for the Sparse Stochastic Block Model
Accepted to ISIT 2016
null
null
null
cs.IT cs.SI math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the information-theoretic limits of community detection in the symmetric two-community stochastic block model, with intra-community and inter-community edge probabilities $\frac{a}{n}$ and $\frac{b}{n}$ respectively. We consider the sparse setting, in which $a$ and $b$ do not scale with $n$, and provide upper and lower bounds on the proportion of community labels recovered on average. We provide a numerical example for which the bounds are near-matching for moderate values of $a - b$, and matching in the limit as $a-b$ grows large.
[ { "created": "Tue, 2 Feb 2016 11:00:10 GMT", "version": "v1" }, { "created": "Mon, 4 Apr 2016 16:59:26 GMT", "version": "v2" } ]
2016-04-05
[ [ "Scarlett", "Jonathan", "" ], [ "Cevher", "Volkan", "" ] ]
In this paper, we study the information-theoretic limits of community detection in the symmetric two-community stochastic block model, with intra-community and inter-community edge probabilities $\frac{a}{n}$ and $\frac{b}{n}$ respectively. We consider the sparse setting, in which $a$ and $b$ do not scale with $n$, and provide upper and lower bounds on the proportion of community labels recovered on average. We provide a numerical example for which the bounds are near-matching for moderate values of $a - b$, and matching in the limit as $a-b$ grows large.
2208.08084
Xinghao Chen
Zhijun Tu, Xinghao Chen, Pengju Ren, Yunhe Wang
AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets
ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the Binary Neural Networks (BNNs) in which weights and activations are both binarized into 1-bit values, thus greatly reducing the memory usage and computational complexity. Since modern deep neural networks are of sophisticated design with complex architectures for accuracy reasons, the diversity in the distributions of weights and activations is very high. Therefore, the conventional sign function cannot be well used for effectively binarizing full-precision values in BNNs. To this end, we present a simple yet effective approach called AdaBin to adaptively obtain the optimal binary sets $\{b_1, b_2\}$ ($b_1, b_2\in \mathbb{R}$) of weights and activations for each layer instead of a fixed set (\textit{i.e.}, $\{-1, +1\}$). In this way, the proposed method can better fit different distributions and increase the representation ability of binarized features. In practice, we use the center position and distance of 1-bit values to define a new binary quantization function. For the weights, we propose an equalization method to align the symmetrical center of the binary distribution to the real-valued distribution, and minimize the Kullback-Leibler divergence between them. Meanwhile, we introduce a gradient-based optimization method to get these two parameters for activations, which are jointly trained in an end-to-end manner. Experimental results on benchmark models and datasets demonstrate that the proposed AdaBin is able to achieve state-of-the-art performance. For instance, we obtain a 66.4% Top-1 accuracy on ImageNet using the ResNet-18 architecture, and a 69.4 mAP on PASCAL VOC using SSD300. The PyTorch code is available at \url{https://github.com/huawei-noah/Efficient-Computing/tree/master/BinaryNetworks/AdaBin} and the MindSpore code is available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/AdaBin}.
[ { "created": "Wed, 17 Aug 2022 05:43:33 GMT", "version": "v1" }, { "created": "Mon, 3 Oct 2022 14:23:42 GMT", "version": "v2" } ]
2022-10-04
[ [ "Tu", "Zhijun", "" ], [ "Chen", "Xinghao", "" ], [ "Ren", "Pengju", "" ], [ "Wang", "Yunhe", "" ] ]
This paper studies the Binary Neural Networks (BNNs) in which weights and activations are both binarized into 1-bit values, thus greatly reducing the memory usage and computational complexity. Since modern deep neural networks are of sophisticated design with complex architectures for accuracy reasons, the diversity in the distributions of weights and activations is very high. Therefore, the conventional sign function cannot be well used for effectively binarizing full-precision values in BNNs. To this end, we present a simple yet effective approach called AdaBin to adaptively obtain the optimal binary sets $\{b_1, b_2\}$ ($b_1, b_2\in \mathbb{R}$) of weights and activations for each layer instead of a fixed set (\textit{i.e.}, $\{-1, +1\}$). In this way, the proposed method can better fit different distributions and increase the representation ability of binarized features. In practice, we use the center position and distance of 1-bit values to define a new binary quantization function. For the weights, we propose an equalization method to align the symmetrical center of the binary distribution to the real-valued distribution, and minimize the Kullback-Leibler divergence between them. Meanwhile, we introduce a gradient-based optimization method to get these two parameters for activations, which are jointly trained in an end-to-end manner. Experimental results on benchmark models and datasets demonstrate that the proposed AdaBin is able to achieve state-of-the-art performance. For instance, we obtain a 66.4% Top-1 accuracy on ImageNet using the ResNet-18 architecture, and a 69.4 mAP on PASCAL VOC using SSD300. The PyTorch code is available at \url{https://github.com/huawei-noah/Efficient-Computing/tree/master/BinaryNetworks/AdaBin} and the MindSpore code is available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/AdaBin}.
2301.05897
Erik Isai Valle Salgado
Erik Isai Valle Salgado, Haoxin Yan, Yue Hong, Peiyuan Zhu, Shidong Zhu, Chengwei Liao, Yanxiang Wen, Xiu Li, Xiang Qian, Xiaohao Wang, Xinghui Li
Model-based Transfer Learning for Automatic Optical Inspection based on domain discrepancy
This is a fix of the published paper "Relational-based transfer learning for automatic optical inspection based on domain discrepancy"
Proc. SPIE 12317, Optoelectronic Imaging and Multimedia Technology IXMultimedia Technology IX, 2023
10.1117/12.2644087
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Transfer learning is a promising method for AOI applications since it can significantly shorten sample collection time and improve efficiency in today's smart manufacturing. However, related research has enhanced network models by applying TL without considering the domain similarity among datasets or the long-tailedness of the source data, and has mainly used linear transformations to mitigate the lack of samples. This research applies model-based TL via domain similarity to improve the overall performance and data augmentation in both target and source domains to enrich the data quality and reduce the imbalance. Given a group of source datasets from similar industrial processes, we define which group is the most related to the target through the domain discrepancy score and the number of samples each has. Then, we transfer the chosen pre-trained backbone weights to train and fine-tune the target network. Our research suggests increases in the F1 score and the PR curve of up to 20% compared with TL using benchmark datasets.
[ { "created": "Sat, 14 Jan 2023 11:32:39 GMT", "version": "v1" } ]
2023-01-18
[ [ "Salgado", "Erik Isai Valle", "" ], [ "Yan", "Haoxin", "" ], [ "Hong", "Yue", "" ], [ "Zhu", "Peiyuan", "" ], [ "Zhu", "Shidong", "" ], [ "Liao", "Chengwei", "" ], [ "Wen", "Yanxiang", "" ], [ "Li", "Xiu", "" ], [ "Qian", "Xiang", "" ], [ "Wang", "Xiaohao", "" ], [ "Li", "Xinghui", "" ] ]
Transfer learning is a promising method for AOI applications since it can significantly shorten sample collection time and improve efficiency in today's smart manufacturing. However, related research has enhanced network models by applying TL without considering the domain similarity among datasets or the long-tailedness of a source dataset, and has mainly used linear transformations to mitigate the lack of samples. This research applies model-based TL via domain similarity to improve the overall performance and data augmentation in both target and source domains to enrich the data quality and reduce the imbalance. Given a group of source datasets from similar industrial processes, we define which group is the most related to the target through the domain discrepancy score and the number of samples each has. Then, we transfer the chosen pre-trained backbone weights to train and fine-tune the target network. Our research suggests increases in the F1 score and the PR curve of up to 20% compared with TL using benchmark datasets.
1506.02531
Cl\'ement Duhart Mr
Cl\'ement Duhart and Pierre Sauvage and Cyrille Bertelle
EMMA: A Resource Oriented Framework for Service Choreography over Wireless Sensor and Actor Networks
23 pages
null
null
null
cs.NI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current Internet of Things (IoT) development requires service distribution over Wireless Sensor and Actor Networks (WSAN) to deal with the drastic increase in network management complexity. Because of the specific constraints of WSAN, centralized approaches are strongly limited. Multi-hop communication used by WSAN introduces transmission latency, packet errors, router congestion and security issues. As it uses local services, a decentralized service model avoids long communication paths between nodes and applications. But the main issue is then to have such local services installed on the desired nodes. The Environment Monitoring and Management Agent (EMMA) system proposes a set of software to deploy and execute such services over Wireless Sensor and Actor Networks (WSAN) through a middleware based on a Resource Oriented Architecture (ROA). Its Internet integration and the local management of data heterogeneity are facilitated through the use of current standard protocols such as IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) and the Constrained Application Protocol (CoAP). This contribution presents the EMMA middleware, methodology and tools used to determine efficient service mapping and its deployment.
[ { "created": "Mon, 8 Jun 2015 14:54:24 GMT", "version": "v1" } ]
2015-06-09
[ [ "Duhart", "Clément", "" ], [ "Sauvage", "Pierre", "" ], [ "Bertelle", "Cyrille", "" ] ]
Current Internet of Things (IoT) development requires service distribution over Wireless Sensor and Actor Networks (WSAN) to deal with the drastic increase in network management complexity. Because of the specific constraints of WSAN, centralized approaches are strongly limited. Multi-hop communication used by WSAN introduces transmission latency, packet errors, router congestion and security issues. As it uses local services, a decentralized service model avoids long communication paths between nodes and applications. But the main issue is then to have such local services installed on the desired nodes. The Environment Monitoring and Management Agent (EMMA) system proposes a set of software to deploy and execute such services over Wireless Sensor and Actor Networks (WSAN) through a middleware based on a Resource Oriented Architecture (ROA). Its Internet integration and the local management of data heterogeneity are facilitated through the use of current standard protocols such as IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) and the Constrained Application Protocol (CoAP). This contribution presents the EMMA middleware, methodology and tools used to determine efficient service mapping and its deployment.
2201.07216
Yiqiang Sheng
Satoshi Kamo, Yiqiang Sheng
Wide Area Network Intelligence with Application to Multimedia Service
null
null
null
null
cs.NI cs.AI cs.NE
http://creativecommons.org/licenses/by/4.0/
Network intelligence is a discipline that builds on the capabilities of network systems to act intelligently through the use of network resources for delivering high-quality services in a changing environment. Wide area network intelligence is a class of network intelligence in wide area networks, which cover the core and the edge of the Internet. In this paper, we propose a system based on machine learning for wide area network intelligence. The whole system consists of a core machine for pre-training and many terminal machines to accomplish faster responses. Each machine is a dual-hemisphere model made of left and right hemispheres. The left hemisphere is used to improve latency by terminal response and the right hemisphere is used to improve communication by data generation. In an application on multimedia service, the proposed model is superior to the latest deep feed-forward neural network in the data center with respect to accuracy, latency and communication. Evaluation shows scalable improvement with regard to the number of terminal machines. Evaluation also shows that the cost of improvement is longer learning time.
[ { "created": "Fri, 14 Jan 2022 23:45:54 GMT", "version": "v1" } ]
2022-01-20
[ [ "Kamo", "Satoshi", "" ], [ "Sheng", "Yiqiang", "" ] ]
Network intelligence is a discipline that builds on the capabilities of network systems to act intelligently through the use of network resources for delivering high-quality services in a changing environment. Wide area network intelligence is a class of network intelligence in wide area networks, which cover the core and the edge of the Internet. In this paper, we propose a system based on machine learning for wide area network intelligence. The whole system consists of a core machine for pre-training and many terminal machines to accomplish faster responses. Each machine is a dual-hemisphere model made of left and right hemispheres. The left hemisphere is used to improve latency by terminal response and the right hemisphere is used to improve communication by data generation. In an application on multimedia service, the proposed model is superior to the latest deep feed-forward neural network in the data center with respect to accuracy, latency and communication. Evaluation shows scalable improvement with regard to the number of terminal machines. Evaluation also shows that the cost of improvement is longer learning time.
1308.4757
Ziqiang Shi
Ziqiang Shi and Rujie Liu
Online and stochastic Douglas-Rachford splitting method for large scale machine learning
null
null
null
null
cs.NA cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online and stochastic learning has emerged as a powerful tool in large-scale optimization. In this work, we generalize the Douglas-Rachford splitting (DRs) method for minimizing composite functions to online and stochastic settings (to the best of our knowledge, this is the first time DRs has been generalized to a sequential version). We first establish an $O(1/\sqrt{T})$ regret bound for the batch DRs method. Then we prove that the online DRs splitting method enjoys an $O(1)$ regret bound and stochastic DRs splitting has a convergence rate of $O(1/\sqrt{T})$. The proofs are simple and intuitive, and the results and techniques can serve as a starting point for research on large-scale machine learning employing the DRs method. Numerical experiments with the proposed method demonstrate the effectiveness of the online and stochastic update rules, and further confirm our regret and convergence analysis.
[ { "created": "Thu, 22 Aug 2013 03:40:41 GMT", "version": "v1" }, { "created": "Mon, 26 Aug 2013 06:21:25 GMT", "version": "v2" }, { "created": "Wed, 28 Aug 2013 06:50:16 GMT", "version": "v3" }, { "created": "Sat, 7 Sep 2013 04:30:41 GMT", "version": "v4" }, { "created": "Wed, 25 Sep 2013 08:20:54 GMT", "version": "v5" }, { "created": "Tue, 16 Aug 2016 07:05:38 GMT", "version": "v6" }, { "created": "Mon, 10 Oct 2016 08:46:25 GMT", "version": "v7" }, { "created": "Tue, 11 Oct 2016 00:52:10 GMT", "version": "v8" }, { "created": "Wed, 21 Dec 2016 07:05:13 GMT", "version": "v9" } ]
2016-12-22
[ [ "Shi", "Ziqiang", "" ], [ "Liu", "Rujie", "" ] ]
Online and stochastic learning has emerged as a powerful tool in large-scale optimization. In this work, we generalize the Douglas-Rachford splitting (DRs) method for minimizing composite functions to online and stochastic settings (to the best of our knowledge, this is the first time DRs has been generalized to a sequential version). We first establish an $O(1/\sqrt{T})$ regret bound for the batch DRs method. Then we prove that the online DRs splitting method enjoys an $O(1)$ regret bound and stochastic DRs splitting has a convergence rate of $O(1/\sqrt{T})$. The proofs are simple and intuitive, and the results and techniques can serve as a starting point for research on large-scale machine learning employing the DRs method. Numerical experiments with the proposed method demonstrate the effectiveness of the online and stochastic update rules, and further confirm our regret and convergence analysis.
2303.04486
Sarah Bee
S.C. Bee, E. Papatheou, M Haywood-Alexander, R.S. Mills, L.A. Bull, K. Worden and N. Dervilis
Better Together: Using Multi-task Learning to Improve Feature Selection within Structural Datasets
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
There have been recent efforts to move to population-based structural health monitoring (PBSHM) systems. One area of PBSHM which has been recognised for potential development is the use of multi-task learning (MTL); algorithms which differ from traditional independent learning algorithms. Presented here is the use of the MTL, ''Joint Feature Selection with LASSO'', to provide automatic feature selection for a structural dataset. The classification task is to differentiate between the port and starboard side of a tailplane, for samples from two aircraft of the same model. The independent learner produced perfect F1 scores but had poor engineering insight; whereas the MTL results were interpretable, highlighting structural differences as opposed to differences in experimental set-up.
[ { "created": "Wed, 8 Mar 2023 10:19:55 GMT", "version": "v1" } ]
2023-03-09
[ [ "Bee", "S. C.", "" ], [ "Papatheou", "E.", "" ], [ "Haywood-Alexander", "M", "" ], [ "Mills", "R. S.", "" ], [ "Bull", "L. A.", "" ], [ "Worden", "K.", "" ], [ "Dervilis", "N.", "" ] ]
There have been recent efforts to move to population-based structural health monitoring (PBSHM) systems. One area of PBSHM which has been recognised for potential development is the use of multi-task learning (MTL); algorithms which differ from traditional independent learning algorithms. Presented here is the use of the MTL, ''Joint Feature Selection with LASSO'', to provide automatic feature selection for a structural dataset. The classification task is to differentiate between the port and starboard side of a tailplane, for samples from two aircraft of the same model. The independent learner produced perfect F1 scores but had poor engineering insight; whereas the MTL results were interpretable, highlighting structural differences as opposed to differences in experimental set-up.
2401.08719
Sang-Ki Ko
Seung-Yeop Baik, Mingi Jeon, Joonghyuk Hahn, Jungin Kim, Yo-Sub Han, Sang-Ki Ko
CodeComplex: A Time-Complexity Dataset for Bilingual Source Codes
null
null
null
null
cs.SE cs.CC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Analyzing the worst-case time complexity of a code is a crucial task in computer science and software engineering for ensuring the efficiency, reliability, and robustness of software systems. However, it is well-known that determining the worst-case time complexity of a given code written in a general-purpose programming language is theoretically undecidable, as a consequence of the famous Halting problem proved by Alan Turing. Thus, we move towards more realistic scenarios where the inputs and outputs of a program exist. This allows us to discern the correctness of given codes, whose time complexity remains challenging to analyze exhaustively. In response to this challenge, we introduce CodeComplex, a novel source code dataset where each code is manually annotated with a corresponding worst-case time complexity. CodeComplex comprises 4,900 Java codes and an equivalent number of Python codes, all sourced from programming competitions and annotated with complexity labels by a panel of algorithmic experts. To the best of our knowledge, CodeComplex stands as the most extensive code dataset tailored for predicting complexity. Subsequently, we present the outcomes of our experiments employing various baseline models, leveraging state-of-the-art neural models in code comprehension like CodeBERT, GraphCodeBERT, UniXcoder, PLBART, CodeT5, CodeT5+, and ChatGPT. We analyze how the dataset impacts the model's learning in predicting time complexity.
[ { "created": "Tue, 16 Jan 2024 06:54:44 GMT", "version": "v1" } ]
2024-01-18
[ [ "Baik", "Seung-Yeop", "" ], [ "Jeon", "Mingi", "" ], [ "Hahn", "Joonghyuk", "" ], [ "Kim", "Jungin", "" ], [ "Han", "Yo-Sub", "" ], [ "Ko", "Sang-Ki", "" ] ]
Analyzing the worst-case time complexity of a code is a crucial task in computer science and software engineering for ensuring the efficiency, reliability, and robustness of software systems. However, it is well-known that determining the worst-case time complexity of a given code written in a general-purpose programming language is theoretically undecidable, as a consequence of the famous Halting problem proved by Alan Turing. Thus, we move towards more realistic scenarios where the inputs and outputs of a program exist. This allows us to discern the correctness of given codes, whose time complexity remains challenging to analyze exhaustively. In response to this challenge, we introduce CodeComplex, a novel source code dataset where each code is manually annotated with a corresponding worst-case time complexity. CodeComplex comprises 4,900 Java codes and an equivalent number of Python codes, all sourced from programming competitions and annotated with complexity labels by a panel of algorithmic experts. To the best of our knowledge, CodeComplex stands as the most extensive code dataset tailored for predicting complexity. Subsequently, we present the outcomes of our experiments employing various baseline models, leveraging state-of-the-art neural models in code comprehension like CodeBERT, GraphCodeBERT, UniXcoder, PLBART, CodeT5, CodeT5+, and ChatGPT. We analyze how the dataset impacts the model's learning in predicting time complexity.
2109.14680
Jialing Liao
Jialing Liao and Olav Tirkkonen
Fundamental Rate-Memory Tradeoff for Coded Caching in Presence of User Inactivity
42 pages, 5 figures, submitted to Trans Inf
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Coded caching utilizes proper file subpacketization and coded delivery to make full use of the multicast opportunities in content delivery, to alleviate file transfer load in massive content delivery scenarios. Most existing work considers deterministic environments. An important practical topic is to characterize the impact of the uncertainty from user inactivity on coded caching. We consider a one server cache-enabled network under homogeneous file and network settings in presence of user inactivity. Unlike random or probabilistic caching studied in the literature, deterministic coded caching is considered, with the objective to minimize the worst-case backhaul load by optimizing the file subpacketization and the caching strategy. First, a coded caching method is used, where each file is split into the same type of fragments labeled using sets with fixed cardinality, and the optimality of the selected cardinality is proved. Optimal file subpacketization by splitting the file into multiple types of fragments labeled with multiple cardinalities is then discussed. We show that the closed-form optimum turns out to be given by a fixed cardinality -- optimizing for user inactivity only affects file delivery, cache placement is not affected. A decentralized version is also discussed and analyzed, where each user fills its storage independently at random without centralized coordination, and user inactivity is taken into account in file delivery. Simulation results show that the optimization based centralized coded caching scheme provides performance comparable to the ideal scenario assuming full knowledge of user inactivity in the placement phase, while decentralized caching performs slightly worse against user inactivity.
[ { "created": "Wed, 29 Sep 2021 19:31:40 GMT", "version": "v1" } ]
2021-10-01
[ [ "Liao", "Jialing", "" ], [ "Tirkkonen", "Olav", "" ] ]
Coded caching utilizes proper file subpacketization and coded delivery to make full use of the multicast opportunities in content delivery, to alleviate file transfer load in massive content delivery scenarios. Most existing work considers deterministic environments. An important practical topic is to characterize the impact of the uncertainty from user inactivity on coded caching. We consider a one server cache-enabled network under homogeneous file and network settings in presence of user inactivity. Unlike random or probabilistic caching studied in the literature, deterministic coded caching is considered, with the objective to minimize the worst-case backhaul load by optimizing the file subpacketization and the caching strategy. First, a coded caching method is used, where each file is split into the same type of fragments labeled using sets with fixed cardinality, and the optimality of the selected cardinality is proved. Optimal file subpacketization by splitting the file into multiple types of fragments labeled with multiple cardinalities is then discussed. We show that the closed-form optimum turns out to be given by a fixed cardinality -- optimizing for user inactivity only affects file delivery, cache placement is not affected. A decentralized version is also discussed and analyzed, where each user fills its storage independently at random without centralized coordination, and user inactivity is taken into account in file delivery. Simulation results show that the optimization based centralized coded caching scheme provides performance comparable to the ideal scenario assuming full knowledge of user inactivity in the placement phase, while decentralized caching performs slightly worse against user inactivity.
1908.07323
Zewen He
Zewen He, He Huang, Yudong Wu, Guan Huang, Wensheng Zhang
Instance Scale Normalization for image understanding
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scale variation remains a challenging problem for object detection. Common paradigms usually adopt multi-scale training & testing (image pyramid) or FPN (feature pyramid network) to process objects in a wide scale range. However, multi-scale methods aggravate variations of scale that even deep convolutional neural networks with FPN cannot handle well. In this work, we propose an innovative paradigm called Instance Scale Normalization (ISN) to resolve the above problem. ISN compresses the scale space of objects into a consistent range (ISN range), in both training and testing phases. This addresses the problem of scale variation fundamentally and reduces the difficulty of network optimization. Experiments show that ISN surpasses its multi-scale counterpart significantly for object detection, instance segmentation, and multi-task human pose estimation, on several architectures. On COCO test-dev, our single model based on ISN achieves 46.5 mAP with a ResNet-101 backbone, which is among the state-of-the-art (SOTA) candidates for object detection.
[ { "created": "Tue, 20 Aug 2019 13:12:33 GMT", "version": "v1" }, { "created": "Wed, 10 Jun 2020 01:42:50 GMT", "version": "v2" } ]
2020-06-11
[ [ "He", "Zewen", "" ], [ "Huang", "He", "" ], [ "Wu", "Yudong", "" ], [ "Huang", "Guan", "" ], [ "Zhang", "Wensheng", "" ] ]
Scale variation remains a challenging problem for object detection. Common paradigms usually adopt multi-scale training & testing (image pyramid) or FPN (feature pyramid network) to process objects in a wide scale range. However, multi-scale methods aggravate variations of scale that even deep convolutional neural networks with FPN cannot handle well. In this work, we propose an innovative paradigm called Instance Scale Normalization (ISN) to resolve the above problem. ISN compresses the scale space of objects into a consistent range (ISN range), in both training and testing phases. This addresses the problem of scale variation fundamentally and reduces the difficulty of network optimization. Experiments show that ISN surpasses its multi-scale counterpart significantly for object detection, instance segmentation, and multi-task human pose estimation, on several architectures. On COCO test-dev, our single model based on ISN achieves 46.5 mAP with a ResNet-101 backbone, which is among the state-of-the-art (SOTA) candidates for object detection.
2105.08877
Erick Delage
Abderrahim Fathan and Erick Delage
Deep Reinforcement Learning for Optimal Stopping with Application in Financial Engineering
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimal stopping is the problem of deciding the right time at which to take a particular action in a stochastic system, in order to maximize an expected reward. It has many applications in areas such as finance, healthcare, and statistics. In this paper, we employ deep Reinforcement Learning (RL) to learn optimal stopping policies in two financial engineering applications: namely option pricing, and optimal option exercise. We present for the first time a comprehensive empirical evaluation of the quality of optimal stopping policies identified by three state-of-the-art deep RL algorithms: double deep Q-learning (DDQN), categorical distributional RL (C51), and Implicit Quantile Networks (IQN). In the case of option pricing, our findings indicate that in a theoretical Black-Scholes environment, IQN successfully identifies nearly optimal prices. On the other hand, it is slightly outperformed by C51 when confronted with real stock data movements in a put option exercise problem that involves assets from the S&P500 index. More importantly, the C51 algorithm is able to identify an optimal stopping policy that achieves 8% more out-of-sample returns than the best of four natural benchmark policies. We conclude with a discussion of our findings which should pave the way for relevant future research.
[ { "created": "Wed, 19 May 2021 01:52:04 GMT", "version": "v1" } ]
2021-05-20
[ [ "Fathan", "Abderrahim", "" ], [ "Delage", "Erick", "" ] ]
Optimal stopping is the problem of deciding the right time at which to take a particular action in a stochastic system, in order to maximize an expected reward. It has many applications in areas such as finance, healthcare, and statistics. In this paper, we employ deep Reinforcement Learning (RL) to learn optimal stopping policies in two financial engineering applications: namely option pricing, and optimal option exercise. We present for the first time a comprehensive empirical evaluation of the quality of optimal stopping policies identified by three state-of-the-art deep RL algorithms: double deep Q-learning (DDQN), categorical distributional RL (C51), and Implicit Quantile Networks (IQN). In the case of option pricing, our findings indicate that in a theoretical Black-Scholes environment, IQN successfully identifies nearly optimal prices. On the other hand, it is slightly outperformed by C51 when confronted with real stock data movements in a put option exercise problem that involves assets from the S&P500 index. More importantly, the C51 algorithm is able to identify an optimal stopping policy that achieves 8% more out-of-sample returns than the best of four natural benchmark policies. We conclude with a discussion of our findings which should pave the way for relevant future research.
2002.03323
Lina Mohjazi
Lina Mohjazi, Sami Muhaidat, Mehrdad Dianati, Mahmoud Al-Qutayri and Naofal Al-Dhahir
Performance Analysis of SWIPT Relaying Systems in the Presence of Impulsive Noise
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop an analytical framework to characterize the effect of impulsive noise on the performance of relay-assisted simultaneous wireless information and power transfer (SWIPT) systems. We derive novel closed-form expressions for the pairwise error probability (PEP) considering two variants based on the availability of channel state information (CSI), namely, blind relaying and CSI-assisted relaying. We further consider two energy harvesting (EH) techniques, i.e., instantaneous EH (IEH) and average EH (AEH). Capitalizing on the derived analytical results, we present a detailed numerical investigation of the diversity order for the underlying scenarios under the impulsive noise assumption. For the case with two relays and a direct link available, it is demonstrated that the considered SWIPT system with blind AEH-relaying is able to achieve an asymptotic diversity order of less than 3, which is equal to the diversity order achieved by CSI-assisted IEH-relaying. This result suggests that, by employing blind AEH-relaying, the power consumption of the network can be reduced by eliminating the need for CSI estimation. This can be achieved without any performance loss. Our results further show that placing the relays close to the source can significantly mitigate the detrimental effects of impulsive noise. Extensive Monte Carlo simulation results are presented to validate the accuracy of the proposed analytical framework.
[ { "created": "Sun, 9 Feb 2020 09:13:26 GMT", "version": "v1" } ]
2020-02-11
[ [ "Mohjazi", "Lina", "" ], [ "Muhaidat", "Sami", "" ], [ "Dianati", "Mehrdad", "" ], [ "Al-Qutayri", "Mahmoud", "" ], [ "Al-Dhahir", "Naofal", "" ] ]
We develop an analytical framework to characterize the effect of impulsive noise on the performance of relay-assisted simultaneous wireless information and power transfer (SWIPT) systems. We derive novel closed-form expressions for the pairwise error probability (PEP) considering two variants based on the availability of channel state information (CSI), namely, blind relaying and CSI-assisted relaying. We further consider two energy harvesting (EH) techniques, i.e., instantaneous EH (IEH) and average EH (AEH). Capitalizing on the derived analytical results, we present a detailed numerical investigation of the diversity order for the underlying scenarios under the impulsive noise assumption. For the case with two relays and a direct link available, it is demonstrated that the considered SWIPT system with blind AEH-relaying is able to achieve an asymptotic diversity order of less than 3, which is equal to the diversity order achieved by CSI-assisted IEH-relaying. This result suggests that, by employing blind AEH-relaying, the power consumption of the network can be reduced by eliminating the need for CSI estimation. This can be achieved without any performance loss. Our results further show that placing the relays close to the source can significantly mitigate the detrimental effects of impulsive noise. Extensive Monte Carlo simulation results are presented to validate the accuracy of the proposed analytical framework.
1803.11046
Swarup Chauhan
Swarup Chauhan (1 and 2), Kathleen Sell (2 and 5), Frieder Enzmann (2), Wolfram R\"uhaak (3), Thorsten Wille (4), Ingo Sass (1), Michael Kersten (2) ((1) Institute of Applied Geosciences, University of Technology, Darmstadt, Germany (2) Institute for Geosciences, Johannes Gutenberg-University, Mainz, Germany (3) Federal Institute for Geosciences and Natural Resources (BGR), Hannover, Germany (4) APS Antriebs-, Pr\"uf- und Steuertechnik GmbH, G\"ottingen-Rosdorf, Germany (5) igem - Institute for Geothermal Ressource Management, Bingen, Germany)
CobWeb - a toolbox for automatic tomographic image analysis based on machine learning techniques: application and examples
29 pages (article + appendix). 16 figures (8 Article/8 Appendix)
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
In this study, we introduce CobWeb 1.0, a graphical user interface tailored explicitly for accurate image segmentation and representative elementary volume analysis of digital rock images derived from high-resolution tomography. The CobWeb code is a work package deployed as a series of Windows executable binaries which use image processing and machine learning libraries of MATLAB. The user-friendly interface enables image segmentation and cross-validation employing K-means, Fuzzy C-means, least-squares support vector machine, and ensemble classification (bagging and boosting) segmentation techniques. A quick region-of-interest analysis including relative porosity trends, pore size distribution, and volume fraction of different phases can be performed on different geomaterials. Data can be exported to ParaView, DSI Studio (.fib), Microsoft Excel and MATLAB for further visualisation and statistical analysis. The efficiency of the new tool was verified using gas hydrate-bearing sediment samples and Berea sandstone, both from synchrotron tomography datasets, as well as a Grosmont carbonate rock X-ray micro-tomographic dataset. Despite its high sub-micrometer resolution, the gas hydrate dataset suffered from edge enhancement artefacts. These artefacts were primarily normalized by the dual filtering approach using both non-local means and anisotropic diffusion filtering. The desired automatic segmentation of the phases (brine, sand, and gas hydrate) was thus successfully achieved using the dual clustering approach.
[ { "created": "Thu, 29 Mar 2018 13:13:57 GMT", "version": "v1" }, { "created": "Fri, 6 Apr 2018 13:50:42 GMT", "version": "v2" }, { "created": "Mon, 9 Apr 2018 00:28:36 GMT", "version": "v3" } ]
2018-04-10
[ [ "Chauhan", "Swarup", "", "1 and 2" ], [ "Sell", "Kathleen", "", "2 and 5" ], [ "Enzmann", "Frieder", "" ], [ "Rühaak", "Wolfram", "" ], [ "Wille", "Thorsten", "" ], [ "Sass", "Ingo", "" ], [ "Kersten", "Michael", "" ] ]
In this study, we introduce CobWeb 1.0, a graphical user interface tailored explicitly for accurate image segmentation and representative elementary volume analysis of digital rock images derived from high-resolution tomography. The CobWeb code is a work package deployed as a series of Windows executable binaries which use image processing and machine learning libraries of MATLAB. The user-friendly interface enables image segmentation and cross-validation employing K-means, Fuzzy C-means, least-squares support vector machine, and ensemble classification (bagging and boosting) segmentation techniques. A quick region-of-interest analysis including relative porosity trends, pore size distribution, and volume fraction of different phases can be performed on different geomaterials. Data can be exported to ParaView, DSI Studio (.fib), Microsoft Excel and MATLAB for further visualisation and statistical analysis. The efficiency of the new tool was verified using gas hydrate-bearing sediment samples and Berea sandstone, both from synchrotron tomography datasets, as well as a Grosmont carbonate rock X-ray micro-tomographic dataset. Despite its high sub-micrometer resolution, the gas hydrate dataset suffered from edge enhancement artefacts. These artefacts were primarily normalized by the dual filtering approach using both non-local means and anisotropic diffusion filtering. The desired automatic segmentation of the phases (brine, sand, and gas hydrate) was thus successfully achieved using the dual clustering approach.
0712.0411
Suresh Thippireddy
Suresh Thippireddy and Sandeep Chalasani
Period of the d-Sequence Based Random Number Generator
8 pages, 4 figures
null
null
null
cs.CR
null
This paper presents an expression to compute the exact period of a recursive random number generator based on d-sequences. Using the multi-recursive version of this generator we can produce a large number of pseudorandom sequences.
[ { "created": "Mon, 3 Dec 2007 23:29:42 GMT", "version": "v1" } ]
2007-12-05
[ [ "Thippireddy", "Suresh", "" ], [ "Chalasani", "Sandeep", "" ] ]
This paper presents an expression to compute the exact period of a recursive random number generator based on d-sequences. Using the multi-recursive version of this generator we can produce a large number of pseudorandom sequences.
2005.07858
Seunghan Yang
Seunghan Yang, Youngeun Kim, Dongki Jung, Changick Kim
Partial Domain Adaptation Using Graph Convolutional Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Partial domain adaptation (PDA), in which we assume the target label space is included in the source label space, is a general version of standard domain adaptation. Since the target label space is unknown, the main challenge of PDA is to reduce the learning impact of irrelevant source samples, named outliers, which do not belong to the target label space. Although existing partial domain adaptation methods effectively down-weight outliers' importance, they do not consider data structure of each domain and do not directly align the feature distributions of the same class in the source and target domains, which may lead to misalignment of category-level distributions. To overcome these problems, we propose a graph partial domain adaptation (GPDA) network, which exploits Graph Convolutional Networks for jointly considering data structure and the feature distribution of each class. Specifically, we propose a label relational graph to align the distributions of the same category in two domains and introduce moving average centroid separation for learning networks from the label relational graph. We demonstrate that considering data structure and the distribution of each category is effective for PDA and our GPDA network achieves state-of-the-art performance on the Digit and Office-31 datasets.
[ { "created": "Sat, 16 May 2020 03:37:38 GMT", "version": "v1" } ]
2020-05-19
[ [ "Yang", "Seunghan", "" ], [ "Kim", "Youngeun", "" ], [ "Jung", "Dongki", "" ], [ "Kim", "Changick", "" ] ]
Partial domain adaptation (PDA), in which we assume the target label space is included in the source label space, is a general version of standard domain adaptation. Since the target label space is unknown, the main challenge of PDA is to reduce the learning impact of irrelevant source samples, named outliers, which do not belong to the target label space. Although existing partial domain adaptation methods effectively down-weight outliers' importance, they do not consider data structure of each domain and do not directly align the feature distributions of the same class in the source and target domains, which may lead to misalignment of category-level distributions. To overcome these problems, we propose a graph partial domain adaptation (GPDA) network, which exploits Graph Convolutional Networks for jointly considering data structure and the feature distribution of each class. Specifically, we propose a label relational graph to align the distributions of the same category in two domains and introduce moving average centroid separation for learning networks from the label relational graph. We demonstrate that considering data structure and the distribution of each category is effective for PDA and our GPDA network achieves state-of-the-art performance on the Digit and Office-31 datasets.
1705.00754
Ranjay Krishna
Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, Juan Carlos Niebles
Dense-Captioning Events in Videos
16 pages, 16 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most natural videos contain numerous events. For example, in a video of a "man playing a piano", the video might also contain "another man dancing" or "a crowd clapping". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.
[ { "created": "Tue, 2 May 2017 01:21:58 GMT", "version": "v1" } ]
2017-05-03
[ [ "Krishna", "Ranjay", "" ], [ "Hata", "Kenji", "" ], [ "Ren", "Frederic", "" ], [ "Fei-Fei", "Li", "" ], [ "Niebles", "Juan Carlos", "" ] ]
Most natural videos contain numerous events. For example, in a video of a "man playing a piano", the video might also contain "another man dancing" or "a crowd clapping". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.
2104.12046
Xiaowei Xu
Wentao Chen, Hailong Qiu, Jian Zhuang, Chutong Zhang, Yu Hu, Qing Lu, Tianchen Wang, Yiyu Shi, Meiping Huang, Xiaowei Xu
Quantization of Deep Neural Networks for Accurate Edge Computing
11 pages, 3 figures, 10 tables, accepted by the ACM Journal on Emerging Technologies in Computing Systems (JETC)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) have demonstrated their great potential in recent years, exceeding the performance of human experts in a wide range of applications. Due to their large sizes, however, compression techniques such as weight quantization and pruning are usually applied before they can be accommodated on the edge. It is generally believed that quantization leads to performance degradation, and plenty of existing works have explored quantization strategies aiming at minimum accuracy loss. In this paper, we argue that quantization, which essentially imposes regularization on weight representations, can sometimes help to improve accuracy. We conduct comprehensive experiments on three widely used applications: fully connected network (FCN) for biomedical image segmentation, convolutional neural network (CNN) for image classification on ImageNet, and recurrent neural network (RNN) for automatic speech recognition, and experimental results show that quantization can improve the accuracy by 1%, 1.95%, 4.23% on the three applications respectively with 3.5x-6.4x memory reduction.
[ { "created": "Sun, 25 Apr 2021 02:05:12 GMT", "version": "v1" }, { "created": "Thu, 14 Oct 2021 07:14:14 GMT", "version": "v2" } ]
2021-10-15
[ [ "Chen", "Wentao", "" ], [ "Qiu", "Hailong", "" ], [ "Zhuang", "Jian", "" ], [ "Zhang", "Chutong", "" ], [ "Hu", "Yu", "" ], [ "Lu", "Qing", "" ], [ "Wang", "Tianchen", "" ], [ "Shi", "Yiyu", "" ], [ "Huang", "Meiping", "" ], [ "Xu", "Xiaowei", "" ] ]
Deep neural networks (DNNs) have demonstrated their great potential in recent years, exceeding the performance of human experts in a wide range of applications. Due to their large sizes, however, compression techniques such as weight quantization and pruning are usually applied before they can be accommodated on the edge. It is generally believed that quantization leads to performance degradation, and plenty of existing works have explored quantization strategies aiming at minimum accuracy loss. In this paper, we argue that quantization, which essentially imposes regularization on weight representations, can sometimes help to improve accuracy. We conduct comprehensive experiments on three widely used applications: fully connected network (FCN) for biomedical image segmentation, convolutional neural network (CNN) for image classification on ImageNet, and recurrent neural network (RNN) for automatic speech recognition, and experimental results show that quantization can improve the accuracy by 1%, 1.95%, 4.23% on the three applications respectively with 3.5x-6.4x memory reduction.
1402.2231
William Bradley
J. N. Laska, W. F. Bradley, T. W. Rondeau, K. E. Nolan, and B. Vigoda
Compressive sensing for dynamic spectrum access networks: Techniques and tradeoffs
2011 IEEE Symposium on New Frontiers in Dynamic Spectrum, May 2011
null
10.1109/DYSPAN.2011.5936202
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the practical costs and benefits of CS for dynamic spectrum access (DSA) networks. Firstly, we review several fast and practical techniques for energy detection without full reconstruction and provide theoretical guarantees. We also define practical metrics to measure the performance of these techniques. Secondly, we perform comprehensive experiments comparing the techniques on real signals captured over the air. Our results show that we can acquire the signal at significantly compressive sampling rates while still accurately determining spectral occupancy.
[ { "created": "Mon, 10 Feb 2014 18:27:53 GMT", "version": "v1" } ]
2016-11-18
[ [ "Laska", "J. N.", "" ], [ "Bradley", "W. F.", "" ], [ "Rondeau", "T. W.", "" ], [ "Nolan", "K. E.", "" ], [ "Vigoda", "B.", "" ] ]
We explore the practical costs and benefits of CS for dynamic spectrum access (DSA) networks. Firstly, we review several fast and practical techniques for energy detection without full reconstruction and provide theoretical guarantees. We also define practical metrics to measure the performance of these techniques. Secondly, we perform comprehensive experiments comparing the techniques on real signals captured over the air. Our results show that we can acquire the signal at significantly compressive sampling rates while still accurately determining spectral occupancy.
1810.01049
Hu Ding
Hu Ding and Jinhui Xu
A Unified Framework for Clustering Constrained Data without Locality Property
null
null
null
null
cs.CG cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider a class of constrained clustering problems of points in $\mathbb{R}^{d}$, where $d$ could be rather high. A common feature of these problems is that their optimal clusterings no longer have the locality property (due to the additional constraints), which is a key property required by many algorithms for their unconstrained counterparts. To overcome the difficulty caused by the loss of locality, we present in this paper a unified framework, called {\em Peeling-and-Enclosing (PnE)}, to iteratively solve two variants of the constrained clustering problems, {\em constrained $k$-means clustering} ($k$-CMeans) and {\em constrained $k$-median clustering} ($k$-CMedian). Our framework is based on two standalone geometric techniques, called {\em Simplex Lemma} and {\em Weaker Simplex Lemma}, for $k$-CMeans and $k$-CMedian, respectively. The simplex lemma (or weaker simplex lemma) enables us to efficiently approximate the mean (or median) point of an unknown set of points by searching a small-size grid, independent of the dimensionality of the space, in a simplex (or the surrounding region of a simplex), and thus can be used to handle high dimensional data. If $k$ and $\frac{1}{\epsilon}$ are fixed numbers, our framework generates, in nearly linear time ({\em i.e.,} $O(n(\log n)^{k+1}d)$), $O((\log n)^{k})$ $k$-tuple candidates for the $k$ mean or median points, and one of them induces a $(1+\epsilon)$-approximation for $k$-CMeans or $k$-CMedian, where $n$ is the number of points. Combining this unified framework with a problem-specific selection algorithm (which determines the best $k$-tuple candidate), we obtain a $(1+\epsilon)$-approximation for each of the constrained clustering problems. We expect that our technique will be applicable to other constrained clustering problems without locality.
[ { "created": "Tue, 2 Oct 2018 03:18:15 GMT", "version": "v1" } ]
2018-10-03
[ [ "Ding", "Hu", "" ], [ "Xu", "Jinhui", "" ] ]
In this paper, we consider a class of constrained clustering problems of points in $\mathbb{R}^{d}$, where $d$ could be rather high. A common feature of these problems is that their optimal clusterings no longer have the locality property (due to the additional constraints), which is a key property required by many algorithms for their unconstrained counterparts. To overcome the difficulty caused by the loss of locality, we present in this paper a unified framework, called {\em Peeling-and-Enclosing (PnE)}, to iteratively solve two variants of the constrained clustering problems, {\em constrained $k$-means clustering} ($k$-CMeans) and {\em constrained $k$-median clustering} ($k$-CMedian). Our framework is based on two standalone geometric techniques, called {\em Simplex Lemma} and {\em Weaker Simplex Lemma}, for $k$-CMeans and $k$-CMedian, respectively. The simplex lemma (or weaker simplex lemma) enables us to efficiently approximate the mean (or median) point of an unknown set of points by searching a small-size grid, independent of the dimensionality of the space, in a simplex (or the surrounding region of a simplex), and thus can be used to handle high dimensional data. If $k$ and $\frac{1}{\epsilon}$ are fixed numbers, our framework generates, in nearly linear time ({\em i.e.,} $O(n(\log n)^{k+1}d)$), $O((\log n)^{k})$ $k$-tuple candidates for the $k$ mean or median points, and one of them induces a $(1+\epsilon)$-approximation for $k$-CMeans or $k$-CMedian, where $n$ is the number of points. Combining this unified framework with a problem-specific selection algorithm (which determines the best $k$-tuple candidate), we obtain a $(1+\epsilon)$-approximation for each of the constrained clustering problems. We expect that our technique will be applicable to other constrained clustering problems without locality.
1210.2529
Chau Yuen
Mahshad Eslamifar, Woon Hau Chin, Chau Yuen and Yong Liang Guan
Performance Analysis of Two-Step Bi-Directional Relaying with Multiple Antennas
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study decode-and-forward multi-antenna relay systems that achieve bi-directional communication in two time slots. We investigate different downlink broadcast schemes which employ binary or analog network coding at the relay. We also analyze and compare their performances in terms of diversity order and symbol error probability. It is shown that if exact downlink channel state information is available at the relay, using analog network coding in the form of multi-antenna maximal-ratio transmit beamforming to precode the information vectors at the relay gives the best performance. Then, we propose a Max-Min antenna selection with binary network coding scheme that can approach this performance with only partial channel state information.
[ { "created": "Tue, 9 Oct 2012 08:41:47 GMT", "version": "v1" } ]
2012-10-10
[ [ "Eslamifar", "Mahshad", "" ], [ "Chin", "Woon Hau", "" ], [ "Yuen", "Chau", "" ], [ "Guan", "Yong Liang", "" ] ]
In this paper we study decode-and-forward multi-antenna relay systems that achieve bi-directional communication in two time slots. We investigate different downlink broadcast schemes which employ binary or analog network coding at the relay. We also analyze and compare their performances in terms of diversity order and symbol error probability. It is shown that if exact downlink channel state information is available at the relay, using analog network coding in the form of multi-antenna maximal-ratio transmit beamforming to precode the information vectors at the relay gives the best performance. Then, we propose a Max-Min antenna selection with binary network coding scheme that can approach this performance with only partial channel state information.
1806.02918
Maria Shugrina
Maria Shugrina, Amlan Kar, Karan Singh, Sanja Fidler
Color Sails: Discrete-Continuous Palettes for Deep Color Exploration
14 pages, 13 figures
null
null
null
cs.GR cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present color sails, a discrete-continuous color gamut representation that extends the color gradient analogy to three dimensions and allows interactive control of the color blending behavior. Our representation models a wide variety of color distributions in a compact manner, and lends itself to applications such as color exploration for graphic design, illustration and similar fields. We propose a Neural Network that can fit a color sail to any image. Then, the user can adjust color sail parameters to change the base colors, their blending behavior and the number of colors, exploring a wide range of options for the original design. In addition, we propose a Deep Learning model that learns to automatically segment an image into color-compatible alpha masks, each equipped with its own color sail. This allows targeted color exploration by either editing their corresponding color sails or using standard software packages. Our model is trained on a custom diverse dataset of art and design. We provide both quantitative evaluations, and a user study, demonstrating the effectiveness of color sail interaction. Interactive demos are available at www.colorsails.com.
[ { "created": "Thu, 7 Jun 2018 22:42:00 GMT", "version": "v1" } ]
2018-06-11
[ [ "Shugrina", "Maria", "" ], [ "Kar", "Amlan", "" ], [ "Singh", "Karan", "" ], [ "Fidler", "Sanja", "" ] ]
We present color sails, a discrete-continuous color gamut representation that extends the color gradient analogy to three dimensions and allows interactive control of the color blending behavior. Our representation models a wide variety of color distributions in a compact manner, and lends itself to applications such as color exploration for graphic design, illustration and similar fields. We propose a Neural Network that can fit a color sail to any image. Then, the user can adjust color sail parameters to change the base colors, their blending behavior and the number of colors, exploring a wide range of options for the original design. In addition, we propose a Deep Learning model that learns to automatically segment an image into color-compatible alpha masks, each equipped with its own color sail. This allows targeted color exploration by either editing their corresponding color sails or using standard software packages. Our model is trained on a custom diverse dataset of art and design. We provide both quantitative evaluations, and a user study, demonstrating the effectiveness of color sail interaction. Interactive demos are available at www.colorsails.com.
2202.08514
Sayed Hashim
Muhammad Ali, Sayed Hashim
Survey on Self-supervised Representation Learning Using Image Transformations
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep neural networks need a huge amount of training data, while in the real world there is a scarcity of data available for training purposes. To resolve these issues, self-supervised learning (SSL) methods are used. SSL using geometric transformations (GT) is a simple yet powerful technique used in unsupervised representation learning. Although multiple survey papers have reviewed SSL techniques, there is none that only focuses on those that use geometric transformations. Furthermore, such methods have not been covered in depth in papers where they are reviewed. Our motivation to present this work is that geometric transformations have shown to be powerful supervisory signals in unsupervised representation learning. Moreover, many such works have found tremendous success, but have not gained much attention. We present a concise survey of SSL approaches that use geometric transformations. We shortlist six representative models that use image transformations including those based on predicting and autoencoding transformations. We review their architecture as well as learning methodologies. We also compare the performance of these models in the object recognition task on CIFAR-10 and ImageNet datasets. Our analysis indicates that AETv2 performs the best in most settings. Rotation with feature decoupling also performed well in some settings. We then derive insights from the observed results. Finally, we conclude with a summary of the results and insights as well as highlighting open problems to be addressed and indicating various future directions.
[ { "created": "Thu, 17 Feb 2022 08:37:50 GMT", "version": "v1" } ]
2022-02-18
[ [ "Ali", "Muhammad", "" ], [ "Hashim", "Sayed", "" ] ]
Deep neural networks need a huge amount of training data, while in the real world there is a scarcity of data available for training purposes. To resolve these issues, self-supervised learning (SSL) methods are used. SSL using geometric transformations (GT) is a simple yet powerful technique used in unsupervised representation learning. Although multiple survey papers have reviewed SSL techniques, there is none that only focuses on those that use geometric transformations. Furthermore, such methods have not been covered in depth in papers where they are reviewed. Our motivation to present this work is that geometric transformations have shown to be powerful supervisory signals in unsupervised representation learning. Moreover, many such works have found tremendous success, but have not gained much attention. We present a concise survey of SSL approaches that use geometric transformations. We shortlist six representative models that use image transformations including those based on predicting and autoencoding transformations. We review their architecture as well as learning methodologies. We also compare the performance of these models in the object recognition task on CIFAR-10 and ImageNet datasets. Our analysis indicates that AETv2 performs the best in most settings. Rotation with feature decoupling also performed well in some settings. We then derive insights from the observed results. Finally, we conclude with a summary of the results and insights as well as highlighting open problems to be addressed and indicating various future directions.
2207.04145
Neil Perry
Neil Perry, Bruce Spang, Saba Eskandarian, Dan Boneh
Strong Anonymity for Mesh Messaging
21 pages, 11 figures; added reference to introduction
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Messaging systems built on mesh networks consisting of smartphones communicating over Bluetooth have been used by protesters around the world after governments have disrupted Internet connectivity. Unfortunately, existing systems have been shown to be insecure; most concerningly by not adequately hiding metadata. This is further complicated by the fact that wireless communication such as Bluetooth is inherently a broadcasting medium. In this paper, we present a new threat model that captures the security requirements of protesters in this setting. We then provide a solution that satisfies the required security properties, hides all relevant metadata, scales to moderately sized protests, and supports group messaging. This is achieved by broadcasting all messages in a way that limits the overhead of duplicate messages, ensuring that ciphertexts do not leak metadata, and limiting what can be learned by observing user behavior. We also build a model of our system and numerically evaluate it to support our claims and analyze how many users it supports. Finally, we discuss further extensions that remove potential bottlenecks in scaling and support substantially more users.
[ { "created": "Fri, 8 Jul 2022 22:25:27 GMT", "version": "v1" }, { "created": "Mon, 22 Aug 2022 19:50:51 GMT", "version": "v2" } ]
2022-08-24
[ [ "Perry", "Neil", "" ], [ "Spang", "Bruce", "" ], [ "Eskandarian", "Saba", "" ], [ "Boneh", "Dan", "" ] ]
Messaging systems built on mesh networks consisting of smartphones communicating over Bluetooth have been used by protesters around the world after governments have disrupted Internet connectivity. Unfortunately, existing systems have been shown to be insecure; most concerningly by not adequately hiding metadata. This is further complicated by the fact that wireless communication such as Bluetooth is inherently a broadcasting medium. In this paper, we present a new threat model that captures the security requirements of protesters in this setting. We then provide a solution that satisfies the required security properties, hides all relevant metadata, scales to moderately sized protests, and supports group messaging. This is achieved by broadcasting all messages in a way that limits the overhead of duplicate messages, ensuring that ciphertexts do not leak metadata, and limiting what can be learned by observing user behavior. We also build a model of our system and numerically evaluate it to support our claims and analyze how many users it supports. Finally, we discuss further extensions that remove potential bottlenecks in scaling and support substantially more users.
1502.05767
Atilim Gunes Baydin
Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, Jeffrey Mark Siskind
Automatic differentiation in machine learning: a survey
43 pages, 5 figures
Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. The Journal of Machine Learning Research, 18(153):1--43, 2018
null
null
cs.SC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
[ { "created": "Fri, 20 Feb 2015 04:20:47 GMT", "version": "v1" }, { "created": "Sun, 19 Apr 2015 16:49:13 GMT", "version": "v2" }, { "created": "Thu, 17 Aug 2017 16:45:07 GMT", "version": "v3" }, { "created": "Mon, 5 Feb 2018 15:57:57 GMT", "version": "v4" } ]
2018-07-18
[ [ "Baydin", "Atilim Gunes", "" ], [ "Pearlmutter", "Barak A.", "" ], [ "Radul", "Alexey Andreyevich", "" ], [ "Siskind", "Jeffrey Mark", "" ] ]
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
2406.02771
Kathrin Donandt
Kathrin Donandt, Karim B\"ottger, Dirk S\"offker
Improved context-sensitive transformer model for inland vessel trajectory prediction
null
Proceedings of the 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), Bilbao, Spain, pp. 5903-5908
10.1109/ITSC57777.2023.10422417
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Physics-related and model-based vessel trajectory prediction is highly accurate but requires specific knowledge of the vessel under consideration which is not always practical. Machine learning-based trajectory prediction models do not require expert knowledge, but rely on the implicit knowledge extracted from massive amounts of data. Several deep learning (DL) methods for vessel trajectory prediction have recently been suggested. The DL models developed typically only process information about the (dis)location of vessels defined with respect to a global reference system. In the context of inland navigation, this can be problematic, since without knowledge of the limited navigable space, unrealistic trajectories are likely to be determined. If spatial constraints are introduced, e.g., by implementing an additional submodule to process map data, however, overall complexity increases. Instead of processing the vessel displacement information on the one hand and the spatial information on the other hand, the paper proposes merging both kinds of information. Here, fairway-related and navigation-related displacement information are used directly. In this way, the previously proposed context-sensitive Classification Transformer (CSCT) shows an improved spatial awareness. Additionally, the CSCT is adapted to assess the model uncertainty by enabling dropout during inference. This approach is trained on different inland waterways to analyze its generalizability. As the improved CSCT obtains lower prediction errors and enables to estimate the trustworthiness of each prediction, it is more suitable for safety-critical applications in inland navigation than previously developed models.
[ { "created": "Tue, 4 Jun 2024 20:39:14 GMT", "version": "v1" } ]
2024-06-06
[ [ "Donandt", "Kathrin", "" ], [ "Böttger", "Karim", "" ], [ "Söffker", "Dirk", "" ] ]
Physics-related and model-based vessel trajectory prediction is highly accurate but requires specific knowledge of the vessel under consideration, which is not always practical. Machine learning-based trajectory prediction models do not require expert knowledge, but rely on the implicit knowledge extracted from massive amounts of data. Several deep learning (DL) methods for vessel trajectory prediction have recently been suggested. The DL models developed typically only process information about the (dis)location of vessels defined with respect to a global reference system. In the context of inland navigation, this can be problematic, since without knowledge of the limited navigable space, unrealistic trajectories are likely to be determined. If spatial constraints are introduced, e.g., by implementing an additional submodule to process map data, however, overall complexity increases. Instead of processing the vessel displacement information on the one hand and the spatial information on the other, the paper proposes merging both types of information. Here, fairway-related and navigation-related displacement information are used directly. In this way, the previously proposed context-sensitive Classification Transformer (CSCT) shows an improved spatial awareness. Additionally, the CSCT is adapted to assess the model uncertainty by enabling dropout during inference. This approach is trained on different inland waterways to analyze its generalizability. As the improved CSCT obtains lower prediction errors and enables estimating the trustworthiness of each prediction, it is more suitable for safety-critical applications in inland navigation than previously developed models.
1811.06147
Joel Mackenzie
Rodger Benham, Joel Mackenzie, Alistair Moffat, and J. Shane Culpepper
Boosting Search Performance Using Query Variations
Published in ACM TOIS, 2019
null
10.1145/3345001
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rank fusion is a powerful technique that allows multiple sources of information to be combined into a single result set. However, to date fusion has not been regarded as being cost-effective in cases where strict per-query efficiency guarantees are required, such as in web search. In this work we propose a novel solution to rank fusion by splitting the computation into two parts -- one phase that is carried out offline to generate pre-computed centroid answers for queries with broadly similar information needs, and then a second online phase that uses the corresponding topic centroid to compute a result page for each query. We explore efficiency improvements to classic fusion algorithms whose costs can be amortized as a pre-processing step, and can then be combined with re-ranking approaches to dramatically improve effectiveness in multi-stage retrieval systems with little efficiency overhead at query time. Experimental results using the ClueWeb12B collection and the UQV100 query variations demonstrate that centroid-based approaches allow improved retrieval effectiveness at little or no loss in query throughput or latency, and with reasonable pre-processing requirements. We additionally show that queries that do not match any of the pre-computed clusters can be accurately identified and efficiently processed in our proposed ranking pipeline.
[ { "created": "Thu, 15 Nov 2018 02:43:06 GMT", "version": "v1" }, { "created": "Tue, 10 Nov 2020 00:34:14 GMT", "version": "v2" } ]
2020-11-11
[ [ "Benham", "Rodger", "" ], [ "Mackenzie", "Joel", "" ], [ "Moffat", "Alistair", "" ], [ "Culpepper", "J. Shane", "" ] ]
Rank fusion is a powerful technique that allows multiple sources of information to be combined into a single result set. However, to date fusion has not been regarded as being cost-effective in cases where strict per-query efficiency guarantees are required, such as in web search. In this work we propose a novel solution to rank fusion by splitting the computation into two parts -- one phase that is carried out offline to generate pre-computed centroid answers for queries with broadly similar information needs, and then a second online phase that uses the corresponding topic centroid to compute a result page for each query. We explore efficiency improvements to classic fusion algorithms whose costs can be amortized as a pre-processing step, and can then be combined with re-ranking approaches to dramatically improve effectiveness in multi-stage retrieval systems with little efficiency overhead at query time. Experimental results using the ClueWeb12B collection and the UQV100 query variations demonstrate that centroid-based approaches allow improved retrieval effectiveness at little or no loss in query throughput or latency, and with reasonable pre-processing requirements. We additionally show that queries that do not match any of the pre-computed clusters can be accurately identified and efficiently processed in our proposed ranking pipeline.
2408.05094
Tingchen Fu
Tingchen Fu, Yupeng Hou, Julian McAuley, Rui Yan
Unlocking Decoding-time Controllability: Gradient-Free Multi-Objective Alignment with Contrastive Prompts
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The task of multi-objective alignment aims at balancing and controlling the different alignment objectives (e.g., helpfulness, harmlessness and honesty) of large language models to meet the personalized requirements of different users. However, previous methods tend to train multiple models to deal with various user preferences, with the number of trained models growing linearly with the number of alignment objectives and the number of different preferences. Meanwhile, existing methods are generally poor in extensibility and require significant re-training for each new alignment objective considered. Considering the limitations of previous approaches, we propose MCA (Multi-objective Contrastive Alignment), which constructs an expert prompt and an adversarial prompt for each objective to contrast at decoding time, and balances the objectives by combining the contrasts. Our approach is verified to be superior to previous methods in obtaining a well-distributed Pareto front among different alignment objectives.
[ { "created": "Fri, 9 Aug 2024 14:36:42 GMT", "version": "v1" } ]
2024-08-12
[ [ "Fu", "Tingchen", "" ], [ "Hou", "Yupeng", "" ], [ "McAuley", "Julian", "" ], [ "Yan", "Rui", "" ] ]
The task of multi-objective alignment aims at balancing and controlling the different alignment objectives (e.g., helpfulness, harmlessness and honesty) of large language models to meet the personalized requirements of different users. However, previous methods tend to train multiple models to deal with various user preferences, with the number of trained models growing linearly with the number of alignment objectives and the number of different preferences. Meanwhile, existing methods are generally poor in extensibility and require significant re-training for each new alignment objective considered. Considering the limitations of previous approaches, we propose MCA (Multi-objective Contrastive Alignment), which constructs an expert prompt and an adversarial prompt for each objective to contrast at decoding time, and balances the objectives by combining the contrasts. Our approach is verified to be superior to previous methods in obtaining a well-distributed Pareto front among different alignment objectives.
1710.08868
Bence Zuti
Bence Zuti
Modern-day Universities and Regional Development
Conference Paper
ISBN 978-963-89779-2-2. 2014. pp. 221-228
10.5281/zenodo.227114
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Nowadays it is quite evident that a knowledge-based society necessarily involves the revaluation of human and intangible assets, as the advancement of local economies significantly depends on the qualitative and quantitative characteristics of human capital [Lundvall, 2004]. As universities can be directly linked, as main actors, to the creation of a highly qualified labour force, the role of universities increases in parallel with the previously mentioned developments. Universities are the general institutions of education; however, in the need to adapt to present local needs, their activities have broadened in the past decades [Wright et al., 2008; Etzkowitz, 2002]. Most universities experienced a transition period in which, next to their classic activities, namely education and research, so-called third mission activities also started to count, thus serving many purposes of economy and society.
[ { "created": "Thu, 19 Oct 2017 06:45:57 GMT", "version": "v1" } ]
2017-10-27
[ [ "Zuti", "Bence", "" ] ]
Nowadays it is quite evident that a knowledge-based society necessarily involves the revaluation of human and intangible assets, as the advancement of local economies significantly depends on the qualitative and quantitative characteristics of human capital [Lundvall, 2004]. As universities can be directly linked, as main actors, to the creation of a highly qualified labour force, the role of universities increases in parallel with the previously mentioned developments. Universities are the general institutions of education; however, in the need to adapt to present local needs, their activities have broadened in the past decades [Wright et al., 2008; Etzkowitz, 2002]. Most universities experienced a transition period in which, next to their classic activities, namely education and research, so-called third mission activities also started to count, thus serving many purposes of economy and society.
2312.00845
Jong Chul Ye
Hyeonho Jeong, Geon Yeong Park, Jong Chul Ye
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models
Project page: https://video-motion-customization.github.io
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Text-to-video diffusion models have advanced video generation significantly. However, customizing these models to generate videos with tailored motions presents a substantial challenge. Specifically, they encounter hurdles in (a) accurately reproducing motion from a target video, and (b) creating diverse visual variations. For example, straightforward extensions of static image customization methods to video often lead to intricate entanglements of appearance and motion data. To tackle this, here we present the Video Motion Customization (VMC) framework, a novel one-shot tuning approach crafted to adapt temporal attention layers within video diffusion models. Our approach introduces a novel motion distillation objective using residual vectors between consecutive frames as a motion reference. The diffusion process then preserves low-frequency motion trajectories while mitigating high-frequency motion-unrelated noise in image space. We validate our method against state-of-the-art video generative models across diverse real-world motions and contexts. Our code, data, and the project demo can be found at https://video-motion-customization.github.io
[ { "created": "Fri, 1 Dec 2023 06:50:11 GMT", "version": "v1" } ]
2023-12-05
[ [ "Jeong", "Hyeonho", "" ], [ "Park", "Geon Yeong", "" ], [ "Ye", "Jong Chul", "" ] ]
Text-to-video diffusion models have advanced video generation significantly. However, customizing these models to generate videos with tailored motions presents a substantial challenge. Specifically, they encounter hurdles in (a) accurately reproducing motion from a target video, and (b) creating diverse visual variations. For example, straightforward extensions of static image customization methods to video often lead to intricate entanglements of appearance and motion data. To tackle this, here we present the Video Motion Customization (VMC) framework, a novel one-shot tuning approach crafted to adapt temporal attention layers within video diffusion models. Our approach introduces a novel motion distillation objective using residual vectors between consecutive frames as a motion reference. The diffusion process then preserves low-frequency motion trajectories while mitigating high-frequency motion-unrelated noise in image space. We validate our method against state-of-the-art video generative models across diverse real-world motions and contexts. Our code, data, and the project demo can be found at https://video-motion-customization.github.io
2404.15435
S\"uleyman \"Ozdel
Enkelejda Kasneci, Hong Gao, Suleyman Ozdel, Virmarie Maquiling, Enkeleda Thaqi, Carrie Lau, Yao Rong, Gjergji Kasneci, Efe Bozkir
Introduction to Eye Tracking: A Hands-On Tutorial for Students and Practitioners
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eye-tracking technology is widely used in various application areas such as psychology, neuroscience, marketing, and human-computer interaction, as it is a valuable tool for understanding how people process information and interact with their environment. This tutorial provides a comprehensive introduction to eye tracking, from the basics of eye anatomy and physiology to the principles and applications of different eye-tracking systems. The guide is designed to provide a hands-on learning experience for everyone interested in working with eye-tracking technology. Therefore, we include practical case studies to teach students and professionals how to effectively set up and operate an eye-tracking system. The tutorial covers a variety of eye-tracking systems, calibration techniques, data collection, and analysis methods, including fixations, saccades, pupil diameter, and visual scan path analysis. In addition, we emphasize the importance of considering ethical aspects when conducting eye-tracking research and experiments, especially informed consent and participant privacy. We aim to give the reader a solid understanding of basic eye-tracking principles and the practical skills needed to conduct their experiments. Python-based code snippets and illustrative examples are included in the tutorials and can be downloaded at: https://gitlab.lrz.de/hctl/Eye-Tracking-Tutorial.
[ { "created": "Tue, 23 Apr 2024 18:25:05 GMT", "version": "v1" } ]
2024-04-25
[ [ "Kasneci", "Enkelejda", "" ], [ "Gao", "Hong", "" ], [ "Ozdel", "Suleyman", "" ], [ "Maquiling", "Virmarie", "" ], [ "Thaqi", "Enkeleda", "" ], [ "Lau", "Carrie", "" ], [ "Rong", "Yao", "" ], [ "Kasneci", "Gjergji", "" ], [ "Bozkir", "Efe", "" ] ]
Eye-tracking technology is widely used in various application areas such as psychology, neuroscience, marketing, and human-computer interaction, as it is a valuable tool for understanding how people process information and interact with their environment. This tutorial provides a comprehensive introduction to eye tracking, from the basics of eye anatomy and physiology to the principles and applications of different eye-tracking systems. The guide is designed to provide a hands-on learning experience for everyone interested in working with eye-tracking technology. Therefore, we include practical case studies to teach students and professionals how to effectively set up and operate an eye-tracking system. The tutorial covers a variety of eye-tracking systems, calibration techniques, data collection, and analysis methods, including fixations, saccades, pupil diameter, and visual scan path analysis. In addition, we emphasize the importance of considering ethical aspects when conducting eye-tracking research and experiments, especially informed consent and participant privacy. We aim to give the reader a solid understanding of basic eye-tracking principles and the practical skills needed to conduct their experiments. Python-based code snippets and illustrative examples are included in the tutorials and can be downloaded at: https://gitlab.lrz.de/hctl/Eye-Tracking-Tutorial.
2009.07185
Gregor Betz
Gregor Betz and Christian Voigt and Kyle Richardson
Critical Thinking for Language Models
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic corpus of deductively valid arguments, and generate artificial argumentative texts to train and evaluate GPT-2. Significant transfer learning effects can be observed: Training a model on three simple core schemes allows it to accurately complete conclusions of different, and more complex types of arguments, too. The language models generalize the core argument schemes in a correct way. Moreover, we obtain consistent and promising results for NLU benchmarks. In particular, pre-training on the argument schemes raises zero-shot accuracy on the GLUE diagnostics by up to 15 percentage points. The findings suggest that intermediary pre-training on texts that exemplify basic reasoning abilities (such as typically covered in critical thinking textbooks) might help language models to acquire a broad range of reasoning skills. The synthetic argumentative texts presented in this paper are a promising starting point for building such a "critical thinking curriculum for language models."
[ { "created": "Tue, 15 Sep 2020 15:49:19 GMT", "version": "v1" }, { "created": "Thu, 17 Dec 2020 14:42:42 GMT", "version": "v2" } ]
2020-12-18
[ [ "Betz", "Gregor", "" ], [ "Voigt", "Christian", "" ], [ "Richardson", "Kyle", "" ] ]
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic corpus of deductively valid arguments, and generate artificial argumentative texts to train and evaluate GPT-2. Significant transfer learning effects can be observed: Training a model on three simple core schemes allows it to accurately complete conclusions of different, and more complex types of arguments, too. The language models generalize the core argument schemes in a correct way. Moreover, we obtain consistent and promising results for NLU benchmarks. In particular, pre-training on the argument schemes raises zero-shot accuracy on the GLUE diagnostics by up to 15 percentage points. The findings suggest that intermediary pre-training on texts that exemplify basic reasoning abilities (such as typically covered in critical thinking textbooks) might help language models to acquire a broad range of reasoning skills. The synthetic argumentative texts presented in this paper are a promising starting point for building such a "critical thinking curriculum for language models."
1712.09552
Yogesh Simmhan
Yogesh Simmhan
Big Data and Fog Computing
To Appear as a contribution in Encyclopedia of Big Data Technologies, Sherif Sakr and Albert Zomaya eds., Springer Nature, 2018
Book chapter "Big Data and Fog Computing", 2018. In: Encyclopedia of Big Data Technologies, Sakr S., Zomaya A. (eds), Springer, Cham
10.1007/978-3-319-63962-8_41-1
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fog computing serves as a computing layer that sits between the edge devices and the cloud in the network topology. Fog resources have more compute capacity than the edge but much less than cloud data centers. They typically have high uptime and always-on Internet connectivity. Applications that make use of the fog can avoid the network performance limitations of cloud computing while being less resource-constrained than edge computing. As a result, fog computing offers a useful balance between the current paradigms. This article explores various aspects of fog computing in the context of big data.
[ { "created": "Wed, 27 Dec 2017 11:27:30 GMT", "version": "v1" } ]
2019-05-13
[ [ "Simmhan", "Yogesh", "" ] ]
Fog computing serves as a computing layer that sits between the edge devices and the cloud in the network topology. Fog resources have more compute capacity than the edge but much less than cloud data centers. They typically have high uptime and always-on Internet connectivity. Applications that make use of the fog can avoid the network performance limitations of cloud computing while being less resource-constrained than edge computing. As a result, fog computing offers a useful balance between the current paradigms. This article explores various aspects of fog computing in the context of big data.
1911.04708
Shingo Fujimoto
Shingo Fujimoto, Yoshiki Higashikado, Takuma Takeuchi
ConnectionChain: Secure Interworking of Blockchains
4 pages, 5 figures, posted for IEEE BCCA 2019
null
10.1109/IOTSMS48152.2019.8939267
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blockchain is a core technology for managing the value of cryptocurrencies and for recording trails of important business trades. Smart Contracts on blockchain are expected to improve the security of blockchain systems through automated operation, but they cannot be the solution when the application service needs to operate on tightly related blockchain ledgers as part of its business logic. This paper proposes a method to extend the functionality of traditional Smart Contracts on blockchain, and introduces a prototype system, named 'ConnectionChain'.
[ { "created": "Tue, 12 Nov 2019 07:35:10 GMT", "version": "v1" } ]
2020-01-07
[ [ "Fujimoto", "Shingo", "" ], [ "Higashikado", "Yoshiki", "" ], [ "Takeuchi", "Takuma", "" ] ]
Blockchain is a core technology for managing the value of cryptocurrencies and for recording trails of important business trades. Smart Contracts on blockchain are expected to improve the security of blockchain systems through automated operation, but they cannot be the solution when the application service needs to operate on tightly related blockchain ledgers as part of its business logic. This paper proposes a method to extend the functionality of traditional Smart Contracts on blockchain, and introduces a prototype system, named 'ConnectionChain'.
2309.05885
Oliver Bra\v{c}evac
Yuyan Bao, Guannan Wei, Oliver Bra\v{c}evac, and Tiark Rompf
Modeling Reachability Types with Logical Relations
null
null
null
null
cs.PL
http://creativecommons.org/licenses/by/4.0/
Reachability types are a recent proposal to bring Rust-style reasoning about memory properties to higher-level languages. While key type soundness results for reachability types have been established using syntactic techniques in prior work, stronger metatheoretic properties have so far been unexplored. This paper presents an alternative semantic model of reachability types using logical relations, providing a framework in which to study key properties of interest such as (1) semantic type soundness, including of not syntactically well-typed code fragments, (2) termination, especially in the presence of higher-order state, and (3) program equivalence, especially reordering of non-interfering expressions for parallelization or compiler optimization.
[ { "created": "Tue, 12 Sep 2023 00:13:53 GMT", "version": "v1" } ]
2023-09-13
[ [ "Bao", "Yuyan", "" ], [ "Wei", "Guannan", "" ], [ "Bračevac", "Oliver", "" ], [ "Rompf", "Tiark", "" ] ]
Reachability types are a recent proposal to bring Rust-style reasoning about memory properties to higher-level languages. While key type soundness results for reachability types have been established using syntactic techniques in prior work, stronger metatheoretic properties have so far been unexplored. This paper presents an alternative semantic model of reachability types using logical relations, providing a framework in which to study key properties of interest such as (1) semantic type soundness, including of not syntactically well-typed code fragments, (2) termination, especially in the presence of higher-order state, and (3) program equivalence, especially reordering of non-interfering expressions for parallelization or compiler optimization.
0809.3009
Grenville Croll
Karin Hodnigg, Roland T. Mittermeir
Metrics-Based Spreadsheet Visualization: Support for Focused Maintenance
16 Pages, 7 Colour Figures
Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 79-94 ISBN 978-905617-69-2
null
null
cs.SE cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Legacy spreadsheets are both an asset and an enduring problem concerning spreadsheets in business. To keep spreadsheets alive and correct, comprehension of a given spreadsheet is highly important. Visualization techniques should ease the complex and mind-blowing challenge of finding structures in a huge set of spreadsheet cells for building an adequate mental model of spreadsheet programs. Since spreadsheet programs are as diverse as the purposes they serve and as inhomogeneous as their programmers, finding an appropriate representation or visualization technique for every spreadsheet program seems futile. We thus propose different visualization and representation methods that may ease spreadsheet comprehension but should not be applied to all kinds of spreadsheet programs. Therefore, this paper proposes to use (complexity) measures as indicators for proper visualization.
[ { "created": "Wed, 17 Sep 2008 20:58:29 GMT", "version": "v1" } ]
2008-09-19
[ [ "Hodnigg", "Karin", "" ], [ "Mittermeir", "Roland T.", "" ] ]
Legacy spreadsheets are both an asset and an enduring problem concerning spreadsheets in business. To keep spreadsheets alive and correct, comprehension of a given spreadsheet is highly important. Visualization techniques should ease the complex and mind-blowing challenge of finding structures in a huge set of spreadsheet cells for building an adequate mental model of spreadsheet programs. Since spreadsheet programs are as diverse as the purposes they serve and as inhomogeneous as their programmers, finding an appropriate representation or visualization technique for every spreadsheet program seems futile. We thus propose different visualization and representation methods that may ease spreadsheet comprehension but should not be applied to all kinds of spreadsheet programs. Therefore, this paper proposes to use (complexity) measures as indicators for proper visualization.
1301.6209
Jung Hyun Bae
Jung Hyun Bae, Jungwon Lee, Inyup Kang
On the achievable region for interference networks with point-to-point codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies evaluation of the capacity region for interference networks with point-to-point (p2p) capacity-achieving codes. Such a capacity region has recently been characterized as a union of several sub-regions, each of which has distinctive operational characteristics. Detailed evaluation of this region, therefore, can be accomplished in a very simple manner by acknowledging such characteristics, which, in turn, provides insight into a simple implementation scenario. Completely generalized message assignment, which is also practically relevant, is considered in this paper, and it is shown to provide strictly larger achievable rates than traditional message assignment does when a receiver with joint decoding capability is used.
[ { "created": "Sat, 26 Jan 2013 04:29:54 GMT", "version": "v1" } ]
2013-01-29
[ [ "Bae", "Jung Hyun", "" ], [ "Lee", "Jungwon", "" ], [ "Kang", "Inyup", "" ] ]
This paper studies evaluation of the capacity region for interference networks with point-to-point (p2p) capacity-achieving codes. Such a capacity region has recently been characterized as a union of several sub-regions, each of which has distinctive operational characteristics. Detailed evaluation of this region, therefore, can be accomplished in a very simple manner by acknowledging such characteristics, which, in turn, provides insight into a simple implementation scenario. Completely generalized message assignment, which is also practically relevant, is considered in this paper, and it is shown to provide strictly larger achievable rates than traditional message assignment does when a receiver with joint decoding capability is used.
1905.07356
Matthijs Westera
Matthijs Westera and Gemma Boleda
Don't Blame Distributional Semantics if it can't do Entailment
To appear in Proceedings of the 13th International Conference on Computational Semantics (IWCS 2019), Gothenburg, Sweden
null
null
null
cs.CL cs.AI cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributional semantics has had enormous empirical success in Computational Linguistics and Cognitive Science in modeling various semantic phenomena, such as semantic similarity, and distributional models are widely used in state-of-the-art Natural Language Processing systems. However, the theoretical status of distributional semantics within a broader theory of language and cognition is still unclear: What does distributional semantics model? Can it be, on its own, a fully adequate model of the meanings of linguistic expressions? The standard answer is that distributional semantics is not fully adequate in this regard, because it falls short on some of the central aspects of formal semantic approaches: truth conditions, entailment, reference, and certain aspects of compositionality. We argue that this standard answer rests on a misconception: These aspects do not belong in a theory of expression meaning, they are instead aspects of speaker meaning, i.e., communicative intentions in a particular context. In a slogan: words do not refer, speakers do. Clearing this up enables us to argue that distributional semantics on its own is an adequate model of expression meaning. Our proposal sheds light on the role of distributional semantics in a broader theory of language and cognition, its relationship to formal semantics, and its place in computational models.
[ { "created": "Fri, 17 May 2019 16:26:05 GMT", "version": "v1" } ]
2019-05-20
[ [ "Westera", "Matthijs", "" ], [ "Boleda", "Gemma", "" ] ]
Distributional semantics has had enormous empirical success in Computational Linguistics and Cognitive Science in modeling various semantic phenomena, such as semantic similarity, and distributional models are widely used in state-of-the-art Natural Language Processing systems. However, the theoretical status of distributional semantics within a broader theory of language and cognition is still unclear: What does distributional semantics model? Can it be, on its own, a fully adequate model of the meanings of linguistic expressions? The standard answer is that distributional semantics is not fully adequate in this regard, because it falls short on some of the central aspects of formal semantic approaches: truth conditions, entailment, reference, and certain aspects of compositionality. We argue that this standard answer rests on a misconception: These aspects do not belong in a theory of expression meaning, they are instead aspects of speaker meaning, i.e., communicative intentions in a particular context. In a slogan: words do not refer, speakers do. Clearing this up enables us to argue that distributional semantics on its own is an adequate model of expression meaning. Our proposal sheds light on the role of distributional semantics in a broader theory of language and cognition, its relationship to formal semantics, and its place in computational models.
1206.1418
Dinh Que Tran
Thuy Van T. Duong, Dinh Que Tran and Cong Hung Tran
A weighted combination similarity measure for mobility patterns in wireless networks
15 pages, 2 figures; International Journal of Computer Networks & Communications (IJCNC) http://airccse.org/journal/ijc2012.html
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The similarity between trajectory patterns in clustering has played an important role in discovering the movement behaviour of different groups of mobile objects. Several approaches have been proposed to measure the similarity between sequences in trajectory data. Most of these measures are based on Euclidean space or on spatial networks, and some of them are concerned with temporal aspects or ordering types. However, they are not appropriate to the characteristics of spatiotemporal mobility patterns in wireless networks. In this paper, we propose a new similarity measure for mobility patterns in the cellular space of wireless networks. The framework for constructing our measure is composed of two phases. First, we present formal definitions to capture mathematically two spatial and temporal similarity measures for mobility patterns. Then, we define the total similarity measure by means of a weighted combination of these similarities. The correctness of the partial and total similarity measures is proved mathematically. Furthermore, instead of the time interval or ordering, our work makes use of the timestamp at which two mobility patterns share the same cell. A case study is also described to give a comparison of the combined measure with other ones.
[ { "created": "Thu, 7 Jun 2012 07:58:18 GMT", "version": "v1" } ]
2012-06-08
[ [ "Duong", "Thuy Van T.", "" ], [ "Tran", "Dinh Que", "" ], [ "Tran", "Cong Hung", "" ] ]
The similarity between trajectory patterns in clustering has played an important role in discovering the movement behaviour of different groups of mobile objects. Several approaches have been proposed to measure the similarity between sequences in trajectory data. Most of these measures are based on Euclidean space or on a spatial network, and some of them take the temporal aspect or ordering types into account. However, they are not suited to the characteristics of spatiotemporal mobility patterns in wireless networks. In this paper, we propose a new similarity measure for mobility patterns in the cellular space of a wireless network. The framework for constructing our measure is composed of two phases. First, we present formal definitions to capture mathematically the spatial and temporal similarity measures for mobility patterns. Then, we define the total similarity measure as a weighted combination of these similarities. The correctness of the partial and total similarity measures is proved mathematically. Furthermore, instead of the time interval or ordering, our work makes use of the timestamp at which two mobility patterns share the same cell. A case study is also described to compare the combined measure with other ones.
1707.01696
Yanlong Huang
Yanlong Huang, Jo\~ao Silv\'erio, Leonel Rozo, and Darwin G. Caldwell
Generalized Task-Parameterized Skill Learning
8 pages
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Programming by demonstration has recently gained much attention due to its user-friendly and natural way of transferring human skills to robots. In order to facilitate learning from multiple demonstrations and, at the same time, generalize to new situations, a task-parameterized Gaussian mixture model (TP-GMM) has recently been developed. This model has achieved reliable performance in areas such as human-robot collaboration and dual-arm manipulation. However, the crucial task frames and associated parameters in this learning framework are often set by the human teacher, which raises three problems that have not yet been addressed: (i) task frames are treated equally, without considering their individual importance, (ii) task parameters are defined without taking into account additional task constraints, such as robot joint limits and motion smoothness, and (iii) a fixed number of task frames is pre-defined, regardless of whether some of them may be redundant or even irrelevant for the task at hand. In this paper, we generalize task-parameterized learning by addressing the aforementioned problems. Moreover, we provide a novel learning perspective that allows the robot to refine and adapt previously learned skills in a low-dimensional space. Several examples are studied on both simulated and real robotic systems, showing the applicability of our approach.
[ { "created": "Thu, 6 Jul 2017 09:08:08 GMT", "version": "v1" }, { "created": "Mon, 5 Mar 2018 15:52:51 GMT", "version": "v2" } ]
2018-03-06
[ [ "Huang", "Yanlong", "" ], [ "Silvério", "João", "" ], [ "Rozo", "Leonel", "" ], [ "Caldwell", "Darwin G.", "" ] ]
Programming by demonstration has recently gained much attention due to its user-friendly and natural way of transferring human skills to robots. In order to facilitate learning from multiple demonstrations and, at the same time, generalize to new situations, a task-parameterized Gaussian mixture model (TP-GMM) has recently been developed. This model has achieved reliable performance in areas such as human-robot collaboration and dual-arm manipulation. However, the crucial task frames and associated parameters in this learning framework are often set by the human teacher, which raises three problems that have not yet been addressed: (i) task frames are treated equally, without considering their individual importance, (ii) task parameters are defined without taking into account additional task constraints, such as robot joint limits and motion smoothness, and (iii) a fixed number of task frames is pre-defined, regardless of whether some of them may be redundant or even irrelevant for the task at hand. In this paper, we generalize task-parameterized learning by addressing the aforementioned problems. Moreover, we provide a novel learning perspective that allows the robot to refine and adapt previously learned skills in a low-dimensional space. Several examples are studied on both simulated and real robotic systems, showing the applicability of our approach.
2110.14340
Kazuaki Matsumura
Kazuaki Matsumura, Simon Garcia De Gonzalo, Antonio J. Pe\~na
JACC: An OpenACC Runtime Framework with Kernel-Level and Multi-GPU Parallelization
Extended version of a paper to appear in: Proceedings of the 28th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC), December 17-18, 2021
null
10.1109/HiPC53243.2021.00032
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid development of computing technology has paved the way for directive-based programming models to take a principal role in maintaining the software portability of performance-critical applications. Such models require minimal engineering cost to enable computational acceleration on multiple architectures: programmers only need to add meta information to sequential code. Obtaining the best possible efficiency, however, is often challenging. The directives inserted by the programmer can have side effects that limit the compiler optimizations available, which can result in performance degradation. This is exacerbated when targeting multi-GPU systems, as pragmas do not automatically adapt to such systems and require expensive, time-consuming code adjustments by programmers. This paper introduces JACC, an OpenACC runtime framework that enables the dynamic extension of OpenACC programs by serving as a transparent layer between the program and the compiler. We add a versatile code-translation method for multi-device utilization, by which manually optimized applications can be distributed automatically while keeping the original code structure and parallelism. We show, in some cases, nearly linear scaling of kernel execution on NVIDIA V100 GPUs. When adaptively using multiple GPUs, the resulting performance improvements amortize the latency of GPU-to-GPU communications.
[ { "created": "Wed, 27 Oct 2021 10:43:48 GMT", "version": "v1" }, { "created": "Fri, 17 Dec 2021 14:05:57 GMT", "version": "v2" }, { "created": "Wed, 27 Apr 2022 12:43:52 GMT", "version": "v3" } ]
2022-04-28
[ [ "Matsumura", "Kazuaki", "" ], [ "De Gonzalo", "Simon Garcia", "" ], [ "Peña", "Antonio J.", "" ] ]
The rapid development of computing technology has paved the way for directive-based programming models to take a principal role in maintaining the software portability of performance-critical applications. Such models require minimal engineering cost to enable computational acceleration on multiple architectures: programmers only need to add meta information to sequential code. Obtaining the best possible efficiency, however, is often challenging. The directives inserted by the programmer can have side effects that limit the compiler optimizations available, which can result in performance degradation. This is exacerbated when targeting multi-GPU systems, as pragmas do not automatically adapt to such systems and require expensive, time-consuming code adjustments by programmers. This paper introduces JACC, an OpenACC runtime framework that enables the dynamic extension of OpenACC programs by serving as a transparent layer between the program and the compiler. We add a versatile code-translation method for multi-device utilization, by which manually optimized applications can be distributed automatically while keeping the original code structure and parallelism. We show, in some cases, nearly linear scaling of kernel execution on NVIDIA V100 GPUs. When adaptively using multiple GPUs, the resulting performance improvements amortize the latency of GPU-to-GPU communications.
0712.0271
Enrico Magli
M. Grangetto, E. Magli, G. Olmo
Distributed Arithmetic Coding for the Asymmetric Slepian-Wolf problem
submitted to IEEE Transactions on Signal processing, Nov. 2007. Revised version accepted with minor revisions
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed source coding schemes are typically based on the use of channel codes as source codes. In this paper we propose a new paradigm, termed "distributed arithmetic coding", which exploits the fact that arithmetic codes are good source as well as channel codes. In particular, we propose a distributed binary arithmetic coder for Slepian-Wolf coding with decoder side information, along with a soft joint decoder. The proposed scheme provides several advantages over existing Slepian-Wolf coders, especially its good performance at small block lengths, and the ability to incorporate arbitrary source models in the encoding process, e.g. context-based statistical models. We have compared the performance of distributed arithmetic coding with turbo codes and low-density parity-check codes, and found that the proposed approach has very competitive performance.
[ { "created": "Mon, 3 Dec 2007 12:27:00 GMT", "version": "v1" }, { "created": "Tue, 11 Nov 2008 09:29:16 GMT", "version": "v2" } ]
2008-11-11
[ [ "Grangetto", "M.", "" ], [ "Magli", "E.", "" ], [ "Olmo", "G.", "" ] ]
Distributed source coding schemes are typically based on the use of channel codes as source codes. In this paper we propose a new paradigm, termed "distributed arithmetic coding", which exploits the fact that arithmetic codes are good source as well as channel codes. In particular, we propose a distributed binary arithmetic coder for Slepian-Wolf coding with decoder side information, along with a soft joint decoder. The proposed scheme provides several advantages over existing Slepian-Wolf coders, especially its good performance at small block lengths, and the ability to incorporate arbitrary source models in the encoding process, e.g. context-based statistical models. We have compared the performance of distributed arithmetic coding with turbo codes and low-density parity-check codes, and found that the proposed approach has very competitive performance.
1601.06744
Yongge Wang
Yongge Wang
Octonion Algebra and Noise-Free Fully Homomorphic Encryption (FHE) Schemes
this paper has some issues to be addressed
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brakerski showed that linearly decryptable fully homomorphic encryption (FHE) schemes cannot be secure in the chosen plaintext attack (CPA) model. In this paper, we show that linearly decryptable FHE schemes cannot be secure even in the ciphertext-only security model. Then we consider the maximum security that a linearly decryptable FHE scheme could achieve. This paper designs fully homomorphic symmetric key encryption (FHE) schemes without bootstrapping (that is, noise-free FHE schemes). The proposed FHE schemes are based on quaternion/octonion algebra and Jordan algebra over finite rings Z_n and are secure in the weak ciphertext-only security model assuming the hardness of solving multivariate quadratic equation systems and solving univariate high degree polynomial equation systems in Z_n. To the best of our knowledge, this is the first noise-free FHE scheme ever designed with a security proof (even in the weak ciphertext-only security model). It is argued that the weak ciphertext-only security model is sufficient for various applications such as privacy-preserving computation in the cloud. As an example, the proposed FHE schemes are used to construct obfuscated programs. This example could further be used to show that the scheme presented in this paper could be combined with existing FHE schemes with bootstrapping to obtain more efficient FHE schemes with bootstrapping in the fully CPA model. At the end of the paper, we point out the insecurity of several recently proposed noise-free FHE schemes.
[ { "created": "Mon, 25 Jan 2016 19:55:57 GMT", "version": "v1" }, { "created": "Tue, 22 Mar 2016 08:46:42 GMT", "version": "v2" }, { "created": "Fri, 24 Feb 2017 03:34:31 GMT", "version": "v3" }, { "created": "Thu, 6 Jun 2019 10:11:54 GMT", "version": "v4" } ]
2019-06-07
[ [ "Wang", "Yongge", "" ] ]
Brakerski showed that linearly decryptable fully homomorphic encryption (FHE) schemes cannot be secure in the chosen plaintext attack (CPA) model. In this paper, we show that linearly decryptable FHE schemes cannot be secure even in the ciphertext-only security model. Then we consider the maximum security that a linearly decryptable FHE scheme could achieve. This paper designs fully homomorphic symmetric key encryption (FHE) schemes without bootstrapping (that is, noise-free FHE schemes). The proposed FHE schemes are based on quaternion/octonion algebra and Jordan algebra over finite rings Z_n and are secure in the weak ciphertext-only security model assuming the hardness of solving multivariate quadratic equation systems and solving univariate high degree polynomial equation systems in Z_n. To the best of our knowledge, this is the first noise-free FHE scheme ever designed with a security proof (even in the weak ciphertext-only security model). It is argued that the weak ciphertext-only security model is sufficient for various applications such as privacy-preserving computation in the cloud. As an example, the proposed FHE schemes are used to construct obfuscated programs. This example could further be used to show that the scheme presented in this paper could be combined with existing FHE schemes with bootstrapping to obtain more efficient FHE schemes with bootstrapping in the fully CPA model. At the end of the paper, we point out the insecurity of several recently proposed noise-free FHE schemes.
1210.6447
Om Prakash
S. Yashvir, Om Prakash
Disk Scheduling: Selection of Algorithm
9 pages; http://www.ijascse.in/publications-2012--2
null
null
null
cs.OS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The objective of this paper is to examine some aspects of disk scheduling and scheduling algorithms. Disk scheduling is discussed with a sneak peek in general and the selection of an algorithm in particular.
[ { "created": "Wed, 24 Oct 2012 07:56:48 GMT", "version": "v1" } ]
2012-10-25
[ [ "Yashvir", "S.", "" ], [ "Prakash", "Om", "" ] ]
The objective of this paper is to examine some aspects of disk scheduling and scheduling algorithms. Disk scheduling is discussed with a sneak peek in general and the selection of an algorithm in particular.
1106.0436
Venkatesan Guruswami
Venkatesan Guruswami
Linear-algebraic list decoding of folded Reed-Solomon codes
16 pages. Extended abstract in Proc. of IEEE Conference on Computational Complexity (CCC), 2011
null
10.1109/CCC.2011.22
null
cs.IT cs.DS math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Folded Reed-Solomon codes are an explicit family of codes that achieve the optimal trade-off between rate and error-correction capability: specifically, for any $\eps > 0$, the author and Rudra (2006,08) presented an $n^{O(1/\eps)}$ time algorithm to list decode appropriate folded RS codes of rate $R$ from a fraction $1-R-\eps$ of errors. The algorithm is based on multivariate polynomial interpolation and root-finding over extension fields. It was noted by Vadhan that interpolating a linear polynomial suffices if one settles for a smaller decoding radius (but still enough for a statement of the above form). Here we give a simple linear-algebra based analysis of this variant that eliminates the need for the computationally expensive root-finding step over extension fields (and indeed any mention of extension fields). The entire list decoding algorithm is linear-algebraic, solving one linear system for the interpolation step, and another linear system to find a small subspace of candidate solutions. Except for the step of pruning this subspace, the algorithm can be implemented to run in {\em quadratic} time. The theoretical drawback of folded RS codes is that both the decoding complexity and the proven worst-case list-size bound are $n^{\Omega(1/\eps)}$. By combining the above idea with a pseudorandom subset of all polynomials as messages, we get a Monte Carlo construction achieving a list size bound of $O(1/\eps^2)$ which is quite close to the existential $O(1/\eps)$ bound (however, the decoding complexity remains $n^{\Omega(1/\eps)}$). Our work highlights that constructing an explicit {\em subspace-evasive} subset that has small intersection with low-dimensional subspaces could lead to explicit codes with better list-decoding guarantees.
[ { "created": "Thu, 2 Jun 2011 14:18:27 GMT", "version": "v1" } ]
2016-11-17
[ [ "Guruswami", "Venkatesan", "" ] ]
Folded Reed-Solomon codes are an explicit family of codes that achieve the optimal trade-off between rate and error-correction capability: specifically, for any $\eps > 0$, the author and Rudra (2006,08) presented an $n^{O(1/\eps)}$ time algorithm to list decode appropriate folded RS codes of rate $R$ from a fraction $1-R-\eps$ of errors. The algorithm is based on multivariate polynomial interpolation and root-finding over extension fields. It was noted by Vadhan that interpolating a linear polynomial suffices if one settles for a smaller decoding radius (but still enough for a statement of the above form). Here we give a simple linear-algebra based analysis of this variant that eliminates the need for the computationally expensive root-finding step over extension fields (and indeed any mention of extension fields). The entire list decoding algorithm is linear-algebraic, solving one linear system for the interpolation step, and another linear system to find a small subspace of candidate solutions. Except for the step of pruning this subspace, the algorithm can be implemented to run in {\em quadratic} time. The theoretical drawback of folded RS codes is that both the decoding complexity and the proven worst-case list-size bound are $n^{\Omega(1/\eps)}$. By combining the above idea with a pseudorandom subset of all polynomials as messages, we get a Monte Carlo construction achieving a list size bound of $O(1/\eps^2)$ which is quite close to the existential $O(1/\eps)$ bound (however, the decoding complexity remains $n^{\Omega(1/\eps)}$). Our work highlights that constructing an explicit {\em subspace-evasive} subset that has small intersection with low-dimensional subspaces could lead to explicit codes with better list-decoding guarantees.
2102.05218
Qian Yang
Qian Yang, Jianyi Zhang, Weituo Hao, Gregory Spell, Lawrence Carin
FLOP: Federated Learning on Medical Datasets using Partial Networks
To appear in KDD 2021
null
10.1145/3447548.3467185
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The outbreak of COVID-19 Disease due to the novel coronavirus has caused a shortage of medical resources. To aid and accelerate the diagnosis process, automatic diagnosis of COVID-19 via deep learning models has recently been explored by researchers across the world. While different data-driven deep learning models have been developed to mitigate the diagnosis of COVID-19, the data itself is still scarce due to patient privacy concerns. Federated Learning (FL) is a natural solution because it allows different organizations to cooperatively learn an effective deep learning model without sharing raw data. However, recent studies show that FL still lacks privacy protection and may cause data leakage. We investigate this challenging problem by proposing a simple yet effective algorithm, named \textbf{F}ederated \textbf{L}earning \textbf{o}n Medical Datasets using \textbf{P}artial Networks (FLOP), that shares only a partial model between the server and clients. Extensive experiments on benchmark data and real-world healthcare tasks show that our approach achieves comparable or better performance while reducing the privacy and security risks. Of particular interest, we conduct experiments on the COVID-19 dataset and find that our FLOP algorithm can allow different hospitals to collaboratively and effectively train a partially shared model without sharing local patients' data.
[ { "created": "Wed, 10 Feb 2021 01:56:58 GMT", "version": "v1" }, { "created": "Wed, 23 Jun 2021 01:41:15 GMT", "version": "v2" } ]
2021-06-24
[ [ "Yang", "Qian", "" ], [ "Zhang", "Jianyi", "" ], [ "Hao", "Weituo", "" ], [ "Spell", "Gregory", "" ], [ "Carin", "Lawrence", "" ] ]
The outbreak of COVID-19 Disease due to the novel coronavirus has caused a shortage of medical resources. To aid and accelerate the diagnosis process, automatic diagnosis of COVID-19 via deep learning models has recently been explored by researchers across the world. While different data-driven deep learning models have been developed to mitigate the diagnosis of COVID-19, the data itself is still scarce due to patient privacy concerns. Federated Learning (FL) is a natural solution because it allows different organizations to cooperatively learn an effective deep learning model without sharing raw data. However, recent studies show that FL still lacks privacy protection and may cause data leakage. We investigate this challenging problem by proposing a simple yet effective algorithm, named \textbf{F}ederated \textbf{L}earning \textbf{o}n Medical Datasets using \textbf{P}artial Networks (FLOP), that shares only a partial model between the server and clients. Extensive experiments on benchmark data and real-world healthcare tasks show that our approach achieves comparable or better performance while reducing the privacy and security risks. Of particular interest, we conduct experiments on the COVID-19 dataset and find that our FLOP algorithm can allow different hospitals to collaboratively and effectively train a partially shared model without sharing local patients' data.
2402.00724
Jan Valo\v{s}ek
Jan Valosek, Theo Mathieu, Raphaelle Schlienger, Olivia S. Kowalczyk, Julien Cohen-Adad
Automatic Segmentation of the Spinal Cord Nerve Rootlets
null
Imaging Neuroscience, 2 (2024) 1-14
10.1162/imag_a_00218
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Precise identification of spinal nerve rootlets is relevant to delineate spinal levels for the study of functional activity in the spinal cord. The goal of this study was to develop an automatic method for the semantic segmentation of spinal nerve rootlets from T2-weighted magnetic resonance imaging (MRI) scans. Images from two open-access MRI datasets were used to train a 3D multi-class convolutional neural network using an active learning approach to segment C2-C8 dorsal nerve rootlets. Each output class corresponds to a spinal level. The method was tested on 3T T2-weighted images from datasets unseen during training to assess inter-site, inter-session, and inter-resolution variability. The test Dice score was 0.67 +- 0.16 (mean +- standard deviation across testing images and rootlets levels), suggesting a good performance. The method also demonstrated low inter-vendor and inter-site variability (coefficient of variation <= 1.41 %), as well as low inter-session variability (coefficient of variation <= 1.30 %) indicating stable predictions across different MRI vendors, sites, and sessions. The proposed methodology is open-source and readily available in the Spinal Cord Toolbox (SCT) v6.2 and higher.
[ { "created": "Thu, 1 Feb 2024 16:14:54 GMT", "version": "v1" }, { "created": "Wed, 1 May 2024 05:46:56 GMT", "version": "v2" } ]
2024-07-26
[ [ "Valosek", "Jan", "" ], [ "Mathieu", "Theo", "" ], [ "Schlienger", "Raphaelle", "" ], [ "Kowalczyk", "Olivia S.", "" ], [ "Cohen-Adad", "Julien", "" ] ]
Precise identification of spinal nerve rootlets is relevant to delineate spinal levels for the study of functional activity in the spinal cord. The goal of this study was to develop an automatic method for the semantic segmentation of spinal nerve rootlets from T2-weighted magnetic resonance imaging (MRI) scans. Images from two open-access MRI datasets were used to train a 3D multi-class convolutional neural network using an active learning approach to segment C2-C8 dorsal nerve rootlets. Each output class corresponds to a spinal level. The method was tested on 3T T2-weighted images from datasets unseen during training to assess inter-site, inter-session, and inter-resolution variability. The test Dice score was 0.67 +- 0.16 (mean +- standard deviation across testing images and rootlets levels), suggesting a good performance. The method also demonstrated low inter-vendor and inter-site variability (coefficient of variation <= 1.41 %), as well as low inter-session variability (coefficient of variation <= 1.30 %) indicating stable predictions across different MRI vendors, sites, and sessions. The proposed methodology is open-source and readily available in the Spinal Cord Toolbox (SCT) v6.2 and higher.
1310.7962
Pece Mitrevski
Pece Mitrevski, Olivera Kostoska and Marjan Angeleski
E-Business Implications for Productivity and Competitiveness
null
Annals of the "Constantin Brancusi" University of Targu - Jiu, Economy Series, Issue 1/2009, ISSN 1844-7007, pp. 253-262
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information and Communication Technology (ICT) affects output and productivity growth to a great extent. Evidence suggests that investment growth in ICT has rapidly accelerated TFP (total factor productivity) growth within the European Union. Such progress is particularly essential for the sectors which themselves produce new technology, but it is spreading to other sectors as well. Nevertheless, a decrease in ICT investment does not necessarily reduce the ICT contribution to output and productivity growth. These variations arise from the problems related to properly assessing the phenomenon, but predominantly from companies' special requirements, as well as the necessary adjustments of the labour employed. Hence, this paper aims at estimating the large differences in the ICT and TFP contributions to labour productivity growth among some of the European member states, as well as the factors which might stand behind these findings.
[ { "created": "Sat, 14 Sep 2013 18:01:29 GMT", "version": "v1" } ]
2013-10-31
[ [ "Mitrevski", "Pece", "" ], [ "Kostoska", "Olivera", "" ], [ "Angeleski", "Marjan", "" ] ]
Information and Communication Technology (ICT) affects output and productivity growth to a great extent. Evidence suggests that investment growth in ICT has rapidly accelerated TFP (total factor productivity) growth within the European Union. Such progress is particularly essential for the sectors which themselves produce new technology, but it is spreading to other sectors as well. Nevertheless, a decrease in ICT investment does not necessarily reduce the ICT contribution to output and productivity growth. These variations arise from the problems related to properly assessing the phenomenon, but predominantly from companies' special requirements, as well as the necessary adjustments of the labour employed. Hence, this paper aims at estimating the large differences in the ICT and TFP contributions to labour productivity growth among some of the European member states, as well as the factors which might stand behind these findings.
2207.10290
Xiaoliang Liu
Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie
AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance
Accepted by ICME2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks are powerful, but they also have shortcomings such as their sensitivity to adversarial examples, noise, blur, occlusion, etc. Moreover, ensuring the reliability and robustness of deep neural network models is crucial for their application in safety-critical areas. Much previous work has been proposed to improve specific kinds of robustness. However, we find that a specific kind of robustness is often improved at the expense of other robustness properties or the generalization ability of the neural network model. In particular, adversarial training methods significantly hurt the generalization performance on unperturbed data when improving adversarial robustness. In this paper, we propose a new data processing and training method, called AugRmixAT, which can simultaneously improve the generalization ability and multiple kinds of robustness of neural network models. Finally, we validate the effectiveness of AugRmixAT on the CIFAR-10/100 and Tiny-ImageNet datasets. The experiments demonstrate that AugRmixAT can improve the model's generalization performance while enhancing white-box robustness, black-box robustness, common corruption robustness, and partial occlusion robustness.
[ { "created": "Thu, 21 Jul 2022 04:02:24 GMT", "version": "v1" } ]
2022-07-22
[ [ "Liu", "Xiaoliang", "" ], [ "Shen", "Furao", "" ], [ "Zhao", "Jian", "" ], [ "Nie", "Changhai", "" ] ]
Deep neural networks are powerful, but they also have shortcomings such as their sensitivity to adversarial examples, noise, blur, occlusion, etc. Moreover, ensuring the reliability and robustness of deep neural network models is crucial for their application in safety-critical areas. Much previous work has been proposed to improve specific kinds of robustness. However, we find that a specific kind of robustness is often improved at the expense of other robustness properties or the generalization ability of the neural network model. In particular, adversarial training methods significantly hurt the generalization performance on unperturbed data when improving adversarial robustness. In this paper, we propose a new data processing and training method, called AugRmixAT, which can simultaneously improve the generalization ability and multiple kinds of robustness of neural network models. Finally, we validate the effectiveness of AugRmixAT on the CIFAR-10/100 and Tiny-ImageNet datasets. The experiments demonstrate that AugRmixAT can improve the model's generalization performance while enhancing white-box robustness, black-box robustness, common corruption robustness, and partial occlusion robustness.
2107.12637
Rajashekhar V S
Rajashekhar Vachiravelu Saminathan
Topology Design and Position Analysis of a Reconfigurable Modular Hybrid-Parallel Manipulator
11 pages, 14 figures, Accepted for IDETC/CIE 2019 the ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Nowadays, manipulators are found in the automated assembly lines of industries that mass-produce products. These manipulators can be used in only one configuration, either serial or parallel. In this paper, a new module with two degrees of freedom is introduced. By connecting two or three modules in series, 4- and 6-DoF hybrid manipulators can be formed, respectively. By erecting 3 modules in parallel, with some minor modifications, a 6-DoF parallel manipulator can be formed. Hence the manipulator is reconfigurable and can be used as a hybrid or parallel manipulator by disassembling and reassembling. The topology design and the forward and inverse position analyses have been carried out for the two hybrid configurations and the parallel configuration. This manipulator can be used in industries where a flexible manufacturing system is to be deployed. The three configurations of the parallel manipulator have been demonstrated experimentally using a graphical user interface (GUI) controlled through a computer.
[ { "created": "Tue, 27 Jul 2021 07:19:14 GMT", "version": "v1" } ]
2021-07-28
[ [ "Saminathan", "Rajashekhar Vachiravelu", "" ] ]
Nowadays, manipulators are found in the automated assembly lines of industries that mass-produce products. These manipulators can be used in only one configuration, either serial or parallel. In this paper, a new module with two degrees of freedom is introduced. By connecting two or three modules in series, 4- and 6-DoF hybrid manipulators can be formed, respectively. By erecting three modules in parallel and with some minor modifications, a 6-DoF parallel manipulator can be formed. Hence the manipulator is reconfigurable and can be used as a hybrid or a parallel manipulator by disassembling and reassembling. The topology design and the forward and inverse position analyses have been carried out for the two hybrid configurations and the parallel configuration. This manipulator can be used in industries where a flexible manufacturing system is to be deployed. The three configurations of the parallel manipulator have been experimentally demonstrated using a graphical user interface (GUI) control through a computer.
1804.08813
Wenpeng Yin
Wenpeng Yin, Hinrich Sch\"utze and Dan Roth
End-Task Oriented Textual Entailment via Deep Explorations of Inter-Sentence Interactions
ACL'2018 camera-ready; 6 pages, 3 figures
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work deals with SciTail, a natural entailment challenge derived from a multi-choice question answering problem. The premises and hypotheses in SciTail were generated with no awareness of each other, and did not specifically aim at the entailment task. This makes it more challenging than other entailment data sets and more directly useful to the end-task -- question answering. We propose DEISTE (deep explorations of inter-sentence interactions for textual entailment) for this entailment task. Given word-to-word interactions between the premise-hypothesis pair ($P$, $H$), DEISTE consists of: (i) a parameter-dynamic convolution to make important words in $P$ and $H$ play a dominant role in learnt representations; and (ii) a position-aware attentive convolution to encode the representation and position information of the aligned word pairs. Experiments show that DEISTE gets $\approx$5\% improvement over prior state of the art and that the pretrained DEISTE on SciTail generalizes well on RTE-5.
[ { "created": "Tue, 24 Apr 2018 02:29:14 GMT", "version": "v1" }, { "created": "Sat, 12 May 2018 03:29:42 GMT", "version": "v2" }, { "created": "Tue, 15 May 2018 03:53:53 GMT", "version": "v3" } ]
2018-05-16
[ [ "Yin", "Wenpeng", "" ], [ "Schütze", "Hinrich", "" ], [ "Roth", "Dan", "" ] ]
This work deals with SciTail, a natural entailment challenge derived from a multi-choice question answering problem. The premises and hypotheses in SciTail were generated with no awareness of each other, and did not specifically aim at the entailment task. This makes it more challenging than other entailment data sets and more directly useful to the end-task -- question answering. We propose DEISTE (deep explorations of inter-sentence interactions for textual entailment) for this entailment task. Given word-to-word interactions between the premise-hypothesis pair ($P$, $H$), DEISTE consists of: (i) a parameter-dynamic convolution to make important words in $P$ and $H$ play a dominant role in learnt representations; and (ii) a position-aware attentive convolution to encode the representation and position information of the aligned word pairs. Experiments show that DEISTE gets $\approx$5\% improvement over prior state of the art and that the pretrained DEISTE on SciTail generalizes well on RTE-5.
2106.04715
John Mern
John Mern, Mykel J. Kochenderfer
Measurable Monte Carlo Search Error Bounds
9 pages, submitted to ALT 2022
null
null
null
cs.AI cs.NA math.NA
http://creativecommons.org/licenses/by/4.0/
Monte Carlo planners can often return sub-optimal actions, even if they are guaranteed to converge in the limit of infinite samples. Known asymptotic regret bounds do not provide any way to measure confidence of a recommended action at the conclusion of search. In this work, we prove bounds on the sub-optimality of Monte Carlo estimates for non-stationary bandits and Markov decision processes. These bounds can be directly computed at the conclusion of the search and do not require knowledge of the true action-value. The presented bound holds for general Monte Carlo solvers meeting mild convergence conditions. We empirically test the tightness of the bounds through experiments on a multi-armed bandit and a discrete Markov decision process for both a simple solver and Monte Carlo tree search.
[ { "created": "Tue, 8 Jun 2021 22:20:14 GMT", "version": "v1" }, { "created": "Tue, 2 Nov 2021 23:44:05 GMT", "version": "v2" } ]
2021-11-04
[ [ "Mern", "John", "" ], [ "Kochenderfer", "Mykel J.", "" ] ]
Monte Carlo planners can often return sub-optimal actions, even if they are guaranteed to converge in the limit of infinite samples. Known asymptotic regret bounds do not provide any way to measure confidence of a recommended action at the conclusion of search. In this work, we prove bounds on the sub-optimality of Monte Carlo estimates for non-stationary bandits and Markov decision processes. These bounds can be directly computed at the conclusion of the search and do not require knowledge of the true action-value. The presented bound holds for general Monte Carlo solvers meeting mild convergence conditions. We empirically test the tightness of the bounds through experiments on a multi-armed bandit and a discrete Markov decision process for both a simple solver and Monte Carlo tree search.
1905.03179
Rahul Shome
Rahul Shome and Kostas E. Bekris
Anytime Multi-arm Task and Motion Planning for Pick-and-Place of Individual Objects via Handoffs
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automation applications are pushing the deployment of many high DoF manipulators in warehouse and manufacturing environments. This has motivated many efforts on optimizing manipulation tasks involving a single arm. Coordinating multiple arms for manipulation, however, introduces additional computational challenges arising from the increased DoFs, as well as the combinatorial increase in the available operations that many manipulators can perform, including handoffs between arms. The focus here is on the case of pick-and-place tasks, which require a sequence of handoffs to be executed, so as to achieve computational efficiency, asymptotic optimality and practical anytime performance. The paper leverages recent advances in multi-robot motion planning for high DoF systems to propose a novel multi-modal extension of the dRRT* algorithm. The key insight is that, instead of naively solving a sequence of motion planning problems, it is computationally advantageous to directly explore the composite space of the integrated multi-arm task and motion planning problem, given input sets of possible pick and handoff configurations. Asymptotic optimality guarantees are possible by sampling additional picks and handoffs over time. The evaluation shows that the approach finds initial solutions fast and improves their quality over time. It also succeeds in finding solutions to harder problem instances relative to alternatives and can scale effectively as the number of robots increases.
[ { "created": "Wed, 8 May 2019 16:07:56 GMT", "version": "v1" } ]
2019-05-09
[ [ "Shome", "Rahul", "" ], [ "Bekris", "Kostas E.", "" ] ]
Automation applications are pushing the deployment of many high DoF manipulators in warehouse and manufacturing environments. This has motivated many efforts on optimizing manipulation tasks involving a single arm. Coordinating multiple arms for manipulation, however, introduces additional computational challenges arising from the increased DoFs, as well as the combinatorial increase in the available operations that many manipulators can perform, including handoffs between arms. The focus here is on the case of pick-and-place tasks, which require a sequence of handoffs to be executed, so as to achieve computational efficiency, asymptotic optimality and practical anytime performance. The paper leverages recent advances in multi-robot motion planning for high DoF systems to propose a novel multi-modal extension of the dRRT* algorithm. The key insight is that, instead of naively solving a sequence of motion planning problems, it is computationally advantageous to directly explore the composite space of the integrated multi-arm task and motion planning problem, given input sets of possible pick and handoff configurations. Asymptotic optimality guarantees are possible by sampling additional picks and handoffs over time. The evaluation shows that the approach finds initial solutions fast and improves their quality over time. It also succeeds in finding solutions to harder problem instances relative to alternatives and can scale effectively as the number of robots increases.
2105.14550
Wei Sun
Wei Sun and Xiongkuo Min and Danyang Tu and Guangtao Zhai and Siwei Ma
Blind Quality Assessment for in-the-Wild Images via Hierarchical Feature Fusion and Iterative Mixed Database Training
Accepted by IEEE Journal of Selected Topics in Signal Processing
null
10.1109/JSTSP.2023.3270621
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image quality assessment (IQA) is very important for both end-users and service providers, since a high-quality image can significantly improve the user's quality of experience (QoE) and also benefit many computer vision algorithms. Most existing blind image quality assessment (BIQA) models were developed for synthetically distorted images; however, they perform poorly on in-the-wild images, which widely exist in various practical applications. In this paper, we propose a novel BIQA model for in-the-wild images by addressing two critical problems in this field: how to learn a better quality-aware feature representation, and how to solve the problem of insufficient training samples in terms of their content and distortion diversity. Considering that perceptual visual quality is affected by both low-level visual features (e.g. distortions) and high-level semantic information (e.g. content), we first propose a staircase structure to hierarchically integrate the features from intermediate layers into the final feature representation, which enables the model to make full use of visual information from low-level to high-level. Then an iterative mixed database training (IMDT) strategy is proposed to train the BIQA model on multiple databases simultaneously, so the model can benefit from the increase in both training samples and image content and distortion diversity, and can learn a more general feature representation. Experimental results show that the proposed model outperforms other state-of-the-art BIQA models on six in-the-wild IQA databases by a large margin. Moreover, the proposed model shows excellent performance in the cross-database evaluation experiments, which further demonstrates that the learned feature representation is robust to images with diverse distortions and content. The code is available at https://github.com/sunwei925/StairIQA.
[ { "created": "Sun, 30 May 2021 14:04:10 GMT", "version": "v1" }, { "created": "Mon, 22 Nov 2021 04:54:44 GMT", "version": "v2" }, { "created": "Thu, 27 Apr 2023 07:18:31 GMT", "version": "v3" } ]
2023-04-28
[ [ "Sun", "Wei", "" ], [ "Min", "Xiongkuo", "" ], [ "Tu", "Danyang", "" ], [ "Zhai", "Guangtao", "" ], [ "Ma", "Siwei", "" ] ]
Image quality assessment (IQA) is very important for both end-users and service providers, since a high-quality image can significantly improve the user's quality of experience (QoE) and also benefit many computer vision algorithms. Most existing blind image quality assessment (BIQA) models were developed for synthetically distorted images; however, they perform poorly on in-the-wild images, which widely exist in various practical applications. In this paper, we propose a novel BIQA model for in-the-wild images by addressing two critical problems in this field: how to learn a better quality-aware feature representation, and how to solve the problem of insufficient training samples in terms of their content and distortion diversity. Considering that perceptual visual quality is affected by both low-level visual features (e.g. distortions) and high-level semantic information (e.g. content), we first propose a staircase structure to hierarchically integrate the features from intermediate layers into the final feature representation, which enables the model to make full use of visual information from low-level to high-level. Then an iterative mixed database training (IMDT) strategy is proposed to train the BIQA model on multiple databases simultaneously, so the model can benefit from the increase in both training samples and image content and distortion diversity, and can learn a more general feature representation. Experimental results show that the proposed model outperforms other state-of-the-art BIQA models on six in-the-wild IQA databases by a large margin. Moreover, the proposed model shows excellent performance in the cross-database evaluation experiments, which further demonstrates that the learned feature representation is robust to images with diverse distortions and content. The code is available at https://github.com/sunwei925/StairIQA.
2301.05873
Paramita Koley
Paramita Koley, Aurghya Maiti, Niloy Ganguly and Sourangshu Bhattacharya
Opponent-aware Role-based Learning in Team Competitive Markov Games
9 pages, 6 figures
null
null
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Team competition in multi-agent Markov games is an increasingly important setting for multi-agent reinforcement learning, due to its general applicability in modeling many real-life situations. Multi-agent actor-critic methods are the most suitable class of techniques for learning optimal policies in the team competition setting, due to their flexibility in learning agent-specific critic functions, which can also learn from other agents. In many real-world team competitive scenarios, the roles of the agents naturally emerge in order to aid coordination and collaboration among the members of the teams. However, existing methods for learning emergent roles rely heavily on the Q-learning setup, which does not allow learning of agent-specific Q-functions. In this paper, we propose RAC, a novel technique for learning the emergent roles of agents within a team that are diverse and dynamic. In the proposed method, agents also benefit from predicting the roles of the agents in the opponent team. RAC uses the actor-critic framework with a role encoder and opponent role predictors for learning an optimal policy. Experimentation using two games demonstrates that the policies learned by RAC achieve higher rewards than those learned using state-of-the-art baselines. Moreover, experiments suggest that the agents in a team learn diverse and opponent-aware policies.
[ { "created": "Sat, 14 Jan 2023 09:50:48 GMT", "version": "v1" } ]
2023-01-18
[ [ "Koley", "Paramita", "" ], [ "Maiti", "Aurghya", "" ], [ "Ganguly", "Niloy", "" ], [ "Bhattacharya", "Sourangshu", "" ] ]
Team competition in multi-agent Markov games is an increasingly important setting for multi-agent reinforcement learning, due to its general applicability in modeling many real-life situations. Multi-agent actor-critic methods are the most suitable class of techniques for learning optimal policies in the team competition setting, due to their flexibility in learning agent-specific critic functions, which can also learn from other agents. In many real-world team competitive scenarios, the roles of the agents naturally emerge in order to aid coordination and collaboration among the members of the teams. However, existing methods for learning emergent roles rely heavily on the Q-learning setup, which does not allow learning of agent-specific Q-functions. In this paper, we propose RAC, a novel technique for learning the emergent roles of agents within a team that are diverse and dynamic. In the proposed method, agents also benefit from predicting the roles of the agents in the opponent team. RAC uses the actor-critic framework with a role encoder and opponent role predictors for learning an optimal policy. Experimentation using two games demonstrates that the policies learned by RAC achieve higher rewards than those learned using state-of-the-art baselines. Moreover, experiments suggest that the agents in a team learn diverse and opponent-aware policies.
2110.09365
Sourav Mondal
Sourav Mondal and Marco Ruffini
Optical Front/Mid-haul with Open Access-Edge Server Deployment Framework for Sliced O-RAN
This is the final version to be published in IEEE Transactions on Network and Service Management (TNSM). Copyright @ IEEE
null
10.1109/TNSM.2022.3173915
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The fifth generation of mobile radio technologies is expected to be agile, flexible, and scalable while provisioning ultra-reliable and low-latency communication (uRLLC), enhanced mobile broadband (eMBB), and massive machine type communication (mMTC) applications. An efficient way of implementing these is by adopting cloudification, network function virtualization, and network slicing techniques with the open-radio access network (O-RAN) architecture, where the base-band processing functions are disaggregated into virtualized radio unit (RU), distributed unit (DU), and centralized unit (CU) functions over front/mid-haul interfaces. However, cost-efficient solutions are required for designing front/mid-haul interfaces, and time-wavelength division multiplexed (TWDM) passive optical network (PON) appears to be a potential candidate. Therefore, in this paper, we propose a framework for the optimal placement of RUs based on long-term network statistics and for connecting them to open access-edge servers that host the corresponding DUs and CUs over front/mid-haul interfaces while satisfying the diverse QoS requirements of uRLLC, eMBB, and mMTC slices. In turn, we formulate a two-stage integer programming problem and time-efficient heuristics for user-to-RU association and flexible deployment of the corresponding DUs and CUs. We evaluate the O-RAN deployment cost and latency requirements with our TWDM-PON-based framework against urban, rural, and industrial areas and show its efficiency over the optical transport network (OTN)-based framework.
[ { "created": "Mon, 18 Oct 2021 14:49:05 GMT", "version": "v1" }, { "created": "Tue, 19 Oct 2021 03:05:35 GMT", "version": "v2" }, { "created": "Sun, 23 Jan 2022 06:49:37 GMT", "version": "v3" }, { "created": "Sun, 8 May 2022 09:31:31 GMT", "version": "v4" } ]
2022-05-10
[ [ "Mondal", "Sourav", "" ], [ "Ruffini", "Marco", "" ] ]
The fifth generation of mobile radio technologies is expected to be agile, flexible, and scalable while provisioning ultra-reliable and low-latency communication (uRLLC), enhanced mobile broadband (eMBB), and massive machine type communication (mMTC) applications. An efficient way of implementing these is by adopting cloudification, network function virtualization, and network slicing techniques with the open-radio access network (O-RAN) architecture, where the base-band processing functions are disaggregated into virtualized radio unit (RU), distributed unit (DU), and centralized unit (CU) functions over front/mid-haul interfaces. However, cost-efficient solutions are required for designing front/mid-haul interfaces, and time-wavelength division multiplexed (TWDM) passive optical network (PON) appears to be a potential candidate. Therefore, in this paper, we propose a framework for the optimal placement of RUs based on long-term network statistics and for connecting them to open access-edge servers that host the corresponding DUs and CUs over front/mid-haul interfaces while satisfying the diverse QoS requirements of uRLLC, eMBB, and mMTC slices. In turn, we formulate a two-stage integer programming problem and time-efficient heuristics for user-to-RU association and flexible deployment of the corresponding DUs and CUs. We evaluate the O-RAN deployment cost and latency requirements with our TWDM-PON-based framework against urban, rural, and industrial areas and show its efficiency over the optical transport network (OTN)-based framework.
1903.07024
Debajyoti Mondal
Prosenjit Bose, Paz Carmi, J. Mark Keil, Anil Maheshwari, Saeed Mehrabi, Debajyoti Mondal, and Michiel Smid
Computing Maximum Independent Set on Outerstring Graphs and Their Relatives
A preliminary version of this paper appeared in the 16th International Symposium on Algorithms and Data Structures (WADS 2019)
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A graph $G$ with $n$ vertices is called an outerstring graph if it has an intersection representation of a set of $n$ curves inside a disk such that one endpoint of every curve is attached to the boundary of the disk. Given an outerstring graph representation, the Maximum Independent Set (MIS) problem of the underlying graph can be computed in $O(s^3)$ time, where $s$ is the number of segments in the representation (Keil et al., Comput. Geom., 60:19--25, 2017). If the strings are of constant size (e.g., line segments, L-shapes, etc.), then the algorithm takes $O(n^3)$ time. In this paper, we examine the fine-grained complexity of the MIS problem on some well-known outerstring representations. We show that solving the MIS problem on grounded segment and grounded square-L representations is at least as hard as solving MIS on circle graph representations. Note that no $O(n^{2-\delta})$-time algorithm, $\delta>0$, is known for the MIS problem on circle graphs. For the grounded string representations where the strings are $y$-monotone simple polygonal paths of constant length with segments at integral coordinates, we solve MIS in $O(n^2)$ time and show this to be the best possible under the strong exponential time hypothesis (SETH). For the intersection graph of $n$ L-shapes in the plane, we give a $(4\cdot \log OPT)$-approximation algorithm for MIS (where $OPT$ denotes the size of an optimal solution), improving the previously best-known $(4\cdot \log n)$-approximation algorithm of Biedl and Derka (WADS 2017).
[ { "created": "Sun, 17 Mar 2019 04:06:26 GMT", "version": "v1" }, { "created": "Sun, 1 Aug 2021 19:42:19 GMT", "version": "v2" } ]
2021-08-03
[ [ "Bose", "Prosenjit", "" ], [ "Carmi", "Paz", "" ], [ "Keil", "J. Mark", "" ], [ "Maheshwari", "Anil", "" ], [ "Mehrabi", "Saeed", "" ], [ "Mondal", "Debajyoti", "" ], [ "Smid", "Michiel", "" ] ]
A graph $G$ with $n$ vertices is called an outerstring graph if it has an intersection representation of a set of $n$ curves inside a disk such that one endpoint of every curve is attached to the boundary of the disk. Given an outerstring graph representation, the Maximum Independent Set (MIS) problem of the underlying graph can be computed in $O(s^3)$ time, where $s$ is the number of segments in the representation (Keil et al., Comput. Geom., 60:19--25, 2017). If the strings are of constant size (e.g., line segments, L-shapes, etc.), then the algorithm takes $O(n^3)$ time. In this paper, we examine the fine-grained complexity of the MIS problem on some well-known outerstring representations. We show that solving the MIS problem on grounded segment and grounded square-L representations is at least as hard as solving MIS on circle graph representations. Note that no $O(n^{2-\delta})$-time algorithm, $\delta>0$, is known for the MIS problem on circle graphs. For the grounded string representations where the strings are $y$-monotone simple polygonal paths of constant length with segments at integral coordinates, we solve MIS in $O(n^2)$ time and show this to be the best possible under the strong exponential time hypothesis (SETH). For the intersection graph of $n$ L-shapes in the plane, we give a $(4\cdot \log OPT)$-approximation algorithm for MIS (where $OPT$ denotes the size of an optimal solution), improving the previously best-known $(4\cdot \log n)$-approximation algorithm of Biedl and Derka (WADS 2017).
2010.01017
Qinbin Li
Qinbin Li, Bingsheng He, Dawn Song
Practical One-Shot Federated Learning for Cross-Silo Setting
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning enables multiple parties to collaboratively learn a model without exchanging their data. While most existing federated learning algorithms need many rounds to converge, one-shot federated learning (i.e., federated learning with a single communication round) is a promising approach to make federated learning applicable in the cross-silo setting in practice. However, existing one-shot algorithms only support specific models and do not provide any privacy guarantees, which significantly limits their applicability in practice. In this paper, we propose a practical one-shot federated learning algorithm named FedKT. By utilizing the knowledge transfer technique, FedKT can be applied to any classification model and can flexibly achieve differential privacy guarantees. Our experiments on various tasks show that FedKT can significantly outperform the other state-of-the-art federated learning algorithms with a single communication round.
[ { "created": "Fri, 2 Oct 2020 14:09:10 GMT", "version": "v1" }, { "created": "Thu, 20 May 2021 13:25:47 GMT", "version": "v2" } ]
2021-05-21
[ [ "Li", "Qinbin", "" ], [ "He", "Bingsheng", "" ], [ "Song", "Dawn", "" ] ]
Federated learning enables multiple parties to collaboratively learn a model without exchanging their data. While most existing federated learning algorithms need many rounds to converge, one-shot federated learning (i.e., federated learning with a single communication round) is a promising approach to make federated learning applicable in the cross-silo setting in practice. However, existing one-shot algorithms only support specific models and do not provide any privacy guarantees, which significantly limits their applicability in practice. In this paper, we propose a practical one-shot federated learning algorithm named FedKT. By utilizing the knowledge transfer technique, FedKT can be applied to any classification model and can flexibly achieve differential privacy guarantees. Our experiments on various tasks show that FedKT can significantly outperform the other state-of-the-art federated learning algorithms with a single communication round.
2001.02390
Corey Lammie
Corey Lammie, Wei Xiang, Mostafa Rahimi Azghadi
Training Progressively Binarizing Deep Networks Using FPGAs
Accepted at 2020 IEEE International Symposium on Circuits and Systems (ISCAS)
2020 IEEE International Symposium on Circuits and Systems (ISCAS)
10.1109/ISCAS45731.2020.9181099
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
While hardware implementations of inference routines for Binarized Neural Networks (BNNs) are plentiful, current realizations of efficient BNN hardware training accelerators, suitable for Internet of Things (IoT) edge devices, leave much to be desired. Conventional BNN hardware training accelerators perform forward and backward propagations with parameters adopting binary representations, and optimization using parameters adopting floating or fixed-point real-valued representations--requiring two distinct sets of network parameters. In this paper, we propose a hardware-friendly training method that, contrary to conventional methods, progressively binarizes a singular set of fixed-point network parameters, yielding notable reductions in power and resource utilizations. We use the Intel FPGA SDK for OpenCL development environment to train our progressively binarizing DNNs on an OpenVINO FPGA. We benchmark our training approach on both GPUs and FPGAs using CIFAR-10 and compare it to conventional BNNs.
[ { "created": "Wed, 8 Jan 2020 06:01:13 GMT", "version": "v1" } ]
2021-02-18
[ [ "Lammie", "Corey", "" ], [ "Xiang", "Wei", "" ], [ "Azghadi", "Mostafa Rahimi", "" ] ]
While hardware implementations of inference routines for Binarized Neural Networks (BNNs) are plentiful, current realizations of efficient BNN hardware training accelerators, suitable for Internet of Things (IoT) edge devices, leave much to be desired. Conventional BNN hardware training accelerators perform forward and backward propagations with parameters adopting binary representations, and optimization using parameters adopting floating or fixed-point real-valued representations--requiring two distinct sets of network parameters. In this paper, we propose a hardware-friendly training method that, contrary to conventional methods, progressively binarizes a singular set of fixed-point network parameters, yielding notable reductions in power and resource utilizations. We use the Intel FPGA SDK for OpenCL development environment to train our progressively binarizing DNNs on an OpenVINO FPGA. We benchmark our training approach on both GPUs and FPGAs using CIFAR-10 and compare it to conventional BNNs.
2003.04947
Kristen Morse
Kristen Morse, Neha Das, Yixin Lin, Austin S. Wang, Akshara Rai, Franziska Meier
Learning State-Dependent Losses for Inverse Dynamics Learning
9 pages, 8 figures, accepted to IROS 2020, * Kristen Morse and Neha Das had equal contribution
null
10.1109/IROS45743.2020.9341701
null
cs.RO cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Being able to quickly adapt to changes in dynamics is paramount in model-based control for object manipulation tasks. In order to influence fast adaptation of the inverse dynamics model's parameters, data efficiency is crucial. Given observed data, a key element to how an optimizer updates model parameters is the loss function. In this work, we propose to apply meta-learning to learn structured, state-dependent loss functions during a meta-training phase. We then replace standard losses with our learned losses during online adaptation tasks. We evaluate our proposed approach on inverse dynamics learning tasks, both in simulation and on real hardware data. In both settings, the structured and state-dependent learned losses improve online adaptation speed, when compared to standard, state-independent loss functions.
[ { "created": "Tue, 10 Mar 2020 19:54:54 GMT", "version": "v1" }, { "created": "Thu, 12 Mar 2020 16:16:12 GMT", "version": "v2" }, { "created": "Fri, 14 Aug 2020 21:15:56 GMT", "version": "v3" } ]
2022-11-28
[ [ "Morse", "Kristen", "" ], [ "Das", "Neha", "" ], [ "Lin", "Yixin", "" ], [ "Wang", "Austin S.", "" ], [ "Rai", "Akshara", "" ], [ "Meier", "Franziska", "" ] ]
Being able to quickly adapt to changes in dynamics is paramount in model-based control for object manipulation tasks. In order to influence fast adaptation of the inverse dynamics model's parameters, data efficiency is crucial. Given observed data, a key element to how an optimizer updates model parameters is the loss function. In this work, we propose to apply meta-learning to learn structured, state-dependent loss functions during a meta-training phase. We then replace standard losses with our learned losses during online adaptation tasks. We evaluate our proposed approach on inverse dynamics learning tasks, both in simulation and on real hardware data. In both settings, the structured and state-dependent learned losses improve online adaptation speed, when compared to standard, state-independent loss functions.
2108.09069
Shunchuan Yang
Kai Zhu, Jinhui Wang, Shunchuan Yang
An Adaptive Interpolation Scheme for Wideband Frequency Sweep in Electromagnetic Simulations
null
null
10.1109/LAWP.2021.3135958
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An adaptive interpolation scheme is proposed to accurately calculate the wideband responses in electromagnetic simulations. In the proposed scheme, the sampling points are first carefully divided into several groups based on their responses to avoid the Runge phenomenon and error fluctuations, and then different interpolation strategies are used to calculate the responses in the whole frequency band. If the relative error does not satisfy the predefined threshold in a specific frequency band, that band is refined until the error criterion is met. A detailed error analysis is also presented to verify the accuracy of the interpolation scheme. Finally, two numerical examples, including an antenna radiation and a filter simulation, are carried out to validate its accuracy and efficiency.
[ { "created": "Fri, 20 Aug 2021 08:51:40 GMT", "version": "v1" } ]
2022-03-14
[ [ "Zhu", "Kai", "" ], [ "Wang", "Jinhui", "" ], [ "Yang", "Shunchuan", "" ] ]
An adaptive interpolation scheme is proposed to accurately calculate the wideband responses in electromagnetic simulations. In the proposed scheme, the sampling points are first carefully divided into several groups based on their responses to avoid the Runge phenomenon and the error fluctuations, and then different interpolation strategies are used to calculate the responses in the whole frequency band. If the relative error does not satisfy the predefined threshold in a specific frequency band, it will be refined until the error criterion is met. The detailed error analysis is also presented to verify the accuracy of the interpolation scheme. Finally, two numerical examples including the antenna radiation and the filter simulation are carried out to validate its accuracy and efficiency.
2305.11813
Philipp Czerner
Eszter Couillard and Philipp Czerner and Javier Esparza and Rupak Majumdar
Making $\textsf{IP}=\textsf{PSPACE}$ Practical: Efficient Interactive Protocols for BDD Algorithms
null
null
10.1007/978-3-031-37709-9_21
null
cs.LO cs.CC
http://creativecommons.org/licenses/by/4.0/
We show that interactive protocols between a prover and a verifier, a well-known tool of complexity theory, can be used in practice to certify the correctness of automated reasoning tools. Theoretically, interactive protocols exist for all $\textsf{PSPACE}$ problems. The verifier of a protocol checks the prover's answer to a problem instance in probabilistic polynomial time, with polynomially many bits of communication, and with exponentially small probability of error. (The prover may need exponential time.) Existing interactive protocols are not used in practice because their provers use naive algorithms, inefficient even for small instances, that are incompatible with practical implementations of automated reasoning. We bridge the gap between theory and practice by means of an interactive protocol whose prover uses BDDs. We consider the problem of counting the number of assignments to a QBF instance ($\#\textrm{CP}$), which has a natural BDD-based algorithm. We give an interactive protocol for $\#\textrm{CP}$ whose prover is implemented on top of an extended BDD library. The prover has only a linear overhead in computation time over the natural algorithm. We have implemented our protocol in $\textsf{blic}$, a certifying tool for $\#\textrm{CP}$. Experiments on standard QBF benchmarks show that $\textsf{blic}$ is competitive with state-of-the-art QBF-solvers. The run time of the verifier is negligible. While loss of absolute certainty can be concerning, the error probability in our experiments is at most $10^{-10}$ and reduces to $10^{-10k}$ by repeating the verification $k$ times.
[ { "created": "Fri, 19 May 2023 16:48:21 GMT", "version": "v1" }, { "created": "Wed, 6 Sep 2023 13:37:35 GMT", "version": "v2" } ]
2023-09-07
[ [ "Couillard", "Eszter", "" ], [ "Czerner", "Philipp", "" ], [ "Esparza", "Javier", "" ], [ "Majumdar", "Rupak", "" ] ]
We show that interactive protocols between a prover and a verifier, a well-known tool of complexity theory, can be used in practice to certify the correctness of automated reasoning tools. Theoretically, interactive protocols exist for all $\textsf{PSPACE}$ problems. The verifier of a protocol checks the prover's answer to a problem instance in probabilistic polynomial time, with polynomially many bits of communication, and with exponentially small probability of error. (The prover may need exponential time.) Existing interactive protocols are not used in practice because their provers use naive algorithms, inefficient even for small instances, that are incompatible with practical implementations of automated reasoning. We bridge the gap between theory and practice by means of an interactive protocol whose prover uses BDDs. We consider the problem of counting the number of assignments to a QBF instance ($\#\textrm{CP}$), which has a natural BDD-based algorithm. We give an interactive protocol for $\#\textrm{CP}$ whose prover is implemented on top of an extended BDD library. The prover has only a linear overhead in computation time over the natural algorithm. We have implemented our protocol in $\textsf{blic}$, a certifying tool for $\#\textrm{CP}$. Experiments on standard QBF benchmarks show that $\textsf{blic}$ is competitive with state-of-the-art QBF-solvers. The run time of the verifier is negligible. While loss of absolute certainty can be concerning, the error probability in our experiments is at most $10^{-10}$ and reduces to $10^{-10k}$ by repeating the verification $k$ times.
2211.14613
Gabriel Istrate
Gabriel Istrate
Some Remarks on Almost Periodic Sequences and Languages
Reconstructed source file of a paper originally published in 1995 in a volume currently without an online version (and with limited availability). Uploaded in order to ensure the online availability (and preservation) of the paper. This version faithfully reproduces the original, except for the addition of a note about the solution of Open Problem 3 and the correction of some minor typos
pages 191-195, in "Mathematical linguistics and related topics. Papers in honor of Solomon Marcus on his 70th birthday". Edited by Gheorghe P\u{a}un. Editura Academiei Rom\^ane, Bucharest, 1995. xii+364 pp. ISBN: 973-27-0486-1
null
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
Almost periodicity has been considered in Formal Language Theory in connection with some topics in Symbolic Dynamics. In (P\u{a}un and Marcus, Bulletin of EATCS 53 (1994)) some problems concerning this property are raised. For instance it is asked whether there exists some almost periodic word $\alpha$ such that $Sub(\alpha)$, the set of its finite factors, is context-free non-regular. We answer this question negatively (even in a stronger form), and also discuss other related topics.
[ { "created": "Sat, 26 Nov 2022 16:45:36 GMT", "version": "v1" } ]
2022-11-29
[ [ "Istrate", "Gabriel", "" ] ]
Almost periodicity has been considered in Formal Language Theory in connection with some topics in Symbolic Dynamics. In (P\u{a}un and Marcus, Bulletin of EATCS 53 (1994)) some problems concerning this property are raised. For instance it is asked whether there exists some almost periodic word $\alpha$ such that $Sub(\alpha)$, the set of its finite factors, is context-free non-regular. We answer this question negatively (even in a stronger form), and also discuss other related topics.
2312.03030
Xiaosen Wang
Xiaosen Wang, Kunyu Wang
Generating Visually Realistic Adversarial Patch
14 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) are vulnerable to various types of adversarial examples, bringing huge threats to security-critical applications. Among these, adversarial patches have drawn increasing attention due to their good applicability to fool DNNs in the physical world. However, existing works often generate patches with meaningless noise or patterns, making it conspicuous to humans. To address this issue, we explore how to generate visually realistic adversarial patches to fool DNNs. Firstly, we analyze that a high-quality adversarial patch should be realistic, position irrelevant, and printable to be deployed in the physical world. Based on this analysis, we propose an effective attack called VRAP, to generate visually realistic adversarial patches. Specifically, VRAP constrains the patch in the neighborhood of a real image to ensure the visual reality, optimizes the patch at the poorest position for position irrelevance, and adopts Total Variance loss as well as gamma transformation to make the generated patch printable without losing information. Empirical evaluations on the ImageNet dataset demonstrate that the proposed VRAP exhibits outstanding attack performance in the digital world. Moreover, the generated adversarial patches can be disguised as the scrawl or logo in the physical world to fool the deep models without being detected, bringing significant threats to DNNs-enabled applications.
[ { "created": "Tue, 5 Dec 2023 11:07:39 GMT", "version": "v1" } ]
2023-12-07
[ [ "Wang", "Xiaosen", "" ], [ "Wang", "Kunyu", "" ] ]
Deep neural networks (DNNs) are vulnerable to various types of adversarial examples, bringing huge threats to security-critical applications. Among these, adversarial patches have drawn increasing attention due to their good applicability to fool DNNs in the physical world. However, existing works often generate patches with meaningless noise or patterns, making it conspicuous to humans. To address this issue, we explore how to generate visually realistic adversarial patches to fool DNNs. Firstly, we analyze that a high-quality adversarial patch should be realistic, position irrelevant, and printable to be deployed in the physical world. Based on this analysis, we propose an effective attack called VRAP, to generate visually realistic adversarial patches. Specifically, VRAP constrains the patch in the neighborhood of a real image to ensure the visual reality, optimizes the patch at the poorest position for position irrelevance, and adopts Total Variance loss as well as gamma transformation to make the generated patch printable without losing information. Empirical evaluations on the ImageNet dataset demonstrate that the proposed VRAP exhibits outstanding attack performance in the digital world. Moreover, the generated adversarial patches can be disguised as the scrawl or logo in the physical world to fool the deep models without being detected, bringing significant threats to DNNs-enabled applications.
2011.11314
Gerald Baier
Gerald Baier and Antonin Deschemps and Michael Schmitt and Naoto Yokoya
Synthesizing Optical and SAR Imagery From Land Cover Maps and Auxiliary Raster Data
null
null
10.1109/TGRS.2021.3068532
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We synthesize both optical RGB and synthetic aperture radar (SAR) remote sensing images from land cover maps and auxiliary raster data using generative adversarial networks (GANs). In remote sensing, many types of data, such as digital elevation models (DEMs) or precipitation maps, are often not reflected in land cover maps but still influence image content or structure. Including such data in the synthesis process increases the quality of the generated images and exerts more control on their characteristics. Spatially adaptive normalization layers fuse both inputs and are applied to a full-blown generator architecture consisting of encoder and decoder to take full advantage of the information content in the auxiliary raster data. Our method successfully synthesizes medium (10 m) and high (1 m) resolution images when trained with the corresponding data set. We show the advantage of data fusion of land cover maps and auxiliary information using mean intersection over unions (mIoUs), pixel accuracy, and Fr\'echet inception distances (FIDs) using pretrained U-Net segmentation models. Handpicked images exemplify how fusing information avoids ambiguities in the synthesized images. By slightly editing the input, our method can be used to synthesize realistic changes, i.e., raising the water levels. The source code is available at https://github.com/gbaier/rs_img_synth and we published the newly created high-resolution dataset at https://ieee-dataport.org/open-access/geonrw.
[ { "created": "Mon, 23 Nov 2020 10:28:10 GMT", "version": "v1" }, { "created": "Tue, 25 May 2021 13:25:48 GMT", "version": "v2" } ]
2021-05-26
[ [ "Baier", "Gerald", "" ], [ "Deschemps", "Antonin", "" ], [ "Schmitt", "Michael", "" ], [ "Yokoya", "Naoto", "" ] ]
We synthesize both optical RGB and synthetic aperture radar (SAR) remote sensing images from land cover maps and auxiliary raster data using generative adversarial networks (GANs). In remote sensing, many types of data, such as digital elevation models (DEMs) or precipitation maps, are often not reflected in land cover maps but still influence image content or structure. Including such data in the synthesis process increases the quality of the generated images and exerts more control on their characteristics. Spatially adaptive normalization layers fuse both inputs and are applied to a full-blown generator architecture consisting of encoder and decoder to take full advantage of the information content in the auxiliary raster data. Our method successfully synthesizes medium (10 m) and high (1 m) resolution images when trained with the corresponding data set. We show the advantage of data fusion of land cover maps and auxiliary information using mean intersection over unions (mIoUs), pixel accuracy, and Fr\'echet inception distances (FIDs) using pretrained U-Net segmentation models. Handpicked images exemplify how fusing information avoids ambiguities in the synthesized images. By slightly editing the input, our method can be used to synthesize realistic changes, i.e., raising the water levels. The source code is available at https://github.com/gbaier/rs_img_synth and we published the newly created high-resolution dataset at https://ieee-dataport.org/open-access/geonrw.
1707.03374
Abhishek Gupta
YuXuan Liu, Abhishek Gupta, Pieter Abbeel, Sergey Levine
Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation
Accepted at ICRA 2018, Brisbane. YuXuan Liu and Abhishek Gupta had equal contribution
null
null
null
cs.LG cs.AI cs.CV cs.NE cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imitation learning is an effective approach for autonomous systems to acquire control policies when an explicit reward function is unavailable, using supervision provided as demonstrations from an expert, typically a human operator. However, standard imitation learning methods assume that the agent receives examples of observation-action tuples that could be provided, for instance, to a supervised learning algorithm. This stands in contrast to how humans and animals imitate: we observe another person performing some behavior and then figure out which actions will realize that behavior, compensating for changes in viewpoint, surroundings, object positions and types, and other factors. We term this kind of imitation learning "imitation-from-observation," and propose an imitation learning method based on video prediction with context translation and deep reinforcement learning. This lifts the assumption in imitation learning that the demonstration should consist of observations in the same environment configuration, and enables a variety of interesting applications, including learning robotic skills that involve tool use simply by observing videos of human tool use. Our experimental results show the effectiveness of our approach in learning a wide range of real-world robotic tasks modeled after common household chores from videos of a human demonstrator, including sweeping, ladling almonds, pushing objects as well as a number of tasks in simulation.
[ { "created": "Tue, 11 Jul 2017 17:23:53 GMT", "version": "v1" }, { "created": "Mon, 18 Jun 2018 21:00:13 GMT", "version": "v2" } ]
2018-06-20
[ [ "Liu", "YuXuan", "" ], [ "Gupta", "Abhishek", "" ], [ "Abbeel", "Pieter", "" ], [ "Levine", "Sergey", "" ] ]
Imitation learning is an effective approach for autonomous systems to acquire control policies when an explicit reward function is unavailable, using supervision provided as demonstrations from an expert, typically a human operator. However, standard imitation learning methods assume that the agent receives examples of observation-action tuples that could be provided, for instance, to a supervised learning algorithm. This stands in contrast to how humans and animals imitate: we observe another person performing some behavior and then figure out which actions will realize that behavior, compensating for changes in viewpoint, surroundings, object positions and types, and other factors. We term this kind of imitation learning "imitation-from-observation," and propose an imitation learning method based on video prediction with context translation and deep reinforcement learning. This lifts the assumption in imitation learning that the demonstration should consist of observations in the same environment configuration, and enables a variety of interesting applications, including learning robotic skills that involve tool use simply by observing videos of human tool use. Our experimental results show the effectiveness of our approach in learning a wide range of real-world robotic tasks modeled after common household chores from videos of a human demonstrator, including sweeping, ladling almonds, pushing objects as well as a number of tasks in simulation.
1805.11724
Michael Kampffmeyer
Michael Kampffmeyer, Yinbo Chen, Xiaodan Liang, Hao Wang, Yujia Zhang and Eric P. Xing
Rethinking Knowledge Graph Propagation for Zero-Shot Learning
The first two authors contributed equally. Code at https://github.com/cyvius96/adgpm. To appear in CVPR 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph convolutional neural networks have recently shown great potential for the task of zero-shot learning. These models are highly sample efficient as related concepts in the graph structure share statistical strength allowing generalization to new classes when faced with a lack of data. However, multi-layer architectures, which are required to propagate knowledge to distant nodes in the graph, dilute the knowledge by performing extensive Laplacian smoothing at each layer and thereby decrease performance. In order to still enjoy the benefit brought by the graph structure while preventing dilution of knowledge from distant nodes, we propose a Dense Graph Propagation (DGP) module with carefully designed direct links among distant nodes. DGP allows us to exploit the hierarchical graph structure of the knowledge graph through additional connections. These connections are added based on a node's relationship to its ancestors and descendants. A weighting scheme is further used to weigh their contribution depending on the distance to the node to improve information propagation in the graph. Combined with finetuning of the representations in a two-stage training approach our method outperforms state-of-the-art zero-shot learning approaches.
[ { "created": "Tue, 29 May 2018 21:55:46 GMT", "version": "v1" }, { "created": "Thu, 31 May 2018 20:14:38 GMT", "version": "v2" }, { "created": "Wed, 27 Mar 2019 17:26:38 GMT", "version": "v3" } ]
2019-03-28
[ [ "Kampffmeyer", "Michael", "" ], [ "Chen", "Yinbo", "" ], [ "Liang", "Xiaodan", "" ], [ "Wang", "Hao", "" ], [ "Zhang", "Yujia", "" ], [ "Xing", "Eric P.", "" ] ]
Graph convolutional neural networks have recently shown great potential for the task of zero-shot learning. These models are highly sample efficient as related concepts in the graph structure share statistical strength allowing generalization to new classes when faced with a lack of data. However, multi-layer architectures, which are required to propagate knowledge to distant nodes in the graph, dilute the knowledge by performing extensive Laplacian smoothing at each layer and thereby decrease performance. In order to still enjoy the benefit brought by the graph structure while preventing dilution of knowledge from distant nodes, we propose a Dense Graph Propagation (DGP) module with carefully designed direct links among distant nodes. DGP allows us to exploit the hierarchical graph structure of the knowledge graph through additional connections. These connections are added based on a node's relationship to its ancestors and descendants. A weighting scheme is further used to weigh their contribution depending on the distance to the node to improve information propagation in the graph. Combined with finetuning of the representations in a two-stage training approach our method outperforms state-of-the-art zero-shot learning approaches.
1303.6573
Dr. Nadeem Javaid
A. Ahmad, K. Latif, N. Javaid, Z. A. Khan, U. Qasim
Density Controlled Divide-and-Rule Scheme for Energy Efficient Routing in Wireless Sensor Networks
26th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE2013), Regina, Saskatchewan, Canada, 2013
null
10.1109/CCECE.2013.6567738
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cluster-based routing is the most popular routing technique in Wireless Sensor Networks (WSNs). Due to the varying needs of WSN applications, efficient energy utilization in routing protocols is still a potential area of research. In this research work we introduced a new energy-efficient cluster-based routing technique, in which we tried to overcome the problems of the coverage hole and the energy hole. In our technique we controlled these problems by introducing a density-controlled uniform distribution of nodes and fixing the optimum number of Cluster Heads (CHs) in each round. Finally, we verified our technique by experimental results of MATLAB simulations.
[ { "created": "Tue, 26 Mar 2013 17:50:52 GMT", "version": "v1" } ]
2016-11-17
[ [ "Ahmad", "A.", "" ], [ "Latif", "K.", "" ], [ "Javaid", "N.", "" ], [ "Khan", "Z. A.", "" ], [ "Qasim", "U.", "" ] ]
Cluster-based routing is the most popular routing technique in Wireless Sensor Networks (WSNs). Due to the varying needs of WSN applications, efficient energy utilization in routing protocols is still a potential area of research. In this research work we introduced a new energy-efficient cluster-based routing technique, in which we tried to overcome the problems of the coverage hole and the energy hole. In our technique we controlled these problems by introducing a density-controlled uniform distribution of nodes and fixing the optimum number of Cluster Heads (CHs) in each round. Finally, we verified our technique by experimental results of MATLAB simulations.
2104.14644
Safa Alver
Safa Alver, Doina Precup
What is Going on Inside Recurrent Meta Reinforcement Learning Agents?
Accepted to the Never-Ending Reinforcement Learning Workshop at ICLR 2021
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Recurrent meta reinforcement learning (meta-RL) agents are agents that employ a recurrent neural network (RNN) for the purpose of "learning a learning algorithm". After being trained on a pre-specified task distribution, the learned weights of the agent's RNN are said to implement an efficient learning algorithm through their activity dynamics, which allows the agent to quickly solve new tasks sampled from the same distribution. However, due to the black-box nature of these agents, the way in which they work is not yet fully understood. In this study, we shed light on the internal working mechanisms of these agents by reformulating the meta-RL problem using the Partially Observable Markov Decision Process (POMDP) framework. We hypothesize that the learned activity dynamics is acting as belief states for such agents. Several illustrative experiments suggest that this hypothesis is true, and that recurrent meta-RL agents can be viewed as agents that learn to act optimally in partially observable environments consisting of multiple related tasks. This view helps in understanding their failure cases and some interesting model-based results reported in the literature.
[ { "created": "Thu, 29 Apr 2021 20:34:39 GMT", "version": "v1" } ]
2021-05-03
[ [ "Alver", "Safa", "" ], [ "Precup", "Doina", "" ] ]
Recurrent meta reinforcement learning (meta-RL) agents are agents that employ a recurrent neural network (RNN) for the purpose of "learning a learning algorithm". After being trained on a pre-specified task distribution, the learned weights of the agent's RNN are said to implement an efficient learning algorithm through their activity dynamics, which allows the agent to quickly solve new tasks sampled from the same distribution. However, due to the black-box nature of these agents, the way in which they work is not yet fully understood. In this study, we shed light on the internal working mechanisms of these agents by reformulating the meta-RL problem using the Partially Observable Markov Decision Process (POMDP) framework. We hypothesize that the learned activity dynamics is acting as belief states for such agents. Several illustrative experiments suggest that this hypothesis is true, and that recurrent meta-RL agents can be viewed as agents that learn to act optimally in partially observable environments consisting of multiple related tasks. This view helps in understanding their failure cases and some interesting model-based results reported in the literature.
cs/0312042
Sergio Greco
Sergio Flesca, Sergio Greco
Declarative Semantics for Active Rules
27 pages
Theory and Practice of Logic Programming, 1(1): 43-69, 2001
null
null
cs.DB
null
In this paper we analyze declarative deterministic and non-deterministic semantics for active rules. In particular we consider several (partial) stable model semantics, previously defined for deductive rules, such as well-founded, max deterministic, unique total stable model, total stable model, and maximal stable model semantics. The semantics of an active program AP is given by first rewriting it into a deductive program P, then computing a model M defining the declarative semantics of P and, finally, applying `consistent' updates contained in M to the source database. The framework we propose permits a natural integration of deductive and active rules and can also be applied to queries with function symbols or to queries over infinite databases.
[ { "created": "Thu, 18 Dec 2003 17:43:43 GMT", "version": "v1" } ]
2007-05-23
[ [ "Flesca", "Sergio", "" ], [ "Greco", "Sergio", "" ] ]
In this paper we analyze declarative deterministic and non-deterministic semantics for active rules. In particular we consider several (partial) stable model semantics, previously defined for deductive rules, such as well-founded, max deterministic, unique total stable model, total stable model, and maximal stable model semantics. The semantics of an active program AP is given by first rewriting it into a deductive program P, then computing a model M defining the declarative semantics of P and, finally, applying `consistent' updates contained in M to the source database. The framework we propose permits a natural integration of deductive and active rules and can also be applied to queries with function symbols or to queries over infinite databases.
2004.02380
Philippe Morere
Philippe Morere and Fabio Ramos
Intrinsic Exploration as Multi-Objective RL
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intrinsic motivation enables reinforcement learning (RL) agents to explore when rewards are very sparse, where traditional exploration heuristics such as Boltzmann or e-greedy would typically fail. However, intrinsic exploration is generally handled in an ad-hoc manner, where exploration is not treated as a core objective of the learning process; this weak formulation leads to sub-optimal exploration performance. To overcome this problem, we propose a framework based on multi-objective RL where both exploration and exploitation are being optimized as separate objectives. This formulation brings the balance between exploration and exploitation at a policy level, resulting in advantages over traditional methods. This also allows for controlling exploration while learning, at no extra cost. Such strategies achieve a degree of control over agent exploration that was previously unattainable with classic or intrinsic rewards. We demonstrate scalability to continuous state-action spaces by presenting a method (EMU-Q) based on our framework, guiding exploration towards regions of higher value-function uncertainty. EMU-Q is experimentally shown to outperform classic exploration techniques and other intrinsic RL methods on a continuous control benchmark and on a robotic manipulator.
[ { "created": "Mon, 6 Apr 2020 02:37:29 GMT", "version": "v1" } ]
2020-04-07
[ [ "Morere", "Philippe", "" ], [ "Ramos", "Fabio", "" ] ]
Intrinsic motivation enables reinforcement learning (RL) agents to explore when rewards are very sparse, where traditional exploration heuristics such as Boltzmann or e-greedy would typically fail. However, intrinsic exploration is generally handled in an ad-hoc manner, where exploration is not treated as a core objective of the learning process; this weak formulation leads to sub-optimal exploration performance. To overcome this problem, we propose a framework based on multi-objective RL where both exploration and exploitation are being optimized as separate objectives. This formulation brings the balance between exploration and exploitation at a policy level, resulting in advantages over traditional methods. This also allows for controlling exploration while learning, at no extra cost. Such strategies achieve a degree of control over agent exploration that was previously unattainable with classic or intrinsic rewards. We demonstrate scalability to continuous state-action spaces by presenting a method (EMU-Q) based on our framework, guiding exploration towards regions of higher value-function uncertainty. EMU-Q is experimentally shown to outperform classic exploration techniques and other intrinsic RL methods on a continuous control benchmark and on a robotic manipulator.
1001.4420
Markus Jalsenius
Raphael Clifford, Markus Jalsenius, Ashley Montanaro and Benjamin Sach
The Complexity of Flood Filling Games
20 pages, 10 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the complexity of the popular one player combinatorial game known as Flood-It. In this game the player is given an n by n board of tiles where each tile is allocated one of c colours. The goal is to make the colours of all tiles equal via the shortest possible sequence of flooding operations. In the standard version, a flooding operation consists of the player choosing a colour k, which then changes the colour of all the tiles in the monochromatic region connected to the top left tile to k. After this operation has been performed, neighbouring regions which are already of the chosen colour k will then also become connected, thereby extending the monochromatic region of the board. We show that finding the minimum number of flooding operations is NP-hard for c>=3 and that this even holds when the player can perform flooding operations from any position on the board. However, we show that this "free" variant is in P for c=2. We also prove that for an unbounded number of colours, Flood-It remains NP-hard for boards of height at least 3, but is in P for boards of height 2. Next we show how a c-1 approximation and a randomised 2c/3 approximation algorithm can be derived, and that no polynomial time constant factor, independent of c, approximation algorithm exists unless P=NP. We then investigate how many moves are required for the "most demanding" n by n boards (those requiring the most moves) and show that the number grows as fast as Theta(n*c^0.5). Finally, we consider boards where the colours of the tiles are chosen at random and show that for c>=2, the number of moves required to flood the whole board is Omega(n) with high probability.
[ { "created": "Mon, 25 Jan 2010 13:40:57 GMT", "version": "v1" }, { "created": "Thu, 19 Aug 2010 16:57:48 GMT", "version": "v2" }, { "created": "Thu, 9 Jun 2011 13:12:47 GMT", "version": "v3" } ]
2011-06-10
[ [ "Clifford", "Raphael", "" ], [ "Jalsenius", "Markus", "" ], [ "Montanaro", "Ashley", "" ], [ "Sach", "Benjamin", "" ] ]
We study the complexity of the popular one player combinatorial game known as Flood-It. In this game the player is given an n by n board of tiles where each tile is allocated one of c colours. The goal is to make the colours of all tiles equal via the shortest possible sequence of flooding operations. In the standard version, a flooding operation consists of the player choosing a colour k, which then changes the colour of all the tiles in the monochromatic region connected to the top left tile to k. After this operation has been performed, neighbouring regions which are already of the chosen colour k will then also become connected, thereby extending the monochromatic region of the board. We show that finding the minimum number of flooding operations is NP-hard for c>=3 and that this even holds when the player can perform flooding operations from any position on the board. However, we show that this "free" variant is in P for c=2. We also prove that for an unbounded number of colours, Flood-It remains NP-hard for boards of height at least 3, but is in P for boards of height 2. Next we show how a c-1 approximation and a randomised 2c/3 approximation algorithm can be derived, and that no polynomial time constant factor, independent of c, approximation algorithm exists unless P=NP. We then investigate how many moves are required for the "most demanding" n by n boards (those requiring the most moves) and show that the number grows as fast as Theta(n*c^0.5). Finally, we consider boards where the colours of the tiles are chosen at random and show that for c>=2, the number of moves required to flood the whole board is Omega(n) with high probability.
1908.05739
Chandra Sekhar Bhagavatula
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih and Yejin Choi
Abductive Commonsense Reasoning
ICLR 2020 Camera Ready
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abductive reasoning is inference to the most plausible explanation. For example, if Jenny finds her house in a mess when she returns from work, and remembers that she left a window open, she can hypothesize that a thief broke into her house and caused the mess, as the most plausible explanation. While abduction has long been considered to be at the core of how people interpret and read between the lines in natural language (Hobbs et al., 1988), there has been relatively little research in support of abductive natural language inference and generation. We present the first study that investigates the viability of language-based abductive reasoning. We introduce a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations. Based on this dataset, we conceptualize two new tasks -- (i) Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and (ii) Abductive NLG: a conditional generation task for explaining given observations in natural language. On Abductive NLI, the best model achieves 68.9% accuracy, well below human performance of 91.4%. On Abductive NLG, the current best language generators struggle even more, as they lack reasoning capabilities that are trivial for humans. Our analysis leads to new insights into the types of reasoning that deep pre-trained language models fail to perform--despite their strong performance on the related but more narrowly defined task of entailment NLI--pointing to interesting avenues for future research.
[ { "created": "Thu, 15 Aug 2019 20:03:10 GMT", "version": "v1" }, { "created": "Fri, 14 Feb 2020 02:52:27 GMT", "version": "v2" } ]
2020-02-17
[ [ "Bhagavatula", "Chandra", "" ], [ "Bras", "Ronan Le", "" ], [ "Malaviya", "Chaitanya", "" ], [ "Sakaguchi", "Keisuke", "" ], [ "Holtzman", "Ari", "" ], [ "Rashkin", "Hannah", "" ], [ "Downey", "Doug", "" ], [ "Yih", "Scott Wen-tau", "" ], [ "Choi", "Yejin", "" ] ]
Abductive reasoning is inference to the most plausible explanation. For example, if Jenny finds her house in a mess when she returns from work, and remembers that she left a window open, she can hypothesize that a thief broke into her house and caused the mess, as the most plausible explanation. While abduction has long been considered to be at the core of how people interpret and read between the lines in natural language (Hobbs et al., 1988), there has been relatively little research in support of abductive natural language inference and generation. We present the first study that investigates the viability of language-based abductive reasoning. We introduce a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations. Based on this dataset, we conceptualize two new tasks -- (i) Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and (ii) Abductive NLG: a conditional generation task for explaining given observations in natural language. On Abductive NLI, the best model achieves 68.9% accuracy, well below human performance of 91.4%. On Abductive NLG, the current best language generators struggle even more, as they lack reasoning capabilities that are trivial for humans. Our analysis leads to new insights into the types of reasoning that deep pre-trained language models fail to perform--despite their strong performance on the related but more narrowly defined task of entailment NLI--pointing to interesting avenues for future research.
1901.00861
Bradley Gram-Hansen
Bradley Gram-Hansen, Patrick Helber, Indhu Varatharajan, Faiza Azam, Alejandro Coca-Castro, Veronika Kopackova, Piotr Bilinski
Mapping Informal Settlements in Developing Countries using Machine Learning and Low Resolution Multi-spectral Data
Published at the AAAI/ACM Conference on AI, ethics and society. Extended results from our previous workshop: arXiv:1812.00812
AAAI/ACM Conference on AI, Ethics, and Society (AIES 2019)
10.1145/3306618.3314253
null
cs.CY cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Informal settlements are home to the most socially and economically vulnerable people on the planet. In order to deliver effective economic and social aid, non-government organizations (NGOs), such as the United Nations Children's Fund (UNICEF), require detailed maps of the locations of informal settlements. However, data regarding informal and formal settlements are largely unavailable and, where available, often incomplete. This is due, in part, to the cost and complexity of gathering data on a large scale. To address these challenges, in this work we make three contributions. 1) A new machine learning dataset, purposely developed for informal settlement detection. 2) We show that it is possible to detect informal settlements using freely available low-resolution (LR) data, in contrast to previous studies that use very-high-resolution (VHR) satellite and aerial imagery, something that is cost-prohibitive for NGOs. 3) We demonstrate two effective classification schemes on our curated dataset: one that is cost-efficient for NGOs, and another that is cost-prohibitive for NGOs but has additional utility. We integrate these schemes into a semi-automated pipeline that converts either an LR or a VHR satellite image into a binary map that encodes the locations of informal settlements.
[ { "created": "Thu, 3 Jan 2019 16:51:40 GMT", "version": "v1" }, { "created": "Fri, 8 Mar 2019 23:18:26 GMT", "version": "v2" }, { "created": "Thu, 30 May 2019 11:11:39 GMT", "version": "v3" } ]
2019-05-31
[ [ "Gram-Hansen", "Bradley", "" ], [ "Helber", "Patrick", "" ], [ "Varatharajan", "Indhu", "" ], [ "Azam", "Faiza", "" ], [ "Coca-Castro", "Alejandro", "" ], [ "Kopackova", "Veronika", "" ], [ "Bilinski", "Piotr", "" ] ]
Informal settlements are home to the most socially and economically vulnerable people on the planet. In order to deliver effective economic and social aid, non-government organizations (NGOs), such as the United Nations Children's Fund (UNICEF), require detailed maps of the locations of informal settlements. However, data regarding informal and formal settlements are largely unavailable and, where available, often incomplete. This is due, in part, to the cost and complexity of gathering data on a large scale. To address these challenges, in this work we make three contributions. 1) A new machine learning dataset, purposely developed for informal settlement detection. 2) We show that it is possible to detect informal settlements using freely available low-resolution (LR) data, in contrast to previous studies that use very-high-resolution (VHR) satellite and aerial imagery, something that is cost-prohibitive for NGOs. 3) We demonstrate two effective classification schemes on our curated dataset: one that is cost-efficient for NGOs, and another that is cost-prohibitive for NGOs but has additional utility. We integrate these schemes into a semi-automated pipeline that converts either an LR or a VHR satellite image into a binary map that encodes the locations of informal settlements.
1304.1137
John Yen
John Yen, Piero P. Bonissone
Extending Term Subsumption systems for Uncertainty Management
Appears in Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence (UAI1990)
null
null
UAI-P-1990-PG-468-474
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major difficulty in developing and maintaining very large knowledge bases originates from the variety of forms in which knowledge is made available to the KB builder. The objective of this research is to bring together two complementary knowledge representation schemes: term subsumption languages, which represent and reason about the defining characteristics of concepts, and approximate reasoning models, which deal with uncertain knowledge and data in expert systems. Previous work in this area has primarily focused on probabilistic inheritance. In this paper, we address two other important issues regarding the integration of term subsumption-based systems and approximate reasoning models. First, we outline a general architecture that specifies the interactions between the deductive reasoner of a term subsumption system and an approximate reasoner. Second, we generalize the semantics of the terminological language so that terminological knowledge can be used to make plausible inferences. The architecture, combined with the generalized semantics, forms the foundation of a synergistic tight integration of term subsumption systems and approximate reasoning models.
[ { "created": "Wed, 27 Mar 2013 13:59:48 GMT", "version": "v1" } ]
2013-04-05
[ [ "Yen", "John", "" ], [ "Bonissone", "Piero P.", "" ] ]
A major difficulty in developing and maintaining very large knowledge bases originates from the variety of forms in which knowledge is made available to the KB builder. The objective of this research is to bring together two complementary knowledge representation schemes: term subsumption languages, which represent and reason about the defining characteristics of concepts, and approximate reasoning models, which deal with uncertain knowledge and data in expert systems. Previous work in this area has primarily focused on probabilistic inheritance. In this paper, we address two other important issues regarding the integration of term subsumption-based systems and approximate reasoning models. First, we outline a general architecture that specifies the interactions between the deductive reasoner of a term subsumption system and an approximate reasoner. Second, we generalize the semantics of the terminological language so that terminological knowledge can be used to make plausible inferences. The architecture, combined with the generalized semantics, forms the foundation of a synergistic tight integration of term subsumption systems and approximate reasoning models.
cs/0610119
Elad Hazan
Elad Hazan
Approximate Convex Optimization by Online Game Playing
null
null
null
null
cs.DS
null
Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an $\epsilon$-approximate solution is proportional to $\frac{1}{\epsilon^2}$. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in $\frac{1}{\epsilon}$ iterations. The latter algorithm requires solving a convex quadratic program at every iteration - an optimization subroutine which dominates the theoretical running time. We give an algorithm for convex programs with strictly convex constraints which runs in time proportional to $\frac{1}{\epsilon}$. The algorithm does not require solving any quadratic program, but uses gradient steps and elementary operations only. Problems which have strictly convex constraints include maximum entropy frequency estimation, portfolio optimization with loss risk constraints, and various computational problems in signal processing. As a by-product, we also obtain a simpler version of Bienstock and Iyengar's result for general linear programming, with similar running time. We derive these algorithms using a new framework for deriving convex optimization algorithms from online game playing algorithms, which may be of independent interest.
[ { "created": "Thu, 19 Oct 2006 22:10:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Hazan", "Elad", "" ] ]
Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an $\epsilon$-approximate solution is proportional to $\frac{1}{\epsilon^2}$. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in $\frac{1}{\epsilon}$ iterations. The latter algorithm requires solving a convex quadratic program at every iteration - an optimization subroutine which dominates the theoretical running time. We give an algorithm for convex programs with strictly convex constraints which runs in time proportional to $\frac{1}{\epsilon}$. The algorithm does not require solving any quadratic program, but uses gradient steps and elementary operations only. Problems which have strictly convex constraints include maximum entropy frequency estimation, portfolio optimization with loss risk constraints, and various computational problems in signal processing. As a by-product, we also obtain a simpler version of Bienstock and Iyengar's result for general linear programming, with similar running time. We derive these algorithms using a new framework for deriving convex optimization algorithms from online game playing algorithms, which may be of independent interest.
1107.3129
Anwitaman Datta
Frederique Oggier, Anwitaman Datta
Homomorphic Self-repairing Codes for Agile Maintenance of Distributed Storage Systems
arXiv admin note: significant text overlap with arXiv:1008.0064
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed data storage systems are essential to deal with the need to store massive volumes of data. In order to make such a system fault-tolerant, some form of redundancy becomes crucial, incurring various overheads - most prominently in terms of storage space and maintenance bandwidth requirements. Erasure codes, originally designed for communication over lossy channels, provide a storage-efficient alternative to replication-based redundancy, albeit entailing high communication overhead for maintenance when some of the encoded fragments need to be replenished by new ones after the failure of some storage devices. We propose as an alternative a new family of erasure codes called self-repairing codes (SRC), which take into account the peculiarities of distributed storage systems, specifically the maintenance process. SRC have the following salient features: (a) encoded fragments can be repaired directly from other subsets of encoded fragments by downloading less data than the size of the complete object, ensuring that (b) a fragment is repaired from a fixed number of encoded fragments, the number depending only on how many encoded blocks are missing and independent of which specific blocks are missing. This paper lays the foundations by defining the novel self-repairing codes and elaborating why the defined characteristics are desirable for distributed storage systems. Then homomorphic self-repairing codes (HSRC) are proposed as a concrete instance, whose various aspects and properties are studied and compared - quantitatively or qualitatively - with respect to other codes, including traditional erasure codes as well as other recent codes designed specifically for storage applications.
[ { "created": "Fri, 15 Jul 2011 18:46:33 GMT", "version": "v1" } ]
2011-12-25
[ [ "Oggier", "Frederique", "" ], [ "Datta", "Anwitaman", "" ] ]
Distributed data storage systems are essential to deal with the need to store massive volumes of data. In order to make such a system fault-tolerant, some form of redundancy becomes crucial, incurring various overheads - most prominently in terms of storage space and maintenance bandwidth requirements. Erasure codes, originally designed for communication over lossy channels, provide a storage-efficient alternative to replication-based redundancy, albeit entailing high communication overhead for maintenance when some of the encoded fragments need to be replenished by new ones after the failure of some storage devices. We propose as an alternative a new family of erasure codes called self-repairing codes (SRC), which take into account the peculiarities of distributed storage systems, specifically the maintenance process. SRC have the following salient features: (a) encoded fragments can be repaired directly from other subsets of encoded fragments by downloading less data than the size of the complete object, ensuring that (b) a fragment is repaired from a fixed number of encoded fragments, the number depending only on how many encoded blocks are missing and independent of which specific blocks are missing. This paper lays the foundations by defining the novel self-repairing codes and elaborating why the defined characteristics are desirable for distributed storage systems. Then homomorphic self-repairing codes (HSRC) are proposed as a concrete instance, whose various aspects and properties are studied and compared - quantitatively or qualitatively - with respect to other codes, including traditional erasure codes as well as other recent codes designed specifically for storage applications.
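The self-repair property described above can be illustrated with a toy parity code. This is not the HSRC construction from the paper: the (3,2) XOR code and the byte values below are illustrative assumptions, chosen only to show a fragment being rebuilt directly from a fixed, small subset of the surviving fragments without first decoding the whole object:

```python
def encode(x1: bytes, x2: bytes) -> list:
    """Toy (3,2) parity code: store x1, x2 and the parity x1 XOR x2."""
    assert len(x1) == len(x2)
    parity = bytes(a ^ b for a, b in zip(x1, x2))
    return [x1, x2, parity]

def repair(fragments: list, lost: int) -> bytes:
    """Rebuild the fragment at index `lost` by XORing the two survivors.
    Any single fragment is recoverable from the other two directly."""
    a, b = (f for i, f in enumerate(fragments) if i != lost)
    return bytes(x ^ y for x, y in zip(a, b))

frags = encode(b"\x0f\x0f", b"\x33\x33")
assert repair(frags, 2) == frags[2]  # parity rebuilt from x1 and x2
assert repair(frags, 0) == frags[0]  # x1 rebuilt from x2 and the parity
```

In the actual SRC/HSRC setting the code is over many more fragments and the repair subsets are larger, but the repair step has the same flavour: a fixed number of surviving fragments suffices, regardless of which fragment was lost.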
1909.08518
Ashesh Rambachan
Ashesh Rambachan and Jonathan Roth
Bias In, Bias Out? Evaluating the Folk Wisdom
null
1st Symposium on Foundations of Responsible Computing (FORC 2020)
10.4230/LIPIcs.FORC.2020.6
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We evaluate the folk wisdom that algorithmic decision rules trained on data produced by biased human decision-makers necessarily reflect this bias. We consider a setting where training labels are only generated if a biased decision-maker takes a particular action, and so "biased" training data arise due to discriminatory selection into the training data. In our baseline model, the more biased the decision-maker is against a group, the more the algorithmic decision rule favors that group. We refer to this phenomenon as "bias reversal." We then clarify the conditions that give rise to bias reversal. Whether a prediction algorithm reverses or inherits bias depends critically on how the decision-maker affects the training data as well as the label used in training. We illustrate our main theoretical results in a simulation study applied to the New York City Stop, Question and Frisk dataset.
[ { "created": "Wed, 18 Sep 2019 15:50:19 GMT", "version": "v1" }, { "created": "Thu, 27 Feb 2020 02:45:01 GMT", "version": "v2" }, { "created": "Sat, 19 Dec 2020 19:19:02 GMT", "version": "v3" } ]
2020-12-22
[ [ "Rambachan", "Ashesh", "" ], [ "Roth", "Jonathan", "" ] ]
We evaluate the folk wisdom that algorithmic decision rules trained on data produced by biased human decision-makers necessarily reflect this bias. We consider a setting where training labels are only generated if a biased decision-maker takes a particular action, and so "biased" training data arise due to discriminatory selection into the training data. In our baseline model, the more biased the decision-maker is against a group, the more the algorithmic decision rule favors that group. We refer to this phenomenon as "bias reversal." We then clarify the conditions that give rise to bias reversal. Whether a prediction algorithm reverses or inherits bias depends critically on how the decision-maker affects the training data as well as the label used in training. We illustrate our main theoretical results in a simulation study applied to the New York City Stop, Question and Frisk dataset.
2103.01078
Rodrigo Bonacin
Renata de Podest\'a Gaspar, Rodrigo Bonacin, Vin\'icius Gon\c{c}alves
Um Estudo sobre Atividades Participativas para Solu\c{c}\~oes IoT para o Home care de Pessoas Idosas
147 pages, in Portuguese
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population aging in Brazil and around the world is occurring at the same time as advances in technology. This creates opportunities for new solutions for the elderly, such as innovations in Home Care. With the Internet of Things, it is possible to improve the autonomy, safety and quality of life of the elderly. However, the design of IoT solutions for elderly Home Care poses new challenges. In this context, this technical report aims to detail the activities developed as a case study to evaluate the IoT-PMHCS Method, which was developed in the context of the Master's program in Computer Science at UNIFACCAMP, Brazil. This report includes the planning and results of interviews, participatory workshops, validations, simulations of solutions, among other activities. This document reports the practical experience of applying the IoT-PMHCS Method. -- O envelhecimento populacional no Brasil e no mundo ocorre ao mesmo tempo que os avan\c{c}os e evolu\c{c}\~oes na tecnologia. Desta forma, surgem oportunidades de novas solu\c{c}\~oes para o p\'ublico idoso, tais como inova\c{c}\~oes em Home Care. Com a Internet das Coisas \'e poss\'ivel promover maior autonomia, seguran\c{c}a e qualidade de vida aos idosos. Entretanto, o design de solu\c{c}\~oes de IoT para Home Care de pessoas idosas traz novos desafios. Diante disto, este relat\'orio t\'ecnico tem o objetivo de detalhar atividades desenvolvidas como estudo de caso para avalia\c{c}\~ao do M\'etodo IoT-PMHCS, desenvolvido no contexto do programa de Mestrado em Ci\^encia da Computa\c{c}\~ao da UNIFACCAMP, Brasil. O relat\'orio inclui o planejamento e resultados de entrevistas, workshops participativos, pesquisas de valida\c{c}\~ao, simula\c{c}\~ao de solu\c{c}\~oes, dentre outras atividades. Este documento relata a experi\^encia pr\'atica da aplica\c{c}\~ao do M\'etodo IoT-PMHCS.
[ { "created": "Mon, 1 Mar 2021 15:43:32 GMT", "version": "v1" } ]
2021-03-02
[ [ "Gaspar", "Renata de Podestá", "" ], [ "Bonacin", "Rodrigo", "" ], [ "Gonçalves", "Vinícius", "" ] ]
Population aging in Brazil and around the world is occurring at the same time as advances in technology. This creates opportunities for new solutions for the elderly, such as innovations in Home Care. With the Internet of Things, it is possible to improve the autonomy, safety and quality of life of the elderly. However, the design of IoT solutions for elderly Home Care poses new challenges. In this context, this technical report aims to detail the activities developed as a case study to evaluate the IoT-PMHCS Method, which was developed in the context of the Master's program in Computer Science at UNIFACCAMP, Brazil. This report includes the planning and results of interviews, participatory workshops, validations, simulations of solutions, among other activities. This document reports the practical experience of applying the IoT-PMHCS Method. -- O envelhecimento populacional no Brasil e no mundo ocorre ao mesmo tempo que os avan\c{c}os e evolu\c{c}\~oes na tecnologia. Desta forma, surgem oportunidades de novas solu\c{c}\~oes para o p\'ublico idoso, tais como inova\c{c}\~oes em Home Care. Com a Internet das Coisas \'e poss\'ivel promover maior autonomia, seguran\c{c}a e qualidade de vida aos idosos. Entretanto, o design de solu\c{c}\~oes de IoT para Home Care de pessoas idosas traz novos desafios. Diante disto, este relat\'orio t\'ecnico tem o objetivo de detalhar atividades desenvolvidas como estudo de caso para avalia\c{c}\~ao do M\'etodo IoT-PMHCS, desenvolvido no contexto do programa de Mestrado em Ci\^encia da Computa\c{c}\~ao da UNIFACCAMP, Brasil. O relat\'orio inclui o planejamento e resultados de entrevistas, workshops participativos, pesquisas de valida\c{c}\~ao, simula\c{c}\~ao de solu\c{c}\~oes, dentre outras atividades. Este documento relata a experi\^encia pr\'atica da aplica\c{c}\~ao do M\'etodo IoT-PMHCS.
2208.07824
Ngoc Bui
Ngoc Bui, Phi Le Nguyen, Viet Anh Nguyen, Phan Thuan Do
A Deep Reinforcement Learning-based Adaptive Charging Policy for WRSNs
9 pages
null
10.1109/MASS56207.2022.00097
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Wireless sensor networks consist of randomly distributed sensor nodes for monitoring targets or areas of interest. Maintaining the network for continuous surveillance is a challenge due to the limited battery capacity of each sensor. Wireless power transfer technology is emerging as a reliable solution for energizing the sensors by deploying a mobile charger (MC) to recharge them. However, designing an optimal charging path for the MC is challenging because of uncertainties arising in the networks. The energy consumption rate of the sensors may fluctuate significantly due to unpredictable changes in the network topology, such as node failures. These changes also lead to shifts in the importance of each sensor, which is often assumed to be the same in existing works. We address these challenges in this paper by proposing a novel adaptive charging scheme using a deep reinforcement learning (DRL) approach. Specifically, we endow the MC with a charging policy that determines the next sensor to charge conditioned on the current state of the network. We then use a deep neural network to parametrize this charging policy, which is trained with reinforcement learning techniques. Our model can adapt to spontaneous changes in the network topology. The empirical results show that the proposed algorithm outperforms existing on-demand algorithms by a significant margin.
[ { "created": "Tue, 16 Aug 2022 16:10:52 GMT", "version": "v1" } ]
2023-10-03
[ [ "Bui", "Ngoc", "" ], [ "Nguyen", "Phi Le", "" ], [ "Nguyen", "Viet Anh", "" ], [ "Do", "Phan Thuan", "" ] ]
Wireless sensor networks consist of randomly distributed sensor nodes for monitoring targets or areas of interest. Maintaining the network for continuous surveillance is a challenge due to the limited battery capacity of each sensor. Wireless power transfer technology is emerging as a reliable solution for energizing the sensors by deploying a mobile charger (MC) to recharge them. However, designing an optimal charging path for the MC is challenging because of uncertainties arising in the networks. The energy consumption rate of the sensors may fluctuate significantly due to unpredictable changes in the network topology, such as node failures. These changes also lead to shifts in the importance of each sensor, which is often assumed to be the same in existing works. We address these challenges in this paper by proposing a novel adaptive charging scheme using a deep reinforcement learning (DRL) approach. Specifically, we endow the MC with a charging policy that determines the next sensor to charge conditioned on the current state of the network. We then use a deep neural network to parametrize this charging policy, which is trained with reinforcement learning techniques. Our model can adapt to spontaneous changes in the network topology. The empirical results show that the proposed algorithm outperforms existing on-demand algorithms by a significant margin.