Dataset schema (column — type, observed value-length range):

id — string, length 9 to 10
submitter — string, length 1 to 64
authors — string, length 4 to 20.7k
title — string, length 4 to 246
comments — string, length 1 to 523
journal-ref — string, length 4 to 404
doi — string, length 11 to 153
report-no — string, length 2 to 254
categories — string, length 5 to 98
license — string class, 9 distinct values
orig_abstract — string, length 14 to 3.35k
versions — list, length 1 to 60
update_date — string, length 10 to 10
authors_parsed — list, length 1 to 1.35k
abstract — string, length 11 to 3.34k

id: 2005.01807
submitter: Nitin Rathi
authors: Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, Kaushik Roy
title: Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation
comments: International Conference on Learning Representations (ICLR), 2020; https://openreview.net/forum?id=B1xSperKvH&noteId=B1xSperKvH
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.CV stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes), which can potentially lead to higher energy efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. These types of converted SNNs require a large number of time steps to achieve competitive accuracy, which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing-dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time. The proposed training methodology converges in less than 20 epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. We perform experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets for both VGG and ResNet architectures. We achieve a top-1 accuracy of 65.19% on the ImageNet dataset with an SNN using 250 time steps, which is 10X faster than converted SNNs with similar accuracy.
versions: [ { "created": "Mon, 4 May 2020 19:30:43 GMT", "version": "v1" } ]
update_date: 2020-05-06
authors_parsed: [ [ "Rathi", "Nitin", "" ], [ "Srinivasan", "Gopalakrishnan", "" ], [ "Panda", "Priyadarshini", "" ], [ "Roy", "Kaushik", "" ] ]
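The abstract above defines a surrogate gradient using the neuron's spike time. A minimal sketch of one plausible form — an exponential decay in the time since the neuron last fired; the decay shape and the alpha/beta constants are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def stdb_surrogate_grad(t, last_spike_t, alpha=0.3, beta=0.01):
    # Surrogate gradient that depends on the neuron's spike time:
    # largest just after the neuron fired, decaying with elapsed time.
    # The exponential form and the alpha/beta constants are assumptions.
    dt = t - last_spike_t
    return alpha * np.exp(-beta * dt)

g_recent = stdb_surrogate_grad(t=10, last_spike_t=10)    # just fired
g_stale = stdb_surrogate_grad(t=110, last_spike_t=10)    # fired 100 steps ago
```

A neuron that spiked recently receives a larger gradient than one that spiked long ago, which is the spike-time dependence the abstract refers to.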

id: 2106.04399
submitter: Emmanuel Bengio
authors: Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, Yoshua Bengio
title: Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation
comments: Accepted at NeurIPS 2021
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: This paper is about the problem of learning a stochastic policy for generating an object (like a molecular graph) from a sequence of actions, such that the probability of generating an object is proportional to a given positive reward for that object. Whereas standard return maximization tends to converge to a single return-maximizing sequence, there are cases where we would like to sample a diverse set of high-return solutions. These arise, for example, in black-box function optimization when few rounds are possible, each with large batches of queries, where the batches should be diverse, e.g., in the design of new molecules. One can also see this as a problem of approximately converting an energy function to a generative distribution. While MCMC methods can achieve that, they are expensive and generally only perform local exploration. Instead, training a generative policy amortizes the cost of search during training and yields fast generation. Using insights from Temporal Difference learning, we propose GFlowNet, based on a view of the generative process as a flow network, making it possible to handle the tricky case where different trajectories can yield the same final state, e.g., there are many ways to sequentially add atoms to generate some molecular graph. We cast the set of trajectories as a flow and convert the flow consistency equations into a learning objective, akin to the casting of the Bellman equations into Temporal Difference methods. We prove that any global minimum of the proposed objectives yields a policy which samples from the desired distribution, and demonstrate the improved performance and diversity of GFlowNet on a simple domain where there are many modes to the reward function, and on a molecule synthesis task.
versions: [ { "created": "Tue, 8 Jun 2021 14:21:10 GMT", "version": "v1" }, { "created": "Fri, 19 Nov 2021 14:50:32 GMT", "version": "v2" } ]
update_date: 2021-11-22
authors_parsed: [ [ "Bengio", "Emmanuel", "" ], [ "Jain", "Moksh", "" ], [ "Korablyov", "Maksym", "" ], [ "Precup", "Doina", "" ], [ "Bengio", "Yoshua", "" ] ]
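The flow-consistency idea in the abstract above can be illustrated on a toy DAG: for every intermediate state, flow in equals flow out, and a terminal state's outflow is its reward. The graph and flow values below are made up for illustration; the paper learns these flows with a neural policy and penalizes mismatches in log space.

```python
# Edge flows on a tiny DAG: s0 -> s1, s0 -> s2, s1 -> s2; s2 is terminal.
inflow = {"s1": [("s0", 2.0)], "s2": [("s0", 3.0), ("s1", 2.0)]}
outflow = {"s0": [("s1", 2.0), ("s2", 3.0)], "s1": [("s2", 2.0)]}
reward = {"s2": 5.0}  # terminal state: remaining flow exits as reward

def flow_mismatch(state):
    # Flow-consistency residual: inflow minus (outflow + terminal reward).
    # The source s0 carries the total flow Z and is handled separately.
    f_in = sum(f for _, f in inflow.get(state, []))
    f_out = sum(f for _, f in outflow.get(state, [])) + reward.get(state, 0.0)
    return f_in - f_out
```

Here s1 and s2 are flow-consistent (mismatch 0), while the source s0 emits the total flow Z = 5, matching the total reward.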

id: 1711.10690
submitter: Yi Gu
authors: Yi Gu and Huaiguang Jiang and Jun Jason Zhang and Yingchen Zhang and Eduard Muljadi and Francisco J. Solis
title: Load Forecasting Based Distribution System Network Reconfiguration-A Distributed Data-Driven Approach
comments: 5 pages; preprint for the Asilomar Conference on Signals, Systems, and Computers 2017
journal-ref: null
doi: null
report-no: null
categories: cs.SY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In this paper, a short-term load forecasting based network reconfiguration approach is proposed in a parallel manner. Specifically, a support vector regression (SVR) based short-term load forecasting approach is designed to provide an accurate load prediction and benefit the network reconfiguration. Because of the nonconvexity of the three-phase balanced optimal power flow, a second-order cone program (SOCP) based approach is used to relax the optimal power flow problem. Then, the alternating direction method of multipliers (ADMM) is used to compute the optimal power flow in a distributed manner. Considering the limited number of switches and the increasing computation capability, the proposed network reconfiguration is solved in a parallel way. The numerical results demonstrate the feasibility and effectiveness of the proposed approach.
versions: [ { "created": "Wed, 29 Nov 2017 06:01:02 GMT", "version": "v1" } ]
update_date: 2017-11-30
authors_parsed: [ [ "Gu", "Yi", "" ], [ "Jiang", "Huaiguang", "" ], [ "Zhang", "Jun Jason", "" ], [ "Zhang", "Yingchen", "" ], [ "Muljadi", "Eduard", "" ], [ "Solis", "Francisco J.", "" ] ]

id: 1908.11298
submitter: Tao Wen
authors: Tao Wen and Yong Deng
title: Identification of influencers in complex networks by local information dimensionality
comments: null
journal-ref: Information Sciences 2019
doi: 10.1016/j.ins.2019.10.003
report-no: null
categories: cs.SI physics.soc-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The identification of influential spreaders in complex networks is a popular topic in studies of network characteristics. Many centrality measures have been proposed to address this problem, but most have limitations. In this paper, a method for identifying influencers in complex networks via the local information dimensionality is proposed. The proposed method considers the local structural properties around the central node; therefore, the scale of locality only increases to half of the maximum value of the shortest distance from the central node. Thus, the proposed method considers quasi-local information and reduces the computational complexity. The information (number of nodes) in boxes is described via the Shannon entropy, which is more reasonable. A node is more influential when its local information dimensionality is higher. In order to show the effectiveness of the proposed method, five existing centrality measures are used as comparison methods to rank influential nodes in six real-world complex networks. In addition, a susceptible-infected (SI) model and Kendall's tau coefficient are applied to show the correlation between different methods. Experimental results show the superiority of the proposed method.
versions: [ { "created": "Thu, 29 Aug 2019 15:33:05 GMT", "version": "v1" }, { "created": "Sun, 6 Oct 2019 03:40:09 GMT", "version": "v2" } ]
update_date: 2019-10-08
authors_parsed: [ [ "Wen", "Tao", "" ], [ "Deng", "Yong", "" ] ]

id: 2009.06573
submitter: Runze Su
authors: Runze Su, Fei Tao, Xudong Liu, Haoran Wei, Xiaorong Mei, Zhiyao Duan, Lei Yuan, Ji Liu, Yuying Xie
title: Themes Informed Audio-visual Correspondence Learning
comments: Submitted to ICASSP 2021
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.MM stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Applications of short-term user-generated video (UGV), such as Snapchat and YouTube short videos, have boomed recently, raising many multimodal machine learning tasks. Among them, learning the correspondence between audio and visual information from videos is a challenging one. Most previous work on audio-visual correspondence (AVC) learning investigated only constrained videos or simple settings, which may not fit the application of UGV. In this paper, we proposed new principles for AVC and introduced a new framework that takes videos' themes into account to facilitate AVC learning. We also released the KWAI-AD-AudVis corpus, which contains 85432 short advertisement videos (around 913 hours) made by users. We evaluated our proposed approach on this corpus, and it was able to outperform the baseline by a 23.15% absolute difference.
versions: [ { "created": "Mon, 14 Sep 2020 17:03:04 GMT", "version": "v1" }, { "created": "Mon, 19 Oct 2020 06:40:40 GMT", "version": "v2" } ]
update_date: 2020-10-20
authors_parsed: [ [ "Su", "Runze", "" ], [ "Tao", "Fei", "" ], [ "Liu", "Xudong", "" ], [ "Wei", "Haoran", "" ], [ "Mei", "Xiaorong", "" ], [ "Duan", "Zhiyao", "" ], [ "Yuan", "Lei", "" ], [ "Liu", "Ji", "" ], [ "Xie", "Yuying", "" ] ]

id: 2104.11230
submitter: Jiajie Wu
authors: Jiajie Wu
title: Literature review on vulnerability detection using NLP technology
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.AI cs.SE
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Vulnerability detection has always been the most important task in the field of software security. With the development of technology, and in the face of massive source code, automated analysis and detection of vulnerabilities has become a current research hotspot. For special text files such as source code, using some of the most popular NLP technologies to build models and realize automatic analysis and detection of source code has become one of the most anticipated lines of study in the field of vulnerability detection. This article gives a brief survey of some recent papers and technologies, such as CodeBERT, and summarizes previous technologies.
versions: [ { "created": "Fri, 23 Apr 2021 03:16:51 GMT", "version": "v1" } ]
update_date: 2021-04-26
authors_parsed: [ [ "Wu", "Jiajie", "" ] ]

id: 1904.03540
submitter: Rokas Volkovas
authors: Rokas Volkovas, Michael Fairbank, John Woodward, Simon Lucas
title: Mek: Mechanics Prototyping Tool for 2D Tile-Based Turn-Based Deterministic Games
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.PL cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: There are few digital tools to help designers create game mechanics. A general language to express game mechanics is necessary for rapid game design iteration. The first iteration of a mechanics-focused language, together with its interfacing tool, are introduced in this paper. The language is restricted to two-dimensional, turn-based, tile-based, deterministic, complete-information games. The tool is compared to the existing alternatives for game mechanics prototyping and shown to be capable of succinctly implementing a range of well-known game mechanics.
versions: [ { "created": "Sat, 6 Apr 2019 22:09:53 GMT", "version": "v1" } ]
update_date: 2019-04-09
authors_parsed: [ [ "Volkovas", "Rokas", "" ], [ "Fairbank", "Michael", "" ], [ "Woodward", "John", "" ], [ "Lucas", "Simon", "" ] ]

id: 2102.11822
submitter: Navid Reyhanian
authors: Navid Reyhanian, Jarvis Haupt
title: Online Stochastic Gradient Descent Learns Linear Dynamical Systems from A Single Trajectory
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.SY eess.SP eess.SY
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: This work investigates the problem of estimating the weight matrices of a stable time-invariant linear dynamical system from a single sequence of noisy measurements. We show that if the unknown weight matrices describing the system are in Brunovsky canonical form, we can efficiently estimate the ground truth unknown matrices of the system from a linear system of equations formulated based on the transfer function of the system, using both online and offline stochastic gradient descent (SGD) methods. Specifically, by deriving concrete complexity bounds, we show that SGD converges linearly in expectation to an arbitrarily small Frobenius-norm distance from the ground truth weights. To the best of our knowledge, ours is the first work to establish linear convergence characteristics for online and offline gradient-based iterative methods for weight matrix estimation in linear dynamical systems from a single trajectory. Extensive numerical tests verify that the performance of the proposed methods is consistent with our theory, and show their superior performance relative to existing state-of-the-art methods.
versions: [ { "created": "Tue, 23 Feb 2021 17:48:39 GMT", "version": "v1" } ]
update_date: 2021-02-24
authors_parsed: [ [ "Reyhanian", "Navid", "" ], [ "Haupt", "Jarvis", "" ] ]
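As a toy illustration of the setting in the abstract above (not the paper's Brunovsky-form, transfer-function construction), plain online SGD can recover the state-transition matrix of a stable linear system from one noisy trajectory. The system, step size, and noise level below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.6, 0.2],
                   [0.1, 0.5]])      # stable: spectral radius < 1
A_hat = np.zeros_like(A_true)        # online estimate of the weight matrix
x = rng.normal(size=2)
lr = 0.05

for _ in range(20000):               # a single trajectory, one pass
    x_next = A_true @ x + 0.5 * rng.normal(size=2)  # process noise
    err = A_hat @ x - x_next         # one-step prediction residual
    A_hat -= lr * np.outer(err, x)   # SGD step on 0.5 * ||err||^2
    x = x_next

max_err = np.abs(A_hat - A_true).max()
```

With a constant step size the estimate settles in a noise ball around the true matrix; a decaying step size would drive the error to zero, matching the linear-convergence-in-expectation statement.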

id: 2403.17479
submitter: Morteza Zakeri-Nasrabadi
authors: Morteza Zakeri-Nasrabadi and Saeed Parsa
title: Natural Language Requirements Testability Measurement Based on Requirement Smells
comments: 45 pages, 16 figures, and 13 tables; submitted as a journal paper
journal-ref: null
doi: null
report-no: null
categories: cs.SE cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Requirements form the basis for defining software systems' obligations and tasks. Testable requirements help prevent failures, reduce maintenance costs, and make it easier to perform acceptance tests. However, despite the importance of measuring and quantifying requirements testability, no automatic approach for measuring requirements testability has been proposed based on the requirements smells, which are at odds with the requirements testability. This paper presents a mathematical model to evaluate and rank the natural language requirements testability based on an extensive set of nine requirements smells, detected automatically, and acceptance test efforts determined by requirement length and its application domain. Most of the smells stem from uncountable adjectives, context-sensitive, and ambiguous words. A comprehensive dictionary is required to detect such words. We offer a neural word-embedding technique to generate such a dictionary automatically. Using the dictionary, we could automatically detect Polysemy smell (domain-specific ambiguity) for the first time in 10 application domains. Our empirical study on nearly 1000 software requirements from six well-known industrial and academic projects demonstrates that the proposed smell detection approach outperforms Smella, a state-of-the-art tool, in detecting requirements smells. The precision and recall of smell detection are improved with an average of 0.03 and 0.33, respectively, compared to the state-of-the-art. The proposed requirement testability model measures the testability of 985 requirements with a mean absolute error of 0.12 and a mean squared error of 0.03, demonstrating the model's potential for practical use.
versions: [ { "created": "Tue, 26 Mar 2024 08:19:29 GMT", "version": "v1" } ]
update_date: 2024-03-27
authors_parsed: [ [ "Zakeri-Nasrabadi", "Morteza", "" ], [ "Parsa", "Saeed", "" ] ]

id: 2308.02558
submitter: Vasant Dhar
authors: Vasant Dhar
title: The Paradigm Shifts in Artificial Intelligence
comments: 14 pages, 1 figure, 1 table
journal-ref: null
doi: null
report-no: null
categories: cs.AI
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Kuhn's framework of scientific progress (Kuhn, 1962) provides a useful framing of the paradigm shifts that have occurred in Artificial Intelligence over the last 60 years. The framework is also useful in understanding what is arguably a new paradigm shift in AI, signaled by the emergence of large pre-trained systems such as GPT-3, on which conversational agents such as ChatGPT are based. Such systems make intelligence a commoditized general purpose technology that is configurable to applications. In this paper, I summarize the forces that led to the rise and fall of each paradigm, and discuss the pressing issues and risks associated with the current paradigm shift in AI.
versions: [ { "created": "Wed, 2 Aug 2023 19:38:24 GMT", "version": "v1" } ]
update_date: 2023-08-08
authors_parsed: [ [ "Dhar", "Vasant", "" ] ]

id: 2107.11978
submitter: Yuqian Fu
authors: Yuqian Fu, Yanwei Fu, Yu-Gang Jiang
title: Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data
comments: Accepted by ACM Multimedia 2021
journal-ref: null
doi: 10.1109/TIP.2022.3219237
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: A recent study finds that existing few-shot learning methods, trained on the source domain, fail to generalize to the novel target domain when a domain gap is observed. This motivates the task of Cross-Domain Few-Shot Learning (CD-FSL). In this paper, we realize that the labeled target data in CD-FSL has not been leveraged in any way to help the learning process. Thus, we advocate utilizing a few labeled target data to guide the model learning. Technically, a novel meta-FDMixup network is proposed. We tackle this problem mainly from two aspects. Firstly, to utilize the source and the newly introduced target data of two different class sets, a mixup module is re-proposed and integrated into the meta-learning mechanism. Secondly, a novel disentangle module together with a domain classifier is proposed to extract the disentangled domain-irrelevant and domain-specific features. These two modules together enable our model to narrow the domain gap, thus generalizing well to the target datasets. Additionally, a detailed feasibility and pilot study is conducted to reflect the intuitive understanding of CD-FSL under our new setting. Experimental results show the effectiveness of our new setting and the proposed method. Codes and models are available at https://github.com/lovelyqian/Meta-FDMixup.
versions: [ { "created": "Mon, 26 Jul 2021 06:15:45 GMT", "version": "v1" } ]
update_date: 2022-11-30
authors_parsed: [ [ "Fu", "Yuqian", "" ], [ "Fu", "Yanwei", "" ], [ "Jiang", "Yu-Gang", "" ] ]
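The mixup module mentioned in the abstract above can be sketched in its basic form: blend a source-domain sample with a target-domain sample using a Beta-sampled ratio. The meta-learning integration and the disentangle module from the paper are not shown, and the stand-in images are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x_source, x_target, alpha=1.0):
    # Classic mixup: convex combination with a Beta(alpha, alpha) ratio.
    lam = rng.beta(alpha, alpha)          # mixing ratio in [0, 1]
    return lam * x_source + (1 - lam) * x_target, lam

x_s = np.ones((4, 4))      # stand-in "source-domain image"
x_t = np.zeros((4, 4))     # stand-in "target-domain image"
x_mix, lam = mixup(x_s, x_t)
# Because the inputs are constant images, every pixel of x_mix equals lam.
```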

id: 2211.09761
submitter: Piotr Nawrot
authors: Piotr Nawrot, Jan Chorowski, Adrian Łańcucki, Edoardo M. Ponti
title: Efficient Transformers with Dynamic Token Pooling
comments: null
journal-ref: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, 2023, pages 6403-6417
doi: 10.18653/v1/2023.acl-long.353
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Transformers achieve unrivalled performance in modelling language, but remain inefficient in terms of memory and time complexity. A possible remedy is to reduce the sequence length in the intermediate layers by pooling fixed-length segments of tokens. Nevertheless, natural units of meaning, such as words or phrases, display varying sizes. To address this mismatch, we equip language models with a dynamic-pooling mechanism, which predicts segment boundaries in an autoregressive fashion. We compare several methods to infer boundaries, including end-to-end learning through stochastic re-parameterisation, supervised learning (based on segmentations from subword tokenizers or spikes in conditional entropy), as well as linguistically motivated boundaries. We perform character-level evaluation on texts from multiple datasets and morphologically diverse languages. The results demonstrate that dynamic pooling, which jointly segments and models language, is both faster and more accurate than vanilla Transformers and fixed-length pooling within the same computational budget.
versions: [ { "created": "Thu, 17 Nov 2022 18:39:23 GMT", "version": "v1" }, { "created": "Wed, 24 May 2023 17:32:56 GMT", "version": "v2" } ]
update_date: 2023-10-25
authors_parsed: [ [ "Nawrot", "Piotr", "" ], [ "Chorowski", "Jan", "" ], [ "Łańcucki", "Adrian", "" ], [ "Ponti", "Edoardo M.", "" ] ]
Transformers achieve unrivalled performance in modelling language, but remain inefficient in terms of memory and time complexity. A possible remedy is to reduce the sequence length in the intermediate layers by pooling fixed-length segments of tokens. Nevertheless, natural units of meaning, such as words or phrases, display varying sizes. To address this mismatch, we equip language models with a dynamic-pooling mechanism, which predicts segment boundaries in an autoregressive fashion. We compare several methods to infer boundaries, including end-to-end learning through stochastic re-parameterisation, supervised learning (based on segmentations from subword tokenizers or spikes in conditional entropy), as well as linguistically motivated boundaries. We perform character-level evaluation on texts from multiple datasets and morphologically diverse languages. The results demonstrate that dynamic pooling, which jointly segments and models language, is both faster and more accurate than vanilla Transformers and fixed-length pooling within the same computational budget.
2104.08157
Robin Tu
Robin Tu, Alexander H. Foss, Sihai D. Zhao
Capturing patterns of variation unique to a specific dataset
null
null
null
null
cs.LG stat.ME
http://creativecommons.org/licenses/by/4.0/
Capturing patterns of variation present in a dataset is important in exploratory data analysis and unsupervised learning. Contrastive dimension reduction methods, such as contrastive principal component analysis (cPCA), find patterns unique to a target dataset of interest by contrasting with a carefully chosen background dataset representing unwanted or uninteresting variation. However, such methods typically require a tuning parameter that governs the level of contrast, and it is unclear how to choose this parameter objectively. Furthermore, it is frequently of interest to contrast against multiple backgrounds, which is difficult to accomplish with existing methods. We propose unique component analysis (UCA), a tuning-free method that identifies low-dimensional representations of a target dataset relative to one or more comparison datasets. It is computationally efficient even with large numbers of features. We show in several experiments that UCA with a single background dataset achieves similar results compared to cPCA with various tuning parameters, and that UCA with multiple individual background datasets is superior to both cPCA with any single background data and cPCA with a pooled background dataset.
[ { "created": "Fri, 16 Apr 2021 15:07:32 GMT", "version": "v1" } ]
2021-04-19
[ [ "Tu", "Robin", "" ], [ "Foss", "Alexander H.", "" ], [ "Zhao", "Sihai D.", "" ] ]
Capturing patterns of variation present in a dataset is important in exploratory data analysis and unsupervised learning. Contrastive dimension reduction methods, such as contrastive principal component analysis (cPCA), find patterns unique to a target dataset of interest by contrasting with a carefully chosen background dataset representing unwanted or uninteresting variation. However, such methods typically require a tuning parameter that governs the level of contrast, and it is unclear how to choose this parameter objectively. Furthermore, it is frequently of interest to contrast against multiple backgrounds, which is difficult to accomplish with existing methods. We propose unique component analysis (UCA), a tuning-free method that identifies low-dimensional representations of a target dataset relative to one or more comparison datasets. It is computationally efficient even with large numbers of features. We show in several experiments that UCA with a single background dataset achieves similar results compared to cPCA with various tuning parameters, and that UCA with multiple individual background datasets is superior to both cPCA with any single background data and cPCA with a pooled background dataset.
1912.10634
EPTCS
Julien Brunel (ONERA DTIS and Universit\'e f\'ed\'erale de Toulouse, France), David Chemouil (ONERA DTIS and Universit\'e f\'ed\'erale de Toulouse, France), Alcino Cunha (INESC TEC and Universidade do Minho, Portugal), Nuno Macedo (INESC TEC and Universidade do Minho, Portugal)
Simulation under Arbitrary Temporal Logic Constraints
In Proceedings F-IDE 2019, arXiv:1912.09611
EPTCS 310, 2019, pp. 63-69
10.4204/EPTCS.310.7
null
cs.SE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most model checkers provide a useful simulation mode that allows users to explore the set of possible behaviours by interactively picking, at each state, which event to execute next. Traditionally, this simulation mode cannot take into consideration additional temporal logic constraints, such as arbitrary fairness restrictions, substantially reducing its usability for debugging the modelled system behaviour. Similarly, when a specification is false, even if all its counter-examples combined also form a set of behaviours, most model checkers only present one of them to the user, providing little or no mechanism to explore alternatives. In this paper, we present a simple on-the-fly verification technique to allow the user to explore the behaviours that satisfy an arbitrary temporal logic specification, with an interactive process akin to simulation. This technique enables a unified interface for simulating the modelled system and exploring its counter-examples. The technique is formalised in the framework of state/event linear temporal logic and a proof of concept was implemented in an event-based variant of the Electrum framework.
[ { "created": "Mon, 23 Dec 2019 05:41:51 GMT", "version": "v1" } ]
2019-12-24
[ [ "Brunel", "Julien", "", "ONERA DTIS and Université fédérale de Toulouse, France" ], [ "Chemouil", "David", "", "ONERA DTIS and Université fédérale de Toulouse, France" ], [ "Cunha", "Alcino", "", "INESC TEC and Universidade do Minho, Portugal" ], [ "Macedo", "Nuno", "", "INESC TEC and Universidade do Minho, Portugal" ] ]
Most model checkers provide a useful simulation mode that allows users to explore the set of possible behaviours by interactively picking, at each state, which event to execute next. Traditionally, this simulation mode cannot take into consideration additional temporal logic constraints, such as arbitrary fairness restrictions, substantially reducing its usability for debugging the modelled system behaviour. Similarly, when a specification is false, even if all its counter-examples combined also form a set of behaviours, most model checkers only present one of them to the user, providing little or no mechanism to explore alternatives. In this paper, we present a simple on-the-fly verification technique to allow the user to explore the behaviours that satisfy an arbitrary temporal logic specification, with an interactive process akin to simulation. This technique enables a unified interface for simulating the modelled system and exploring its counter-examples. The technique is formalised in the framework of state/event linear temporal logic and a proof of concept was implemented in an event-based variant of the Electrum framework.
2305.10661
Yitong Li
Yitong Li, Chang Liu, Jie Ma
Scribble-Supervised Target Extraction Method Based on Inner Structure-Constraint for Remote Sensing Images
5 pages, 4 figures, 1 table
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weakly supervised learning based on scribble annotations in target extraction of remote sensing images has drawn much interest due to scribbles' flexibility in denoting winding objects and low cost of manually labeling. However, scribbles are too sparse to identify object structure and detailed information, bringing great challenges in target localization and boundary description. To alleviate these problems, in this paper, we construct two inner structure-constraints, a deformation consistency loss and a trainable active contour loss, together with a scribble-constraint to supervise the optimization of the encoder-decoder network without introducing any auxiliary module or extra operation based on prior cues. Comprehensive experiments demonstrate our method's superiority over five state-of-the-art algorithms in this field. Source code is available at https://github.com/yitongli123/ISC-TE.
[ { "created": "Thu, 18 May 2023 02:49:07 GMT", "version": "v1" } ]
2023-05-19
[ [ "Li", "Yitong", "" ], [ "Liu", "Chang", "" ], [ "Ma", "Jie", "" ] ]
Weakly supervised learning based on scribble annotations in target extraction of remote sensing images has drawn much interest due to scribbles' flexibility in denoting winding objects and low cost of manually labeling. However, scribbles are too sparse to identify object structure and detailed information, bringing great challenges in target localization and boundary description. To alleviate these problems, in this paper, we construct two inner structure-constraints, a deformation consistency loss and a trainable active contour loss, together with a scribble-constraint to supervise the optimization of the encoder-decoder network without introducing any auxiliary module or extra operation based on prior cues. Comprehensive experiments demonstrate our method's superiority over five state-of-the-art algorithms in this field. Source code is available at https://github.com/yitongli123/ISC-TE.
2003.04427
Yan Zhang
Yan Zhang and Michael M. Zavlanos
Transfer Reinforcement Learning under Unobserved Contextual Information
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study a transfer reinforcement learning problem where the state transitions and rewards are affected by the environmental context. Specifically, we consider a demonstrator agent that has access to a context-aware policy and can generate transition and reward data based on that policy. These data constitute the experience of the demonstrator. Then, the goal is to transfer this experience, excluding the underlying contextual information, to a learner agent that does not have access to the environmental context, so that they can learn a control policy using fewer samples. It is well known that disregarding the causal effect of the contextual information can introduce bias into the transition and reward models estimated by the learner, resulting in a suboptimal learned policy. To address this challenge, in this paper, we develop a method to obtain causal bounds on the transition and reward functions using the demonstrator's data, which we then use to obtain causal bounds on the value functions. Using these value function bounds, we propose new Q learning and UCB-Q learning algorithms that converge to the true value function without bias. We provide numerical experiments for robot motion planning problems that validate the proposed value function bounds and demonstrate that the proposed algorithms can effectively make use of the data from the demonstrator to accelerate the learning process of the learner.
[ { "created": "Mon, 9 Mar 2020 22:00:04 GMT", "version": "v1" } ]
2020-03-11
[ [ "Zhang", "Yan", "" ], [ "Zavlanos", "Michael M.", "" ] ]
In this paper, we study a transfer reinforcement learning problem where the state transitions and rewards are affected by the environmental context. Specifically, we consider a demonstrator agent that has access to a context-aware policy and can generate transition and reward data based on that policy. These data constitute the experience of the demonstrator. Then, the goal is to transfer this experience, excluding the underlying contextual information, to a learner agent that does not have access to the environmental context, so that they can learn a control policy using fewer samples. It is well known that disregarding the causal effect of the contextual information can introduce bias into the transition and reward models estimated by the learner, resulting in a suboptimal learned policy. To address this challenge, in this paper, we develop a method to obtain causal bounds on the transition and reward functions using the demonstrator's data, which we then use to obtain causal bounds on the value functions. Using these value function bounds, we propose new Q learning and UCB-Q learning algorithms that converge to the true value function without bias. We provide numerical experiments for robot motion planning problems that validate the proposed value function bounds and demonstrate that the proposed algorithms can effectively make use of the data from the demonstrator to accelerate the learning process of the learner.
1301.6231
Alexander Zeh
Alexander Zeh and Antonia Wachter-Zeh and Maximilien Gadouleau and Sergey Bezzateev
Generalizing Bounds on the Minimum Distance of Cyclic Codes Using Cyclic Product Codes
5 pages, no figure, accepted for ISIT2013
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two generalizations of the Hartmann--Tzeng (HT) bound on the minimum distance of q-ary cyclic codes are proposed. The first one is proven by embedding the given cyclic code into a cyclic product code. Furthermore, we show that unique decoding up to this bound is always possible and outline a quadratic-time syndrome-based error decoding algorithm. The second bound is stronger and the proof is more involved. Our technique of embedding the code into a cyclic product code can be applied to other bounds too, and therefore generalizes them.
[ { "created": "Sat, 26 Jan 2013 10:47:19 GMT", "version": "v1" }, { "created": "Thu, 27 Jun 2013 06:09:51 GMT", "version": "v2" } ]
2013-06-28
[ [ "Zeh", "Alexander", "" ], [ "Wachter-Zeh", "Antonia", "" ], [ "Gadouleau", "Maximilien", "" ], [ "Bezzateev", "Sergey", "" ] ]
Two generalizations of the Hartmann--Tzeng (HT) bound on the minimum distance of q-ary cyclic codes are proposed. The first one is proven by embedding the given cyclic code into a cyclic product code. Furthermore, we show that unique decoding up to this bound is always possible and outline a quadratic-time syndrome-based error decoding algorithm. The second bound is stronger and the proof is more involved. Our technique of embedding the code into a cyclic product code can be applied to other bounds too, and therefore generalizes them.
2204.01915
Peter Washington
Peter Washington, Cezmi Mutlu, Aaron Kline, Cathy Hou, Kaitlyn Dunlap, Jack Kent, Arman Husic, Nate Stockham, Brianna Chrisman, Kelley Paskov, Jae-Yoon Jung, Dennis P. Wall
An Exploration of Active Learning for Affective Digital Phenotyping
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Some of the most severe bottlenecks preventing widespread development of machine learning models for human behavior include a dearth of labeled training data and difficulty of acquiring high quality labels. Active learning is a paradigm for using algorithms to computationally select a useful subset of data points to label using metrics for model uncertainty and data similarity. We explore active learning for naturalistic computer vision emotion data, a particularly heterogeneous and complex data space due to inherently subjective labels. Using frames collected from gameplay acquired from a therapeutic smartphone game for children with autism, we run a simulation of active learning using gameplay prompts as metadata to aid in the active learning process. We find that active learning using information generated during gameplay slightly outperforms random selection of the same number of labeled frames. We next investigate a method to conduct active learning with subjective data, such as in affective computing, and where multiple crowdsourced labels can be acquired for each image. Using the Child Affective Facial Expression (CAFE) dataset, we simulate an active learning process for crowdsourcing many labels and find that prioritizing frames using the entropy of the crowdsourced label distribution results in lower categorical cross-entropy loss compared to random frame selection. Collectively, these results demonstrate pilot evaluations of two novel active learning approaches for subjective affective data collected in noisy settings.
[ { "created": "Tue, 5 Apr 2022 01:01:32 GMT", "version": "v1" }, { "created": "Wed, 6 Apr 2022 18:23:44 GMT", "version": "v2" } ]
2022-04-08
[ [ "Washington", "Peter", "" ], [ "Mutlu", "Cezmi", "" ], [ "Kline", "Aaron", "" ], [ "Hou", "Cathy", "" ], [ "Dunlap", "Kaitlyn", "" ], [ "Kent", "Jack", "" ], [ "Husic", "Arman", "" ], [ "Stockham", "Nate", "" ], [ "Chrisman", "Brianna", "" ], [ "Paskov", "Kelley", "" ], [ "Jung", "Jae-Yoon", "" ], [ "Wall", "Dennis P.", "" ] ]
Some of the most severe bottlenecks preventing widespread development of machine learning models for human behavior include a dearth of labeled training data and difficulty of acquiring high quality labels. Active learning is a paradigm for using algorithms to computationally select a useful subset of data points to label using metrics for model uncertainty and data similarity. We explore active learning for naturalistic computer vision emotion data, a particularly heterogeneous and complex data space due to inherently subjective labels. Using frames collected from gameplay acquired from a therapeutic smartphone game for children with autism, we run a simulation of active learning using gameplay prompts as metadata to aid in the active learning process. We find that active learning using information generated during gameplay slightly outperforms random selection of the same number of labeled frames. We next investigate a method to conduct active learning with subjective data, such as in affective computing, and where multiple crowdsourced labels can be acquired for each image. Using the Child Affective Facial Expression (CAFE) dataset, we simulate an active learning process for crowdsourcing many labels and find that prioritizing frames using the entropy of the crowdsourced label distribution results in lower categorical cross-entropy loss compared to random frame selection. Collectively, these results demonstrate pilot evaluations of two novel active learning approaches for subjective affective data collected in noisy settings.
2207.10941
Li Shen
Li Shen, Yuning Wei and Yangzhu Wang
Respecting Time Series Properties Makes Deep Time Series Forecasting Perfect
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
How to handle time features should be the core question of any time series forecasting model. Ironically, it is often ignored or misunderstood by deep-learning based models, even those baselines which are state-of-the-art. This behavior makes them inefficient, untenable and unstable. In this paper, we rigorously analyze three prevalent but deficient/unfounded deep time series forecasting mechanisms or methods from the view of time series properties, including normalization methods, multivariate forecasting and input sequence length. Corresponding corollaries and solutions are given on both empirical and theoretical bases. We thereby propose a novel time series forecasting network, i.e. RTNet, on the basis of the aforementioned analysis. It is general enough to be combined with both supervised and self-supervised forecasting formats. Thanks to the core idea of respecting time series properties, no matter which forecasting format is used, RTNet shows clearly superior forecasting performance compared with dozens of other SOTA time series forecasting baselines on three real-world benchmark datasets. By and large, it even requires less time complexity and memory usage while achieving better forecasting accuracy. The source code is available at https://github.com/OrigamiSL/RTNet.
[ { "created": "Fri, 22 Jul 2022 08:34:31 GMT", "version": "v1" } ]
2022-07-25
[ [ "Shen", "Li", "" ], [ "Wei", "Yuning", "" ], [ "Wang", "Yangzhu", "" ] ]
How to handle time features should be the core question of any time series forecasting model. Ironically, it is often ignored or misunderstood by deep-learning based models, even those baselines which are state-of-the-art. This behavior makes them inefficient, untenable and unstable. In this paper, we rigorously analyze three prevalent but deficient/unfounded deep time series forecasting mechanisms or methods from the view of time series properties, including normalization methods, multivariate forecasting and input sequence length. Corresponding corollaries and solutions are given on both empirical and theoretical bases. We thereby propose a novel time series forecasting network, i.e. RTNet, on the basis of the aforementioned analysis. It is general enough to be combined with both supervised and self-supervised forecasting formats. Thanks to the core idea of respecting time series properties, no matter which forecasting format is used, RTNet shows clearly superior forecasting performance compared with dozens of other SOTA time series forecasting baselines on three real-world benchmark datasets. By and large, it even requires less time complexity and memory usage while achieving better forecasting accuracy. The source code is available at https://github.com/OrigamiSL/RTNet.
2211.16200
Mustafa Chasmai Ebrahim
Britty Baby, Daksh Thapar, Mustafa Chasmai, Tamajit Banerjee, Kunal Dargan, Ashish Suri, Subhashis Banerjee, Chetan Arora
From Forks to Forceps: A New Framework for Instance Segmentation of Surgical Instruments
WACV 2023
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Minimally invasive surgeries and related applications demand surgical tool classification and segmentation at the instance level. Surgical tools are similar in appearance and are long, thin, and handled at an angle. The fine-tuning of state-of-the-art (SOTA) instance segmentation models trained on natural images for instrument segmentation has difficulty discriminating instrument classes. Our research demonstrates that while the bounding box and segmentation mask are often accurate, the classification head mis-classifies the class label of the surgical instrument. We present a new neural network framework that adds a classification module as a new stage to existing instance segmentation models. This module specializes in improving the classification of instrument masks generated by the existing model. The module comprises multi-scale mask attention, which attends to the instrument region and masks the distracting background features. We propose training our classifier module using metric learning with arc loss to handle low inter-class variance of surgical instruments. We conduct exhaustive experiments on the benchmark datasets EndoVis2017 and EndoVis2018. We demonstrate that our method outperforms all (more than 18) SOTA methods it is compared with, improves the SOTA performance by at least 12 points (20%) on the EndoVis2017 benchmark challenge, and generalizes effectively across the datasets.
[ { "created": "Sat, 26 Nov 2022 21:26:42 GMT", "version": "v1" }, { "created": "Sat, 11 Mar 2023 07:23:49 GMT", "version": "v2" } ]
2023-03-14
[ [ "Baby", "Britty", "" ], [ "Thapar", "Daksh", "" ], [ "Chasmai", "Mustafa", "" ], [ "Banerjee", "Tamajit", "" ], [ "Dargan", "Kunal", "" ], [ "Suri", "Ashish", "" ], [ "Banerjee", "Subhashis", "" ], [ "Arora", "Chetan", "" ] ]
Minimally invasive surgeries and related applications demand surgical tool classification and segmentation at the instance level. Surgical tools are similar in appearance and are long, thin, and handled at an angle. The fine-tuning of state-of-the-art (SOTA) instance segmentation models trained on natural images for instrument segmentation has difficulty discriminating instrument classes. Our research demonstrates that while the bounding box and segmentation mask are often accurate, the classification head mis-classifies the class label of the surgical instrument. We present a new neural network framework that adds a classification module as a new stage to existing instance segmentation models. This module specializes in improving the classification of instrument masks generated by the existing model. The module comprises multi-scale mask attention, which attends to the instrument region and masks the distracting background features. We propose training our classifier module using metric learning with arc loss to handle low inter-class variance of surgical instruments. We conduct exhaustive experiments on the benchmark datasets EndoVis2017 and EndoVis2018. We demonstrate that our method outperforms all (more than 18) SOTA methods it is compared with, improves the SOTA performance by at least 12 points (20%) on the EndoVis2017 benchmark challenge, and generalizes effectively across the datasets.
1305.4580
Manish Gupta
Krishna Gopal Benerjee and Manish K. Gupta and Nikhil Agrawal
Reconstruction and Repair Degree of Fractional Repetition Codes
A one page abstract of this paper appears as a poster in IEEE Netcod 2013
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a Fractional Repetition code, finding the reconstruction and repair degree in a distributed storage system is an important problem. In this work, we present algorithms for computing the reconstruction and repair degree of fractional repetition codes.
[ { "created": "Mon, 20 May 2013 17:21:30 GMT", "version": "v1" } ]
2013-05-21
[ [ "Benerjee", "Krishna Gopal", "" ], [ "Gupta", "Manish K.", "" ], [ "Agrawal", "Nikhil", "" ] ]
Given a Fractional Repetition code, finding the reconstruction and repair degree in a distributed storage system is an important problem. In this work, we present algorithms for computing the reconstruction and repair degree of fractional repetition codes.
2212.12309
Amrut Kajave
Amrut Kajave and Shazny Ahmed Hussain Nismy
How Cyber Criminal Use Social Engineering To Target Organizations
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Social engineering is described as the art of manipulation. Cybercriminals use manipulation to victimize their targets, applying psychological principles to change their behavior and lead them to make unconscious decisions. This study identifies the attacks and techniques used by cybercriminals to conduct social engineering attacks within an organization. This study evaluates how social engineering attacks are delivered and the techniques used, and highlights how attackers take advantage of compromised systems. Lastly, this study also evaluates and provides the best solutions to help mitigate social engineering attacks within an organization.
[ { "created": "Wed, 7 Dec 2022 22:03:58 GMT", "version": "v1" } ]
2022-12-26
[ [ "Kajave", "Amrut", "" ], [ "Nismy", "Shazny Ahmed Hussain", "" ] ]
Social engineering is described as the art of manipulation. Cybercriminals use manipulation to victimize their targets, applying psychological principles to change their behavior and lead them to make unconscious decisions. This study identifies the attacks and techniques used by cybercriminals to conduct social engineering attacks within an organization. This study evaluates how social engineering attacks are delivered and the techniques used, and highlights how attackers take advantage of compromised systems. Lastly, this study also evaluates and provides the best solutions to help mitigate social engineering attacks within an organization.
2112.03386
Michael McDonald
Michael James McDonald and Dylan Hadfield-Menell
Guided Imitation of Task and Motion Planning
16 pages, 6 figures, 2 tables, submitted to Conference on Robot Learning 2021, to be published in Proceedings of Machine Learning Research
null
null
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While modern policy optimization methods can do complex manipulation from sensory data, they struggle on problems with extended time horizons and multiple sub-goals. On the other hand, task and motion planning (TAMP) methods scale to long horizons but they are computationally expensive and need to precisely track world state. We propose a method that draws on the strength of both methods: we train a policy to imitate a TAMP solver's output. This produces a feed-forward policy that can accomplish multi-step tasks from sensory data. First, we build an asynchronous distributed TAMP solver that can produce supervision data fast enough for imitation learning. Then, we propose a hierarchical policy architecture that lets us use partially trained control policies to speed up the TAMP solver. In robotic manipulation tasks with 7-DoF joint control, the partially trained policies reduce the time needed for planning by a factor of up to 2.6. Among these tasks, we can learn a policy that solves the RoboSuite 4-object pick-place task 88% of the time from object pose observations and a policy that solves the RoboDesk 9-goal benchmark 79% of the time from RGB images (averaged across the 9 disparate tasks).
[ { "created": "Mon, 6 Dec 2021 22:22:37 GMT", "version": "v1" } ]
2021-12-08
[ [ "McDonald", "Michael James", "" ], [ "Hadfield-Menell", "Dylan", "" ] ]
While modern policy optimization methods can do complex manipulation from sensory data, they struggle on problems with extended time horizons and multiple sub-goals. On the other hand, task and motion planning (TAMP) methods scale to long horizons but they are computationally expensive and need to precisely track world state. We propose a method that draws on the strength of both methods: we train a policy to imitate a TAMP solver's output. This produces a feed-forward policy that can accomplish multi-step tasks from sensory data. First, we build an asynchronous distributed TAMP solver that can produce supervision data fast enough for imitation learning. Then, we propose a hierarchical policy architecture that lets us use partially trained control policies to speed up the TAMP solver. In robotic manipulation tasks with 7-DoF joint control, the partially trained policies reduce the time needed for planning by a factor of up to 2.6. Among these tasks, we can learn a policy that solves the RoboSuite 4-object pick-place task 88% of the time from object pose observations and a policy that solves the RoboDesk 9-goal benchmark 79% of the time from RGB images (averaged across the 9 disparate tasks).
1908.07324
Bharath Sudharsan Mr
Bharath Sudharsan and Manigandan Chockalingam
A Microphone Array and Voice Algorithm based Smart Hearing Aid
null
null
10.5120/ijca2019919295
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Approximately 6.2% of the world's population (466 million people) suffer from disabling hearing impairment [1]. Hearing impairment impacts negatively on one's education, financial success [2][3], cognitive development in childhood [4], including increased risk of dementia in older adulthood [5]. Lack of or reduced social interaction due to hearing impairment affects creating or maintaining healthy relationships at home, school and work [5]. Hence, hearing impairment genuinely affects the overall quality of life and wellbeing. The cocktail party effect, which is a healthy hearing individual's ability to understand one voice in a cacophony of other voices or sounds, is an important ability lacking in people with hearing impairment. This inability results in difficulties with simple daily activities such as partaking in group discussions or conversing in noisy restaurants [6]. This smart hearing aid aims to provide much-needed assistance with understanding speech in noisy environments. For example, if a person wants to partake in a group discussion, he/she needs to place the microphone array based unit on a flat surface in front of him/her, such as a table. When conversations take place, the microphone array will capture and process sound from all directions, intelligently prioritise and provide the lead speaker's voice by suppressing unwanted noises, including speeches of other people. This device selects and alternates voices between speakers automatically using voice algorithms. Additionally, the user has the option of further fine-tuning the acoustic parameters as needed through a smartphone interface. This paper describes the development and functions of this new Smart Hearing Aid.
[ { "created": "Tue, 20 Aug 2019 13:17:09 GMT", "version": "v1" }, { "created": "Mon, 4 Nov 2019 21:41:06 GMT", "version": "v2" }, { "created": "Thu, 2 Jan 2020 09:42:40 GMT", "version": "v3" }, { "created": "Fri, 5 Jun 2020 21:55:22 GMT", "version": "v4" } ]
2020-06-09
[ [ "Sudharsan", "Bharath", "" ], [ "Chockalingam", "Manigandan", "" ] ]
Approximately 6.2% of the world's population (466 million people) suffer from disabling hearing impairment [1]. Hearing impairment negatively impacts one's education, financial success [2][3], and cognitive development in childhood [4], and increases the risk of dementia in older adulthood [5]. The lack of, or reduced, social interaction due to hearing impairment hinders creating or maintaining healthy relationships at home, school and work [5]. Hence, hearing impairment genuinely affects overall quality of life and wellbeing. The cocktail party effect, a healthy-hearing individual's ability to understand one voice in a cacophony of other voices or sounds, is an important ability lacking in people with hearing impairment. This inability results in difficulties with simple daily activities such as partaking in group discussions or conversing in noisy restaurants [6]. This smart hearing aid aims to provide much-needed assistance with understanding speech in noisy environments. For example, to partake in a group discussion, the user places the microphone-array-based unit on a flat surface in front of them, such as a table. As conversations take place, the microphone array captures and processes sound from all directions, intelligently prioritises and delivers the lead speaker's voice, and suppresses unwanted noise, including the speech of other people. The device selects and alternates between speakers automatically using voice algorithms. Additionally, the user has the option of further fine-tuning the acoustic parameters as needed through a smartphone interface. This paper describes the development and functions of this new Smart Hearing Aid.
1910.07394
Thassilo Gadermaier
Thassilo Gadermaier and Gerhard Widmer
A Study of Annotation and Alignment Accuracy for Performance Comparison in Complex Orchestral Music
null
Proceedings of the 20th International Society for Music Information Retrieval Conference, (ISMIR) 2019, Delft, The Netherlands, November 4-8, 2019, pages: 769--775
null
null
cs.MM cs.DL cs.IR
http://creativecommons.org/licenses/by/4.0/
Quantitative analysis of commonalities and differences between recorded music performances is an increasingly common task in computational musicology. A typical scenario involves manual annotation of different recordings of the same piece along the time dimension, for comparative analysis of, e.g., the musical tempo, or for mapping other performance-related information between performances. This can be done by manually annotating one reference performance, and then automatically synchronizing other performances, using audio-to-audio alignment algorithms. In this paper we address several questions related to those tasks. First, we analyze different annotations of the same musical piece, quantifying timing deviations between the respective human annotators. A statistical evaluation of the marker time stamps will provide (a) an estimate of the expected timing precision of human annotations and (b) a ground truth for subsequent automatic alignment experiments. We then carry out a systematic evaluation of different audio features for audio-to-audio alignment, quantifying the degree of alignment accuracy that can be achieved, and relate this to the results from the annotation study.
[ { "created": "Wed, 16 Oct 2019 14:59:59 GMT", "version": "v1" } ]
2020-09-28
[ [ "Gadermaier", "Thassilo", "" ], [ "Widmer", "Gerhard", "" ] ]
Quantitative analysis of commonalities and differences between recorded music performances is an increasingly common task in computational musicology. A typical scenario involves manual annotation of different recordings of the same piece along the time dimension, for comparative analysis of, e.g., the musical tempo, or for mapping other performance-related information between performances. This can be done by manually annotating one reference performance, and then automatically synchronizing other performances, using audio-to-audio alignment algorithms. In this paper we address several questions related to those tasks. First, we analyze different annotations of the same musical piece, quantifying timing deviations between the respective human annotators. A statistical evaluation of the marker time stamps will provide (a) an estimate of the expected timing precision of human annotations and (b) a ground truth for subsequent automatic alignment experiments. We then carry out a systematic evaluation of different audio features for audio-to-audio alignment, quantifying the degree of alignment accuracy that can be achieved, and relate this to the results from the annotation study.
1604.05577
Nadeem Akhtar
Nadeem Akhtar, Malik M. Saad Missen
Contribution to the Formal Specification and Verification of a Multi-Agent Robotic System
arXiv admin note: text overlap with arXiv:1501.05120
European Journal of Scientific Research, ISSN 1450-216X / 1450-202X Vol.117 No.1 January, 2014, pp. 35-55
null
null
cs.SE cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is important to have multi-agent robotic system specifications that ensure the correctness properties of safety and liveness. Because these systems are concurrent and often operate in a dynamic environment, their formal specification and verification, along with step-wise refinement from abstract to concrete concepts, play a major role in system correctness. Formal verification exhaustively explores the system's state space, ensuring that undetected failures in its behavior are excluded. We construct the system incrementally from subcomponents, based on the software architecture. The challenge is to develop a safe multi-agent robotic system, and more specifically to ensure the correctness properties of safety and liveness. Formal specifications based on model checking are flexible, have a concrete syntax, and play a vital role in the correctness of a multi-agent robotic system. Formally verifying the safety and liveness of such systems is important because they are highly concurrent and, in most cases, operate in a dynamic environment. We consider a case study of a multi-agent robotic system for the transport of stock between storehouses to exemplify our formal approach. Our proposed development approach allows for formal verification during specification definition. The development process is divided into four major phases: requirement specifications, verification specifications, architecture specifications and implementation.
[ { "created": "Fri, 2 Oct 2015 13:53:35 GMT", "version": "v1" } ]
2016-04-20
[ [ "Akhtar", "Nadeem", "" ], [ "Missen", "Malik M. Saad", "" ] ]
It is important to have multi-agent robotic system specifications that ensure the correctness properties of safety and liveness. Because these systems are concurrent and often operate in a dynamic environment, their formal specification and verification, along with step-wise refinement from abstract to concrete concepts, play a major role in system correctness. Formal verification exhaustively explores the system's state space, ensuring that undetected failures in its behavior are excluded. We construct the system incrementally from subcomponents, based on the software architecture. The challenge is to develop a safe multi-agent robotic system, and more specifically to ensure the correctness properties of safety and liveness. Formal specifications based on model checking are flexible, have a concrete syntax, and play a vital role in the correctness of a multi-agent robotic system. Formally verifying the safety and liveness of such systems is important because they are highly concurrent and, in most cases, operate in a dynamic environment. We consider a case study of a multi-agent robotic system for the transport of stock between storehouses to exemplify our formal approach. Our proposed development approach allows for formal verification during specification definition. The development process is divided into four major phases: requirement specifications, verification specifications, architecture specifications and implementation.
2401.15767
Fabian Fernando Jurado Lasso Dr.
F. Fernando Jurado-Lasso, J. F. Jurado, and Xenofon Fafoutis
A Centralized Reinforcement Learning Framework for Adaptive Clustering with Low Control Overhead in IoT Networks
13 pages, 13 figures, 3 tables, journal
null
null
null
cs.NI cs.LG
http://creativecommons.org/licenses/by/4.0/
Wireless Sensor Networks (WSNs) play a pivotal role in enabling Internet of Things (IoT) devices with sensing and actuation capabilities. Operating in remote and resource-constrained environments, these IoT devices face challenges related to energy consumption, crucial for network longevity. Clustering protocols have emerged as an effective solution to alleviate energy burdens on IoT devices. This paper introduces Low-Energy Adaptive Clustering Hierarchy with Reinforcement Learning-based Controller (LEACH-RLC), a novel clustering protocol that employs Mixed Integer Linear Programming (MILP) for strategic selection of cluster heads (CHs) and node-to-cluster assignments. Additionally, it integrates a Reinforcement Learning (RL) agent to minimize control overhead by learning optimal timings for generating new clusters. Addressing key research questions, LEACH-RLC seeks to reduce control overhead without compromising overall network performance. Through extensive simulations, this paper investigates the frequency and opportune moments for generating new clustering solutions. Results demonstrate the superior performance of LEACH-RLC over conventional LEACH and LEACH-C, showcasing enhanced network lifetime, reduced average energy consumption, and minimized control overhead. The proposed protocol contributes to advancing the efficiency and adaptability of WSNs, addressing critical challenges in IoT deployments.
[ { "created": "Sun, 28 Jan 2024 21:08:45 GMT", "version": "v1" } ]
2024-01-30
[ [ "Jurado-Lasso", "F. Fernando", "" ], [ "Jurado", "J. F.", "" ], [ "Fafoutis", "Xenofon", "" ] ]
Wireless Sensor Networks (WSNs) play a pivotal role in enabling Internet of Things (IoT) devices with sensing and actuation capabilities. Operating in remote and resource-constrained environments, these IoT devices face challenges related to energy consumption, crucial for network longevity. Clustering protocols have emerged as an effective solution to alleviate energy burdens on IoT devices. This paper introduces Low-Energy Adaptive Clustering Hierarchy with Reinforcement Learning-based Controller (LEACH-RLC), a novel clustering protocol that employs Mixed Integer Linear Programming (MILP) for strategic selection of cluster heads (CHs) and node-to-cluster assignments. Additionally, it integrates a Reinforcement Learning (RL) agent to minimize control overhead by learning optimal timings for generating new clusters. Addressing key research questions, LEACH-RLC seeks to reduce control overhead without compromising overall network performance. Through extensive simulations, this paper investigates the frequency and opportune moments for generating new clustering solutions. Results demonstrate the superior performance of LEACH-RLC over conventional LEACH and LEACH-C, showcasing enhanced network lifetime, reduced average energy consumption, and minimized control overhead. The proposed protocol contributes to advancing the efficiency and adaptability of WSNs, addressing critical challenges in IoT deployments.
2407.16354
Jinpeng Chen
Jinpeng Chen, Runmin Cong, Yuxuan Luo, Horace Ho Shing Ip, and Sam Kwong
Strike a Balance in Continual Panoptic Segmentation
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study explores the emerging area of continual panoptic segmentation, highlighting three key balances. First, we introduce past-class backtrace distillation to balance the stability of existing knowledge with the adaptability to new information. This technique retraces the features associated with past classes based on the final label assignment results, performing knowledge distillation targeting these specific features from the previous model while allowing other features to flexibly adapt to new information. Additionally, we introduce a class-proportional memory strategy, which aligns the class distribution in the replay sample set with that of the historical training data. This strategy maintains a balanced class representation during replay, enhancing the utility of the limited-capacity replay sample set in recalling prior classes. Moreover, recognizing that replay samples are annotated only for the classes of their original step, we devise balanced anti-misguidance losses, which combat the impact of incomplete annotations without incurring classification bias. Building upon these innovations, we present a new method named Balanced Continual Panoptic Segmentation (BalConpas). Our evaluation on the challenging ADE20K dataset demonstrates its superior performance compared to existing state-of-the-art methods. The official code is available at https://github.com/jinpeng0528/BalConpas.
[ { "created": "Tue, 23 Jul 2024 09:58:20 GMT", "version": "v1" } ]
2024-07-24
[ [ "Chen", "Jinpeng", "" ], [ "Cong", "Runmin", "" ], [ "Luo", "Yuxuan", "" ], [ "Ip", "Horace Ho Shing", "" ], [ "Kwong", "Sam", "" ] ]
This study explores the emerging area of continual panoptic segmentation, highlighting three key balances. First, we introduce past-class backtrace distillation to balance the stability of existing knowledge with the adaptability to new information. This technique retraces the features associated with past classes based on the final label assignment results, performing knowledge distillation targeting these specific features from the previous model while allowing other features to flexibly adapt to new information. Additionally, we introduce a class-proportional memory strategy, which aligns the class distribution in the replay sample set with that of the historical training data. This strategy maintains a balanced class representation during replay, enhancing the utility of the limited-capacity replay sample set in recalling prior classes. Moreover, recognizing that replay samples are annotated only for the classes of their original step, we devise balanced anti-misguidance losses, which combat the impact of incomplete annotations without incurring classification bias. Building upon these innovations, we present a new method named Balanced Continual Panoptic Segmentation (BalConpas). Our evaluation on the challenging ADE20K dataset demonstrates its superior performance compared to existing state-of-the-art methods. The official code is available at https://github.com/jinpeng0528/BalConpas.
2310.09360
Hongchao Zhang
Hongchao Zhang, Junlin Wu, Yevgeniy Vorobeychik, Andrew Clark
Exact Verification of ReLU Neural Control Barrier Functions
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Control Barrier Functions (CBFs) are a popular approach for safe control of nonlinear systems. In CBF-based control, the desired safety properties of the system are mapped to nonnegativity of a CBF, and the control input is chosen to ensure that the CBF remains nonnegative for all time. Recently, machine learning methods that represent CBFs as neural networks (neural control barrier functions, or NCBFs) have shown great promise due to the universal representability of neural networks. However, verifying that a learned CBF guarantees safety remains a challenging research problem. This paper presents novel exact conditions and algorithms for verifying safety of feedforward NCBFs with ReLU activation functions. The key challenge in doing so is that, due to the piecewise linearity of the ReLU function, the NCBF will be nondifferentiable at certain points, thus invalidating traditional safety verification methods that assume a smooth barrier function. We resolve this issue by leveraging a generalization of Nagumo's theorem for proving invariance of sets with nonsmooth boundaries to derive necessary and sufficient conditions for safety. Based on this condition, we propose an algorithm for safety verification of NCBFs that first decomposes the NCBF into piecewise linear segments and then solves a nonlinear program to verify safety of each segment as well as the intersections of the linear segments. We mitigate the complexity by only considering the boundary of the safe region and by pruning the segments with Interval Bound Propagation (IBP) and linear relaxation. We evaluate our approach through numerical studies with comparison to state-of-the-art SMT-based methods. Our code is available at https://github.com/HongchaoZhang-HZ/exactverif-reluncbf-nips23.
[ { "created": "Fri, 13 Oct 2023 18:59:04 GMT", "version": "v1" } ]
2023-10-17
[ [ "Zhang", "Hongchao", "" ], [ "Wu", "Junlin", "" ], [ "Vorobeychik", "Yevgeniy", "" ], [ "Clark", "Andrew", "" ] ]
Control Barrier Functions (CBFs) are a popular approach for safe control of nonlinear systems. In CBF-based control, the desired safety properties of the system are mapped to nonnegativity of a CBF, and the control input is chosen to ensure that the CBF remains nonnegative for all time. Recently, machine learning methods that represent CBFs as neural networks (neural control barrier functions, or NCBFs) have shown great promise due to the universal representability of neural networks. However, verifying that a learned CBF guarantees safety remains a challenging research problem. This paper presents novel exact conditions and algorithms for verifying safety of feedforward NCBFs with ReLU activation functions. The key challenge in doing so is that, due to the piecewise linearity of the ReLU function, the NCBF will be nondifferentiable at certain points, thus invalidating traditional safety verification methods that assume a smooth barrier function. We resolve this issue by leveraging a generalization of Nagumo's theorem for proving invariance of sets with nonsmooth boundaries to derive necessary and sufficient conditions for safety. Based on this condition, we propose an algorithm for safety verification of NCBFs that first decomposes the NCBF into piecewise linear segments and then solves a nonlinear program to verify safety of each segment as well as the intersections of the linear segments. We mitigate the complexity by only considering the boundary of the safe region and by pruning the segments with Interval Bound Propagation (IBP) and linear relaxation. We evaluate our approach through numerical studies with comparison to state-of-the-art SMT-based methods. Our code is available at https://github.com/HongchaoZhang-HZ/exactverif-reluncbf-nips23.
1808.00878
Hazrat Ali
Hazrat Ali, Adnan Ali Awan, Sanaullah Khan, Omer Shafique, Atiq ur Rahman, Shahid Khan
Supervised classification for object identification in urban areas using satellite imagery
2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET)
H. Ali et al., 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, 2018, pp. 1-4
10.1109/ICOMET.2018.8346383
null
cs.LG cs.CV eess.SP stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a method for classification in satellite imagery. The approach is based on pixel-level analysis employing textural features such as correlation, homogeneity, energy and contrast. In this study, gray-scale images are used for training the classification model. For supervised classification, two techniques are employed, namely the Support Vector Machine (SVM) and Naive Bayes. With textural features computed on gray-scale images, Naive Bayes performs better, with an overall accuracy of 76% compared to 68% achieved by SVM. The computational time is evaluated for two different window sizes, i.e., 50x50 and 70x70. The required computational time on a single image is found to be 27 seconds for a window size of 70x70 and 45 seconds for a window size of 50x50.
[ { "created": "Thu, 2 Aug 2018 16:00:32 GMT", "version": "v1" } ]
2018-08-03
[ [ "Ali", "Hazrat", "" ], [ "Awan", "Adnan Ali", "" ], [ "Khan", "Sanaullah", "" ], [ "Shafique", "Omer", "" ], [ "Rahman", "Atiq ur", "" ], [ "Khan", "Shahid", "" ] ]
This paper presents a method for classification in satellite imagery. The approach is based on pixel-level analysis employing textural features such as correlation, homogeneity, energy and contrast. In this study, gray-scale images are used for training the classification model. For supervised classification, two techniques are employed, namely the Support Vector Machine (SVM) and Naive Bayes. With textural features computed on gray-scale images, Naive Bayes performs better, with an overall accuracy of 76% compared to 68% achieved by SVM. The computational time is evaluated for two different window sizes, i.e., 50x50 and 70x70. The required computational time on a single image is found to be 27 seconds for a window size of 70x70 and 45 seconds for a window size of 50x50.
1408.4703
Amelia Carolina Sparavigna
Amelia Carolina Sparavigna
GIMP and Wavelets for Medical Image Processing: Enhancing Images of the Fundus of the Eye
Keywords: Image processing, Retina, Retina Vessels, GIMP, AstroFracTool, Iris, Wavelets
ijSciences, 2014, Volume 3, Issue 8, pages 35-47
10.18483/ijSci.556
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The visual analysis of the retina and of its vascular characteristics is important in the diagnosis and monitoring of diseases of visual perception. In the related medical diagnoses, digital processing of fundus images is used to obtain the segmentation of retinal vessels. However, image segmentation often requires peculiar or complex algorithms: in this paper we show some alternative approaches that apply freely available tools to enhance images of the fundus of the eye without a specific segmentation. We show, in particular, that combining GIMP, the GNU Image Manipulation Program, with the wavelet filter of Iris, a program well known for processing astronomical images, yields images that can serve as an alternative to those obtained from segmentation.
[ { "created": "Wed, 20 Aug 2014 15:49:17 GMT", "version": "v1" } ]
2015-08-06
[ [ "Sparavigna", "Amelia Carolina", "" ] ]
The visual analysis of the retina and of its vascular characteristics is important in the diagnosis and monitoring of diseases of visual perception. In the related medical diagnoses, digital processing of fundus images is used to obtain the segmentation of retinal vessels. However, image segmentation often requires peculiar or complex algorithms: in this paper we show some alternative approaches that apply freely available tools to enhance images of the fundus of the eye without a specific segmentation. We show, in particular, that combining GIMP, the GNU Image Manipulation Program, with the wavelet filter of Iris, a program well known for processing astronomical images, yields images that can serve as an alternative to those obtained from segmentation.
1505.03105
Hossam Ibrahim
Hossam S. Ibrahim, Sherif M. Abdou, Mervat Gheith
Sentiment Analysis For Modern Standard Arabic And Colloquial
International Journal on Natural Language Computing (IJNLC) Vol. 4, No.2,April 2015
null
10.5121/ijnlc.2015.4207
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rise of social media such as blogs and social networks has fueled interest in sentiment analysis. With the proliferation of reviews, ratings, recommendations and other forms of online expression, online opinion has turned into a kind of virtual currency for businesses looking to market their products, identify new opportunities and manage their reputations; therefore, many are now looking to the field of sentiment analysis. In this paper, we present a feature-based, sentence-level approach for Arabic sentiment analysis. Our approach uses an Arabic idioms/saying-phrases lexicon as a key resource for improving the detection of sentiment polarity in Arabic sentences, together with a rich set of novel, linguistically motivated features (contextual intensifiers, contextual shifters and negation handling) and syntactic features for conflicting phrases, which enhance sentiment classification accuracy. Furthermore, we introduce an automatically expandable, wide-coverage polarity lexicon of Arabic sentiment words. The lexicon is built from a gold-standard seed of sentiment words that is manually collected and annotated, and it automatically expands and detects the sentiment orientation of new sentiment words using a synset aggregation technique and free online Arabic lexicons and thesauruses. Our data focus on modern standard Arabic (MSA) and Egyptian dialectal Arabic tweets and microblogs (hotel reservations, product reviews, etc.). Experimental results using our resources and techniques with an SVM classifier indicate high performance levels, with accuracies of over 95%.
[ { "created": "Tue, 12 May 2015 18:10:53 GMT", "version": "v1" } ]
2015-05-13
[ [ "Ibrahim", "Hossam S.", "" ], [ "Abdou", "Sherif M.", "" ], [ "Gheith", "Mervat", "" ] ]
The rise of social media such as blogs and social networks has fueled interest in sentiment analysis. With the proliferation of reviews, ratings, recommendations and other forms of online expression, online opinion has turned into a kind of virtual currency for businesses looking to market their products, identify new opportunities and manage their reputations; therefore, many are now looking to the field of sentiment analysis. In this paper, we present a feature-based, sentence-level approach for Arabic sentiment analysis. Our approach uses an Arabic idioms/saying-phrases lexicon as a key resource for improving the detection of sentiment polarity in Arabic sentences, together with a rich set of novel, linguistically motivated features (contextual intensifiers, contextual shifters and negation handling) and syntactic features for conflicting phrases, which enhance sentiment classification accuracy. Furthermore, we introduce an automatically expandable, wide-coverage polarity lexicon of Arabic sentiment words. The lexicon is built from a gold-standard seed of sentiment words that is manually collected and annotated, and it automatically expands and detects the sentiment orientation of new sentiment words using a synset aggregation technique and free online Arabic lexicons and thesauruses. Our data focus on modern standard Arabic (MSA) and Egyptian dialectal Arabic tweets and microblogs (hotel reservations, product reviews, etc.). Experimental results using our resources and techniques with an SVM classifier indicate high performance levels, with accuracies of over 95%.
2209.07702
Xinlin Leng
Xinlin Leng, Chenxu Li, Weifeng Xu, Yuyan Sun, Hongtao Wang
Federated Coordinate Descent for Privacy-Preserving Multiparty Linear Regression
14 pages, 19 figures (Under review)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed privacy-preserving regression schemes have been developed and extended in various fields, where multiple parties collaboratively and privately run optimization algorithms, e.g., Gradient Descent, to learn a set of optimal parameters. However, traditional Gradient-Descent-based methods fail to solve problems whose objective functions contain L1 regularization, such as Lasso regression. In this paper, we present Federated Coordinate Descent (FCD), a new distributed scheme to address this issue securely under multiparty scenarios. Specifically, through secure aggregation and added perturbations, our scheme guarantees that: (1) no local information is leaked to other parties, and (2) global model parameters are not exposed to cloud servers. The added perturbations can eventually be eliminated by each party to derive a global model with high performance. We show that the FCD scheme fills the gap of multiparty secure Coordinate Descent methods and is applicable to general linear regressions, including linear, ridge and lasso regressions. Theoretical security analysis and experimental results demonstrate that FCD can be performed effectively and efficiently, and achieves MAE as low as that of centralized methods on three types of linear regression tasks over real-world UCI datasets.
[ { "created": "Fri, 16 Sep 2022 03:53:46 GMT", "version": "v1" }, { "created": "Mon, 19 Sep 2022 08:28:36 GMT", "version": "v2" }, { "created": "Sun, 16 Oct 2022 10:53:11 GMT", "version": "v3" } ]
2022-10-18
[ [ "Leng", "Xinlin", "" ], [ "Li", "Chenxu", "" ], [ "Xu", "Weifeng", "" ], [ "Sun", "Yuyan", "" ], [ "Wang", "Hongtao", "" ] ]
Distributed privacy-preserving regression schemes have been developed and extended in various fields, where multiple parties collaboratively and privately run optimization algorithms, e.g., Gradient Descent, to learn a set of optimal parameters. However, traditional Gradient-Descent-based methods fail to solve problems whose objective functions contain L1 regularization, such as Lasso regression. In this paper, we present Federated Coordinate Descent (FCD), a new distributed scheme to address this issue securely under multiparty scenarios. Specifically, through secure aggregation and added perturbations, our scheme guarantees that: (1) no local information is leaked to other parties, and (2) global model parameters are not exposed to cloud servers. The added perturbations can eventually be eliminated by each party to derive a global model with high performance. We show that the FCD scheme fills the gap of multiparty secure Coordinate Descent methods and is applicable to general linear regressions, including linear, ridge and lasso regressions. Theoretical security analysis and experimental results demonstrate that FCD can be performed effectively and efficiently, and achieves MAE as low as that of centralized methods on three types of linear regression tasks over real-world UCI datasets.
2312.01680
Deepika Tiwari
Deepika Tiwari, Tim Toady, Martin Monperrus, Benoit Baudry
With Great Humor Comes Great Developer Engagement
null
Proceedings of International Conference on Software Engineering, 2024
10.1145/3639475.3640099
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The worldwide collaborative effort for the creation of software is technically and socially demanding. The more engaged developers are, the more value they impart to the software they create. Engaged developers, such as Margaret Hamilton programming Apollo 11, can succeed in tackling the most difficult engineering tasks. In this paper, we dive deep into an original vector of engagement - humor - and study how it fuels developer engagement. First, we collect qualitative and quantitative data about the humorous elements present within three significant, real-world software projects: faker, which helps developers introduce humor within their tests; lolcommits, which captures a photograph after each contribution made by a developer; and volkswagen, an exercise in satire, which accidentally led to the invention of an impactful software tool. Second, through a developer survey, we receive unique insights from 125 developers, who share their real-life experiences with humor in software. Our analysis of the three case studies highlights the prevalence of humor in software, and unveils the worldwide community of developers who are enthusiastic about both software and humor. We also learn about the caveats of humor in software through the valuable insights shared by our survey respondents. We report clear evidence that, when practiced responsibly, humor increases developer engagement and supports them in addressing hard engineering and cognitive tasks. The most actionable highlight of our work is that software tests and documentation are the best locations in code to practice humor.
[ { "created": "Mon, 4 Dec 2023 07:06:02 GMT", "version": "v1" }, { "created": "Tue, 16 Jan 2024 12:51:47 GMT", "version": "v2" } ]
2024-01-17
[ [ "Tiwari", "Deepika", "" ], [ "Toady", "Tim", "" ], [ "Monperrus", "Martin", "" ], [ "Baudry", "Benoit", "" ] ]
The worldwide collaborative effort for the creation of software is technically and socially demanding. The more engaged developers are, the more value they impart to the software they create. Engaged developers, such as Margaret Hamilton programming Apollo 11, can succeed in tackling the most difficult engineering tasks. In this paper, we dive deep into an original vector of engagement - humor - and study how it fuels developer engagement. First, we collect qualitative and quantitative data about the humorous elements present within three significant, real-world software projects: faker, which helps developers introduce humor within their tests; lolcommits, which captures a photograph after each contribution made by a developer; and volkswagen, an exercise in satire, which accidentally led to the invention of an impactful software tool. Second, through a developer survey, we receive unique insights from 125 developers, who share their real-life experiences with humor in software. Our analysis of the three case studies highlights the prevalence of humor in software, and unveils the worldwide community of developers who are enthusiastic about both software and humor. We also learn about the caveats of humor in software through the valuable insights shared by our survey respondents. We report clear evidence that, when practiced responsibly, humor increases developer engagement and supports them in addressing hard engineering and cognitive tasks. The most actionable highlight of our work is that software tests and documentation are the best locations in code to practice humor.
1902.03222
Thirupathi Guggulothu
Thirupathi Guggulothu
Code Smell Detection using Multilabel Classification Approach
16 pages, 2 figures
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Code smells are characteristics of software that indicate a code or design problem, which can make software hard to understand, evolve, and maintain. The code smell detection tools proposed in the literature produce different results, as smells are informally defined or are subjective in nature. To address the issue of tool subjectivity, machine learning techniques have been proposed that can learn and distinguish the characteristics of smelly and non-smelly source code elements (classes or methods). However, the existing machine learning techniques can only detect a single type of smell in a code element, which does not correspond to real-world scenarios. In this paper, we have used multilabel classification methods to detect whether a given code element is affected by multiple smells. We have considered two code smell datasets for this work and converted them into a multilabel dataset. In our experiments, two multilabel methods applied to the converted dataset demonstrate good performance in 10-fold cross-validation, using ten repetitions.
[ { "created": "Fri, 8 Feb 2019 18:30:33 GMT", "version": "v1" } ]
2019-02-11
[ [ "Guggulothu", "Thirupathi", "" ] ]
Code smells are characteristics of software that indicate a code or design problem, which can make software hard to understand, evolve, and maintain. The code smell detection tools proposed in the literature produce different results, as smells are informally defined or are subjective in nature. To address the issue of tool subjectivity, machine learning techniques have been proposed that can learn and distinguish the characteristics of smelly and non-smelly source code elements (classes or methods). However, the existing machine learning techniques can only detect a single type of smell in a code element, which does not correspond to real-world scenarios. In this paper, we have used multilabel classification methods to detect whether a given code element is affected by multiple smells. We have considered two code smell datasets for this work and converted them into a multilabel dataset. In our experiments, two multilabel methods applied to the converted dataset demonstrate good performance in 10-fold cross-validation, using ten repetitions.
2306.09841
Fangzhi Xu
Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, Erik Cambria
Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond
14 pages, 11 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Logical reasoning consistently plays a fundamental and significant role in the domains of knowledge engineering and artificial intelligence. Recently, Large Language Models (LLMs) have emerged as a noteworthy innovation in natural language processing (NLP), exhibiting impressive achievements across various classic NLP tasks. However, the question of whether LLMs can effectively address the task of logical reasoning, which requires gradual cognitive inference similar to human intelligence, remains unanswered. To this end, we aim to bridge this gap and provide comprehensive evaluations in this paper. Firstly, to offer systematic evaluations, we select fifteen typical logical reasoning datasets and organize them into deductive, inductive, abductive and mixed-form reasoning settings. Considering the comprehensiveness of evaluations, we include three representative LLMs (i.e., text-davinci-003, ChatGPT and BARD) and evaluate them on all selected datasets under zero-shot, one-shot and three-shot settings. Secondly, different from previous evaluations relying only on simple metrics (e.g., accuracy), we propose fine-grained evaluations in both objective and subjective manners, covering both answers and explanations. Additionally, to uncover the logical flaws of LLMs, problematic cases are attributed to five error types along two dimensions, i.e., the evidence selection process and the reasoning process. Thirdly, to avoid the influence of knowledge bias and to focus purely on benchmarking the logical reasoning capability of LLMs, we propose a new dataset with neutral content. It contains 3,000 samples and covers deductive, inductive and abductive settings. Based on the in-depth evaluations, this paper finally forms a general evaluation scheme of logical reasoning capability from six dimensions. It reflects the pros and cons of LLMs and gives guiding directions for future work.
[ { "created": "Fri, 16 Jun 2023 13:39:35 GMT", "version": "v1" }, { "created": "Tue, 11 Jul 2023 13:41:20 GMT", "version": "v2" }, { "created": "Tue, 8 Aug 2023 12:57:18 GMT", "version": "v3" } ]
2023-08-09
[ [ "Xu", "Fangzhi", "" ], [ "Lin", "Qika", "" ], [ "Han", "Jiawei", "" ], [ "Zhao", "Tianzhe", "" ], [ "Liu", "Jun", "" ], [ "Cambria", "Erik", "" ] ]
Logical reasoning consistently plays a fundamental and significant role in the domains of knowledge engineering and artificial intelligence. Recently, Large Language Models (LLMs) have emerged as a noteworthy innovation in natural language processing (NLP), exhibiting impressive achievements across various classic NLP tasks. However, the question of whether LLMs can effectively address the task of logical reasoning, which requires gradual cognitive inference similar to human intelligence, remains unanswered. To this end, we aim to bridge this gap and provide comprehensive evaluations in this paper. Firstly, to offer systematic evaluations, we select fifteen typical logical reasoning datasets and organize them into deductive, inductive, abductive and mixed-form reasoning settings. Considering the comprehensiveness of evaluations, we include three representative LLMs (i.e., text-davinci-003, ChatGPT and BARD) and evaluate them on all selected datasets under zero-shot, one-shot and three-shot settings. Secondly, different from previous evaluations relying only on simple metrics (e.g., accuracy), we propose fine-grained evaluations in both objective and subjective manners, covering both answers and explanations. Additionally, to uncover the logical flaws of LLMs, problematic cases are attributed to five error types along two dimensions, i.e., the evidence selection process and the reasoning process. Thirdly, to avoid the influence of knowledge bias and to focus purely on benchmarking the logical reasoning capability of LLMs, we propose a new dataset with neutral content. It contains 3,000 samples and covers deductive, inductive and abductive settings. Based on the in-depth evaluations, this paper finally forms a general evaluation scheme of logical reasoning capability from six dimensions. It reflects the pros and cons of LLMs and gives guiding directions for future work.
1606.05023
Christopher Rose
Christopher Rose and Ismat Saira Mian
Inscribed Matter Communication: Part I
20 pages, 6 figures, 1 table; in revision at IEEE Journal on Molecular, Biological and Multiscale Communication
null
10.1109/TMBMC.2017.2655025
null
cs.ET cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a fundamental treatment of the molecular communication channel wherein "inscribed matter" is transmitted across a spatial gap to provide reliable signaling between a sender and receiver. Inscribed matter is defined as an ensemble of "tokens" (molecules, objects, and so on) and is inspired, at least partially, by biological systems where groups of individually constructed discrete particles ranging from molecules through membrane-bound structures containing molecules to viruses and organisms are released by a source and travel to a target -- for example, morphogens or semiochemicals that diffuse from one cell, tissue or organism to another. For identical tokens that are neither lost nor modified, we consider messages encoded using three candidate communication schemes: a) token timing (timed release), b) token payload (composition), and c) token timing plus payload. We provide capacity bounds for each scheme and discuss their relative utility. We find that under not unreasonable assumptions, megabit per second rates could be supported at femtoWatt transmitter powers. Since quantities such as token concentration or bin-counting are derivatives of token arrival timing, individual token timing undergirds all molecular communication techniques. Thus, our modeling and results about the physics of efficient token-based information transfer can inform investigations of diverse theoretical and practical problems in engineering and biology. This work, Part I, focuses on the information theoretic bounds on capacity. Part II develops some of the mathematical and information-theoretic ideas that support the bounds presented here.
[ { "created": "Thu, 16 Jun 2016 01:36:05 GMT", "version": "v1" }, { "created": "Sun, 19 Jun 2016 19:08:35 GMT", "version": "v2" }, { "created": "Mon, 29 Aug 2016 18:31:39 GMT", "version": "v3" }, { "created": "Tue, 22 Nov 2016 17:16:33 GMT", "version": "v4" }, { "created": "Sat, 17 Dec 2016 01:59:50 GMT", "version": "v5" } ]
2020-09-22
[ [ "Rose", "Christopher", "" ], [ "Mian", "Ismat Saira", "" ] ]
We provide a fundamental treatment of the molecular communication channel wherein "inscribed matter" is transmitted across a spatial gap to provide reliable signaling between a sender and receiver. Inscribed matter is defined as an ensemble of "tokens" (molecules, objects, and so on) and is inspired, at least partially, by biological systems where groups of individually constructed discrete particles ranging from molecules through membrane-bound structures containing molecules to viruses and organisms are released by a source and travel to a target -- for example, morphogens or semiochemicals that diffuse from one cell, tissue or organism to another. For identical tokens that are neither lost nor modified, we consider messages encoded using three candidate communication schemes: a) token timing (timed release), b) token payload (composition), and c) token timing plus payload. We provide capacity bounds for each scheme and discuss their relative utility. We find that under not unreasonable assumptions, megabit per second rates could be supported at femtoWatt transmitter powers. Since quantities such as token concentration or bin-counting are derivatives of token arrival timing, individual token timing undergirds all molecular communication techniques. Thus, our modeling and results about the physics of efficient token-based information transfer can inform investigations of diverse theoretical and practical problems in engineering and biology. This work, Part I, focuses on the information theoretic bounds on capacity. Part II develops some of the mathematical and information-theoretic ideas that support the bounds presented here.
1801.02930
Yoshinari Takeishi
Yoshinari Takeishi and Jun'ichi Takeuchi
An Improved Analysis of Least Squares Superposition Codes with Bernoulli Dictionary
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the additive white Gaussian noise channel with average power constraint, sparse superposition codes, proposed by Barron and Joseph in 2010, achieve the capacity. While the codewords of the original sparse superposition codes are made with a dictionary matrix drawn from a Gaussian distribution, we consider the case that it is drawn from a Bernoulli distribution. We show an improved upper bound on its block error probability with least squares decoding, which is a substantially simpler and tighter bound than our previous result in 2014.
[ { "created": "Tue, 9 Jan 2018 13:42:18 GMT", "version": "v1" } ]
2018-01-10
[ [ "Takeishi", "Yoshinari", "" ], [ "Takeuchi", "Jun'ichi", "" ] ]
For the additive white Gaussian noise channel with average power constraint, sparse superposition codes, proposed by Barron and Joseph in 2010, achieve the capacity. While the codewords of the original sparse superposition codes are made with a dictionary matrix drawn from a Gaussian distribution, we consider the case that it is drawn from a Bernoulli distribution. We show an improved upper bound on its block error probability with least squares decoding, which is a substantially simpler and tighter bound than our previous result in 2014.
2102.06245
Kaushik Roy
Kaushik Roy, Qi Zhang, Manas Gaur, and Amit Sheth
Knowledge Infused Policy Gradients for Adaptive Pandemic Control
Accepted at AAAI-MAKE 2021
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
COVID-19 has impacted nations differently based on their policy implementations. Effective policy requires taking into account public information and adaptability to new knowledge. Epidemiological models built to understand COVID-19 seldom provide the policymaker with the capability for adaptive pandemic control (APC). The core challenges to be overcome include (a) inability to handle a high degree of non-homogeneity in different contributing features across the pandemic timeline, (b) lack of an approach that enables adaptive incorporation of public health expert knowledge, and (c) transparent models that enable understanding of the decision-making process in suggesting policy. In this work, we take the early steps to address these challenges using Knowledge Infused Policy Gradient (KIPG) methods. Prior work on knowledge infusion does not handle soft and hard imposition of varying forms of knowledge in disease information and guidelines to necessarily comply with. Furthermore, the models do not attend to non-homogeneity in feature counts, manifesting as partial observability in informing the policy. Additionally, interpretable structures are extracted post-learning instead of learning an interpretable model required for APC. To this end, we introduce a mathematical framework for KIPG methods that can (a) induce relevant feature counts over multi-relational features of the world, (b) handle latent non-homogeneous counts as hidden variables that are linear combinations of kernelized aggregates over the features, and (c) infuse knowledge as functional constraints in a principled manner. The study establishes a theory for imposing hard and soft constraints and simulates it through experiments. In comparison with knowledge-intensive baselines, we show quick, sample-efficient adaptation to new knowledge and interpretability in the learned policy, especially in a pandemic context.
[ { "created": "Thu, 11 Feb 2021 20:13:00 GMT", "version": "v1" } ]
2021-02-15
[ [ "Roy", "Kaushik", "" ], [ "Zhang", "Qi", "" ], [ "Gaur", "Manas", "" ], [ "Sheth", "Amit", "" ] ]
COVID-19 has impacted nations differently based on their policy implementations. Effective policy requires taking into account public information and adaptability to new knowledge. Epidemiological models built to understand COVID-19 seldom provide the policymaker with the capability for adaptive pandemic control (APC). The core challenges to be overcome include (a) inability to handle a high degree of non-homogeneity in different contributing features across the pandemic timeline, (b) lack of an approach that enables adaptive incorporation of public health expert knowledge, and (c) transparent models that enable understanding of the decision-making process in suggesting policy. In this work, we take the early steps to address these challenges using Knowledge Infused Policy Gradient (KIPG) methods. Prior work on knowledge infusion does not handle soft and hard imposition of varying forms of knowledge in disease information and guidelines to necessarily comply with. Furthermore, the models do not attend to non-homogeneity in feature counts, manifesting as partial observability in informing the policy. Additionally, interpretable structures are extracted post-learning instead of learning an interpretable model required for APC. To this end, we introduce a mathematical framework for KIPG methods that can (a) induce relevant feature counts over multi-relational features of the world, (b) handle latent non-homogeneous counts as hidden variables that are linear combinations of kernelized aggregates over the features, and (c) infuse knowledge as functional constraints in a principled manner. The study establishes a theory for imposing hard and soft constraints and simulates it through experiments. In comparison with knowledge-intensive baselines, we show quick, sample-efficient adaptation to new knowledge and interpretability in the learned policy, especially in a pandemic context.
1311.6718
Diego Perea Mr.
Diego Perea-Vega, Jean-Fran\c{c}ois Frigon, Andr\'e Girard
Efficient Heuristic for Resource Allocation in Zero-forcing OFDMA-SDMA Systems with Minimum Rate Constraints
8 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
4G wireless access systems require high spectral efficiency to support the ever increasing number of users and data rates for real time applications. Multi-antenna OFDM-SDMA systems can provide the required high spectral efficiency and dynamic usage of the channel, but the resource allocation process becomes extremely complex because of the augmented degrees of freedom. In this paper, we propose two heuristics to solve the resource allocation problem that have very low computational complexity and give performances not far from the optimal. The proposed heuristics select a set of users for each subchannel, but contrary to the reported methods that solve the throughput maximization problem, our heuristics consider the set of real-time (RT) users to ensure that their minimum rate requirements are met. We compare the heuristics' performance against an upper bound and other methods proposed in the literature and find that they give a somewhat lower performance, but support a wider range of minimum rates while reducing the computational complexity. The gap between the objective achieved by the heuristics and the upper bound is not large. In our experiments this gap is 10.7% averaging over all performed numerical evaluations for all system configurations. The increase in the range of the supported minimum rates when compared with a method reported in the literature is 14.6% on average.
[ { "created": "Tue, 26 Nov 2013 15:59:11 GMT", "version": "v1" }, { "created": "Wed, 8 Jan 2014 12:45:16 GMT", "version": "v2" } ]
2014-01-09
[ [ "Perea-Vega", "Diego", "" ], [ "Frigon", "Jean-François", "" ], [ "Girard", "André", "" ] ]
4G wireless access systems require high spectral efficiency to support the ever increasing number of users and data rates for real time applications. Multi-antenna OFDM-SDMA systems can provide the required high spectral efficiency and dynamic usage of the channel, but the resource allocation process becomes extremely complex because of the augmented degrees of freedom. In this paper, we propose two heuristics to solve the resource allocation problem that have very low computational complexity and give performances not far from the optimal. The proposed heuristics select a set of users for each subchannel, but contrary to the reported methods that solve the throughput maximization problem, our heuristics consider the set of real-time (RT) users to ensure that their minimum rate requirements are met. We compare the heuristics' performance against an upper bound and other methods proposed in the literature and find that they give a somewhat lower performance, but support a wider range of minimum rates while reducing the computational complexity. The gap between the objective achieved by the heuristics and the upper bound is not large. In our experiments this gap is 10.7% averaging over all performed numerical evaluations for all system configurations. The increase in the range of the supported minimum rates when compared with a method reported in the literature is 14.6% on average.
1502.03874
Zhengbang Zha
Zhengbang Zha, Lei Hu, Siwei Sun, Jinyong Shan
Further results on differentially 4-uniform permutations over $\F_{2^{2m}}$
15 pages. This paper has been accepted for publication in SCIENCE CHINA Mathematics
null
10.1007/s11425-015-4996-2
null
cs.IT math.IT
http://creativecommons.org/licenses/by/3.0/
In this paper, we present several new constructions of differentially 4-uniform permutations over $\F_{2^{2m}}$ by modifying the values of the inverse function on some subsets of $\F_{2^{2m}}$. The resulting differentially 4-uniform permutations have high nonlinearities and algebraic degrees, which provide more choices for the design of cryptographic substitution boxes.
[ { "created": "Fri, 13 Feb 2015 02:50:59 GMT", "version": "v1" } ]
2023-07-19
[ [ "Zha", "Zhengbang", "" ], [ "Hu", "Lei", "" ], [ "Sun", "Siwei", "" ], [ "Shan", "Jinyong", "" ] ]
In this paper, we present several new constructions of differentially 4-uniform permutations over $\F_{2^{2m}}$ by modifying the values of the inverse function on some subsets of $\F_{2^{2m}}$. The resulting differentially 4-uniform permutations have high nonlinearities and algebraic degrees, which provide more choices for the design of cryptographic substitution boxes.
2109.05658
Abigail Jacobs
Abigail Z. Jacobs
Measurement as governance in and for responsible AI
5 pages, 1 figure; KDD Workshop on Responsible AI 2021
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Measurement of social phenomena is everywhere, unavoidably, in sociotechnical systems. This is not (only) an academic point: Fairness-related harms emerge when there is a mismatch in the measurement process between the thing we purport to be measuring and the thing we actually measure. However, the measurement process -- where social, cultural, and political values are implicitly encoded in sociotechnical systems -- is almost always obscured. Furthermore, this obscured process is where important governance decisions are encoded: governance about which systems are fair, which individuals belong in which categories, and so on. We can then use the language of measurement, and the tools of construct validity and reliability, to uncover hidden governance decisions. In particular, we highlight two types of construct validity, content validity and consequential validity, that are useful to elicit and characterize the feedback loops between the measurement, social construction, and enforcement of social categories. We then explore the constructs of fairness, robustness, and responsibility in the context of governance in and for responsible AI. Together, these perspectives help us unpack how measurement acts as a hidden governance process in sociotechnical systems. Understanding measurement as governance supports a richer understanding of the governance processes already happening in AI -- responsible or otherwise -- revealing paths to more effective interventions.
[ { "created": "Mon, 13 Sep 2021 01:04:22 GMT", "version": "v1" } ]
2021-09-14
[ [ "Jacobs", "Abigail Z.", "" ] ]
Measurement of social phenomena is everywhere, unavoidably, in sociotechnical systems. This is not (only) an academic point: Fairness-related harms emerge when there is a mismatch in the measurement process between the thing we purport to be measuring and the thing we actually measure. However, the measurement process -- where social, cultural, and political values are implicitly encoded in sociotechnical systems -- is almost always obscured. Furthermore, this obscured process is where important governance decisions are encoded: governance about which systems are fair, which individuals belong in which categories, and so on. We can then use the language of measurement, and the tools of construct validity and reliability, to uncover hidden governance decisions. In particular, we highlight two types of construct validity, content validity and consequential validity, that are useful to elicit and characterize the feedback loops between the measurement, social construction, and enforcement of social categories. We then explore the constructs of fairness, robustness, and responsibility in the context of governance in and for responsible AI. Together, these perspectives help us unpack how measurement acts as a hidden governance process in sociotechnical systems. Understanding measurement as governance supports a richer understanding of the governance processes already happening in AI -- responsible or otherwise -- revealing paths to more effective interventions.
1903.10140
Chunyang Feng
Chunyang Feng, Yufeng Sun, Xin Li
Iris R-CNN: Accurate Iris Segmentation in Non-cooperative Environment
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the significant advances in iris segmentation, accomplishing accurate iris segmentation in non-cooperative environments remains a grand challenge. In this paper, we present a deep learning framework, referred to as Iris R-CNN, to offer superior accuracy for iris segmentation. The proposed framework is derived from Mask R-CNN, and several novel techniques are proposed to carefully explore the unique characteristics of the iris. First, we propose two novel networks: (i) Double-Circle Region Proposal Network (DC-RPN), and (ii) Double-Circle Classification and Regression Network (DC-CRN) to take into account the iris and pupil circles to maximize the accuracy for iris segmentation. Second, we propose a novel normalization scheme for Regions of Interest (RoIs) to facilitate a radically new pooling operation over a double-circle region. Experimental results on two challenging iris databases, UBIRIS.v2 and MICHE, demonstrate the superior accuracy of the proposed approach over other state-of-the-art methods.
[ { "created": "Mon, 25 Mar 2019 05:33:06 GMT", "version": "v1" } ]
2019-03-26
[ [ "Feng", "Chunyang", "" ], [ "Sun", "Yufeng", "" ], [ "Li", "Xin", "" ] ]
Despite the significant advances in iris segmentation, accomplishing accurate iris segmentation in non-cooperative environments remains a grand challenge. In this paper, we present a deep learning framework, referred to as Iris R-CNN, to offer superior accuracy for iris segmentation. The proposed framework is derived from Mask R-CNN, and several novel techniques are proposed to carefully explore the unique characteristics of the iris. First, we propose two novel networks: (i) Double-Circle Region Proposal Network (DC-RPN), and (ii) Double-Circle Classification and Regression Network (DC-CRN) to take into account the iris and pupil circles to maximize the accuracy for iris segmentation. Second, we propose a novel normalization scheme for Regions of Interest (RoIs) to facilitate a radically new pooling operation over a double-circle region. Experimental results on two challenging iris databases, UBIRIS.v2 and MICHE, demonstrate the superior accuracy of the proposed approach over other state-of-the-art methods.
1904.09163
David Sigtermans
David Sigtermans
Transfer Entropy: where Shannon meets Turing
4 pages, 1 figure
null
null
null
cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transfer entropy is capable of capturing nonlinear source-destination relations between multi-variate time series. It is a measure of association between source data that are transformed into destination data via a set of linear transformations between their probability mass functions. The resulting tensor formalism is used to show that in specific cases, e.g., in the case the system consists of three stochastic processes, bivariate analysis suffices to distinguish true relations from false relations. This allows us to determine the causal structure as far as encoded in the probability mass functions of noisy data. The tensor formalism was also used to derive the Data Processing Inequality for transfer entropy.
[ { "created": "Fri, 19 Apr 2019 12:24:44 GMT", "version": "v1" }, { "created": "Fri, 17 May 2019 17:56:03 GMT", "version": "v2" }, { "created": "Sat, 25 May 2019 14:11:24 GMT", "version": "v3" } ]
2019-05-28
[ [ "Sigtermans", "David", "" ] ]
Transfer entropy is capable of capturing nonlinear source-destination relations between multi-variate time series. It is a measure of association between source data that are transformed into destination data via a set of linear transformations between their probability mass functions. The resulting tensor formalism is used to show that in specific cases, e.g., in the case the system consists of three stochastic processes, bivariate analysis suffices to distinguish true relations from false relations. This allows us to determine the causal structure as far as encoded in the probability mass functions of noisy data. The tensor formalism was also used to derive the Data Processing Inequality for transfer entropy.
1312.5548
Juergen Schmidhuber
J\"urgen Schmidhuber
My First Deep Learning System of 1991 + Deep Learning Timeline 1962-2013
11 pages. As a machine learning researcher I am obsessed with proper credit assignment. This draft is the result of an experiment in rapid massive open online peer review. Since 20 September 2013, subsequent revisions published under http://www.deeplearning.me have absorbed many suggestions for improvements by experts
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Learning has attracted significant attention in recent years. Here I present a brief overview of my first Deep Learner of 1991, and its historic context, with a timeline of Deep Learning highlights.
[ { "created": "Thu, 19 Dec 2013 13:45:45 GMT", "version": "v1" } ]
2013-12-20
[ [ "Schmidhuber", "Jürgen", "" ] ]
Deep Learning has attracted significant attention in recent years. Here I present a brief overview of my first Deep Learner of 1991, and its historic context, with a timeline of Deep Learning highlights.
2204.00645
Young-Ho Kim
Christian DeBuy, Florin Ghesu, Reza Langari, Young-Ho Kim
Design and validation of zero-slack separable manipulator for Intracardiac Echocardiography
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clinicians require substantial training and experience to become comfortable with steering an intracardiac echocardiography (ICE) catheter to localize and measure the area of treatment and to watch for complications while device catheters are deployed in another access. Thus, it is reasonable that a robotic-assist system to hold and actively manipulate the ICE catheter could ease the workload of the physician. Existing commercially available robotic systems and research prototypes all use existing commercially available ICE catheters based on a multiple tendon-sheath mechanism (TSM). To motorize the existing TSM-based ICE catheter, the actuators interface with the outer handle knobs to manipulate four internal tendons. However, in practice, the actuators are located at a sterile, safe place far away from the ICE handle. Thus, to interface with the knobs, there exist multiple coupled gear structures between the two, leading to highly nonlinear behavior (e.g., various slack, elasticity) alongside hysteresis phenomena in the TSM. Since ICE catheters are designed for single use, the expensive actuators need to be located in a safe place so as to be reusable. Moreover, these actuators should interface as directly as possible with the tendons for accurate tip control. In this paper, we introduce a separable ICE catheter robot with four-tendon actuation: one part reusable and the other disposable. Moreover, we propose a practical model and calibration method for our proposed mechanism so that the four tendons are actuated simultaneously, allowing for precise tip control and mitigating issues with conventional devices such as dead-zone and hysteresis with simple linear compensation. We consider an open-loop controller since many available ICE catheters are used without position-tracking sensors at the tip due to cost and single use.
[ { "created": "Fri, 1 Apr 2022 18:17:21 GMT", "version": "v1" } ]
2022-04-05
[ [ "DeBuy", "Christian", "" ], [ "Ghesu", "Florin", "" ], [ "Langari", "Reza", "" ], [ "Kim", "Young-Ho", "" ] ]
Clinicians require substantial training and experience to become comfortable with steering an Intracardiac echocardiography (ICE) catheter to localize and measure the area of treatment and to watch for complications while device catheters are deployed in another access. Thus, it is reasonable that a robotic-assist system to hold and actively manipulate the ICE catheter could ease the workload of the physician. Existing commercially-available robotic systems and research prototypes all use existing commercially available ICE catheters based on a multiple tendon-sheath mechanism (TSM). To motorize the existing TSM-based ICE catheter, the actuators interface with the outer handle knobs to manipulate four internal tendons. However, in practice, the actuators are located at a sterile, safe place far away from the ICE handle. Thus, to interface with the knobs, there exist multiple coupled gear structures between the two, leading to highly nonlinear behavior (e.g., various slack, elasticity) alongside hysteresis phenomena in the TSM. Since ICE catheters are designed for single use, the expensive actuators need to be located in a safe place so as to be reusable. Moreover, these actuators should interface as directly as possible with the tendons for accurate tip control. In this paper, we introduce a separable ICE catheter robot with four-tendon actuation: one part reusable and the other disposable. Moreover, we propose a practical model and calibration method for the proposed mechanism so that the four tendons are actuated simultaneously, allowing for precise tip control and mitigating issues with conventional devices, such as dead-zone and hysteresis, with simple linear compensation. We consider an open-loop controller, since many available ICE catheters are used without position-tracking sensors at the tip due to cost and single use.
2110.04452
Huimin Dong
Huimin Dong, R\'eka Markovich and Leendert van der Torre
Towards AI Logic for Social Reasoning
null
Journal of Zhejiang University, Vol. 5, No. 50 (2020): 31-50
null
null
cs.AI cs.LO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Artificial Intelligence (AI) logic formalizes the reasoning of intelligent agents. In this paper, we discuss how an argumentation-based AI logic could also be used to formalize important aspects of social reasoning. Besides reasoning about the knowledge and actions of individual agents, social AI logic can also reason about social dependencies among agents using the rights, obligations and permissions of the agents. We discuss four aspects of social AI logic. First, we discuss how rights represent relations between the obligations and permissions of intelligent agents. Second, we discuss how to argue about the right-to-know, a central issue in the recent discussion of privacy and ethics. Third, we discuss how a wide variety of conflicts among intelligent agents can be identified and (sometimes) resolved by comparing formal arguments. Importantly, to cover a wide range of arguments occurring in daily life, fallacious arguments can also be represented and reasoned about. Fourth, we discuss how to argue about the freedom to act for intelligent agents. Examples from social, legal and ethical reasoning highlight the challenges in developing social AI logic. The discussion of the four challenges leads to a research program for argumentation-based social AI logic, contributing towards the future development of AI logic.
[ { "created": "Sat, 9 Oct 2021 04:35:23 GMT", "version": "v1" } ]
2021-10-12
[ [ "Dong", "Huimin", "" ], [ "Markovich", "Réka", "" ], [ "van der Torre", "Leendert", "" ] ]
Artificial Intelligence (AI) logic formalizes the reasoning of intelligent agents. In this paper, we discuss how an argumentation-based AI logic could also be used to formalize important aspects of social reasoning. Besides reasoning about the knowledge and actions of individual agents, social AI logic can also reason about social dependencies among agents using the rights, obligations and permissions of the agents. We discuss four aspects of social AI logic. First, we discuss how rights represent relations between the obligations and permissions of intelligent agents. Second, we discuss how to argue about the right-to-know, a central issue in the recent discussion of privacy and ethics. Third, we discuss how a wide variety of conflicts among intelligent agents can be identified and (sometimes) resolved by comparing formal arguments. Importantly, to cover a wide range of arguments occurring in daily life, fallacious arguments can also be represented and reasoned about. Fourth, we discuss how to argue about the freedom to act for intelligent agents. Examples from social, legal and ethical reasoning highlight the challenges in developing social AI logic. The discussion of the four challenges leads to a research program for argumentation-based social AI logic, contributing towards the future development of AI logic.
1904.01691
Sangkug Lym
Sangkug Lym, Donghyuk Lee, Mike O'Connor, Niladrish Chatterjee, Mattan Erez
DeLTA: GPU Performance Model for Deep Learning Applications with In-depth Memory System Traffic Analysis
null
null
10.1109/ISPASS.2019.00041
null
cs.DC cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Training convolutional neural networks (CNNs) requires intense compute throughput and high memory bandwidth. In particular, convolution layers account for the majority of the execution time of CNN training, and GPUs are commonly used to accelerate these layer workloads. GPU design optimization for efficient CNN training acceleration requires accurate modeling of how their performance improves when computing and memory resources are increased. We present DeLTA, the first analytical model that accurately estimates the traffic at each GPU memory hierarchy level, while accounting for the complex reuse patterns of a parallel convolution algorithm. We demonstrate that our model is both accurate and robust for different CNNs and GPU architectures. We then show how this model can be used to carefully balance the scaling of different GPU resources for efficient CNN performance improvement.
[ { "created": "Tue, 2 Apr 2019 22:30:06 GMT", "version": "v1" } ]
2020-04-28
[ [ "Lym", "Sangkug", "" ], [ "Lee", "Donghyuk", "" ], [ "O'Connor", "Mike", "" ], [ "Chatterjee", "Niladrish", "" ], [ "Erez", "Mattan", "" ] ]
Training convolutional neural networks (CNNs) requires intense compute throughput and high memory bandwidth. In particular, convolution layers account for the majority of the execution time of CNN training, and GPUs are commonly used to accelerate these layer workloads. GPU design optimization for efficient CNN training acceleration requires accurate modeling of how their performance improves when computing and memory resources are increased. We present DeLTA, the first analytical model that accurately estimates the traffic at each GPU memory hierarchy level, while accounting for the complex reuse patterns of a parallel convolution algorithm. We demonstrate that our model is both accurate and robust for different CNNs and GPU architectures. We then show how this model can be used to carefully balance the scaling of different GPU resources for efficient CNN performance improvement.
2303.09001
Alan Chan
Alan Chan, Herbie Bradley, Nitarshan Rajkumar
Reclaiming the Digital Commons: A Public Data Trust for Training Data
Accepted at AIES 2023
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities from the development of increasingly advanced AI systems, including degradation of the digital commons and unemployment from automation. The rapid pace of AI development and deployment currently leaves little room for this power. Monopolized in the hands of private corporations, the development of the most capable foundation models has proceeded largely without public input. There is currently no implemented mechanism for ensuring that the economic value generated by such models is redistributed to account for their negative externalities. The citizens that have generated the data necessary to train models do not have input on how their data are to be used. In this work, we propose that a public data trust assert control over training data for foundation models. In particular, this trust should scrape the internet as a digital commons, to license to commercial model developers for a percentage cut of revenues from deployment. First, we argue in detail for the existence of such a trust. We also discuss feasibility and potential risks. Second, we detail a number of ways for a data trust to incentivize model developers to use training data only from the trust. We propose a mix of verification mechanisms, potential regulatory action, and positive incentives. We conclude by highlighting other potential benefits of our proposed data trust and connecting our work to ongoing efforts in data and compute governance.
[ { "created": "Thu, 16 Mar 2023 00:12:43 GMT", "version": "v1" }, { "created": "Sun, 21 May 2023 23:17:19 GMT", "version": "v2" } ]
2023-05-23
[ [ "Chan", "Alan", "" ], [ "Bradley", "Herbie", "" ], [ "Rajkumar", "Nitarshan", "" ] ]
Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities from the development of increasingly advanced AI systems, including degradation of the digital commons and unemployment from automation. The rapid pace of AI development and deployment currently leaves little room for this power. Monopolized in the hands of private corporations, the development of the most capable foundation models has proceeded largely without public input. There is currently no implemented mechanism for ensuring that the economic value generated by such models is redistributed to account for their negative externalities. The citizens that have generated the data necessary to train models do not have input on how their data are to be used. In this work, we propose that a public data trust assert control over training data for foundation models. In particular, this trust should scrape the internet as a digital commons, to license to commercial model developers for a percentage cut of revenues from deployment. First, we argue in detail for the existence of such a trust. We also discuss feasibility and potential risks. Second, we detail a number of ways for a data trust to incentivize model developers to use training data only from the trust. We propose a mix of verification mechanisms, potential regulatory action, and positive incentives. We conclude by highlighting other potential benefits of our proposed data trust and connecting our work to ongoing efforts in data and compute governance.
1603.09601
Reuben Stephen
Reuben George Stephen, Rui Zhang
Joint Millimeter-Wave Fronthaul and OFDMA Resource Allocation in Ultra-Dense CRAN
Accepted for publication in IEEE Transactions on Communications
null
10.1109/TCOMM.2017.2649519
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ultra-dense (UD) wireless networks and cloud radio access networks (CRAN) are two promising network architectures for the emerging fifth-generation (5G) wireless communication systems. By jointly employing them, a new appealing network solution is proposed in this paper, termed UD-CRAN. In a UD-CRAN, millimeter-wave (mmWave) wireless fronthaul is preferred for information exchange between the central processor and the distributed remote radio heads (RRHs), due to its lower cost and higher flexibility in deployment, compared to fixed optical links. This motivates our study in this paper on the downlink transmission in a mmWave fronthaul enabled, orthogonal frequency division multiple access (OFDMA) based UD-CRAN. In particular, the fronthaul is shared among the RRHs via time division multiple access (TDMA), while the RRHs jointly transmit to the users on orthogonal frequency subchannels using OFDMA. The joint resource allocation over the TDMA-based mmWave fronthaul and OFDMA-based wireless transmission is investigated to maximize the weighted sum rate of all users. Although the problem is non-convex, we propose a Lagrange duality based solution, which can be efficiently computed with good accuracy. To further reduce the complexity, we also propose a greedy search based heuristic, which achieves close to optimal performance under practical setups. Finally, we show the significant throughput gains of the proposed joint resource allocation approach compared to other benchmark schemes by simulations.
[ { "created": "Thu, 31 Mar 2016 14:25:24 GMT", "version": "v1" }, { "created": "Fri, 17 Jun 2016 15:17:25 GMT", "version": "v2" }, { "created": "Fri, 20 Jan 2017 07:53:07 GMT", "version": "v3" }, { "created": "Wed, 8 Feb 2017 10:11:11 GMT", "version": "v4" } ]
2017-02-09
[ [ "Stephen", "Reuben George", "" ], [ "Zhang", "Rui", "" ] ]
Ultra-dense (UD) wireless networks and cloud radio access networks (CRAN) are two promising network architectures for the emerging fifth-generation (5G) wireless communication systems. By jointly employing them, a new appealing network solution is proposed in this paper, termed UD-CRAN. In a UD-CRAN, millimeter-wave (mmWave) wireless fronthaul is preferred for information exchange between the central processor and the distributed remote radio heads (RRHs), due to its lower cost and higher flexibility in deployment, compared to fixed optical links. This motivates our study in this paper on the downlink transmission in a mmWave fronthaul enabled, orthogonal frequency division multiple access (OFDMA) based UD-CRAN. In particular, the fronthaul is shared among the RRHs via time division multiple access (TDMA), while the RRHs jointly transmit to the users on orthogonal frequency subchannels using OFDMA. The joint resource allocation over the TDMA-based mmWave fronthaul and OFDMA-based wireless transmission is investigated to maximize the weighted sum rate of all users. Although the problem is non-convex, we propose a Lagrange duality based solution, which can be efficiently computed with good accuracy. To further reduce the complexity, we also propose a greedy search based heuristic, which achieves close to optimal performance under practical setups. Finally, we show the significant throughput gains of the proposed joint resource allocation approach compared to other benchmark schemes by simulations.
1307.1943
EPTCS
Carst Tankink (Institute for Computing and Information Science, Radboud University Nijmegen)
Proof in Context -- Web Editing with Rich, Modeless Contextual Feedback
In Proceedings UITP 2012, arXiv:1307.1528
EPTCS 118, 2013, pp. 42-56
10.4204/EPTCS.118.3
null
cs.HC cs.LO cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Agora system is a prototypical Wiki for formal mathematics: a web-based system for collaborating on formal mathematics, intended to support informal documentation of formal developments. This system requires a reusable proof editor component, both for collaborative editing of documents, and for embedding in the resulting documents. This paper describes the design of Agora's asynchronous editor, which is generic enough to support different tools working on editor content and providing contextual information, with interactive theorem provers being a special, but important, case described in detail for the Coq theorem prover.
[ { "created": "Mon, 8 Jul 2013 04:41:39 GMT", "version": "v1" } ]
2013-07-09
[ [ "Tankink", "Carst", "", "Institute for Computing and Information Science,\n Radboud University Nijmegen" ] ]
The Agora system is a prototypical Wiki for formal mathematics: a web-based system for collaborating on formal mathematics, intended to support informal documentation of formal developments. This system requires a reusable proof editor component, both for collaborative editing of documents, and for embedding in the resulting documents. This paper describes the design of Agora's asynchronous editor, which is generic enough to support different tools working on editor content and providing contextual information, with interactive theorem provers being a special, but important, case described in detail for the Coq theorem prover.
1806.11269
Jun Chen
Yang Xiao, Jun Chen, Yancheng Wang, Zhiguo Cao, Joey Tianyi Zhou, Xiang Bai
Action Recognition for Depth Video using Multi-view Dynamic Images
accepted by Information Sciences
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamic imaging is a recently proposed action description paradigm for simultaneously capturing motion and temporal evolution information, particularly in the context of deep convolutional neural networks (CNNs). Compared with optical flow for motion characterization, dynamic imaging exhibits superior efficiency and compactness. Inspired by the success of dynamic imaging in RGB video, this study extends it to the depth domain. To better exploit three-dimensional (3D) characteristics, multi-view dynamic images are proposed. In particular, the raw depth video is densely projected with respect to different virtual imaging viewpoints by rotating the virtual camera within the 3D space. Subsequently, dynamic images are extracted from the obtained multi-view depth videos and multi-view dynamic images are thus constructed from these images. Accordingly, more view-tolerant visual cues can be involved. A novel CNN model is then proposed to perform feature learning on multi-view dynamic images. Particularly, the dynamic images from different views share the same convolutional layers but correspond to different fully connected layers. This is aimed at enhancing the tuning effectiveness on shallow convolutional layers by alleviating the gradient vanishing problem. Moreover, as the spatial occurrence variation of the actions may impair the CNN, an action proposal approach is also put forth. In experiments, the proposed approach can achieve state-of-the-art performance on three challenging datasets.
[ { "created": "Fri, 29 Jun 2018 05:27:26 GMT", "version": "v1" }, { "created": "Sat, 4 Aug 2018 10:40:40 GMT", "version": "v2" }, { "created": "Thu, 27 Dec 2018 16:21:49 GMT", "version": "v3" } ]
2018-12-31
[ [ "Xiao", "Yang", "" ], [ "Chen", "Jun", "" ], [ "Wang", "Yancheng", "" ], [ "Cao", "Zhiguo", "" ], [ "Zhou", "Joey Tianyi", "" ], [ "Bai", "Xiang", "" ] ]
Dynamic imaging is a recently proposed action description paradigm for simultaneously capturing motion and temporal evolution information, particularly in the context of deep convolutional neural networks (CNNs). Compared with optical flow for motion characterization, dynamic imaging exhibits superior efficiency and compactness. Inspired by the success of dynamic imaging in RGB video, this study extends it to the depth domain. To better exploit three-dimensional (3D) characteristics, multi-view dynamic images are proposed. In particular, the raw depth video is densely projected with respect to different virtual imaging viewpoints by rotating the virtual camera within the 3D space. Subsequently, dynamic images are extracted from the obtained multi-view depth videos and multi-view dynamic images are thus constructed from these images. Accordingly, more view-tolerant visual cues can be involved. A novel CNN model is then proposed to perform feature learning on multi-view dynamic images. Particularly, the dynamic images from different views share the same convolutional layers but correspond to different fully connected layers. This is aimed at enhancing the tuning effectiveness on shallow convolutional layers by alleviating the gradient vanishing problem. Moreover, as the spatial occurrence variation of the actions may impair the CNN, an action proposal approach is also put forth. In experiments, the proposed approach can achieve state-of-the-art performance on three challenging datasets.
2004.08861
Jie Fu
Jie Fu, Xue Geng, Zhijian Duan, Bohan Zhuang, Xingdi Yuan, Adam Trischler, Jie Lin, Chris Pal, Hao Dong
Role-Wise Data Augmentation for Knowledge Distillation
null
null
null
null
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge Distillation (KD) is a common method for transferring the ``knowledge'' learned by one machine learning model (the \textit{teacher}) into another model (the \textit{student}), where typically, the teacher has a greater capacity (e.g., more parameters or higher bit-widths). To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated. Due to the difference in model capacities, the student may not benefit fully from the same data points on which the teacher is trained. On the other hand, a human teacher may demonstrate a piece of knowledge with individualized examples adapted to a particular student, for instance, in terms of her cultural background and interests. Inspired by this behavior, we design data augmentation agents with distinct roles to facilitate knowledge distillation. Our data augmentation agents generate distinct training data for the teacher and student, respectively. We find empirically that specially tailored data points enable the teacher's knowledge to be demonstrated more effectively to the student. We compare our approach with existing KD methods on training popular neural architectures and demonstrate that role-wise data augmentation improves the effectiveness of KD over strong prior approaches. The code for reproducing our results can be found at https://github.com/bigaidream-projects/role-kd
[ { "created": "Sun, 19 Apr 2020 14:22:17 GMT", "version": "v1" } ]
2020-04-21
[ [ "Fu", "Jie", "" ], [ "Geng", "Xue", "" ], [ "Duan", "Zhijian", "" ], [ "Zhuang", "Bohan", "" ], [ "Yuan", "Xingdi", "" ], [ "Trischler", "Adam", "" ], [ "Lin", "Jie", "" ], [ "Pal", "Chris", "" ], [ "Dong", "Hao", "" ] ]
Knowledge Distillation (KD) is a common method for transferring the ``knowledge'' learned by one machine learning model (the \textit{teacher}) into another model (the \textit{student}), where typically, the teacher has a greater capacity (e.g., more parameters or higher bit-widths). To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated. Due to the difference in model capacities, the student may not benefit fully from the same data points on which the teacher is trained. On the other hand, a human teacher may demonstrate a piece of knowledge with individualized examples adapted to a particular student, for instance, in terms of her cultural background and interests. Inspired by this behavior, we design data augmentation agents with distinct roles to facilitate knowledge distillation. Our data augmentation agents generate distinct training data for the teacher and student, respectively. We find empirically that specially tailored data points enable the teacher's knowledge to be demonstrated more effectively to the student. We compare our approach with existing KD methods on training popular neural architectures and demonstrate that role-wise data augmentation improves the effectiveness of KD over strong prior approaches. The code for reproducing our results can be found at https://github.com/bigaidream-projects/role-kd
1908.04812
Taesun Whang
Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, HeuiSeok Lim
An Effective Domain Adaptive Post-Training Method for BERT in Response Selection
INTERSPEECH 2020
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We focus on multi-turn response selection in a retrieval-based dialog system. In this paper, we utilize the powerful pre-trained language model Bi-directional Encoder Representations from Transformers (BERT) for a multi-turn dialog system and propose a highly effective post-training method on a domain-specific corpus. Although BERT is easily adopted for various NLP tasks and outperforms previous baselines on each task, it still has limitations if a task corpus is too focused on a certain domain. Post-training on a domain-specific corpus (e.g., Ubuntu Corpus) helps the model learn contextualized representations of words that do not appear in a general corpus (e.g., English Wikipedia). Experimental results show that our approach achieves a new state of the art on two response selection benchmarks (i.e., Ubuntu Corpus V1 and Advising Corpus), with performance improvements of 5.9% and 6% in R@1, respectively.
[ { "created": "Tue, 13 Aug 2019 18:24:29 GMT", "version": "v1" }, { "created": "Mon, 27 Jul 2020 02:37:49 GMT", "version": "v2" } ]
2020-07-28
[ [ "Whang", "Taesun", "" ], [ "Lee", "Dongyub", "" ], [ "Lee", "Chanhee", "" ], [ "Yang", "Kisu", "" ], [ "Oh", "Dongsuk", "" ], [ "Lim", "HeuiSeok", "" ] ]
We focus on multi-turn response selection in a retrieval-based dialog system. In this paper, we utilize the powerful pre-trained language model Bi-directional Encoder Representations from Transformers (BERT) for a multi-turn dialog system and propose a highly effective post-training method on a domain-specific corpus. Although BERT is easily adopted for various NLP tasks and outperforms previous baselines on each task, it still has limitations if a task corpus is too focused on a certain domain. Post-training on a domain-specific corpus (e.g., Ubuntu Corpus) helps the model learn contextualized representations of words that do not appear in a general corpus (e.g., English Wikipedia). Experimental results show that our approach achieves a new state of the art on two response selection benchmarks (i.e., Ubuntu Corpus V1 and Advising Corpus), with performance improvements of 5.9% and 6% in R@1, respectively.
1412.3579
Jan Bergstra
Jan A. Bergstra
Personal Multi-threading
null
null
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-threading allows agents to pursue a heterogeneous collection of tasks in an orderly manner. The view of multi-threading that emerges from thread algebra is applied to the case where a single agent, who may be human, maintains a hierarchical multithread as an architecture of its own activities.
[ { "created": "Thu, 11 Dec 2014 09:06:40 GMT", "version": "v1" } ]
2014-12-12
[ [ "Bergstra", "Jan A.", "" ] ]
Multi-threading allows agents to pursue a heterogeneous collection of tasks in an orderly manner. The view of multi-threading that emerges from thread algebra is applied to the case where a single agent, who may be human, maintains a hierarchical multithread as an architecture of its own activities.
2112.12182
Duo Wang
Duo Wang, Salah Karout
Fine-grained Multi-Modal Self-Supervised Learning
Accepted at BMVC 2021
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-modal self-supervised learning from videos has been shown to improve a model's performance on various downstream tasks. However, such self-supervised pre-training requires large batch sizes and a large amount of computational resources due to the noise present in uncurated data. This is partly because the prevalent training scheme operates in a coarse-grained setting, in which vectors representing whole video clips or natural-language sentences are used for computing similarity. Such a scheme makes training noisy, as parts of a video clip can be entirely uncorrelated with the other-modality input, such as a text description. In this paper, we propose a fine-grained multi-modal self-supervised training scheme that computes the similarity between embeddings at a finer scale (such as individual feature-map embeddings and embeddings of phrases), and uses attention mechanisms to reduce the weighting of noisy pairs in the loss function. We show that with the proposed pre-training scheme, we can train smaller models, with smaller batch sizes and much less computational resources, to achieve downstream-task performance comparable to the state of the art for tasks including action recognition and text-image retrieval.
[ { "created": "Wed, 22 Dec 2021 19:17:45 GMT", "version": "v1" } ]
2021-12-24
[ [ "Wang", "Duo", "" ], [ "Karout", "Salah", "" ] ]
Multi-modal self-supervised learning from videos has been shown to improve a model's performance on various downstream tasks. However, such self-supervised pre-training requires large batch sizes and a large amount of computational resources due to the noise present in uncurated data. This is partly because the prevalent training scheme operates in a coarse-grained setting, in which vectors representing whole video clips or natural-language sentences are used for computing similarity. Such a scheme makes training noisy, as parts of a video clip can be entirely uncorrelated with the other-modality input, such as a text description. In this paper, we propose a fine-grained multi-modal self-supervised training scheme that computes the similarity between embeddings at a finer scale (such as individual feature-map embeddings and embeddings of phrases), and uses attention mechanisms to reduce the weighting of noisy pairs in the loss function. We show that with the proposed pre-training scheme, we can train smaller models, with smaller batch sizes and much less computational resources, to achieve downstream-task performance comparable to the state of the art for tasks including action recognition and text-image retrieval.
2203.11795
Rob Bisseling
Thomas Koopman and Rob H. Bisseling
Minimizing communication in the multidimensional FFT
23 pages, 3 figures. The new version has mainly added results in section 4.2 for the package heFFT, following referee comments. Furthermore, small linguistic changes have been made to render the arXiv version identical in text to the final published journal version
SIAM Journal on Scientific Computing, Volume 45, Number 6, pp. C330-C347 (2023)
10.1137/22M1487242
null
cs.DC cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a parallel algorithm for the fast Fourier transform (FFT) in higher dimensions. This algorithm generalizes the cyclic-to-cyclic one-dimensional parallel algorithm to a cyclic-to-cyclic multidimensional parallel algorithm while retaining the property of needing only a single all-to-all communication step. This is under the constraint that we use at most $\sqrt{N}$ processors for an FFT on an array with a total of $N$ elements, irrespective of the dimension $d$ or the shape of the array. The only assumption we make is that $N$ is sufficiently composite. Our algorithm starts and ends in the same data distribution. We present our multidimensional implementation FFTU which utilizes the sequential FFTW program for its local FFTs, and which can handle any dimension $d$. We obtain experimental results for $d\leq 5$ using MPI on up to 4096 cores of the supercomputer Snellius, comparing FFTU with the parallel FFTW program and with PFFT and heFFTe. These results show that FFTU is competitive with the state of the art and that it allows one to use a larger number of processors, while keeping communication limited to a single all-to-all operation. For arrays of size $1024^3$ and $64^5$, FFTU achieves a speedup of a factor 149 and 176, respectively, on 4096 processors.
[ { "created": "Tue, 22 Mar 2022 15:01:35 GMT", "version": "v1" }, { "created": "Mon, 11 Dec 2023 09:36:58 GMT", "version": "v2" } ]
2023-12-12
[ [ "Koopman", "Thomas", "" ], [ "Bisseling", "Rob H.", "" ] ]
We present a parallel algorithm for the fast Fourier transform (FFT) in higher dimensions. This algorithm generalizes the cyclic-to-cyclic one-dimensional parallel algorithm to a cyclic-to-cyclic multidimensional parallel algorithm while retaining the property of needing only a single all-to-all communication step. This is under the constraint that we use at most $\sqrt{N}$ processors for an FFT on an array with a total of $N$ elements, irrespective of the dimension $d$ or the shape of the array. The only assumption we make is that $N$ is sufficiently composite. Our algorithm starts and ends in the same data distribution. We present our multidimensional implementation FFTU which utilizes the sequential FFTW program for its local FFTs, and which can handle any dimension $d$. We obtain experimental results for $d\leq 5$ using MPI on up to 4096 cores of the supercomputer Snellius, comparing FFTU with the parallel FFTW program and with PFFT and heFFTe. These results show that FFTU is competitive with the state of the art and that it allows one to use a larger number of processors, while keeping communication limited to a single all-to-all operation. For arrays of size $1024^3$ and $64^5$, FFTU achieves a speedup of a factor 149 and 176, respectively, on 4096 processors.
2210.04230
Victor Croisfelt MSc
Victor Croisfelt and Fabio Saggese and Israel Leyva-Mayorga and Rados{\l}aw Kotaba and Gabriele Gradoni and Petar Popovski
Random Access Protocol with Channel Oracle Enabled by a Reconfigurable Intelligent Surface
14 pages, 7 figures, journal paper
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The widespread adoption of Reconfigurable Intelligent Surfaces (RISs) in future practical wireless systems is critically dependent on the integration of the RIS into higher-layer protocols beyond the physical (PHY) one, an issue that has received minimal attention in the research literature. In light of this, we consider a classical random access (RA) problem, where uncoordinated users' equipment (UEs) transmit sporadically to an access point (AP). Differently from previous works, we consider how a RIS can be integrated into the design of new medium access control (MAC) layer protocols to solve such a problem. We consider that the AP is able to control a RIS to change how its reflective elements are configured, namely, the RIS configurations. Thus, the RIS can be opportunistically controlled to favor the transmission of some of the UEs without the need to explicitly perform channel estimation (CHEST). We embrace this observation and propose a RIS-assisted RA protocol composed of two modules: Channel Oracle and Access. During the channel oracle phase, the UEs learn how the RIS configurations affect their channel conditions. During the access phase, the UEs tailor their access policies using the channel oracle knowledge. Our proposed RIS-assisted protocol is able to increase the expected throughput by approximately 60% in comparison to the slotted ALOHA (S-ALOHA) protocol.
[ { "created": "Sun, 9 Oct 2022 11:23:27 GMT", "version": "v1" }, { "created": "Mon, 17 Apr 2023 14:41:39 GMT", "version": "v2" } ]
2023-04-18
[ [ "Croisfelt", "Victor", "" ], [ "Saggese", "Fabio", "" ], [ "Leyva-Mayorga", "Israel", "" ], [ "Kotaba", "Radosław", "" ], [ "Gradoni", "Gabriele", "" ], [ "Popovski", "Petar", "" ] ]
The widespread adoption of Reconfigurable Intelligent Surfaces (RISs) in future practical wireless systems is critically dependent on the integration of the RIS into higher-layer protocols beyond the physical (PHY) one, an issue that has received minimal attention in the research literature. In light of this, we consider a classical random access (RA) problem, where uncoordinated users' equipment (UEs) transmit sporadically to an access point (AP). Differently from previous works, we consider how a RIS can be integrated into the design of new medium access control (MAC) layer protocols to solve such a problem. We consider that the AP is able to control a RIS to change how its reflective elements are configured, namely, the RIS configurations. Thus, the RIS can be opportunistically controlled to favor the transmission of some of the UEs without the need to explicitly perform channel estimation (CHEST). We embrace this observation and propose a RIS-assisted RA protocol composed of two modules: Channel Oracle and Access. During the channel oracle phase, the UEs learn how the RIS configurations affect their channel conditions. During the access phase, the UEs tailor their access policies using the channel oracle knowledge. Our proposed RIS-assisted protocol is able to increase the expected throughput by approximately 60% in comparison to the slotted ALOHA (S-ALOHA) protocol.
2309.01237
Alexander Kolpakov
Alexander Kolpakov, A. Alistair Rocke
The Information Geometry of UMAP
12 pages, 2 figures, 3 tables ; Github repo (https://github.com/sashakolpakov/info-geometry-umap) ; to appear in Le Matematiche
null
null
null
cs.CG cs.DM cs.IT math.GT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note we highlight some connections of UMAP to the basic principles of Information Geometry. Originally, UMAP was derived from Category Theory observations. However, we posit that it also has a natural geometric interpretation.
[ { "created": "Sun, 3 Sep 2023 18:10:00 GMT", "version": "v1" }, { "created": "Sat, 9 Sep 2023 09:31:41 GMT", "version": "v2" }, { "created": "Fri, 15 Mar 2024 09:55:54 GMT", "version": "v3" }, { "created": "Tue, 30 Apr 2024 19:35:10 GMT", "version": "v4" }, { "created": "Sun, 2 Jun 2024 17:06:41 GMT", "version": "v5" }, { "created": "Tue, 25 Jun 2024 17:23:10 GMT", "version": "v6" }, { "created": "Sat, 20 Jul 2024 22:56:38 GMT", "version": "v7" } ]
2024-07-23
[ [ "Kolpakov", "Alexander", "" ], [ "Rocke", "A. Alistair", "" ] ]
In this note we highlight some connections of UMAP to the basic principles of Information Geometry. Originally, UMAP was derived from Category Theory observations. However, we posit that it also has a natural geometric interpretation.
2311.06840
Bijan Mazaheri
Bijan Mazaheri, Siddharth Jain, Matthew Cook, Jehoshua Bruck
Omitted Labels in Causality: A Study of Paradoxes
null
null
null
null
cs.LG cs.AI cs.IT cs.SI math.IT stat.ME
http://creativecommons.org/licenses/by/4.0/
We explore what we call ``omitted label contexts,'' in which training data is limited to a subset of the possible labels. This setting is common among specialized human experts or specific focused studies. We lean on well-studied paradoxes (Simpson's and Condorcet's) to illustrate the more general difficulties of causal inference in omitted label contexts. Contrary to the fundamental principles on which much of causal inference is built, we show that ``correct'' adjustments sometimes require non-exchangeable treatment and control groups. These pitfalls lead us to study networks of conclusions drawn from different contexts and the structures they form, proving an interesting connection between these networks and social choice theory.
[ { "created": "Sun, 12 Nov 2023 13:31:53 GMT", "version": "v1" }, { "created": "Tue, 13 Feb 2024 18:53:58 GMT", "version": "v2" }, { "created": "Thu, 23 May 2024 12:58:14 GMT", "version": "v3" } ]
2024-05-24
[ [ "Mazaheri", "Bijan", "" ], [ "Jain", "Siddharth", "" ], [ "Cook", "Matthew", "" ], [ "Bruck", "Jehoshua", "" ] ]
We explore what we call ``omitted label contexts,'' in which training data is limited to a subset of the possible labels. This setting is common among specialized human experts or specific focused studies. We lean on well-studied paradoxes (Simpson's and Condorcet's) to illustrate the more general difficulties of causal inference in omitted label contexts. Contrary to the fundamental principles on which much of causal inference is built, we show that ``correct'' adjustments sometimes require non-exchangeable treatment and control groups. These pitfalls lead us to study networks of conclusions drawn from different contexts and the structures they form, proving an interesting connection between these networks and social choice theory.
2407.19160
Stephan Saalfeld
C\'edric Allier, Magdalena C. Schneider, Michael Innerberger, Larissa Heinrich, John A. Bogovic, Stephan Saalfeld
Decomposing heterogeneous dynamical systems with graph neural networks
11 pages, 4 figures, 2 pages appendix, 2 supplementary tables, 18 supplementary figures, 13 videos linked to youtube
null
null
null
cs.LG cs.AI math.DS
http://creativecommons.org/licenses/by/4.0/
Natural physical, chemical, and biological dynamical systems are often complex, with heterogeneous components interacting in diverse ways. We show that graph neural networks can be designed to jointly learn the interaction rules and the structure of the heterogeneity from data alone. The learned latent structure and dynamics can be used to virtually decompose the complex system which is necessary to parameterize and infer the underlying governing equations. We tested the approach with simulation experiments of moving particles and vector fields that interact with each other. While our current aim is to better understand and validate the approach with simulated data, we anticipate it to become a generally applicable tool to uncover the governing rules underlying complex dynamics observed in nature.
[ { "created": "Sat, 27 Jul 2024 04:03:12 GMT", "version": "v1" } ]
2024-07-30
[ [ "Allier", "Cédric", "" ], [ "Schneider", "Magdalena C.", "" ], [ "Innerberger", "Michael", "" ], [ "Heinrich", "Larissa", "" ], [ "Bogovic", "John A.", "" ], [ "Saalfeld", "Stephan", "" ] ]
Natural physical, chemical, and biological dynamical systems are often complex, with heterogeneous components interacting in diverse ways. We show that graph neural networks can be designed to jointly learn the interaction rules and the structure of the heterogeneity from data alone. The learned latent structure and dynamics can be used to virtually decompose the complex system which is necessary to parameterize and infer the underlying governing equations. We tested the approach with simulation experiments of moving particles and vector fields that interact with each other. While our current aim is to better understand and validate the approach with simulated data, we anticipate it to become a generally applicable tool to uncover the governing rules underlying complex dynamics observed in nature.
1711.10467
Cong Ma
Cong Ma, Kaizheng Wang, Yuejie Chi, Yuxin Chen
Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution
accepted to Foundations of Computational Mathematics (FOCM)
Foundations of Computational Mathematics, vol. 20, no. 3, pp. 451-632, June 2020
10.1007/s10208-019-09429-9
null
cs.LG cs.IT math.IT math.OC math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have seen a flurry of activities in designing provably efficient nonconvex procedures for solving statistical estimation problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require proper regularization (e.g. trimming, regularized cost, projection) in order to guarantee fast convergence. For vanilla procedures such as gradient descent, however, prior theory either recommends highly conservative learning rates to avoid overshooting, or completely lacks performance guarantees. This paper uncovers a striking phenomenon in nonconvex optimization: even in the absence of explicit regularization, gradient descent enforces proper regularization implicitly under various statistical models. In fact, gradient descent follows a trajectory staying within a basin that enjoys nice geometry, consisting of points incoherent with the sampling mechanism. This "implicit regularization" feature allows gradient descent to proceed in a far more aggressive fashion without overshooting, which in turn results in substantial computational savings. Focusing on three fundamental statistical estimation problems, i.e. phase retrieval, low-rank matrix completion, and blind deconvolution, we establish that gradient descent achieves near-optimal statistical and computational guarantees without explicit regularization. In particular, by marrying statistical modeling with generic optimization theory, we develop a general recipe for analyzing the trajectories of iterative algorithms via a leave-one-out perturbation argument. As a byproduct, for noisy matrix completion, we demonstrate that gradient descent achieves near-optimal error control --- measured entrywise and by the spectral norm --- which might be of independent interest.
[ { "created": "Tue, 28 Nov 2017 18:53:38 GMT", "version": "v1" }, { "created": "Thu, 14 Dec 2017 10:47:03 GMT", "version": "v2" }, { "created": "Tue, 30 Jul 2019 02:14:54 GMT", "version": "v3" } ]
2020-06-09
[ [ "Ma", "Cong", "" ], [ "Wang", "Kaizheng", "" ], [ "Chi", "Yuejie", "" ], [ "Chen", "Yuxin", "" ] ]
Recent years have seen a flurry of activities in designing provably efficient nonconvex procedures for solving statistical estimation problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require proper regularization (e.g. trimming, regularized cost, projection) in order to guarantee fast convergence. For vanilla procedures such as gradient descent, however, prior theory either recommends highly conservative learning rates to avoid overshooting, or completely lacks performance guarantees. This paper uncovers a striking phenomenon in nonconvex optimization: even in the absence of explicit regularization, gradient descent enforces proper regularization implicitly under various statistical models. In fact, gradient descent follows a trajectory staying within a basin that enjoys nice geometry, consisting of points incoherent with the sampling mechanism. This "implicit regularization" feature allows gradient descent to proceed in a far more aggressive fashion without overshooting, which in turn results in substantial computational savings. Focusing on three fundamental statistical estimation problems, i.e. phase retrieval, low-rank matrix completion, and blind deconvolution, we establish that gradient descent achieves near-optimal statistical and computational guarantees without explicit regularization. In particular, by marrying statistical modeling with generic optimization theory, we develop a general recipe for analyzing the trajectories of iterative algorithms via a leave-one-out perturbation argument. As a byproduct, for noisy matrix completion, we demonstrate that gradient descent achieves near-optimal error control --- measured entrywise and by the spectral norm --- which might be of independent interest.
2105.09613
Aditi Singh
Aditi Singh, Suhas Jayaram Subramanya, Ravishankar Krishnaswamy, Harsha Vardhan Simhadri
FreshDiskANN: A Fast and Accurate Graph-Based ANN Index for Streaming Similarity Search
19 pages, 22 figures
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Approximate nearest neighbor search (ANNS) is a fundamental building block in information retrieval with graph-based indices being the current state-of-the-art and widely used in the industry. Recent advances in graph-based indices have made it possible to index and search billion-point datasets with high recall and millisecond-level latency on a single commodity machine with an SSD. However, existing graph algorithms for ANNS support only static indices that cannot reflect real-time changes to the corpus required by many key real-world scenarios (e.g. index of sentences in documents, email, or a news index). To overcome this drawback, the current industry practice for manifesting updates into such indices is to periodically re-build these indices, which can be prohibitively expensive. In this paper, we present the first graph-based ANNS index that reflects corpus updates into the index in real-time without compromising on search performance. Using update rules for this index, we design FreshDiskANN, a system that can index over a billion points on a workstation with an SSD and limited memory, and support thousands of concurrent real-time inserts, deletes and searches per second each, while retaining $>95\%$ 5-recall@5. This represents a 5-10x reduction in the cost of maintaining freshness in indices when compared to existing methods.
[ { "created": "Thu, 20 May 2021 09:17:13 GMT", "version": "v1" } ]
2021-05-21
[ [ "Singh", "Aditi", "" ], [ "Subramanya", "Suhas Jayaram", "" ], [ "Krishnaswamy", "Ravishankar", "" ], [ "Simhadri", "Harsha Vardhan", "" ] ]
Approximate nearest neighbor search (ANNS) is a fundamental building block in information retrieval with graph-based indices being the current state-of-the-art and widely used in the industry. Recent advances in graph-based indices have made it possible to index and search billion-point datasets with high recall and millisecond-level latency on a single commodity machine with an SSD. However, existing graph algorithms for ANNS support only static indices that cannot reflect real-time changes to the corpus required by many key real-world scenarios (e.g. index of sentences in documents, email, or a news index). To overcome this drawback, the current industry practice for manifesting updates into such indices is to periodically re-build these indices, which can be prohibitively expensive. In this paper, we present the first graph-based ANNS index that reflects corpus updates into the index in real-time without compromising on search performance. Using update rules for this index, we design FreshDiskANN, a system that can index over a billion points on a workstation with an SSD and limited memory, and support thousands of concurrent real-time inserts, deletes and searches per second each, while retaining $>95\%$ 5-recall@5. This represents a 5-10x reduction in the cost of maintaining freshness in indices when compared to existing methods.
1610.02136
Dan Hendrycks
Dan Hendrycks and Kevin Gimpel
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
null
null
cs.NE cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
[ { "created": "Fri, 7 Oct 2016 04:06:01 GMT", "version": "v1" }, { "created": "Thu, 23 Mar 2017 18:11:25 GMT", "version": "v2" }, { "created": "Wed, 3 Oct 2018 07:32:57 GMT", "version": "v3" } ]
2018-10-04
[ [ "Hendrycks", "Dan", "" ], [ "Gimpel", "Kevin", "" ] ]
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
2202.00893
Jaeyeon Ahn
Jaeyeon Ahn, Taehyeon Kim, Seyoung Yun
Mold into a Graph: Efficient Bayesian Optimization over Mixed-Spaces
14 pages, 10 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Real-world optimization problems are generally not just black-box problems, but also involve mixed types of inputs in which discrete and continuous variables coexist. Such mixed-space optimization possesses the primary challenge of modeling complex interactions between the inputs. In this work, we propose a novel yet simple approach that entails exploiting the graph data structure to model the underlying relationship between variables, i.e., variables as nodes and interactions defined by edges. Then, a variational graph autoencoder is used to naturally take the interactions into account. We first provide empirical evidence of the existence of such graph structures and then suggest a joint framework of graph structure learning and latent space optimization to adaptively search for optimal graph connectivity. Experimental results demonstrate that our method shows remarkable performance, exceeding the existing approaches with significant computational efficiency for a number of synthetic and real-world tasks.
[ { "created": "Wed, 2 Feb 2022 07:12:18 GMT", "version": "v1" }, { "created": "Tue, 8 Feb 2022 06:44:25 GMT", "version": "v2" } ]
2022-02-09
[ [ "Ahn", "Jaeyeon", "" ], [ "Kim", "Taehyeon", "" ], [ "Yun", "Seyoung", "" ] ]
Real-world optimization problems are generally not just black-box problems, but also involve mixed types of inputs in which discrete and continuous variables coexist. Such mixed-space optimization possesses the primary challenge of modeling complex interactions between the inputs. In this work, we propose a novel yet simple approach that entails exploiting the graph data structure to model the underlying relationship between variables, i.e., variables as nodes and interactions defined by edges. Then, a variational graph autoencoder is used to naturally take the interactions into account. We first provide empirical evidence of the existence of such graph structures and then suggest a joint framework of graph structure learning and latent space optimization to adaptively search for optimal graph connectivity. Experimental results demonstrate that our method shows remarkable performance, exceeding the existing approaches with significant computational efficiency for a number of synthetic and real-world tasks.
cs/0604082
Farhad Meshkati
Farhad Meshkati, H. Vincent Poor, Stuart C. Schwartz and Radu V. Balan
Energy-Efficient Power and Rate Control with QoS Constraints: A Game-Theoretic Approach
To appear in the proceedings of the 2006 International Wireless Communications and Mobile Computing Conference (IWCMC'06), Vancouver, BC, Canada, July 2006
null
null
null
cs.IT math.IT
null
A game-theoretic model is proposed to study the cross-layer problem of joint power and rate control with quality of service (QoS) constraints in multiple-access networks. In the proposed game, each user seeks to choose its transmit power and rate in a distributed manner in order to maximize its own utility and at the same time satisfy its QoS requirements. The user's QoS constraints are specified in terms of the average source rate and average delay. The utility function considered here measures energy efficiency and the delay includes both transmission and queueing delays. The Nash equilibrium solution for the proposed non-cooperative game is derived and a closed-form expression for the utility achieved at equilibrium is obtained. It is shown that the QoS requirements of a user translate into a "size" for the user which is an indication of the amount of network resources consumed by the user. Using this framework, the tradeoffs among throughput, delay, network capacity and energy efficiency are also studied.
[ { "created": "Fri, 21 Apr 2006 14:46:14 GMT", "version": "v1" }, { "created": "Fri, 5 May 2006 19:37:13 GMT", "version": "v2" } ]
2007-07-16
[ [ "Meshkati", "Farhad", "" ], [ "Poor", "H. Vincent", "" ], [ "Schwartz", "Stuart C.", "" ], [ "Balan", "Radu V.", "" ] ]
A game-theoretic model is proposed to study the cross-layer problem of joint power and rate control with quality of service (QoS) constraints in multiple-access networks. In the proposed game, each user seeks to choose its transmit power and rate in a distributed manner in order to maximize its own utility and at the same time satisfy its QoS requirements. The user's QoS constraints are specified in terms of the average source rate and average delay. The utility function considered here measures energy efficiency and the delay includes both transmission and queueing delays. The Nash equilibrium solution for the proposed non-cooperative game is derived and a closed-form expression for the utility achieved at equilibrium is obtained. It is shown that the QoS requirements of a user translate into a "size" for the user which is an indication of the amount of network resources consumed by the user. Using this framework, the tradeoffs among throughput, delay, network capacity and energy efficiency are also studied.
2106.11394
Felix Biessmann
Felix Biessmann and Viktor Treu
A Turing Test for Transparency
Published in Proceedings of the ICML Workshop on Theoretical Foundations, Criticism, and Application Trends of Explainable AI held in conjunction with the 38th International Conference on Machine Learning (ICML)
null
null
null
cs.AI cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
A central goal of explainable artificial intelligence (XAI) is to improve the trust relationship in human-AI interaction. One assumption underlying research in transparent AI systems is that explanations help to better assess predictions of machine learning (ML) models, for instance by enabling humans to identify wrong predictions more efficiently. Recent empirical evidence however shows that explanations can have the opposite effect: when presenting explanations of ML predictions, humans often tend to trust ML predictions even when these are wrong. Experimental evidence suggests that this effect can be attributed to how intuitive, or human, an AI or explanation appears. This effect challenges the very goal of XAI and implies that responsible usage of transparent AI methods has to consider the ability of humans to distinguish machine-generated from human explanations. Here we propose a quantitative metric for XAI methods based on Turing's imitation game, a Turing Test for Transparency. A human interrogator is asked to judge whether an explanation was generated by a human or by an XAI method. Explanations of XAI methods that cannot be detected by humans above chance performance in this binary classification task pass the test. Detecting such explanations is a requirement for assessing and calibrating the trust relationship in human-AI interaction. We present experimental results on a crowd-sourced text classification task demonstrating that even for basic ML models and XAI approaches most participants were not able to differentiate human from machine-generated explanations. We discuss ethical and practical implications of our results for applications of transparent ML.
[ { "created": "Mon, 21 Jun 2021 20:09:40 GMT", "version": "v1" } ]
2021-06-23
[ [ "Biessmann", "Felix", "" ], [ "Treu", "Viktor", "" ] ]
A central goal of explainable artificial intelligence (XAI) is to improve the trust relationship in human-AI interaction. One assumption underlying research in transparent AI systems is that explanations help to better assess predictions of machine learning (ML) models, for instance by enabling humans to identify wrong predictions more efficiently. Recent empirical evidence however shows that explanations can have the opposite effect: when presenting explanations of ML predictions, humans often tend to trust ML predictions even when these are wrong. Experimental evidence suggests that this effect can be attributed to how intuitive, or human, an AI or explanation appears. This effect challenges the very goal of XAI and implies that responsible usage of transparent AI methods has to consider the ability of humans to distinguish machine-generated from human explanations. Here we propose a quantitative metric for XAI methods based on Turing's imitation game, a Turing Test for Transparency. A human interrogator is asked to judge whether an explanation was generated by a human or by an XAI method. Explanations of XAI methods that cannot be detected by humans above chance performance in this binary classification task pass the test. Detecting such explanations is a requirement for assessing and calibrating the trust relationship in human-AI interaction. We present experimental results on a crowd-sourced text classification task demonstrating that even for basic ML models and XAI approaches most participants were not able to differentiate human from machine-generated explanations. We discuss ethical and practical implications of our results for applications of transparent ML.
2112.02129
Marco Carpentiero
Marco Carpentiero, Vincenzo Matta, Ali H. Sayed
Distributed Adaptive Learning Under Communication Constraints
Submitted for publication
null
null
null
cs.LG cs.IT cs.MA math.IT math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work examines adaptive distributed learning strategies designed to operate under communication constraints. We consider a network of agents that must solve an online optimization problem from continual observation of streaming data. The agents implement a distributed cooperative strategy where each agent is allowed to perform local exchange of information with its neighbors. In order to cope with communication constraints, the exchanged information must be unavoidably compressed. We propose a diffusion strategy nicknamed as ACTC (Adapt-Compress-Then-Combine), which relies on the following steps: i) an adaptation step where each agent performs an individual stochastic-gradient update with constant step-size; ii) a compression step that leverages a recently introduced class of stochastic compression operators; and iii) a combination step where each agent combines the compressed updates received from its neighbors. The distinguishing elements of this work are as follows. First, we focus on adaptive strategies, where constant (as opposed to diminishing) step-sizes are critical to respond in real time to nonstationary variations. Second, we consider the general class of directed graphs and left-stochastic combination policies, which allow us to enhance the interplay between topology and learning. Third, in contrast with related works that assume strong convexity for all individual agents' cost functions, we require strong convexity only at a network level, a condition satisfied even if a single agent has a strongly-convex cost and the remaining agents have non-convex costs. Fourth, we focus on a diffusion (as opposed to consensus) strategy. Under the demanding setting of compressed information, we establish that the ACTC iterates fluctuate around the desired optimizer, achieving remarkable savings in terms of bits exchanged between neighboring agents.
[ { "created": "Fri, 3 Dec 2021 19:23:48 GMT", "version": "v1" } ]
2021-12-07
[ [ "Carpentiero", "Marco", "" ], [ "Matta", "Vincenzo", "" ], [ "Sayed", "Ali H.", "" ] ]
This work examines adaptive distributed learning strategies designed to operate under communication constraints. We consider a network of agents that must solve an online optimization problem from continual observation of streaming data. The agents implement a distributed cooperative strategy where each agent is allowed to perform local exchange of information with its neighbors. In order to cope with communication constraints, the exchanged information must be unavoidably compressed. We propose a diffusion strategy nicknamed as ACTC (Adapt-Compress-Then-Combine), which relies on the following steps: i) an adaptation step where each agent performs an individual stochastic-gradient update with constant step-size; ii) a compression step that leverages a recently introduced class of stochastic compression operators; and iii) a combination step where each agent combines the compressed updates received from its neighbors. The distinguishing elements of this work are as follows. First, we focus on adaptive strategies, where constant (as opposed to diminishing) step-sizes are critical to respond in real time to nonstationary variations. Second, we consider the general class of directed graphs and left-stochastic combination policies, which allow us to enhance the interplay between topology and learning. Third, in contrast with related works that assume strong convexity for all individual agents' cost functions, we require strong convexity only at a network level, a condition satisfied even if a single agent has a strongly-convex cost and the remaining agents have non-convex costs. Fourth, we focus on a diffusion (as opposed to consensus) strategy. Under the demanding setting of compressed information, we establish that the ACTC iterates fluctuate around the desired optimizer, achieving remarkable savings in terms of bits exchanged between neighboring agents.
2404.01184
Mingxin Yu
Mingxin Yu, Chenning Yu, M-Mahdi Naddaf-Sh, Devesh Upadhyay, Sicun Gao, and Chuchu Fan
Efficient Motion Planning for Manipulators with Control Barrier Function-Induced Neural Controller
Accepted by IEEE International Conference on Robotics and Automation (ICRA2024)
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sampling-based motion planning methods for manipulators in crowded environments often suffer from expensive collision checking and high sampling complexity, which make them difficult to use in real time. To address this issue, we propose a new generalizable control barrier function (CBF)-based steering controller to reduce the number of samples needed in a sampling-based motion planner, RRT. Our method combines the strengths of CBF for real-time collision-avoidance control and of RRT for long-horizon motion planning, by using a CBF-induced neural controller (CBF-INC) to generate control signals that steer the system towards configurations sampled by RRT. CBF-INC is learned as a neural network and has two variants handling different inputs: state (signed distance) input and point-cloud input from LiDAR. In the latter case, we also study two different settings: fully and partially observed environmental information. Compared to a manually crafted CBF, which suffers from over-approximating the robot geometry, CBF-INC balances safety and goal-reaching better without being over-conservative. Given state-based input, our CBF-induced neural controller-enhanced RRT (CBF-INC-RRT) increases the success rate by 14% while reducing the number of nodes explored by 30%, compared with vanilla RRT on hard test cases. Given LiDAR input, where vanilla RRT is not directly applicable, we demonstrate that CBF-INC-RRT improves the success rate by 10%, compared with planning with other steering controllers. Our project page with supplementary material is at https://mit-realm.github.io/CBF-INC-RRT-website/.
[ { "created": "Mon, 1 Apr 2024 15:36:39 GMT", "version": "v1" } ]
2024-04-02
[ [ "Yu", "Mingxin", "" ], [ "Yu", "Chenning", "" ], [ "Naddaf-Sh", "M-Mahdi", "" ], [ "Upadhyay", "Devesh", "" ], [ "Gao", "Sicun", "" ], [ "Fan", "Chuchu", "" ] ]
Sampling-based motion planning methods for manipulators in crowded environments often suffer from expensive collision checking and high sampling complexity, which make them difficult to use in real time. To address this issue, we propose a new generalizable control barrier function (CBF)-based steering controller to reduce the number of samples needed in a sampling-based motion planner, RRT. Our method combines the strengths of CBF for real-time collision-avoidance control and of RRT for long-horizon motion planning, by using a CBF-induced neural controller (CBF-INC) to generate control signals that steer the system towards configurations sampled by RRT. CBF-INC is learned as a neural network and has two variants handling different inputs: state (signed distance) input and point-cloud input from LiDAR. In the latter case, we also study two different settings: fully and partially observed environmental information. Compared to a manually crafted CBF, which suffers from over-approximating the robot geometry, CBF-INC balances safety and goal-reaching better without being over-conservative. Given state-based input, our CBF-induced neural controller-enhanced RRT (CBF-INC-RRT) increases the success rate by 14% while reducing the number of nodes explored by 30%, compared with vanilla RRT on hard test cases. Given LiDAR input, where vanilla RRT is not directly applicable, we demonstrate that CBF-INC-RRT improves the success rate by 10%, compared with planning with other steering controllers. Our project page with supplementary material is at https://mit-realm.github.io/CBF-INC-RRT-website/.
0805.1787
Michael Hilker
Michael Hilker and Christoph Schommer
A Network Protection Framework through Artificial Immunity
24 pages, 2 figures
null
null
null
cs.MA cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current network protection systems use a collection of intelligent components - e.g. classifiers or rule-based firewall systems - to detect intrusions and anomalies and to secure a network against viruses, worms, or trojans. However, these systems operate largely in isolation, with little collaboration among the protection components. They offer limited administrative support for maintenance and present a large number of individual single points of failure - an ideal situation for network attacks to succeed. In this work, we discuss the required features, the performance, and the problems of a distributed protection system called {\it SANA}. It consists of a cooperative architecture motivated by the human immune system, in which the components correspond to artificial immune cells that are connected for collaborative work. SANA promises better protection against intruders than commonly known protection systems through adaptive self-management, while using resources efficiently through an intelligent reduction of redundancies. We introduce a library of several novel and commonly used protection components and evaluate the performance of SANA with a proof-of-concept implementation.
[ { "created": "Tue, 13 May 2008 06:51:35 GMT", "version": "v1" } ]
2008-05-14
[ [ "Hilker", "Michael", "" ], [ "Schommer", "Christoph", "" ] ]
Current network protection systems use a collection of intelligent components - e.g. classifiers or rule-based firewall systems - to detect intrusions and anomalies and to secure a network against viruses, worms, or trojans. However, these systems operate largely in isolation, with little collaboration among the protection components. They offer limited administrative support for maintenance and present a large number of individual single points of failure - an ideal situation for network attacks to succeed. In this work, we discuss the required features, the performance, and the problems of a distributed protection system called {\it SANA}. It consists of a cooperative architecture motivated by the human immune system, in which the components correspond to artificial immune cells that are connected for collaborative work. SANA promises better protection against intruders than commonly known protection systems through adaptive self-management, while using resources efficiently through an intelligent reduction of redundancies. We introduce a library of several novel and commonly used protection components and evaluate the performance of SANA with a proof-of-concept implementation.
1108.2893
Xuebin Wu
Xuebin Wu and Zhiyuan Yan
Reduced-Complexity Decoder of Long Reed-Solomon Codes Based on Composite Cyclotomic Fourier Transforms
7 pages, 1 figure
null
10.1109/TSP.2012.2192435
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long Reed-Solomon (RS) codes are desirable for digital communication and storage systems due to their improved error performance, but the high computational complexity of their decoders is a key obstacle to their adoption in practice. As discrete Fourier transforms (DFTs) can evaluate a polynomial at multiple points, efficient DFT algorithms are promising in reducing the computational complexities of syndrome based decoders for long RS codes. In this paper, we first propose partial composite cyclotomic Fourier transforms (CCFTs) and then devise syndrome based decoders for long RS codes over large finite fields based on partial CCFTs. The new decoders based on partial CCFTs achieve a significant saving of computational complexities for long RS codes. Since partial CCFTs have modular and regular structures, the new decoders are suitable for hardware implementations. To further verify and demonstrate the advantages of partial CCFTs, we implement in hardware the syndrome computation block for a $(2720, 2550)$ shortened RS code over GF$(2^{12})$. In comparison to previous results based on Horner's rule, our hardware implementation not only has a smaller gate count, but also achieves much higher throughputs.
[ { "created": "Sun, 14 Aug 2011 17:41:44 GMT", "version": "v1" } ]
2015-05-30
[ [ "Wu", "Xuebin", "" ], [ "Yan", "Zhiyuan", "" ] ]
Long Reed-Solomon (RS) codes are desirable for digital communication and storage systems due to their improved error performance, but the high computational complexity of their decoders is a key obstacle to their adoption in practice. As discrete Fourier transforms (DFTs) can evaluate a polynomial at multiple points, efficient DFT algorithms are promising in reducing the computational complexities of syndrome based decoders for long RS codes. In this paper, we first propose partial composite cyclotomic Fourier transforms (CCFTs) and then devise syndrome based decoders for long RS codes over large finite fields based on partial CCFTs. The new decoders based on partial CCFTs achieve a significant saving of computational complexities for long RS codes. Since partial CCFTs have modular and regular structures, the new decoders are suitable for hardware implementations. To further verify and demonstrate the advantages of partial CCFTs, we implement in hardware the syndrome computation block for a $(2720, 2550)$ shortened RS code over GF$(2^{12})$. In comparison to previous results based on Horner's rule, our hardware implementation not only has a smaller gate count, but also achieves much higher throughputs.
1506.07688
Snikdho Sworov Haque
Snikdho Sworov Haque, Md. Munjure Mowla
Performance Improvement for Papr Reduction in Lte Downlink System with Elliptic Filtering
10 pages, 11 figures, Published in IJCNC. in International Journal of Computer Networks & Communications (IJCNC) Vol.7, No.1, January 2015
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is concerned with improving the peak-to-average power ratio (PAPR) reduction of orthogonal frequency division multiplexing (OFDM) signals using an amplitude clipping and filtering based design. Note that OFDM is a well-established multi-carrier transmission scheme that has been implemented in the long term evolution (LTE) downlink. Nonetheless, a high PAPR remains one of the most troublesome problems with OFDM; consequently, this paper proposes a PAPR reduction procedure based on amplitude clipping and filtering. Here, an IIR band-pass elliptic filter is used after amplitude clipping to reduce the PAPR. The performance of the system in terms of bit error rate (BER) is also analyzed for the new filter-based clipping method. Our results show that the proposed clipping method with the IIR elliptic band-pass filter significantly reduces the PAPR value.
[ { "created": "Thu, 25 Jun 2015 10:17:43 GMT", "version": "v1" } ]
2015-06-26
[ [ "Haque", "Snikdho Sworov", "" ], [ "Mowla", "Md. Munjure", "" ] ]
This paper is concerned with improving the peak-to-average power ratio (PAPR) reduction of orthogonal frequency division multiplexing (OFDM) signals using an amplitude clipping and filtering based design. Note that OFDM is a well-established multi-carrier transmission scheme that has been implemented in the long term evolution (LTE) downlink. Nonetheless, a high PAPR remains one of the most troublesome problems with OFDM; consequently, this paper proposes a PAPR reduction procedure based on amplitude clipping and filtering. Here, an IIR band-pass elliptic filter is used after amplitude clipping to reduce the PAPR. The performance of the system in terms of bit error rate (BER) is also analyzed for the new filter-based clipping method. Our results show that the proposed clipping method with the IIR elliptic band-pass filter significantly reduces the PAPR value.
2309.07103
William Godoy
Pedro Valero-Lara, Alexis Huante, Mustafa Al Lail, William F. Godoy, Keita Teranishi, Prasanna Balaprakash, Jeffrey S. Vetter
Comparing Llama-2 and GPT-3 LLMs for HPC kernels generation
Accepted at LCPC 2023, The 36th International Workshop on Languages and Compilers for Parallel Computing http://www.lcpcworkshop.org/LCPC23/ . 13 pages, 5 figures, 1 table
null
null
null
cs.SE cs.AI cs.DC cs.PL
http://creativecommons.org/licenses/by/4.0/
We evaluate the use of the open-source Llama-2 model for generating well-known, high-performance computing kernels (e.g., AXPY, GEMV, GEMM) on different parallel programming models and languages (e.g., C++: OpenMP, OpenMP Offload, OpenACC, CUDA, HIP; Fortran: OpenMP, OpenMP Offload, OpenACC; Python: numpy, Numba, pyCUDA, cuPy; and Julia: Threads, CUDA.jl, AMDGPU.jl). We built upon our previous work that is based on the OpenAI Codex, which is a descendant of GPT-3, to generate similar kernels with simple prompts via GitHub Copilot. Our goal is to compare the accuracy of Llama-2 and our original GPT-3 baseline by using a similar metric. Llama-2 has a simplified model that shows competitive or even superior accuracy. We also report on the differences between these foundational large language models as generative AI continues to redefine human-computer interactions. Overall, Copilot generates codes that are more reliable but less optimized, whereas codes generated by Llama-2 are less reliable but more optimized when correct.
[ { "created": "Tue, 12 Sep 2023 01:19:54 GMT", "version": "v1" } ]
2023-09-14
[ [ "Valero-Lara", "Pedro", "" ], [ "Huante", "Alexis", "" ], [ "Lail", "Mustafa Al", "" ], [ "Godoy", "William F.", "" ], [ "Teranishi", "Keita", "" ], [ "Balaprakash", "Prasanna", "" ], [ "Vetter", "Jeffrey S.", "" ] ]
We evaluate the use of the open-source Llama-2 model for generating well-known, high-performance computing kernels (e.g., AXPY, GEMV, GEMM) on different parallel programming models and languages (e.g., C++: OpenMP, OpenMP Offload, OpenACC, CUDA, HIP; Fortran: OpenMP, OpenMP Offload, OpenACC; Python: numpy, Numba, pyCUDA, cuPy; and Julia: Threads, CUDA.jl, AMDGPU.jl). We built upon our previous work that is based on the OpenAI Codex, which is a descendant of GPT-3, to generate similar kernels with simple prompts via GitHub Copilot. Our goal is to compare the accuracy of Llama-2 and our original GPT-3 baseline by using a similar metric. Llama-2 has a simplified model that shows competitive or even superior accuracy. We also report on the differences between these foundational large language models as generative AI continues to redefine human-computer interactions. Overall, Copilot generates codes that are more reliable but less optimized, whereas codes generated by Llama-2 are less reliable but more optimized when correct.
2204.00762
Lan Wang
Lan Wang and Vishnu Naresh Boddeti
Do learned representations respect causal relationships?
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data often has many semantic attributes that are causally associated with each other. But do attribute-specific learned representations of data also respect the same causal relations? We answer this question in three steps. First, we introduce NCINet, an approach for observational causal discovery from high-dimensional data. It is trained purely on synthetically generated representations and can be applied to real representations, and is specifically designed to mitigate the domain gap between the two. Second, we apply NCINet to identify the causal relations between image representations of different pairs of attributes with known and unknown causal relations between the labels. For this purpose, we consider image representations learned for predicting attributes on the 3D Shapes, CelebA, and the CASIA-WebFace datasets, which we annotate with multiple multi-class attributes. Third, we analyze the effect on the underlying causal relation between learned representations induced by various design choices in representation learning. Our experiments indicate that (1) NCINet significantly outperforms existing observational causal discovery approaches for estimating the causal relation between pairs of random samples, both in the presence and absence of an unobserved confounder, (2) under controlled scenarios, learned representations can indeed satisfy the underlying causal relations between their respective labels, and (3) the causal relations are positively correlated with the predictive capability of the representations.
[ { "created": "Sat, 2 Apr 2022 04:53:10 GMT", "version": "v1" }, { "created": "Thu, 7 Apr 2022 13:07:41 GMT", "version": "v2" } ]
2022-04-08
[ [ "Wang", "Lan", "" ], [ "Boddeti", "Vishnu Naresh", "" ] ]
Data often has many semantic attributes that are causally associated with each other. But do attribute-specific learned representations of data also respect the same causal relations? We answer this question in three steps. First, we introduce NCINet, an approach for observational causal discovery from high-dimensional data. It is trained purely on synthetically generated representations and can be applied to real representations, and is specifically designed to mitigate the domain gap between the two. Second, we apply NCINet to identify the causal relations between image representations of different pairs of attributes with known and unknown causal relations between the labels. For this purpose, we consider image representations learned for predicting attributes on the 3D Shapes, CelebA, and the CASIA-WebFace datasets, which we annotate with multiple multi-class attributes. Third, we analyze the effect on the underlying causal relation between learned representations induced by various design choices in representation learning. Our experiments indicate that (1) NCINet significantly outperforms existing observational causal discovery approaches for estimating the causal relation between pairs of random samples, both in the presence and absence of an unobserved confounder, (2) under controlled scenarios, learned representations can indeed satisfy the underlying causal relations between their respective labels, and (3) the causal relations are positively correlated with the predictive capability of the representations.
1605.09551
Vincent Tan
Vincent Y. F. Tan, Masahito Hayashi
Analysis of Remaining Uncertainties and Exponents under Various Conditional R\'{e}nyi Entropies
26 pages; 7 figures; To be presented in part at the 2016 International Symposium on Information Theory (ISIT) in Barcelona, Spain
null
null
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we analyze the asymptotics of the normalized remaining uncertainty of a source when a compressed or hashed version of it and correlated side-information is observed. For this system, commonly known as Slepian-Wolf source coding, we establish the optimal (minimum) rate of compression of the source to ensure that the remaining uncertainties vanish. We also study the exponential rate of decay of the remaining uncertainty to zero when the rate is above the optimal rate of compression. In our study, we consider various classes of random universal hash functions. Instead of measuring remaining uncertainties using traditional Shannon information measures, we do so using two forms of the conditional R\'{e}nyi entropy. Among other techniques, we employ new one-shot bounds and the moments of type class enumerator method for these evaluations. We show that these asymptotic results are generalizations of the strong converse exponent and the error exponent of the Slepian-Wolf problem under maximum \emph{a posteriori} (MAP) decoding.
[ { "created": "Tue, 31 May 2016 10:01:17 GMT", "version": "v1" } ]
2016-06-01
[ [ "Tan", "Vincent Y. F.", "" ], [ "Hayashi", "Masahito", "" ] ]
In this paper, we analyze the asymptotics of the normalized remaining uncertainty of a source when a compressed or hashed version of it and correlated side-information is observed. For this system, commonly known as Slepian-Wolf source coding, we establish the optimal (minimum) rate of compression of the source to ensure that the remaining uncertainties vanish. We also study the exponential rate of decay of the remaining uncertainty to zero when the rate is above the optimal rate of compression. In our study, we consider various classes of random universal hash functions. Instead of measuring remaining uncertainties using traditional Shannon information measures, we do so using two forms of the conditional R\'{e}nyi entropy. Among other techniques, we employ new one-shot bounds and the moments of type class enumerator method for these evaluations. We show that these asymptotic results are generalizations of the strong converse exponent and the error exponent of the Slepian-Wolf problem under maximum \emph{a posteriori} (MAP) decoding.
2402.09182
Benjamin Lee
Xingyao Yu and Benjamin Lee and Michael Sedlmair
Design Space of Visual Feedforward And Corrective Feedback in XR-Based Motion Guidance Systems
To appear in ACM CHI 2024
null
10.1145/3613904.3642143
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extended reality (XR) technologies are highly suited to assisting individuals in learning motor skills and movements -- referred to as motion guidance. In motion guidance, the "feedforward" provides instructional cues of the motions that are to be performed, whereas the "feedback" provides cues which help correct mistakes and minimize errors. Designing synergistic feedforward and feedback is vital to providing an effective learning experience, but this interplay between the two has not yet been adequately explored. Based on a survey of the literature, we propose a design space for both motion feedforward and corrective feedback in XR, and describe the interaction effects between them. We identify common design approaches of XR-based motion guidance found in our literature corpus, and discuss them through the lens of our design dimensions. We then discuss additional contextual factors and considerations that influence this design, together with future research opportunities for motion guidance in XR.
[ { "created": "Wed, 14 Feb 2024 13:54:34 GMT", "version": "v1" }, { "created": "Fri, 16 Feb 2024 12:52:39 GMT", "version": "v2" } ]
2024-02-19
[ [ "Yu", "Xingyao", "" ], [ "Lee", "Benjamin", "" ], [ "Sedlmair", "Michael", "" ] ]
Extended reality (XR) technologies are highly suited to assisting individuals in learning motor skills and movements -- referred to as motion guidance. In motion guidance, the "feedforward" provides instructional cues of the motions that are to be performed, whereas the "feedback" provides cues which help correct mistakes and minimize errors. Designing synergistic feedforward and feedback is vital to providing an effective learning experience, but this interplay between the two has not yet been adequately explored. Based on a survey of the literature, we propose a design space for both motion feedforward and corrective feedback in XR, and describe the interaction effects between them. We identify common design approaches of XR-based motion guidance found in our literature corpus, and discuss them through the lens of our design dimensions. We then discuss additional contextual factors and considerations that influence this design, together with future research opportunities for motion guidance in XR.
2012.01403
Minke Xiu
Minke Xiu, Ellis E. Eghan, Zhen Ming (Jack) Jiang, Bram Adams
Empirical Study on the Software Engineering Practices in Open Source ML Package Repositories
null
null
null
null
cs.SE cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent advances in Artificial Intelligence (AI), especially in Machine Learning (ML), have introduced various practical applications (e.g., virtual personal assistants and autonomous cars) that enhance the experience of everyday users. However, modern ML technologies like Deep Learning require considerable technical expertise and resources to develop, train and deploy such models, making effective reuse of the ML models a necessity. Such discovery and reuse by practitioners and researchers are being addressed by public ML package repositories, which bundle up pre-trained models into packages for publication. Since such repositories are a recent phenomenon, there is no empirical data on their current state and challenges. Hence, this paper conducts an exploratory study that analyzes the structure and contents of two popular ML package repositories, TFHub and PyTorch Hub, comparing their information elements (features and policies), package organization, package manager functionalities and usage contexts against popular software package repositories (npm, PyPI, and CRAN). Through these studies, we have identified unique SE practices and challenges for sharing ML packages. These findings and implications would be useful for data scientists, researchers and software developers who intend to use these shared ML packages.
[ { "created": "Wed, 2 Dec 2020 18:52:56 GMT", "version": "v1" }, { "created": "Tue, 8 Dec 2020 16:02:00 GMT", "version": "v2" } ]
2020-12-09
[ [ "Xiu", "Minke", "" ], [ "Eghan", "Ellis E.", "" ], [ "Jiang", "Zhen Ming (Jack)", "" ], [ "Adams", "Bram", "" ] ]
Recent advances in Artificial Intelligence (AI), especially in Machine Learning (ML), have introduced various practical applications (e.g., virtual personal assistants and autonomous cars) that enhance the experience of everyday users. However, modern ML technologies like Deep Learning require considerable technical expertise and resources to develop, train and deploy such models, making effective reuse of the ML models a necessity. Such discovery and reuse by practitioners and researchers are being addressed by public ML package repositories, which bundle up pre-trained models into packages for publication. Since such repositories are a recent phenomenon, there is no empirical data on their current state and challenges. Hence, this paper conducts an exploratory study that analyzes the structure and contents of two popular ML package repositories, TFHub and PyTorch Hub, comparing their information elements (features and policies), package organization, package manager functionalities and usage contexts against popular software package repositories (npm, PyPI, and CRAN). Through these studies, we have identified unique SE practices and challenges for sharing ML packages. These findings and implications would be useful for data scientists, researchers and software developers who intend to use these shared ML packages.
1708.05095
Rodrigo Lobos
Rodrigo A. Lobos, Tae Hyung Kim, W. Scott Hoge, Justin P. Haldar
Navigator-free EPI Ghost Correction with Structured Low-Rank Matrix Models: New Theory and Methods
13 pages, 9 figures ; Submitted to IEEE Transactions on Medical Imaging
null
10.1109/TMI.2018.2822053
null
cs.CV cs.IT math.IT physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structured low-rank matrix models have previously been introduced to enable calibrationless MR image reconstruction from sub-Nyquist data, and such ideas have recently been extended to enable navigator-free echo-planar imaging (EPI) ghost correction. This paper presents novel theoretical analysis which shows that, because of uniform subsampling, the structured low-rank matrix optimization problems for EPI data will always have either undesirable or non-unique solutions in the absence of additional constraints. This theory leads us to recommend and investigate problem formulations for navigator-free EPI that incorporate side information from either image-domain or k-space domain parallel imaging methods. The importance of using nonconvex low-rank matrix regularization is also identified. We demonstrate using phantom and \emph{in vivo} data that the proposed methods are able to eliminate ghost artifacts for several navigator-free EPI acquisition schemes, obtaining better performance in comparison to state-of-the-art methods across a range of different scenarios. Results are shown for both single-channel acquisition and highly accelerated multi-channel acquisition.
[ { "created": "Wed, 16 Aug 2017 22:11:50 GMT", "version": "v1" }, { "created": "Mon, 20 Nov 2017 02:57:45 GMT", "version": "v2" }, { "created": "Tue, 6 Mar 2018 01:41:59 GMT", "version": "v3" } ]
2019-02-11
[ [ "Lobos", "Rodrigo A.", "" ], [ "Kim", "Tae Hyung", "" ], [ "Hoge", "W. Scott", "" ], [ "Haldar", "Justin P.", "" ] ]
Structured low-rank matrix models have previously been introduced to enable calibrationless MR image reconstruction from sub-Nyquist data, and such ideas have recently been extended to enable navigator-free echo-planar imaging (EPI) ghost correction. This paper presents novel theoretical analysis which shows that, because of uniform subsampling, the structured low-rank matrix optimization problems for EPI data will always have either undesirable or non-unique solutions in the absence of additional constraints. This theory leads us to recommend and investigate problem formulations for navigator-free EPI that incorporate side information from either image-domain or k-space domain parallel imaging methods. The importance of using nonconvex low-rank matrix regularization is also identified. We demonstrate using phantom and \emph{in vivo} data that the proposed methods are able to eliminate ghost artifacts for several navigator-free EPI acquisition schemes, obtaining better performance in comparison to state-of-the-art methods across a range of different scenarios. Results are shown for both single-channel acquisition and highly accelerated multi-channel acquisition.
2302.02754
Shuche Wang
Shuche Wang, Van Khu Vu, Vincent Y. F. Tan
Codes for Correcting $t$ Limited-Magnitude Sticky Deletions
arXiv admin note: substantial text overlap with arXiv:2301.11680
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Codes for correcting sticky insertions/deletions and limited-magnitude errors have attracted significant attention due to their applications in flash memories, racetrack memories, and DNA data storage systems. In this paper, we first consider the error type of $t$-sticky deletions with $\ell$-limited-magnitude and propose a non-systematic code for correcting this type of error with redundancy $2t(1-1/p)\cdot\log(n+1)+O(1)$, where $p$ is the smallest prime larger than $\ell+1$. Next, we present a systematic code construction with an efficient encoding and decoding algorithm with redundancy $\frac{\lceil2t(1-1/p)\rceil\cdot\lceil\log p\rceil}{\log p} \log(n+1)+O(\log\log n)$, where $p$ is the smallest prime larger than $\ell+1$.
[ { "created": "Mon, 6 Feb 2023 13:01:51 GMT", "version": "v1" } ]
2023-02-07
[ [ "Wang", "Shuche", "" ], [ "Vu", "Van Khu", "" ], [ "Tan", "Vincent Y. F.", "" ] ]
Codes for correcting sticky insertions/deletions and limited-magnitude errors have attracted significant attention due to their applications in flash memories, racetrack memories, and DNA data storage systems. In this paper, we first consider the error type of $t$-sticky deletions with $\ell$-limited-magnitude and propose a non-systematic code for correcting this type of error with redundancy $2t(1-1/p)\cdot\log(n+1)+O(1)$, where $p$ is the smallest prime larger than $\ell+1$. Next, we present a systematic code construction with an efficient encoding and decoding algorithm with redundancy $\frac{\lceil2t(1-1/p)\rceil\cdot\lceil\log p\rceil}{\log p} \log(n+1)+O(\log\log n)$, where $p$ is the smallest prime larger than $\ell+1$.
2309.05845
Mengjia Niu
Mengjia Niu, Yuchen Zhao, Hamed Haddadi
Effective Abnormal Activity Detection on Multivariate Time Series Healthcare Data
Poster accepted by the 29th Annual International Conference On Mobile Computing And Networking (ACM MobiCom 2023)
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multivariate time series (MTS) data collected from multiple sensors provide the potential for accurate abnormal activity detection in smart healthcare scenarios. However, anomalies exhibit diverse patterns and become unnoticeable in MTS data. Consequently, achieving accurate anomaly detection is challenging since we have to capture both temporal dependencies of time series and inter-relationships among variables. To address this problem, we propose a Residual-based Anomaly Detection approach, Rs-AD, for effective representation learning and abnormal activity detection. We evaluate our scheme on a real-world gait dataset and the experimental results demonstrate an F1 score of 0.839.
[ { "created": "Mon, 11 Sep 2023 22:08:09 GMT", "version": "v1" } ]
2023-09-13
[ [ "Niu", "Mengjia", "" ], [ "Zhao", "Yuchen", "" ], [ "Haddadi", "Hamed", "" ] ]
1802.05176
Paulo Ferreira
Paulo Ferreira
Sampling Superquadric Point Clouds with Normals
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Superquadrics provide a compact representation of common shapes and have been used both for object/surface modelling in computer graphics and as object-part representation in computer vision and robotics. Superquadrics refer to a family of shapes: here we deal with the superellipsoids and superparaboloids. Due to the strong non-linearities involved in the equations, uniform or close-to-uniform sampling is not attainable through a naive approach of direct sampling from the parametric formulation. This is especially true for more `cubic' superquadrics (with shape parameters close to $0.1$). We extend a previous solution of 2D close-to-uniform sampling of superellipses to the superellipsoid (3D) case and derive our own for the superparaboloid. Additionally, we are able to provide normals for each sampled point. To the best of our knowledge, this is the first complete approach for close-to-uniform sampling of superellipsoids and superparaboloids in one single framework. We present derivations, pseudocode and qualitative and quantitative results using our code, which is available online.
[ { "created": "Wed, 14 Feb 2018 16:04:27 GMT", "version": "v1" } ]
2018-02-15
[ [ "Ferreira", "Paulo", "" ] ]
2107.02826
Shohin Mukherjee
Shohin Mukherjee, Sandip Aine, Maxim Likhachev
MPLP: Massively Parallelized Lazy Planning
IEEE Robotics and Automation Letters (RA-L) 2022
in IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6067-6074, July 2022
10.1109/LRA.2022.3157544
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lazy search algorithms have been developed to efficiently solve planning problems in domains where the computational effort is dominated by the cost of edge evaluation. The existing algorithms operate by intelligently balancing computational effort between searching the graph and evaluating edges. However, they are designed to run as a single process and do not leverage the multithreading capability of modern processors. In this work, we propose a massively parallelized, bounded suboptimal, lazy search algorithm (MPLP) that harnesses modern multi-core processors. In MPLP, searching of the graph and edge evaluations are performed completely asynchronously in parallel, leading to a drastic improvement in planning time. We validate the proposed algorithm in two different planning domains: 1) motion planning for 3D humanoid navigation and 2) task and motion planning for a robotic assembly task. We show that MPLP outperforms the state-of-the-art lazy search as well as parallel search algorithms. The open-source code for MPLP is available here: https://github.com/shohinm/parallel_search
[ { "created": "Tue, 6 Jul 2021 18:17:02 GMT", "version": "v1" }, { "created": "Thu, 2 Sep 2021 20:44:26 GMT", "version": "v2" }, { "created": "Mon, 28 Feb 2022 21:17:45 GMT", "version": "v3" }, { "created": "Fri, 13 Jan 2023 02:06:23 GMT", "version": "v4" } ]
2023-01-16
[ [ "Mukherjee", "Shohin", "" ], [ "Aine", "Sandip", "" ], [ "Likhachev", "Maxim", "" ] ]
1601.00846
Mohammad Khodaei
Mohammad Khodaei, Hongyu Jin, and Panos Papadimitratos
Towards Deploying a Scalable & Robust Vehicular Identity and Credential Management Infrastructure
8 pages, 13 figures, IEEE Vehicular Networking Conference (VNC). IEEE VNC, Dec. 2014, pp. 33-40
null
10.1109/VNC.2014.7013306
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming Vehicular Communication (VC) systems. There is a growing consensus towards deploying a Vehicular Public-Key Infrastructure (VPKI) that enables pseudonymous authentication, with standardization efforts in that direction. However, there are still significant technical issues that remain unresolved. Existing proposals for instantiating the VPKI either need additional detailed specifications or enhanced security and privacy features. Equally important, there is limited experimental work that establishes the VPKI efficiency and scalability. In this paper, we are concerned with exactly these issues. We leverage the common VPKI approach and contribute an enhanced system with precisely defined, novel features that improve its resilience and user privacy protection. In particular, we depart from the common assumption that the VPKI entities are fully trusted and we improve user privacy in the face of an honest-but-curious security infrastructure. Moreover, we fully implement our VPKI, in a standard-compliant manner, and we perform an extensive evaluation. Along with stronger protection and richer functionality, our system achieves very significant performance improvement over prior systems - contributing the most advanced VPKI towards deployment.
[ { "created": "Tue, 5 Jan 2016 14:35:24 GMT", "version": "v1" } ]
2016-01-06
[ [ "Khodaei", "Mohammad", "" ], [ "Jin", "Hongyu", "" ], [ "Papadimitratos", "Panos", "" ] ]
1905.07339
Hang Zou
Hang Zou, Chao Zhang, Samson Lasaulce, Lucas Saludjian and Patrick Panciatici
Decision-Oriented Communications: Application to Energy-Efficient Resource Allocation
null
WINCOM2018
10.1109/WINCOM.2018.8629632
null
cs.LG cs.NI stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
In this paper, we introduce the problem of decision-oriented communications, that is, the goal of the source is to send the right amount of information in order for the intended destination to execute a task. More specifically, we restrict our attention to how the source should quantize information so that the destination can maximize a utility function which represents the task to be executed only knowing the quantized information. For example, for utility functions under the form $u\left(\boldsymbol{x};\ \boldsymbol{g}\right)$, $\boldsymbol{x}$ might represent a decision in terms of using some radio resources and $\boldsymbol{g}$ the system state which is only observed through its quantized version $Q(\boldsymbol{g})$. Both in the case where the utility function is known and the case where it is only observed through its realizations, we provide solutions to determine such a quantizer. We show how this approach applies to energy-efficient power allocation. In particular, it is seen that quantizing the state very roughly is perfectly suited to sum-rate-type function maximization, whereas energy-efficiency metrics are more sensitive to imperfections.
[ { "created": "Fri, 17 May 2019 15:55:07 GMT", "version": "v1" } ]
2019-05-20
[ [ "Zou", "Hang", "" ], [ "Zhang", "Chao", "" ], [ "Lasaulce", "Samson", "" ], [ "Saludjian", "Lucas", "" ], [ "Panciatici", "Patrick", "" ] ]
2403.17826
Marcel Steinmetz
Gregor Behnke, Marcel Steinmetz
On the Computational Complexity of Stackelberg Planning and Meta-Operator Verification: Technical Report
Presented at ICAPS24
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Stackelberg planning is a recently introduced single-turn two-player adversarial planning model, where two players are acting in a joint classical planning task, the objective of the first player being hampering the second player from achieving its goal. This places the Stackelberg planning problem somewhere between classical planning and general combinatorial two-player games. But, where exactly? All investigations of Stackelberg planning so far focused on practical aspects. We close this gap by conducting the first theoretical complexity analysis of Stackelberg planning. We show that in general Stackelberg planning is actually no harder than classical planning. Under a polynomial plan-length restriction, however, Stackelberg planning is a level higher up in the polynomial complexity hierarchy, suggesting that compilations into classical planning come with a worst-case exponential plan-length increase. In attempts to identify tractable fragments, we further study its complexity under various planning task restrictions, showing that Stackelberg planning remains intractable where classical planning is not. We finally inspect the complexity of meta-operator verification, a problem that has been recently connected to Stackelberg planning.
[ { "created": "Tue, 26 Mar 2024 16:06:33 GMT", "version": "v1" } ]
2024-03-27
[ [ "Behnke", "Gregor", "" ], [ "Steinmetz", "Marcel", "" ] ]
2201.02304
Yan Shipeng
Shipeng Yan, Songyang Zhang, Xuming He
Budget-aware Few-shot Learning via Graph Convolutional Network
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples. A common problem setting in few-shot classification assumes a random sampling strategy in acquiring data labels, which is inefficient in practical applications. In this work, we introduce a new budget-aware few-shot learning problem that not only aims to learn novel object categories, but also needs to select informative examples to annotate in order to achieve data efficiency. We develop a meta-learning strategy for our budget-aware few-shot learning task, which jointly learns a novel data selection policy based on a Graph Convolutional Network (GCN) and an example-based few-shot classifier. Our selection policy computes a context-sensitive representation for each unlabeled data point by graph message passing, which is then used to predict an informativeness score for sequential selection. We validate our method by extensive experiments on the mini-ImageNet, tiered-ImageNet and Omniglot datasets. The results show our few-shot learning strategy outperforms baselines by a sizable margin, which demonstrates the efficacy of our method.
[ { "created": "Fri, 7 Jan 2022 02:46:35 GMT", "version": "v1" } ]
2022-01-10
[ [ "Yan", "Shipeng", "" ], [ "Zhang", "Songyang", "" ], [ "He", "Xuming", "" ] ]
2209.09493
Marek Gagolewski
Marek Gagolewski
A framework for benchmarking clustering algorithms
This preprint includes some minor corrections
SoftwareX 20 (2022) 101270
10.1016/j.softx.2022.101270
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
The evaluation of clustering algorithms can involve running them on a variety of benchmark problems, and comparing their outputs to the reference, ground-truth groupings provided by experts. Unfortunately, many research papers and graduate theses consider only a small number of datasets. Also, the fact that there can be many equally valid ways to cluster a given problem set is rarely taken into account. In order to overcome these limitations, we have developed a framework whose aim is to introduce a consistent methodology for testing clustering algorithms. Furthermore, we have aggregated, polished, and standardised many clustering benchmark dataset collections referred to across the machine learning and data mining literature, and included new datasets of different dimensionalities, sizes, and cluster types. An interactive datasets explorer, the documentation of the Python API, a description of the ways to interact with the framework from other programming languages such as R or MATLAB, and other details are all provided at <https://clustering-benchmarks.gagolewski.com>.
[ { "created": "Tue, 20 Sep 2022 06:10:41 GMT", "version": "v1" }, { "created": "Mon, 12 Dec 2022 22:53:54 GMT", "version": "v2" }, { "created": "Wed, 25 Oct 2023 22:32:18 GMT", "version": "v3" } ]
2023-10-27
[ [ "Gagolewski", "Marek", "" ] ]
0903.0548
Li Chia Choo
Li-Chia Choo and Kai-Kit Wong
On the 3-Receiver Broadcast Channel with Degraded Message Sets and Confidential Messages
Revised version submitted to IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, bounds to the rate-equivocation region for the general 3-receiver broadcast channel (BC) with degraded message sets are presented for confidential messages to be kept secret from one of the receivers. This model is more general than the 2-receiver BCs with confidential messages with an external wiretapper, and the recently studied 3-receiver degraded BCs with confidential messages, since in the model studied in this paper, the conditions on the receivers are general and the wiretapper receives the common message. Wyner's code partitioning combined with double-binning is used to show the achievable rate tuples. Error probability analysis and equivocation calculation are also provided. The secure coding scheme is sufficient to provide security for the 3-receiver BC with 2 or 3 degraded message sets, for the scenarios: (i) 3 degraded message sets, where the first confidential message is sent to receivers 1 and 2 and the second confidential message is sent to receiver 1, (ii) 2 degraded message sets, where one confidential message is sent to receiver 1, and (iii) 2 degraded message sets, where one confidential message is sent to receivers 1 and 2. The proof for the outer bound is shown for the cases where receiver 1 is more capable than the wiretap receiver 3, for the first two scenarios. Under the condition that both receivers 1 and 2 are less noisy than the wiretap receiver 3, the inner and outer bounds coincide, giving the rate-equivocation region for (iii). In addition, a new outer bound for the general 3-receiver BC with 3 degraded message sets is obtained.
[ { "created": "Tue, 3 Mar 2009 14:53:11 GMT", "version": "v1" }, { "created": "Tue, 27 Oct 2009 16:10:28 GMT", "version": "v2" } ]
2009-10-27
[ [ "Choo", "Li-Chia", "" ], [ "Wong", "Kai-Kit", "" ] ]
2304.09677
Ashkan Mirzaei
Ashkan Mirzaei, Tristan Aumentado-Armstrong, Marcus A. Brubaker, Jonathan Kelly, Alex Levinshtein, Konstantinos G. Derpanis, Igor Gilitschenski
Reference-guided Controllable Inpainting of Neural Radiance Fields
Project Page: https://ashmrz.github.io/reference-guided-3d
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The popularity of Neural Radiance Fields (NeRFs) for view synthesis has led to a desire for NeRF editing tools. Here, we focus on inpainting regions in a view-consistent and controllable manner. In addition to the typical NeRF inputs and masks delineating the unwanted region in each view, we require only a single inpainted view of the scene, i.e., a reference view. We use monocular depth estimators to back-project the inpainted view to the correct 3D positions. Then, via a novel rendering technique, a bilateral solver can construct view-dependent effects in non-reference views, making the inpainted region appear consistent from any view. For non-reference disoccluded regions, which cannot be supervised by the single reference view, we devise a method based on image inpainters to guide both the geometry and appearance. Our approach shows superior performance to NeRF inpainting baselines, with the additional advantage that a user can control the generated scene via a single inpainted image. Project page: https://ashmrz.github.io/reference-guided-3d
[ { "created": "Wed, 19 Apr 2023 14:11:21 GMT", "version": "v1" }, { "created": "Thu, 20 Apr 2023 15:19:12 GMT", "version": "v2" } ]
2023-04-21
[ [ "Mirzaei", "Ashkan", "" ], [ "Aumentado-Armstrong", "Tristan", "" ], [ "Brubaker", "Marcus A.", "" ], [ "Kelly", "Jonathan", "" ], [ "Levinshtein", "Alex", "" ], [ "Derpanis", "Konstantinos G.", "" ], [ "Gilitschenski", "Igor", "" ] ]
2103.07246
Beomyoung Kim
Beomyoung Kim, Sangeun Han, Junmo Kim
Discriminative Region Suppression for Weakly-Supervised Semantic Segmentation
AAAI 2021, Accepted
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weakly-supervised semantic segmentation (WSSS) using image-level labels has recently attracted much attention for reducing annotation costs. Existing WSSS methods utilize localization maps from the classification network to generate pseudo segmentation labels. However, since localization maps obtained from the classifier focus only on sparse discriminative object regions, it is difficult to generate high-quality segmentation labels. To address this issue, we introduce a discriminative region suppression (DRS) module, a simple yet effective method to expand object activation regions. DRS suppresses the attention on discriminative regions and spreads it to adjacent non-discriminative regions, generating dense localization maps. DRS requires few or no additional parameters and can be plugged into any network. Furthermore, we introduce an additional learning strategy to give a self-enhancement of localization maps, named localization map refinement learning. Benefiting from this refinement learning, localization maps are refined and enhanced by recovering some missing parts or removing noise. Due to its simplicity and effectiveness, our approach achieves mIoU 71.4% on the PASCAL VOC 2012 segmentation benchmark using only image-level labels. Extensive experiments demonstrate the effectiveness of our approach. The code is available at https://github.com/qjadud1994/DRS.
[ { "created": "Fri, 12 Mar 2021 12:56:06 GMT", "version": "v1" }, { "created": "Mon, 5 Apr 2021 09:09:41 GMT", "version": "v2" } ]
2021-04-06
[ [ "Kim", "Beomyoung", "" ], [ "Han", "Sangeun", "" ], [ "Kim", "Junmo", "" ] ]
0808.4156
Shirin Jalali
Shirin Jalali, Tsachy Weissman
Rate-Distortion via Markov Chain Monte Carlo
35 pages, 16 figures, Submitted to IEEE Transactions on Information Theory
null
10.1109/ISIT.2008.4595107
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an approach to lossy source coding, utilizing ideas from Gibbs sampling, simulated annealing, and Markov Chain Monte Carlo (MCMC). The idea is to sample a reconstruction sequence from a Boltzmann distribution associated with an energy function that incorporates the distortion between the source and reconstruction, the compressibility of the reconstruction, and the point sought on the rate-distortion curve. To sample from this distribution, we use a `heat bath algorithm': Starting from an initial candidate reconstruction (say the original source sequence), at every iteration, an index i is chosen and the i-th sequence component is replaced by drawing from the conditional probability distribution for that component given all the rest. At the end of this process, the encoder conveys the reconstruction to the decoder using universal lossless compression. The complexity of each iteration is independent of the sequence length and only linearly dependent on a certain context parameter (which grows sub-logarithmically with the sequence length). We show that the proposed algorithms achieve optimum rate-distortion performance in the limits of large number of iterations, and sequence length, when employed on any stationary ergodic source. Experimentation shows promising initial results. Employing our lossy compressors on noisy data, with appropriately chosen distortion measure and level, followed by a simple de-randomization operation, results in a family of denoisers that compares favorably (both theoretically and in practice) with other MCMC-based schemes, and with the Discrete Universal Denoiser (DUDE).
[ { "created": "Fri, 29 Aug 2008 19:23:16 GMT", "version": "v1" }, { "created": "Fri, 7 May 2010 18:02:18 GMT", "version": "v2" } ]
2016-11-17
[ [ "Jalali", "Shirin", "" ], [ "Weissman", "Tsachy", "" ] ]
We propose an approach to lossy source coding that utilizes ideas from Gibbs sampling, simulated annealing, and Markov Chain Monte Carlo (MCMC). The idea is to sample a reconstruction sequence from a Boltzmann distribution associated with an energy function that incorporates the distortion between the source and reconstruction, the compressibility of the reconstruction, and the point sought on the rate-distortion curve. To sample from this distribution, we use a `heat bath algorithm': starting from an initial candidate reconstruction (say, the original source sequence), at every iteration an index i is chosen and the i-th sequence component is replaced by a draw from the conditional probability distribution for that component given all the rest. At the end of this process, the encoder conveys the reconstruction to the decoder using universal lossless compression. The complexity of each iteration is independent of the sequence length and depends only linearly on a certain context parameter (which grows sub-logarithmically with the sequence length). We show that the proposed algorithms achieve optimum rate-distortion performance in the limit of a large number of iterations and large sequence length, when employed on any stationary ergodic source. Experimentation shows promising initial results. Employing our lossy compressors on noisy data, with appropriately chosen distortion measure and level, followed by a simple de-randomization operation, results in a family of denoisers that compares favorably (both theoretically and in practice) with other MCMC-based schemes and with the Discrete Universal Denoiser (DUDE).
2310.01088
Kentaro Mitsui
Kentaro Mitsui, Yukiya Hono, Kei Sawada
Towards human-like spoken dialogue generation between AI agents from written dialogue
18 pages, 8 figures, 9 tables, audio samples: https://rinnakk.github.io/research/publications/CHATS/
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of large language models (LLMs) has made it possible to generate natural written dialogues between two agents. However, generating human-like spoken dialogues from these written dialogues remains challenging. Spoken dialogues have several unique characteristics: they frequently include backchannels and laughter, and the smoothness of turn-taking significantly influences the fluidity of conversation. This study proposes CHATS - CHatty Agents Text-to-Speech - a discrete token-based system designed to generate spoken dialogues based on written dialogues. Our system can generate speech for both the speaker side and the listener side simultaneously, using only the transcription from the speaker side, which eliminates the need for transcriptions of backchannels or laughter. Moreover, CHATS facilitates natural turn-taking; it determines the appropriate duration of silence after each utterance in the absence of overlap, and it initiates the generation of overlapping speech based on the phoneme sequence of the next utterance in case of overlap. Experimental evaluations indicate that CHATS outperforms the text-to-speech baseline, producing spoken dialogues that are more interactive and fluid while retaining clarity and intelligibility.
[ { "created": "Mon, 2 Oct 2023 11:03:20 GMT", "version": "v1" } ]
2023-10-03
[ [ "Mitsui", "Kentaro", "" ], [ "Hono", "Yukiya", "" ], [ "Sawada", "Kei", "" ] ]
The advent of large language models (LLMs) has made it possible to generate natural written dialogues between two agents. However, generating human-like spoken dialogues from these written dialogues remains challenging. Spoken dialogues have several unique characteristics: they frequently include backchannels and laughter, and the smoothness of turn-taking significantly influences the fluidity of conversation. This study proposes CHATS - CHatty Agents Text-to-Speech - a discrete token-based system designed to generate spoken dialogues based on written dialogues. Our system can generate speech for both the speaker side and the listener side simultaneously, using only the transcription from the speaker side, which eliminates the need for transcriptions of backchannels or laughter. Moreover, CHATS facilitates natural turn-taking; it determines the appropriate duration of silence after each utterance in the absence of overlap, and it initiates the generation of overlapping speech based on the phoneme sequence of the next utterance in case of overlap. Experimental evaluations indicate that CHATS outperforms the text-to-speech baseline, producing spoken dialogues that are more interactive and fluid while retaining clarity and intelligibility.
2106.00477
Antti Koskela
Antti Koskela, Mikko A. Heikkil\"a, Antti Honkela
Tight Accounting in the Shuffle Model of Differential Privacy
21 pages, 5 figures
null
null
null
cs.CR cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
The shuffle model of differential privacy is a novel distributed privacy model based on a combination of local privacy mechanisms and a secure shuffler. It has been shown that the additional randomisation provided by the shuffler improves privacy bounds compared to purely local mechanisms. Computing tight bounds, however, is complicated by the complexity introduced by the shuffler. The recently proposed numerical techniques for evaluating $(\varepsilon,\delta)$-differential privacy guarantees have been shown to give tighter bounds than commonly used methods for compositions of various complex mechanisms. In this paper, we show how to obtain accurate bounds for adaptive compositions of general $\varepsilon$-LDP shufflers using the analysis by Feldman et al. (2021) and tight bounds for adaptive compositions of shufflers of $k$-randomised response mechanisms, using the analysis by Balle et al. (2019). We show how to speed up the evaluation of the resulting privacy loss distribution from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$, where $n$ is the number of users, without noticeable change in the resulting $\delta(\varepsilon)$-upper bounds. We also demonstrate the looseness of existing bounds and methods found in the literature, improving previous composition results significantly.
[ { "created": "Tue, 1 Jun 2021 13:30:32 GMT", "version": "v1" }, { "created": "Wed, 3 Nov 2021 14:47:36 GMT", "version": "v2" }, { "created": "Mon, 31 Jan 2022 21:38:37 GMT", "version": "v3" } ]
2022-02-02
[ [ "Koskela", "Antti", "" ], [ "Heikkilä", "Mikko A.", "" ], [ "Honkela", "Antti", "" ] ]
The shuffle model of differential privacy is a novel distributed privacy model based on a combination of local privacy mechanisms and a secure shuffler. It has been shown that the additional randomisation provided by the shuffler improves privacy bounds compared to purely local mechanisms. Computing tight bounds, however, is complicated by the complexity introduced by the shuffler. The recently proposed numerical techniques for evaluating $(\varepsilon,\delta)$-differential privacy guarantees have been shown to give tighter bounds than commonly used methods for compositions of various complex mechanisms. In this paper, we show how to obtain accurate bounds for adaptive compositions of general $\varepsilon$-LDP shufflers using the analysis by Feldman et al. (2021) and tight bounds for adaptive compositions of shufflers of $k$-randomised response mechanisms, using the analysis by Balle et al. (2019). We show how to speed up the evaluation of the resulting privacy loss distribution from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$, where $n$ is the number of users, without noticeable change in the resulting $\delta(\varepsilon)$-upper bounds. We also demonstrate the looseness of existing bounds and methods found in the literature, improving previous composition results significantly.
2005.11040
Yuki Ueda
Yuki Ueda, Takashi Ishio, Akinori Ihara, Kenichi Matsumoto
DevReplay: Automatic Repair with Editable Fix Pattern
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Static analysis tools, or linters, detect violations of source-code conventions to maintain project readability. These tools automatically fix specific violations while developers edit the source code. However, existing tools are designed for the general conventions of programming languages; they do not check project/API-specific conventions. We propose DevReplay, a novel static analysis tool that generates code change patterns by mining the code change history and recommends changes using the matched patterns. Using DevReplay, developers can automatically detect and fix project/API-specific problems in the code editor and in code review. We also evaluate the accuracy of DevReplay using automatic program repair (APR) tool benchmarks and real software, and find that DevReplay resolves more bugs than state-of-the-art APR tools. Finally, we submitted patches to popular open-source projects implemented in different languages, and project reviewers accepted 80% (8 of 10) of the patches. DevReplay is available at https://devreplay.github.io.
[ { "created": "Fri, 22 May 2020 07:36:08 GMT", "version": "v1" } ]
2020-05-25
[ [ "Ueda", "Yuki", "" ], [ "Ishio", "Takashi", "" ], [ "Ihara", "Akinori", "" ], [ "Matsumoto", "Kenichi", "" ] ]
Static analysis tools, or linters, detect violations of source-code conventions to maintain project readability. These tools automatically fix specific violations while developers edit the source code. However, existing tools are designed for the general conventions of programming languages; they do not check project/API-specific conventions. We propose DevReplay, a novel static analysis tool that generates code change patterns by mining the code change history and recommends changes using the matched patterns. Using DevReplay, developers can automatically detect and fix project/API-specific problems in the code editor and in code review. We also evaluate the accuracy of DevReplay using automatic program repair (APR) tool benchmarks and real software, and find that DevReplay resolves more bugs than state-of-the-art APR tools. Finally, we submitted patches to popular open-source projects implemented in different languages, and project reviewers accepted 80% (8 of 10) of the patches. DevReplay is available at https://devreplay.github.io.
1609.00833
Nan Liu
Tianyu Yang, Nan Liu, Wei Kang, Shlomo Shamai
An Upper Bound on the Sum Capacity of the Downlink Multicell Processing with Finite Backhaul Capacity
23 pages, 4 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study upper bounds on the sum capacity of the downlink multicell processing model with finite backhaul capacity for the simple case of 2 base stations and 2 mobile users. The model is a two-user multiple access diamond channel: a first hop from the central processor to the base stations via orthogonal links of finite capacity, and a second hop from the base stations to the mobile users via a Gaussian interference channel. The converse is derived using the converse tools of the multiple access diamond channel and of the Gaussian MIMO broadcast channel. Numerical results show that our upper bound greatly improves upon the existing upper bound in the medium backhaul capacity range, and as a result, the gap between the upper bounds and the sum rate of time-sharing between the known achievable schemes is significantly reduced.
[ { "created": "Sat, 3 Sep 2016 14:52:21 GMT", "version": "v1" } ]
2016-09-06
[ [ "Yang", "Tianyu", "" ], [ "Liu", "Nan", "" ], [ "Kang", "Wei", "" ], [ "Shamai", "Shlomo", "" ] ]
In this paper, we study upper bounds on the sum capacity of the downlink multicell processing model with finite backhaul capacity for the simple case of 2 base stations and 2 mobile users. The model is a two-user multiple access diamond channel: a first hop from the central processor to the base stations via orthogonal links of finite capacity, and a second hop from the base stations to the mobile users via a Gaussian interference channel. The converse is derived using the converse tools of the multiple access diamond channel and of the Gaussian MIMO broadcast channel. Numerical results show that our upper bound greatly improves upon the existing upper bound in the medium backhaul capacity range, and as a result, the gap between the upper bounds and the sum rate of time-sharing between the known achievable schemes is significantly reduced.
2406.20085
Yicheng Chen
Yicheng Chen, Xiangtai Li, Yining Li, Yanhong Zeng, Jianzong Wu, Xiangyu Zhao, Kai Chen
Auto Cherry-Picker: Learning from High-quality Generative Data Driven by Language
19 pages, 7 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion-based models have shown great potential in generating high-quality images with various layouts, which can benefit downstream perception tasks. However, fully automatic layout generation driven only by language, and a suitable metric for measuring multiple generated instances, have not been well explored. In this work, we present Auto Cherry-Picker (ACP), a novel framework that generates high-quality multi-modal training examples to augment perception and multi-modal training. Starting with a simple list of natural language concepts, we prompt large language models (LLMs) to generate a detailed description and design reasonable layouts. Next, we use an off-the-shelf text-to-image model to generate multiple images. Then, the generated data are refined using a comprehensively designed metric to ensure quality. In particular, we present a new metric, the Composite Layout and Image Score (CLIS), to evaluate the generated images fairly. Our synthetic high-quality examples boost performance in various scenarios by customizing the initial concept list, especially in addressing challenges associated with long-tailed distributions and imbalanced datasets. Experimental results on downstream tasks demonstrate that Auto Cherry-Picker can significantly improve the performance of existing models. In addition, we thoroughly investigate the correlation between CLIS and performance gains on downstream tasks, and find that a better CLIS score results in better performance. This finding highlights the potential of CLIS as an evaluation metric for various visual perception and MLLM tasks. Code will be available.
[ { "created": "Fri, 28 Jun 2024 17:53:18 GMT", "version": "v1" } ]
2024-07-01
[ [ "Chen", "Yicheng", "" ], [ "Li", "Xiangtai", "" ], [ "Li", "Yining", "" ], [ "Zeng", "Yanhong", "" ], [ "Wu", "Jianzong", "" ], [ "Zhao", "Xiangyu", "" ], [ "Chen", "Kai", "" ] ]
Diffusion-based models have shown great potential in generating high-quality images with various layouts, which can benefit downstream perception tasks. However, fully automatic layout generation driven only by language, and a suitable metric for measuring multiple generated instances, have not been well explored. In this work, we present Auto Cherry-Picker (ACP), a novel framework that generates high-quality multi-modal training examples to augment perception and multi-modal training. Starting with a simple list of natural language concepts, we prompt large language models (LLMs) to generate a detailed description and design reasonable layouts. Next, we use an off-the-shelf text-to-image model to generate multiple images. Then, the generated data are refined using a comprehensively designed metric to ensure quality. In particular, we present a new metric, the Composite Layout and Image Score (CLIS), to evaluate the generated images fairly. Our synthetic high-quality examples boost performance in various scenarios by customizing the initial concept list, especially in addressing challenges associated with long-tailed distributions and imbalanced datasets. Experimental results on downstream tasks demonstrate that Auto Cherry-Picker can significantly improve the performance of existing models. In addition, we thoroughly investigate the correlation between CLIS and performance gains on downstream tasks, and find that a better CLIS score results in better performance. This finding highlights the potential of CLIS as an evaluation metric for various visual perception and MLLM tasks. Code will be available.
2211.11682
Xiangyang Zhu
Xiangyang Zhu, Renrui Zhang, Bowei He, Ziyu Guo, Ziyao Zeng, Zipeng Qin, Shanghang Zhang, Peng Gao
PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
Code is available at https://github.com/yangyangyang127/PointCLIP_V2
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale pre-trained models have shown promising open-world performance for both vision and language tasks. However, their transferred capacity on 3D point clouds is still limited and constrained to the classification task. In this paper, we first unify CLIP and GPT into a 3D open-world learner, named PointCLIP V2, which fully unleashes their potential for zero-shot 3D classification, segmentation, and detection. To better align 3D data with the pre-trained language knowledge, PointCLIP V2 contains two key designs. On the visual end, we prompt CLIP via a shape projection module to generate more realistic depth maps, narrowing the domain gap between projected point clouds and natural images. On the textual end, we prompt the GPT model to generate 3D-specific text as the input to CLIP's textual encoder. Without any training in 3D domains, our approach significantly surpasses PointCLIP by +42.90%, +40.44%, and +28.75% accuracy on three datasets for zero-shot 3D classification. On top of that, V2 can be extended to few-shot 3D classification, zero-shot 3D part segmentation, and 3D object detection in a simple manner, demonstrating our generalization ability for unified 3D open-world learning.
[ { "created": "Mon, 21 Nov 2022 17:52:43 GMT", "version": "v1" }, { "created": "Sat, 26 Aug 2023 16:14:09 GMT", "version": "v2" } ]
2023-08-29
[ [ "Zhu", "Xiangyang", "" ], [ "Zhang", "Renrui", "" ], [ "He", "Bowei", "" ], [ "Guo", "Ziyu", "" ], [ "Zeng", "Ziyao", "" ], [ "Qin", "Zipeng", "" ], [ "Zhang", "Shanghang", "" ], [ "Gao", "Peng", "" ] ]
Large-scale pre-trained models have shown promising open-world performance for both vision and language tasks. However, their transferred capacity on 3D point clouds is still limited and constrained to the classification task. In this paper, we first unify CLIP and GPT into a 3D open-world learner, named PointCLIP V2, which fully unleashes their potential for zero-shot 3D classification, segmentation, and detection. To better align 3D data with the pre-trained language knowledge, PointCLIP V2 contains two key designs. On the visual end, we prompt CLIP via a shape projection module to generate more realistic depth maps, narrowing the domain gap between projected point clouds and natural images. On the textual end, we prompt the GPT model to generate 3D-specific text as the input to CLIP's textual encoder. Without any training in 3D domains, our approach significantly surpasses PointCLIP by +42.90%, +40.44%, and +28.75% accuracy on three datasets for zero-shot 3D classification. On top of that, V2 can be extended to few-shot 3D classification, zero-shot 3D part segmentation, and 3D object detection in a simple manner, demonstrating our generalization ability for unified 3D open-world learning.
2007.02370
Adel Nabli
Adel Nabli, Margarida Carvalho, Pierre Hosteins
Complexity of the Multilevel Critical Node Problem
null
null
null
null
cs.CC cs.DM cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we analyze a sequential game played on a graph, called the Multilevel Critical Node problem (MCN). A defender and an attacker are the players of this game. The defender starts by preventively interdicting vertices (vaccination) so they cannot be attacked. Then, the attacker infects a subset of non-vaccinated vertices and, finally, the defender reacts with a protection strategy. We provide the first computational complexity results associated with MCN and its subgames. Moreover, by considering unitary, weighted, undirected, and directed graphs, we clarify how the theoretical tractability of those problems varies. Our findings contribute new NP-complete, $\Sigma_2^p$-complete and $\Sigma_3^p$-complete problems. Furthermore, for the last level of the game, the protection stage, we build polynomial-time algorithms for certain graph classes.
[ { "created": "Sun, 5 Jul 2020 15:54:53 GMT", "version": "v1" }, { "created": "Fri, 2 Oct 2020 14:22:48 GMT", "version": "v2" } ]
2020-10-05
[ [ "Nabli", "Adel", "" ], [ "Carvalho", "Margarida", "" ], [ "Hosteins", "Pierre", "" ] ]
In this work, we analyze a sequential game played on a graph, called the Multilevel Critical Node problem (MCN). A defender and an attacker are the players of this game. The defender starts by preventively interdicting vertices (vaccination) so they cannot be attacked. Then, the attacker infects a subset of non-vaccinated vertices and, finally, the defender reacts with a protection strategy. We provide the first computational complexity results associated with MCN and its subgames. Moreover, by considering unitary, weighted, undirected, and directed graphs, we clarify how the theoretical tractability of those problems varies. Our findings contribute new NP-complete, $\Sigma_2^p$-complete and $\Sigma_3^p$-complete problems. Furthermore, for the last level of the game, the protection stage, we build polynomial-time algorithms for certain graph classes.
1605.03079
Wassim Jerbi
Wassim Jerbi, Abderrahmen Guermazi, Hafedh Trabelsi
A novel clustering algorithm for coverage a large scale in WSN
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many applications require coverage of the whole monitored area for long periods of time. Clustering is a way to reduce communication, minimize energy consumption, and organize messages between cluster heads and their members. The exchange of control and data messages between sensor nodes must be minimized to extend the lifetime of the network, given the limited energy resources of the sensors. In this paper, we consider the problem of isolated nodes that are far from the cluster head (CH) and consequently out of its reach. To solve this problem, we propose O-LEACH (Orphan Low Energy Adaptive Clustering Hierarchy), a routing protocol that takes orphan nodes into account: a cluster member can play the role of a gateway, allowing orphan nodes to join the cluster. Our contribution is to elect a cluster head that has enough energy to coordinate with its member nodes and to maintain full coverage for applications that require useful data from the entire area to be covered. The simulation results show that O-LEACH outperforms LEACH in terms of connectivity rate, energy, scalability, and coverage.
[ { "created": "Tue, 10 May 2016 16:16:13 GMT", "version": "v1" } ]
2016-05-11
[ [ "Jerbi", "Wassim", "" ], [ "Guermazi", "Abderrahmen", "" ], [ "Trabelsi", "Hafedh", "" ] ]
Many applications require coverage of the whole monitored area for long periods of time. Clustering is a way to reduce communication, minimize energy consumption, and organize messages between cluster heads and their members. The exchange of control and data messages between sensor nodes must be minimized to extend the lifetime of the network, given the limited energy resources of the sensors. In this paper, we consider the problem of isolated nodes that are far from the cluster head (CH) and consequently out of its reach. To solve this problem, we propose O-LEACH (Orphan Low Energy Adaptive Clustering Hierarchy), a routing protocol that takes orphan nodes into account: a cluster member can play the role of a gateway, allowing orphan nodes to join the cluster. Our contribution is to elect a cluster head that has enough energy to coordinate with its member nodes and to maintain full coverage for applications that require useful data from the entire area to be covered. The simulation results show that O-LEACH outperforms LEACH in terms of connectivity rate, energy, scalability, and coverage.
2011.13200
Sourav Dutta
Silviu Oprea and Sourav Dutta and Haytham Assem
Unsupervised Word Translation Pairing using Refinement based Point Set Registration
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Cross-lingual alignment of word embeddings plays an important role in knowledge transfer across languages, improving machine translation and other multi-lingual applications. Current unsupervised approaches rely on similarities in the geometric structure of word embedding spaces across languages to learn structure-preserving linear transformations using adversarial networks and refinement strategies. However, such techniques, in practice, tend to suffer from instability and convergence issues, requiring tedious fine-tuning for precise parameter setting. This paper proposes BioSpere, a novel framework for unsupervised mapping of bi-lingual word embeddings onto a shared vector space, combining an adversarial initialization and refinement procedure with a point-set registration algorithm used in image processing. We show that our framework alleviates the shortcomings of existing methodologies and is relatively invariant to variable adversarial learning performance, exhibiting robustness in terms of parameter choices and training losses. Experimental evaluation on the parallel dictionary induction task demonstrates state-of-the-art results for our framework on diverse language pairs.
[ { "created": "Thu, 26 Nov 2020 09:51:29 GMT", "version": "v1" } ]
2020-11-30
[ [ "Oprea", "Silviu", "" ], [ "Dutta", "Sourav", "" ], [ "Assem", "Haytham", "" ] ]
Cross-lingual alignment of word embeddings plays an important role in knowledge transfer across languages, improving machine translation and other multi-lingual applications. Current unsupervised approaches rely on similarities in the geometric structure of word embedding spaces across languages to learn structure-preserving linear transformations using adversarial networks and refinement strategies. However, such techniques, in practice, tend to suffer from instability and convergence issues, requiring tedious fine-tuning for precise parameter setting. This paper proposes BioSpere, a novel framework for unsupervised mapping of bi-lingual word embeddings onto a shared vector space, combining an adversarial initialization and refinement procedure with a point-set registration algorithm used in image processing. We show that our framework alleviates the shortcomings of existing methodologies and is relatively invariant to variable adversarial learning performance, exhibiting robustness in terms of parameter choices and training losses. Experimental evaluation on the parallel dictionary induction task demonstrates state-of-the-art results for our framework on diverse language pairs.