Dataset schema (column: type, value-length range):

  id              string   9 – 10
  submitter       string   1 – 64
  authors         string   4 – 20.7k
  title           string   4 – 246
  comments        string   1 – 523
  journal-ref     string   4 – 404
  doi             string   11 – 153
  report-no       string   2 – 254
  categories      string   5 – 98
  license         string   9 distinct values
  orig_abstract   string   14 – 3.35k
  versions        list     1 – 60
  update_date     string   10 – 10
  authors_parsed  list     1 – 1.35k
  abstract        string   11 – 3.34k
1107.0088
Marcel de Carli Silva
Marcel K. de Carli Silva, Nicholas J. A. Harvey, Cristiane M. Sato
Sparse Sums of Positive Semidefinite Matrices
null
null
10.1145/2746241
null
cs.DM cs.DS cs.NA math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently there has been much interest in "sparsifying" sums of rank one matrices: modifying the coefficients such that only a few are nonzero, while approximately preserving the matrix that results from the sum. Results of this sort have found applications in many different areas, including sparsifying graphs. In this paper we consider the more general problem of sparsifying sums of positive semidefinite matrices that have arbitrary rank. We give several algorithms for solving this problem. The first algorithm is based on the method of Batson, Spielman and Srivastava (2009). The second algorithm is based on the matrix multiplicative weights update method of Arora and Kale (2007). We also highlight an interesting connection between these two algorithms. Our algorithms have numerous applications. We show how they can be used to construct graph sparsifiers with auxiliary constraints, sparsifiers of hypergraphs, and sparse solutions to semidefinite programs.
[ { "created": "Fri, 1 Jul 2011 00:36:03 GMT", "version": "v1" }, { "created": "Tue, 18 Oct 2011 03:50:03 GMT", "version": "v2" } ]
2018-01-30
[ [ "Silva", "Marcel K. de Carli", "" ], [ "Harvey", "Nicholas J. A.", "" ], [ "Sato", "Cristiane M.", "" ] ]
Recently there has been much interest in "sparsifying" sums of rank one matrices: modifying the coefficients such that only a few are nonzero, while approximately preserving the matrix that results from the sum. Results of this sort have found applications in many different areas, including sparsifying graphs. In this paper we consider the more general problem of sparsifying sums of positive semidefinite matrices that have arbitrary rank. We give several algorithms for solving this problem. The first algorithm is based on the method of Batson, Spielman and Srivastava (2009). The second algorithm is based on the matrix multiplicative weights update method of Arora and Kale (2007). We also highlight an interesting connection between these two algorithms. Our algorithms have numerous applications. We show how they can be used to construct graph sparsifiers with auxiliary constraints, sparsifiers of hypergraphs, and sparse solutions to semidefinite programs.
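The rank-one setting this abstract builds on can be illustrated with the classic randomized baseline: sample terms v_i v_i^T with probability proportional to their leverage scores and reweight for unbiasedness. This is only a sketch of that sampling route, not the paper's deterministic BSS-style or matrix multiplicative-weights algorithms; all function names here are illustrative.

```python
import numpy as np

def sparsify_rank_one(vectors, num_samples, rng):
    """Randomly sparsify B = sum_i v_i v_i^T by leverage-score sampling.

    Samples terms with probability proportional to tau_i = v_i^T B^+ v_i
    and reweights each sampled term so the sparsified sum is unbiased.
    """
    V = np.asarray(vectors, dtype=float)           # shape (m, n)
    B = V.T @ V                                    # the full sum of rank-one terms
    B_pinv = np.linalg.pinv(B)
    taus = np.einsum('ij,jk,ik->i', V, B_pinv, V)  # leverage scores
    probs = taus / taus.sum()
    idx = rng.choice(len(V), size=num_samples, p=probs)
    # Reweight each sampled term by 1/(num_samples * p_i) for unbiasedness.
    weights = 1.0 / (num_samples * probs[idx])
    B_sparse = (V[idx] * weights[:, None]).T @ V[idx]
    return B_sparse, idx

rng = np.random.default_rng(0)
V = rng.standard_normal((200, 5))
B = V.T @ V
B_sparse, idx = sparsify_rank_one(V, 120, rng)
rel_err = np.linalg.norm(B - B_sparse, 2) / np.linalg.norm(B, 2)
```

The deterministic algorithms in the paper achieve stronger guarantees with far fewer terms, and extend to summands of arbitrary rank.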
2401.10228
Xiangtai Li Dr
Shilin Xu, Haobo Yuan, Qingyu Shi, Lu Qi, Jingbo Wang, Yibo Yang, Yining Li, Kai Chen, Yunhai Tong, Bernard Ghanem, Xiangtai Li, Ming-Hsuan Yang
RAP-SAM: Towards Real-Time All-Purpose Segment Anything
Project Page: https://xushilin1.github.io/rap_sam/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Driven by the transformer architecture, vision foundation models (VFMs) have achieved remarkable progress in performance and generalization ability. The Segment Anything Model (SAM) is one remarkable model that achieves generalized segmentation. However, most VFMs cannot run in real time, which makes it difficult to deploy them in products. On the other hand, current real-time segmentation methods mainly serve a single purpose, such as semantic segmentation of driving scenes. We argue that diverse outputs are needed for real applications. Thus, this work explores a new real-time segmentation setting, named all-purpose segmentation in real time, to transfer VFMs to real-time deployment. It contains three different tasks, including interactive segmentation, panoptic segmentation, and video segmentation. We aim to use one model to achieve all three tasks in real time. We first benchmark several strong baselines. Then, we present Real-Time All-Purpose SAM (RAP-SAM). It contains an efficient encoder and an efficient decoupled decoder to perform prompt-driven decoding. Moreover, we explore different training strategies and tuning methods to further boost co-training performance. Our code and model are available at https://github.com/xushilin1/RAP-SAM/.
[ { "created": "Thu, 18 Jan 2024 18:59:30 GMT", "version": "v1" } ]
2024-01-19
[ [ "Xu", "Shilin", "" ], [ "Yuan", "Haobo", "" ], [ "Shi", "Qingyu", "" ], [ "Qi", "Lu", "" ], [ "Wang", "Jingbo", "" ], [ "Yang", "Yibo", "" ], [ "Li", "Yining", "" ], [ "Chen", "Kai", "" ], [ "Tong", "Yunhai", "" ], [ "Ghanem", "Bernard", "" ], [ "Li", "Xiangtai", "" ], [ "Yang", "Ming-Hsuan", "" ] ]
Driven by the transformer architecture, vision foundation models (VFMs) have achieved remarkable progress in performance and generalization ability. The Segment Anything Model (SAM) is one remarkable model that achieves generalized segmentation. However, most VFMs cannot run in real time, which makes it difficult to deploy them in products. On the other hand, current real-time segmentation methods mainly serve a single purpose, such as semantic segmentation of driving scenes. We argue that diverse outputs are needed for real applications. Thus, this work explores a new real-time segmentation setting, named all-purpose segmentation in real time, to transfer VFMs to real-time deployment. It contains three different tasks, including interactive segmentation, panoptic segmentation, and video segmentation. We aim to use one model to achieve all three tasks in real time. We first benchmark several strong baselines. Then, we present Real-Time All-Purpose SAM (RAP-SAM). It contains an efficient encoder and an efficient decoupled decoder to perform prompt-driven decoding. Moreover, we explore different training strategies and tuning methods to further boost co-training performance. Our code and model are available at https://github.com/xushilin1/RAP-SAM/.
2212.08290
Ambrish Rawat
Ambrish Rawat, Giulio Zizzo, Swanand Kadhe, Jonathan P. Epperlein, Stefano Braghin
Robust Learning Protocol for Federated Tumor Segmentation Challenge
14 pages, 2 figures, 3 tables
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
In this work, we devise robust and efficient learning protocols for orchestrating a Federated Learning (FL) process for the Federated Tumor Segmentation Challenge (FeTS 2022). Enabling FL for the FeTS setup is challenging mainly due to data heterogeneity among collaborators and the communication cost of training. To tackle these challenges, we propose the Robust Learning Protocol (RoLePRO), which combines server-side adaptive optimisation (e.g., server-side Adam) with judicious parameter (weight) aggregation schemes (e.g., adaptive weighted aggregation). RoLePRO takes a two-phase approach, where the first phase consists of vanilla Federated Averaging, while the second phase consists of a judicious aggregation scheme that uses sophisticated reweighting, all in the presence of an adaptive optimisation algorithm at the server. We draw insights from extensive experimentation to tune learning rates for the two phases.
[ { "created": "Fri, 16 Dec 2022 05:51:52 GMT", "version": "v1" } ]
2022-12-19
[ [ "Rawat", "Ambrish", "" ], [ "Zizzo", "Giulio", "" ], [ "Kadhe", "Swanand", "" ], [ "Epperlein", "Jonathan P.", "" ], [ "Braghin", "Stefano", "" ] ]
In this work, we devise robust and efficient learning protocols for orchestrating a Federated Learning (FL) process for the Federated Tumor Segmentation Challenge (FeTS 2022). Enabling FL for the FeTS setup is challenging mainly due to data heterogeneity among collaborators and the communication cost of training. To tackle these challenges, we propose the Robust Learning Protocol (RoLePRO), which combines server-side adaptive optimisation (e.g., server-side Adam) with judicious parameter (weight) aggregation schemes (e.g., adaptive weighted aggregation). RoLePRO takes a two-phase approach, where the first phase consists of vanilla Federated Averaging, while the second phase consists of a judicious aggregation scheme that uses sophisticated reweighting, all in the presence of an adaptive optimisation algorithm at the server. We draw insights from extensive experimentation to tune learning rates for the two phases.
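The two-phase structure described in the abstract can be sketched in a few lines. The abstract does not specify the reweighting scheme, so the median-distance reweighting below is an assumed stand-in for RoLePRO's actual scheme, and the server-side Adam step follows the standard Adam recurrence applied to the aggregated pseudo-gradient.

```python
import numpy as np

def server_adam_step(w, grad, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step applied on the server to the aggregated pseudo-gradient."""
    state['t'] += 1
    state['m'] = b1 * state['m'] + (1 - b1) * grad
    state['v'] = b2 * state['v'] + (1 - b2) * grad ** 2
    m_hat = state['m'] / (1 - b1 ** state['t'])
    v_hat = state['v'] / (1 - b2 ** state['t'])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

def aggregate(deltas, sizes, phase):
    """Phase 1: vanilla FedAvg, weighting by client data size.
    Phase 2: illustrative reweighting that down-weights outlier updates
    (inverse distance to the per-coordinate median); the paper's scheme differs."""
    deltas = np.stack(deltas)
    if phase == 1:
        w = np.asarray(sizes, float)
    else:
        med = np.median(deltas, axis=0)
        dists = np.linalg.norm(deltas - med, axis=1) + 1e-8
        w = 1.0 / dists
    w = w / w.sum()
    return (w[:, None] * deltas).sum(axis=0)

# Toy run: three clients push noisy copies of the same update; one is an outlier.
rng = np.random.default_rng(1)
true_delta = np.ones(3)
deltas = [true_delta + 0.01 * rng.standard_normal(3) for _ in range(3)]
deltas.append(np.array([10.0, -10.0, 10.0]))    # outlier client
sizes = [100, 100, 100, 100]
g1 = aggregate(deltas, sizes, phase=1)          # pulled off by the outlier
g2 = aggregate(deltas, sizes, phase=2)          # close to the true update
state = {'t': 0, 'm': np.zeros(3), 'v': np.zeros(3)}
w_new = server_adam_step(np.zeros(3), -g2, state)
```

The phase-2 aggregate recovers the consensus direction despite the outlier, which is the motivation for moving beyond plain averaging in heterogeneous federations.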
1507.05463
Robert Ganian
Eduard Eiben and Robert Ganian and Stefan Szeider
Solving Problems on Graphs of High Rank-Width
Accepted at WADS 2015
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A modulator of a graph G to a specified graph class H is a set of vertices whose deletion puts G into H. The cardinality of a modulator to various tractable graph classes has long been used as a structural parameter which can be exploited to obtain FPT algorithms for a range of hard problems. Here we investigate what happens when a graph contains a modulator which is large but "well-structured" (in the sense of having bounded rank-width). Can such modulators still be exploited to obtain efficient algorithms? And is it even possible to find such modulators efficiently? We first show that the parameters derived from such well-structured modulators are strictly more general than the cardinality of modulators and rank-width itself. Then, we develop an FPT algorithm for finding such well-structured modulators to any graph class which can be characterized by a finite set of forbidden induced subgraphs. We proceed by showing how well-structured modulators can be used to obtain efficient parameterized algorithms for Minimum Vertex Cover and Maximum Clique. Finally, we use well-structured modulators to develop an algorithmic meta-theorem for deciding problems expressible in Monadic Second Order (MSO) logic, and prove that this result is tight in the sense that it cannot be generalized to LinEMSO problems.
[ { "created": "Mon, 20 Jul 2015 12:17:26 GMT", "version": "v1" } ]
2015-07-21
[ [ "Eiben", "Eduard", "" ], [ "Ganian", "Robert", "" ], [ "Szeider", "Stefan", "" ] ]
A modulator of a graph G to a specified graph class H is a set of vertices whose deletion puts G into H. The cardinality of a modulator to various tractable graph classes has long been used as a structural parameter which can be exploited to obtain FPT algorithms for a range of hard problems. Here we investigate what happens when a graph contains a modulator which is large but "well-structured" (in the sense of having bounded rank-width). Can such modulators still be exploited to obtain efficient algorithms? And is it even possible to find such modulators efficiently? We first show that the parameters derived from such well-structured modulators are strictly more general than the cardinality of modulators and rank-width itself. Then, we develop an FPT algorithm for finding such well-structured modulators to any graph class which can be characterized by a finite set of forbidden induced subgraphs. We proceed by showing how well-structured modulators can be used to obtain efficient parameterized algorithms for Minimum Vertex Cover and Maximum Clique. Finally, we use well-structured modulators to develop an algorithmic meta-theorem for deciding problems expressible in Monadic Second Order (MSO) logic, and prove that this result is tight in the sense that it cannot be generalized to LinEMSO problems.
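A toy illustration of the modulator notion from this abstract: a brute-force search for a minimum modulator to a class characterized by one forbidden induced subgraph, here triangle-free graphs. This is the naive exponential baseline; the paper's point is that (well-structured) modulators admit FPT algorithms instead. All names are illustrative.

```python
from itertools import combinations

def has_triangle(adj, vertices):
    """Check whether the induced subgraph on `vertices` contains a triangle."""
    vs = list(vertices)
    for a, b, c in combinations(vs, 3):
        if b in adj[a] and c in adj[a] and c in adj[b]:
            return True
    return False

def min_modulator_to_triangle_free(adj):
    """Smallest vertex set whose deletion makes the graph triangle-free.

    Brute force over subsets in order of increasing size; exponential,
    so only suitable for tiny graphs.
    """
    nodes = list(adj)
    for k in range(len(nodes) + 1):
        for mod in combinations(nodes, k):
            rest = [v for v in nodes if v not in mod]
            if not has_triangle(adj, rest):
                return set(mod)
    return set(nodes)

# K4 on {0,1,2,3} plus a pendant vertex 4: deleting any two K4 vertices
# leaves a triangle-free graph, while deleting one leaves a K3.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
mod = min_modulator_to_triangle_free(adj)
```

Checking membership after deletion is easy for classes with finitely many forbidden induced subgraphs; the hard part, which the paper addresses, is finding modulators that are large but have bounded rank-width.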
1608.06469
Jerome Darmont
Aybüke Öztürk (ERIC,ARAR), Louis Eyango (ARAR), Sylvie Yona Waksman (ARAR), Stéphane Lallich (ERIC), Jérôme Darmont (ERIC)
Warehousing Complex Archaeological Objects
9th International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT 2015), Nov 2015, Larnaca, Cyprus. Springer, Proceedings of the 9th International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT 2015), 9405, pp.226-239, 2015, Lecture Notes in Artificial Intelligence
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data organization is a difficult and essential component in cultural heritage applications. Over the years, a great amount of archaeological ceramic data has been created and processed by various methods and devices. Such ceramic data are stored in databases that rapidly increase the amount of available information. However, such databases typically focus on one type of ceramic descriptor, e.g., qualitative textual descriptions, petrographic or chemical analysis results, and do not interoperate. Thus, research involving archaeological ceramics cannot easily take advantage of combining all these types of information. In this application paper, we introduce an evolution of the Ceramom database that includes text descriptors of archaeological features, chemical analysis results, and various images, including petrographic and fabric images. To illustrate what new analyses such a database permits, we feed it into a data warehouse and present a sample on-line analytical processing (OLAP) scenario to gain a deeper understanding of ceramic context.
[ { "created": "Tue, 23 Aug 2016 11:30:48 GMT", "version": "v1" } ]
2016-08-24
[ [ "Öztürk", "Aybüke", "", "ERIC,ARAR" ], [ "Eyango", "Louis", "", "ARAR" ], [ "Waksman", "Sylvie Yona", "", "ARAR" ], [ "Lallich", "Stéphane", "", "ERIC" ], [ "Darmont", "Jérôme", "", "ERIC" ] ]
Data organization is a difficult and essential component in cultural heritage applications. Over the years, a great amount of archaeological ceramic data has been created and processed by various methods and devices. Such ceramic data are stored in databases that rapidly increase the amount of available information. However, such databases typically focus on one type of ceramic descriptor, e.g., qualitative textual descriptions, petrographic or chemical analysis results, and do not interoperate. Thus, research involving archaeological ceramics cannot easily take advantage of combining all these types of information. In this application paper, we introduce an evolution of the Ceramom database that includes text descriptors of archaeological features, chemical analysis results, and various images, including petrographic and fabric images. To illustrate what new analyses such a database permits, we feed it into a data warehouse and present a sample on-line analytical processing (OLAP) scenario to gain a deeper understanding of ceramic context.
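The OLAP scenario mentioned in the abstract can be mimicked with a simple roll-up over a warehouse-style fact table. The table and its column names below are invented for illustration; they are not the Ceramom schema.

```python
import pandas as pd

# Hypothetical fact table: one row per analyzed ceramic sherd.
facts = pd.DataFrame({
    "site":    ["Larnaca", "Larnaca", "Amathus", "Amathus", "Amathus"],
    "period":  ["Bronze", "Iron", "Bronze", "Bronze", "Iron"],
    "fabric":  ["fine", "coarse", "fine", "fine", "coarse"],
    "cao_pct": [12.1, 8.4, 14.0, 13.2, 7.9],   # a chemical measure
})

# OLAP-style roll-up: average CaO content by site and period (a 2-D slice
# of the cube; adding "fabric" to the index would drill down further).
cube = facts.pivot_table(values="cao_pct", index="site",
                         columns="period", aggfunc="mean")
```

Slicing, dicing, and drill-down then reduce to selecting rows, columns, or extra grouping dimensions of such pivot tables.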
1701.07274
Yuxi Li
Yuxi Li
Deep Reinforcement Learning: An Overview
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet, and list a collection of RL resources. After presenting a brief summary, we close with discussions. Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update.
[ { "created": "Wed, 25 Jan 2017 11:52:11 GMT", "version": "v1" }, { "created": "Thu, 26 Jan 2017 16:38:08 GMT", "version": "v2" }, { "created": "Sat, 15 Jul 2017 01:49:43 GMT", "version": "v3" }, { "created": "Sun, 3 Sep 2017 12:39:11 GMT", "version": "v4" }, { "created": "Fri, 15 Sep 2017 13:12:26 GMT", "version": "v5" }, { "created": "Mon, 26 Nov 2018 04:56:31 GMT", "version": "v6" } ]
2018-11-27
[ [ "Li", "Yuxi", "" ] ]
We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet, and list a collection of RL resources. After presenting a brief summary, we close with discussions. Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update.
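The value-function element the overview starts from can be shown in its simplest form: tabular Q-learning on a small chain environment. DQN, discussed in the abstract, replaces the table below with a neural network plus replay and target networks; the environment here is invented for illustration.

```python
import numpy as np

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     eps=0.2, seed=0):
    """Tabular Q-learning on a chain: move left/right, reward 1 for
    reaching the rightmost (terminal) state."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))                 # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Bellman target; bootstrap only from non-terminal successors.
            target = r + gamma * Q[s2].max() * (s2 != n_states - 1)
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

Q = q_learning_chain()
greedy = Q.argmax(axis=1)   # learned policy: always move right
```

The learned values approach gamma^(distance to goal), i.e. Q[3, right] converges to 1 and Q[2, right] to 0.9, which is the fixed point of the update above.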
2211.06163
Longbin Yan
Longbin Yan, Yunxiao Qin, Shumin Liu, Jie Chen
Dual Complementary Dynamic Convolution for Image Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a powerful engine, vanilla convolution has driven major breakthroughs in various computer vision tasks. However, it often suffers from being sample- and content-agnostic, which limits the representation capacity of convolutional neural networks (CNNs). In this paper, we for the first time model scene features as a combination of local spatial-adaptive parts specific to each individual sample and global shift-invariant parts shared across all samples, and then propose a novel two-branch dual complementary dynamic convolution (DCDC) operator to flexibly handle these two types of features. The DCDC operator overcomes the limitations of vanilla convolution and of most existing dynamic convolutions, which capture only spatial-adaptive features, and thus markedly boosts the representation capacity of CNNs. Experiments show that DCDC-operator-based ResNets (DCDC-ResNets) significantly outperform vanilla ResNets and most state-of-the-art dynamic convolutional networks on image classification, as well as on downstream tasks including object detection and instance and panoptic segmentation, while using fewer FLOPs and parameters.
[ { "created": "Fri, 11 Nov 2022 12:32:12 GMT", "version": "v1" } ]
2022-11-14
[ [ "Yan", "Longbin", "" ], [ "Qin", "Yunxiao", "" ], [ "Liu", "Shumin", "" ], [ "Chen", "Jie", "" ] ]
As a powerful engine, vanilla convolution has driven major breakthroughs in various computer vision tasks. However, it often suffers from being sample- and content-agnostic, which limits the representation capacity of convolutional neural networks (CNNs). In this paper, we for the first time model scene features as a combination of local spatial-adaptive parts specific to each individual sample and global shift-invariant parts shared across all samples, and then propose a novel two-branch dual complementary dynamic convolution (DCDC) operator to flexibly handle these two types of features. The DCDC operator overcomes the limitations of vanilla convolution and of most existing dynamic convolutions, which capture only spatial-adaptive features, and thus markedly boosts the representation capacity of CNNs. Experiments show that DCDC-operator-based ResNets (DCDC-ResNets) significantly outperform vanilla ResNets and most state-of-the-art dynamic convolutional networks on image classification, as well as on downstream tasks including object detection and instance and panoptic segmentation, while using fewer FLOPs and parameters.
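The two-branch idea in the abstract, a shared shift-invariant kernel plus a sample-adaptive kernel generated from the input, can be sketched in 1-D numpy. The real DCDC operator is a 2-D, per-location spatially adaptive convolution inside a CNN; the kernel generator and names below are illustrative.

```python
import numpy as np

def conv1d(x, k):
    """'Same' 1-D convolution (cross-correlation) with zero padding."""
    p = len(k) // 2
    xp = np.pad(x, p)
    return np.array([xp[i:i + len(k)] @ k for i in range(len(x))])

def dcdc_like(x, k_static, W_dyn):
    """Two-branch sketch: a shared shift-invariant kernel (static branch)
    plus a sample-adaptive kernel generated from global input statistics
    (dynamic branch). Outputs of the two branches are summed."""
    ctx = np.array([x.mean(), x.std()])    # global descriptor of this sample
    k_dynamic = W_dyn @ ctx                # per-sample kernel
    return conv1d(x, k_static) + conv1d(x, k_dynamic)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
k_static = np.array([0.25, 0.5, 0.25])    # shared smoothing kernel
W_dyn = rng.standard_normal((3, 2)) * 0.1 # kernel generator (learned in practice)
y = dcdc_like(x, k_static, W_dyn)
```

Because the dynamic kernel depends on the input's statistics, two different samples are filtered differently, which is exactly the sample-adaptive behavior vanilla convolution lacks.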
1611.07119
Chongxuan Li
Chongxuan Li and Jun Zhu and Bo Zhang
Max-Margin Deep Generative Models for (Semi-)Supervised Learning
arXiv admin note: substantial text overlap with arXiv:1504.06787
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep generative models (DGMs) are effective at learning multilayered representations of complex data and at performing inference by exploiting their generative ability. However, their discriminative ability for making accurate predictions remains relatively weak. This paper presents max-margin deep generative models (mmDGMs) and a class-conditional variant (mmDCGMs), which exploit the strongly discriminative principle of max-margin learning to improve the predictive performance of DGMs in both supervised and semi-supervised learning, while retaining the generative capability. In semi-supervised learning, we use the predictions of a max-margin classifier as the missing labels instead of performing full posterior inference, for efficiency; we also introduce additional max-margin and label-balance regularization terms on unlabeled data for effectiveness. We develop an efficient doubly stochastic subgradient algorithm for the piecewise linear objectives in different settings. Empirical results on various datasets demonstrate that: (1) max-margin learning can significantly improve the prediction performance of DGMs while retaining the generative ability; (2) in supervised learning, mmDGMs are competitive with the best fully discriminative networks when employing convolutional neural networks as the generative and recognition models; and (3) in semi-supervised learning, mmDCGMs can perform efficient inference and achieve state-of-the-art classification results on several benchmarks.
[ { "created": "Tue, 22 Nov 2016 01:36:29 GMT", "version": "v1" } ]
2016-11-23
[ [ "Li", "Chongxuan", "" ], [ "Zhu", "Jun", "" ], [ "Zhang", "Bo", "" ] ]
Deep generative models (DGMs) are effective at learning multilayered representations of complex data and at performing inference by exploiting their generative ability. However, their discriminative ability for making accurate predictions remains relatively weak. This paper presents max-margin deep generative models (mmDGMs) and a class-conditional variant (mmDCGMs), which exploit the strongly discriminative principle of max-margin learning to improve the predictive performance of DGMs in both supervised and semi-supervised learning, while retaining the generative capability. In semi-supervised learning, we use the predictions of a max-margin classifier as the missing labels instead of performing full posterior inference, for efficiency; we also introduce additional max-margin and label-balance regularization terms on unlabeled data for effectiveness. We develop an efficient doubly stochastic subgradient algorithm for the piecewise linear objectives in different settings. Empirical results on various datasets demonstrate that: (1) max-margin learning can significantly improve the prediction performance of DGMs while retaining the generative ability; (2) in supervised learning, mmDGMs are competitive with the best fully discriminative networks when employing convolutional neural networks as the generative and recognition models; and (3) in semi-supervised learning, mmDCGMs can perform efficient inference and achieve state-of-the-art classification results on several benchmarks.
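The max-margin principle the abstract invokes is the piecewise-linear multiclass hinge loss, which mmDGMs combine with the variational objective of the generative model. The snippet below shows only that loss term (Crammer-Singer style), not the full coupled objective.

```python
import numpy as np

def max_margin_loss(scores, labels, margin=1.0):
    """Multiclass hinge loss: penalize any class whose score comes within
    `margin` of the true class's score; zero loss once the true class wins
    by at least `margin`."""
    n = len(labels)
    true_scores = scores[np.arange(n), labels][:, None]
    margins = np.maximum(0.0, scores - true_scores + margin)
    margins[np.arange(n), labels] = 0.0    # no penalty for the true class
    return margins.max(axis=1).mean()

scores = np.array([[3.0, 1.0, 0.2],    # correct with margin >= 1 -> 0 loss
                   [1.0, 1.5, 0.0]])   # true class 0 beaten by class 1 -> 1.5
labels = np.array([0, 0])
loss = max_margin_loss(scores, labels)
```

Because the loss is piecewise linear in the scores, it admits the doubly stochastic subgradient updates the paper develops.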
2310.18979
Henrik Leopold
Jan Mendling, Henrik Leopold, Henning Meyerhenke, Beno\^it Depaire
Methodology of Algorithm Engineering
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on algorithms has drastically increased in recent years. Various sub-disciplines of computer science investigate algorithms according to different objectives and standards. This plurality of the field has led to various methodological advances that have not yet been transferred to neighboring sub-disciplines. The central roadblock for better knowledge exchange is the lack of a common methodological framework integrating the perspectives of these sub-disciplines. It is the objective of this paper to develop a research framework for algorithm engineering. Our framework builds on three areas discussed in the philosophy of science: ontology, epistemology and methodology. In essence, ontology describes algorithm engineering as being concerned with algorithmic problems, algorithmic tasks, algorithm designs and algorithm implementations. Epistemology describes the body of knowledge of algorithm engineering as a collection of prescriptive and descriptive knowledge, residing in World 3 of Popper's Three Worlds model. Methodology refers to the steps by which we can systematically enhance our knowledge of specific algorithms. The framework helps us to identify and discuss various validity concerns relevant to any algorithm engineering contribution. In this way, our framework has important implications for researching algorithms in various areas of computer science.
[ { "created": "Sun, 29 Oct 2023 11:14:02 GMT", "version": "v1" } ]
2023-10-31
[ [ "Mendling", "Jan", "" ], [ "Leopold", "Henrik", "" ], [ "Meyerhenke", "Henning", "" ], [ "Depaire", "Benoît", "" ] ]
Research on algorithms has drastically increased in recent years. Various sub-disciplines of computer science investigate algorithms according to different objectives and standards. This plurality of the field has led to various methodological advances that have not yet been transferred to neighboring sub-disciplines. The central roadblock for better knowledge exchange is the lack of a common methodological framework integrating the perspectives of these sub-disciplines. It is the objective of this paper to develop a research framework for algorithm engineering. Our framework builds on three areas discussed in the philosophy of science: ontology, epistemology and methodology. In essence, ontology describes algorithm engineering as being concerned with algorithmic problems, algorithmic tasks, algorithm designs and algorithm implementations. Epistemology describes the body of knowledge of algorithm engineering as a collection of prescriptive and descriptive knowledge, residing in World 3 of Popper's Three Worlds model. Methodology refers to the steps by which we can systematically enhance our knowledge of specific algorithms. The framework helps us to identify and discuss various validity concerns relevant to any algorithm engineering contribution. In this way, our framework has important implications for researching algorithms in various areas of computer science.
0804.1083
Ambedkar Dukkipati
Ambedkar Dukkipati
Towards algebraic methods for maximum entropy estimation
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that various formulations (e.g., dual and Kullback-Csiszar iterations) of maximum entropy (ME) model estimation can be transformed into solving systems of polynomial equations in several variables, to which one can apply the celebrated Gröbner basis methods. Posing ME estimation as solving polynomial equations is possible in the cases where the feature functions (sufficient statistics), which provide information about the underlying random variable in the form of expectations, are integer valued.
[ { "created": "Mon, 7 Apr 2008 17:09:00 GMT", "version": "v1" } ]
2008-04-08
[ [ "Dukkipati", "Ambedkar", "" ] ]
We show that various formulations (e.g., dual and Kullback-Csiszar iterations) of maximum entropy (ME) model estimation can be transformed into solving systems of polynomial equations in several variables, to which one can apply the celebrated Gröbner basis methods. Posing ME estimation as solving polynomial equations is possible in the cases where the feature functions (sufficient statistics), which provide information about the underlying random variable in the form of expectations, are integer valued.
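A minimal worked instance of this reduction, assuming the simplest possible setting (one integer-valued feature f(x) = x on the support {0, 1, 2}): the ME solution is p_k proportional to z^k with z = exp(lambda), so the moment constraint E[X] = mu becomes a single polynomial equation in z, solvable by root finding (Gröbner bases take over in the multivariate case).

```python
import numpy as np

def me_distribution(mu):
    """Maximum-entropy distribution on {0, 1, 2} with mean mu.

    The constraint sum_k (k - mu) z^k = 0 is the polynomial
      (2 - mu) z^2 + (1 - mu) z - mu = 0
    (coefficients listed high-to-low for np.roots); the positive real
    root gives p_k = z^k / (1 + z + z^2).
    """
    coeffs = [2.0 - mu, 1.0 - mu, -mu]
    roots = np.roots(coeffs)
    z = max(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
    p = np.array([1.0, z, z ** 2])
    return p / p.sum()

p = me_distribution(1.0)   # mu = 1 gives z = 1, the uniform distribution
```

For mu = 1 the polynomial reduces to z^2 - 1 = 0, recovering the uniform distribution, and for any feasible mu the returned distribution matches the moment constraint by construction.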
2102.04448
Valentin Khrulkov
Valentin Khrulkov, Artem Babenko, Ivan Oseledets
Functional Space Analysis of Local GAN Convergence
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has demonstrated the benefits of studying the continuous-time dynamics governing GAN training. However, these dynamics are analyzed in the model parameter space, which results in finite-dimensional dynamical systems. We propose a novel perspective where we study the local dynamics of adversarial training in the general functional space and show how it can be represented as a system of partial differential equations. Thus, the convergence properties can be inferred from the eigenvalues of the resulting differential operator. We show that these eigenvalues can be efficiently estimated from the target dataset before training. Our perspective reveals several insights on the practical tricks commonly used to stabilize GANs, such as gradient penalty, data augmentation, and advanced integration schemes. As an immediate practical benefit, we demonstrate how one can a priori select an optimal data augmentation strategy for a particular generation task.
[ { "created": "Mon, 8 Feb 2021 18:59:46 GMT", "version": "v1" } ]
2021-02-09
[ [ "Khrulkov", "Valentin", "" ], [ "Babenko", "Artem", "" ], [ "Oseledets", "Ivan", "" ] ]
Recent work has demonstrated the benefits of studying the continuous-time dynamics governing GAN training. However, these dynamics are analyzed in the model parameter space, which results in finite-dimensional dynamical systems. We propose a novel perspective where we study the local dynamics of adversarial training in the general functional space and show how it can be represented as a system of partial differential equations. Thus, the convergence properties can be inferred from the eigenvalues of the resulting differential operator. We show that these eigenvalues can be efficiently estimated from the target dataset before training. Our perspective reveals several insights on the practical tricks commonly used to stabilize GANs, such as gradient penalty, data augmentation, and advanced integration schemes. As an immediate practical benefit, we demonstrate how one can a priori select an optimal data augmentation strategy for a particular generation task.
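The "eigenvalues decide convergence" logic can be seen in the standard finite-dimensional Dirac-GAN toy model (parameter space, not the paper's functional space): the bilinear game's continuous-time Jacobian has purely imaginary eigenvalues, so training rotates without converging, while a gradient-penalty-style damping term shifts the eigenvalues into the left half-plane.

```python
import numpy as np

# Linearized Dirac-GAN: generator parameter theta, discriminator parameter psi,
# gradient flow of the bilinear game V = theta * psi:
#   d(theta)/dt = -psi,   d(psi)/dt = theta            (pure rotation)
# With an R1-style penalty of strength gamma on the discriminator:
#   d(psi)/dt = theta - gamma * psi                    (damped spiral)
def jacobian(gamma):
    return np.array([[0.0, -1.0],
                     [1.0, -gamma]])

eigs_plain = np.linalg.eigvals(jacobian(0.0))  # +-i: no local convergence
eigs_reg = np.linalg.eigvals(jacobian(0.5))    # negative real parts: converges
```

The paper lifts exactly this eigenvalue analysis from such finite-dimensional Jacobians to the differential operator of the functional-space dynamics.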
2402.10211
Raunaq Mahesh Bhirangi
Raunaq Bhirangi, Chenyu Wang, Venkatesh Pattabiraman, Carmel Majidi, Abhinav Gupta, Tess Hellebrekers, Lerrel Pinto
Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling
null
null
null
null
cs.LG cs.RO eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reasoning from sequences of raw sensory data is a ubiquitous problem across fields ranging from medical devices to robotics. These problems often involve using long sequences of raw sensor data (e.g. magnetometers, piezoresistors) to predict sequences of desirable physical quantities (e.g. force, inertial measurements). While classical approaches are powerful for locally-linear prediction problems, they often fall short when using real-world sensors. These sensors are typically non-linear, are affected by extraneous variables (e.g. vibration), and exhibit data-dependent drift. For many problems, the prediction task is exacerbated by small labeled datasets since obtaining ground-truth labels requires expensive equipment. In this work, we present Hierarchical State-Space Models (HiSS), a conceptually simple, new technique for continuous sequential prediction. HiSS stacks structured state-space models on top of each other to create a temporal hierarchy. Across six real-world sensor datasets, from tactile-based state prediction to accelerometer-based inertial measurement, HiSS outperforms state-of-the-art sequence models such as causal Transformers, LSTMs, S4, and Mamba by at least 23% on MSE. Our experiments further indicate that HiSS demonstrates efficient scaling to smaller datasets and is compatible with existing data-filtering techniques. Code, datasets and videos can be found on https://hiss-csp.github.io.
[ { "created": "Thu, 15 Feb 2024 18:59:43 GMT", "version": "v1" }, { "created": "Tue, 16 Jul 2024 08:38:41 GMT", "version": "v2" }, { "created": "Wed, 31 Jul 2024 21:17:43 GMT", "version": "v3" } ]
2024-08-02
[ [ "Bhirangi", "Raunaq", "" ], [ "Wang", "Chenyu", "" ], [ "Pattabiraman", "Venkatesh", "" ], [ "Majidi", "Carmel", "" ], [ "Gupta", "Abhinav", "" ], [ "Hellebrekers", "Tess", "" ], [ "Pinto", "Lerrel", "" ] ]
Reasoning from sequences of raw sensory data is a ubiquitous problem across fields ranging from medical devices to robotics. These problems often involve using long sequences of raw sensor data (e.g. magnetometers, piezoresistors) to predict sequences of desirable physical quantities (e.g. force, inertial measurements). While classical approaches are powerful for locally-linear prediction problems, they often fall short when using real-world sensors. These sensors are typically non-linear, are affected by extraneous variables (e.g. vibration), and exhibit data-dependent drift. For many problems, the prediction task is exacerbated by small labeled datasets since obtaining ground-truth labels requires expensive equipment. In this work, we present Hierarchical State-Space Models (HiSS), a conceptually simple, new technique for continuous sequential prediction. HiSS stacks structured state-space models on top of each other to create a temporal hierarchy. Across six real-world sensor datasets, from tactile-based state prediction to accelerometer-based inertial measurement, HiSS outperforms state-of-the-art sequence models such as causal Transformers, LSTMs, S4, and Mamba by at least 23% on MSE. Our experiments further indicate that HiSS demonstrates efficient scaling to smaller datasets and is compatible with existing data-filtering techniques. Code, datasets and videos can be found on https://hiss-csp.github.io.
2302.12148
Yunyu Huang
Yunyu Huang, Yani Feng, Qifeng Liao
Streaming data recovery via Bayesian tensor train decomposition
null
null
null
null
cs.LG math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study a Bayesian tensor train (TT) decomposition method to recover streaming data by approximating the latent structure in high-order streaming data. Drawing on the streaming variational Bayes method, we introduce the TT format into Bayesian tensor decomposition methods for streaming data, and formulate posteriors of the TT cores. Thanks to the Bayesian framework of the TT format, the proposed algorithm (SPTT) excels at recovering streaming data with high-order, incomplete, and noisy properties. Experiments on synthetic and real-world datasets show the accuracy of our method compared to state-of-the-art Bayesian tensor decomposition methods for streaming data.
[ { "created": "Thu, 23 Feb 2023 16:32:31 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2024 09:32:02 GMT", "version": "v2" } ]
2024-02-29
[ [ "Huang", "Yunyu", "" ], [ "Feng", "Yani", "" ], [ "Liao", "Qifeng", "" ] ]
In this paper, we study a Bayesian tensor train (TT) decomposition method to recover streaming data by approximating the latent structure in high-order streaming data. Drawing on the streaming variational Bayes method, we introduce the TT format into Bayesian tensor decomposition methods for streaming data, and formulate posteriors of the TT cores. Thanks to the Bayesian framework of the TT format, the proposed algorithm (SPTT) excels at recovering streaming data with high-order, incomplete, and noisy properties. Experiments on synthetic and real-world datasets show the accuracy of our method compared to state-of-the-art Bayesian tensor decomposition methods for streaming data.
1912.02628
Song Fang
Song Fang, Quanyan Zhu
Fundamental Limitations in Sequential Prediction and Recursive Algorithms: $\mathcal{L}_{p}$ Bounds via an Entropic Analysis
arXiv admin note: substantial text overlap with arXiv:1910.06742. text overlap with arXiv:1912.05541
null
null
null
cs.LG cs.IT eess.SP math.IT math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we obtain fundamental $\mathcal{L}_{p}$ bounds in sequential prediction and recursive algorithms via an entropic analysis. Both classes of problems are examined by investigating the underlying entropic relationships of the data and/or noises involved, and the derived lower bounds may all be quantified in a conditional entropy characterization. We also study the conditions to achieve the generic bounds from an innovations' viewpoint.
[ { "created": "Tue, 3 Dec 2019 16:52:15 GMT", "version": "v1" }, { "created": "Tue, 11 May 2021 15:56:59 GMT", "version": "v2" } ]
2021-05-12
[ [ "Fang", "Song", "" ], [ "Zhu", "Quanyan", "" ] ]
In this paper, we obtain fundamental $\mathcal{L}_{p}$ bounds in sequential prediction and recursive algorithms via an entropic analysis. Both classes of problems are examined by investigating the underlying entropic relationships of the data and/or noises involved, and the derived lower bounds may all be quantified in a conditional entropy characterization. We also study the conditions to achieve the generic bounds from an innovations' viewpoint.
2303.05429
Isadora Cardoso-Pereira
Isadora Cardoso-Pereira, Geraldo Gomes, Danilo Monteiro Ribeiro, Alberto de Souza, Danilo Lucena, Gustavo Pinto
Supporting the Careers of Developers with Disabilities: Lessons from Zup Innovation
5 pages (two columns), 1 figures
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
People with disabilities still face discrimination, which creates significant obstacles to accessing higher education, ultimately hindering their access to high-skilled occupations. In this study we present Catalisa, an eight-month training camp (developed by Zup Innovation) that hires and trains people with disabilities as software developers. We interviewed 12 Catalisa participants to better understand their challenges and limitations regarding inclusion and accessibility. We offer four recommendations to improve inclusion and accessibility in Catalisa-like programs, which we hope could motivate others to build a more inclusive and equitable workplace that benefits everyone.
[ { "created": "Thu, 9 Mar 2023 17:21:27 GMT", "version": "v1" }, { "created": "Fri, 26 May 2023 20:36:56 GMT", "version": "v2" } ]
2023-05-30
[ [ "Cardoso-Pereira", "Isadora", "" ], [ "Gomes", "Geraldo", "" ], [ "Ribeiro", "Danilo Monteiro", "" ], [ "de Souza", "Alberto", "" ], [ "Lucena", "Danilo", "" ], [ "Pinto", "Gustavo", "" ] ]
People with disabilities still face discrimination, which creates significant obstacles to accessing higher education, ultimately hindering their access to high-skilled occupations. In this study we present Catalisa, an eight-month training camp (developed by Zup Innovation) that hires and trains people with disabilities as software developers. We interviewed 12 Catalisa participants to better understand their challenges and limitations regarding inclusion and accessibility. We offer four recommendations to improve inclusion and accessibility in Catalisa-like programs, which we hope could motivate others to build a more inclusive and equitable workplace that benefits everyone.
1609.01461
Battista Biggio
Battista Biggio and Giorgio Fumera and Gian Luca Marcialis and Fabio Roli
Statistical Meta-Analysis of Presentation Attacks for Secure Multibiometric Systems
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016
10.1109/TPAMI.2016.2558154
null
cs.CV cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prior work has shown that multibiometric systems are vulnerable to presentation attacks, assuming that their matching score distribution is identical to that of genuine users, without fabricating any fake trait. We have recently shown that this assumption is not representative of current fingerprint and face presentation attacks, leading one to overestimate the vulnerability of multibiometric systems, and to design less effective fusion rules. In this paper, we overcome these limitations by proposing a statistical meta-model of face and fingerprint presentation attacks that characterizes a wider family of fake score distributions, including distributions of known and, potentially, unknown attacks. This allows us to perform a thorough security evaluation of multibiometric systems against presentation attacks, quantifying how their vulnerability may vary also under attacks that are different from those considered during design, through an uncertainty analysis. We empirically show that our approach can reliably predict the performance of multibiometric systems even under never-before-seen face and fingerprint presentation attacks, and that the secure fusion rules designed using our approach can exhibit an improved trade-off between the performance in the absence and in the presence of attack. We finally argue that our method can be extended to other biometrics besides faces and fingerprints.
[ { "created": "Tue, 6 Sep 2016 09:44:47 GMT", "version": "v1" } ]
2016-09-07
[ [ "Biggio", "Battista", "" ], [ "Fumera", "Giorgio", "" ], [ "Marcialis", "Gian Luca", "" ], [ "Roli", "Fabio", "" ] ]
Prior work has shown that multibiometric systems are vulnerable to presentation attacks, assuming that their matching score distribution is identical to that of genuine users, without fabricating any fake trait. We have recently shown that this assumption is not representative of current fingerprint and face presentation attacks, leading one to overestimate the vulnerability of multibiometric systems, and to design less effective fusion rules. In this paper, we overcome these limitations by proposing a statistical meta-model of face and fingerprint presentation attacks that characterizes a wider family of fake score distributions, including distributions of known and, potentially, unknown attacks. This allows us to perform a thorough security evaluation of multibiometric systems against presentation attacks, quantifying how their vulnerability may vary also under attacks that are different from those considered during design, through an uncertainty analysis. We empirically show that our approach can reliably predict the performance of multibiometric systems even under never-before-seen face and fingerprint presentation attacks, and that the secure fusion rules designed using our approach can exhibit an improved trade-off between the performance in the absence and in the presence of attack. We finally argue that our method can be extended to other biometrics besides faces and fingerprints.
2207.02025
Ashish Tiwari
Ashish Tiwari and Shanmuganathan Raman
DeepPS2: Revisiting Photometric Stereo Using Two Differently Illuminated Images
Accepted in ECCV 2022 Project Page: https://sites.google.com/iitgn.ac.in/deepps2/home
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Photometric stereo, a problem of recovering 3D surface normals using images of an object captured under different lightings, has been of great interest and importance in computer vision research. Despite the success of existing traditional and deep learning-based methods, it is still challenging due to: (i) the requirement of three or more differently illuminated images, (ii) the inability to model unknown general reflectance, and (iii) the requirement of accurate 3D ground truth surface normals and known lighting information for training. In this work, we attempt to address an under-explored problem of photometric stereo using just two differently illuminated images, referred to as the PS2 problem. It is an intermediate case between a single image-based reconstruction method like Shape from Shading (SfS) and the traditional Photometric Stereo (PS), which requires three or more images. We propose an inverse rendering-based deep learning framework, called DeepPS2, that jointly performs surface normal, albedo, lighting estimation, and image relighting in a completely self-supervised manner with no requirement of ground truth data. We demonstrate how image relighting in conjunction with image reconstruction enhances the lighting estimation in a self-supervised setting.
[ { "created": "Tue, 5 Jul 2022 13:14:10 GMT", "version": "v1" }, { "created": "Tue, 30 Aug 2022 08:56:21 GMT", "version": "v2" } ]
2022-08-31
[ [ "Tiwari", "Ashish", "" ], [ "Raman", "Shanmuganathan", "" ] ]
Photometric stereo, a problem of recovering 3D surface normals using images of an object captured under different lightings, has been of great interest and importance in computer vision research. Despite the success of existing traditional and deep learning-based methods, it is still challenging due to: (i) the requirement of three or more differently illuminated images, (ii) the inability to model unknown general reflectance, and (iii) the requirement of accurate 3D ground truth surface normals and known lighting information for training. In this work, we attempt to address an under-explored problem of photometric stereo using just two differently illuminated images, referred to as the PS2 problem. It is an intermediate case between a single image-based reconstruction method like Shape from Shading (SfS) and the traditional Photometric Stereo (PS), which requires three or more images. We propose an inverse rendering-based deep learning framework, called DeepPS2, that jointly performs surface normal, albedo, lighting estimation, and image relighting in a completely self-supervised manner with no requirement of ground truth data. We demonstrate how image relighting in conjunction with image reconstruction enhances the lighting estimation in a self-supervised setting.
1812.01410
Vincent Schellekens
Vincent Schellekens and Laurent Jacques
Compressive Classification (Machine Learning without learning)
in Proceedings of iTWIST'18, Paper-ID: 8, Marseille, France, November, 21-23, 2018
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compressive learning is a framework where (so far unsupervised) learning tasks use not the entire dataset but a compressed summary (sketch) of it. We propose a compressive learning classification method, and a novel sketch function for images.
[ { "created": "Tue, 4 Dec 2018 13:50:11 GMT", "version": "v1" } ]
2018-12-05
[ [ "Schellekens", "Vincent", "" ], [ "Jacques", "Laurent", "" ] ]
Compressive learning is a framework where (so far unsupervised) learning tasks use not the entire dataset but a compressed summary (sketch) of it. We propose a compressive learning classification method, and a novel sketch function for images.
1702.01823
Mostafa Dehghan
Mostafa Dehghan, Weibo Chu, Philippe Nain, Don Towsley
Sharing LRU Cache Resources among Content Providers: A Utility-Based Approach
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the problem of allocating cache resources among multiple content providers. The cache can be partitioned into slices and each partition can be dedicated to a particular content provider, or shared among a number of them. It is assumed that each partition employs the LRU policy for managing content. We propose utility-driven partitioning, where we associate with each content provider a utility that is a function of the hit rate observed by the content provider. We consider two scenarios: i)~content providers serve disjoint sets of files, ii)~there is some overlap in the content served by multiple content providers. In the first case, we prove that cache partitioning outperforms cache sharing as cache size and numbers of contents served by providers go to infinity. In the second case, it can be beneficial to have separate partitions for overlapped content. In the case of two providers, it is almost always beneficial to allocate a cache partition to serve all overlapped content and separate partitions to serve the non-overlapped contents of both providers. We establish conditions when this is true asymptotically but also present an example where it is not true asymptotically. We develop online algorithms that dynamically adjust partition sizes in order to maximize the overall utility and prove that they converge to optimal solutions, and through numerical evaluations, we show they are effective.
[ { "created": "Mon, 6 Feb 2017 23:42:59 GMT", "version": "v1" } ]
2017-02-08
[ [ "Dehghan", "Mostafa", "" ], [ "Chu", "Weibo", "" ], [ "Nain", "Philippe", "" ], [ "Towsley", "Don", "" ] ]
In this paper, we consider the problem of allocating cache resources among multiple content providers. The cache can be partitioned into slices and each partition can be dedicated to a particular content provider, or shared among a number of them. It is assumed that each partition employs the LRU policy for managing content. We propose utility-driven partitioning, where we associate with each content provider a utility that is a function of the hit rate observed by the content provider. We consider two scenarios: i)~content providers serve disjoint sets of files, ii)~there is some overlap in the content served by multiple content providers. In the first case, we prove that cache partitioning outperforms cache sharing as cache size and numbers of contents served by providers go to infinity. In the second case, it can be beneficial to have separate partitions for overlapped content. In the case of two providers, it is almost always beneficial to allocate a cache partition to serve all overlapped content and separate partitions to serve the non-overlapped contents of both providers. We establish conditions when this is true asymptotically but also present an example where it is not true asymptotically. We develop online algorithms that dynamically adjust partition sizes in order to maximize the overall utility and prove that they converge to optimal solutions, and through numerical evaluations, we show they are effective.
2008.08903
Nan Gao
Nan Gao, Hao Xue, Wei Shao, Sichen Zhao, Kyle Kai Qin, Arian Prabowo, Mohammad Saiedur Rahaman, Flora D. Salim
Generative Adversarial Networks for Spatio-temporal Data: A Survey
This paper has been accepted by ACM Transactions on Intelligent Systems and Technology (TIST)
null
10.1145/3474838
null
cs.LG cs.IR eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative Adversarial Networks (GANs) have shown remarkable success in producing realistic-looking images in the computer vision area. Recently, GAN-based techniques have been shown to be promising for spatio-temporal applications such as trajectory prediction, event generation and time-series data imputation. While several reviews of GANs in computer vision have been presented, none has addressed the practical applications and challenges relevant to spatio-temporal data. In this paper, we have conducted a comprehensive review of the recent developments of GANs for spatio-temporal data. We summarise the application of popular GAN architectures for spatio-temporal data and the common practices for evaluating the performance of spatio-temporal applications with GANs. Finally, we point out future research directions to benefit researchers in this area.
[ { "created": "Tue, 18 Aug 2020 11:05:40 GMT", "version": "v1" }, { "created": "Tue, 1 Dec 2020 01:30:20 GMT", "version": "v2" }, { "created": "Wed, 30 Jun 2021 06:01:17 GMT", "version": "v3" }, { "created": "Fri, 30 Jul 2021 02:36:45 GMT", "version": "v4" } ]
2021-08-02
[ [ "Gao", "Nan", "" ], [ "Xue", "Hao", "" ], [ "Shao", "Wei", "" ], [ "Zhao", "Sichen", "" ], [ "Qin", "Kyle Kai", "" ], [ "Prabowo", "Arian", "" ], [ "Rahaman", "Mohammad Saiedur", "" ], [ "Salim", "Flora D.", "" ] ]
Generative Adversarial Networks (GANs) have shown remarkable success in producing realistic-looking images in the computer vision area. Recently, GAN-based techniques have been shown to be promising for spatio-temporal applications such as trajectory prediction, event generation and time-series data imputation. While several reviews of GANs in computer vision have been presented, none has addressed the practical applications and challenges relevant to spatio-temporal data. In this paper, we have conducted a comprehensive review of the recent developments of GANs for spatio-temporal data. We summarise the application of popular GAN architectures for spatio-temporal data and the common practices for evaluating the performance of spatio-temporal applications with GANs. Finally, we point out future research directions to benefit researchers in this area.
1101.4435
Ali Fakoorian
S. Ali. A. Fakoorian and A. Lee Swindlehurst
Solutions for the MIMO Gaussian Wiretap Channel with a Cooperative Jammer
null
null
10.1109/TSP.2011.2161298
null
cs.IT math.IT
http://creativecommons.org/licenses/by/3.0/
We study the Gaussian MIMO wiretap channel with a transmitter, a legitimate receiver, an eavesdropper and an external helper, each equipped with multiple antennas. The transmitter sends confidential messages to its intended receiver, while the helper transmits jamming signals independent of the source message to confuse the eavesdropper. The jamming signal is assumed to be treated as noise at both the intended receiver and the eavesdropper. We obtain a closed-form expression for the structure of the artificial noise covariance matrix that guarantees no decrease in the secrecy capacity of the wiretap channel. We also describe how to find specific realizations of this covariance matrix expression that provide good secrecy rate performance, even when there is no non-trivial null space between the helper and the intended receiver. Unlike prior work, our approach considers the general MIMO case, and is not restricted to SISO or MISO scenarios.
[ { "created": "Mon, 24 Jan 2011 03:09:16 GMT", "version": "v1" } ]
2015-05-27
[ [ "Fakoorian", "S. Ali. A.", "" ], [ "Swindlehurst", "A. Lee", "" ] ]
We study the Gaussian MIMO wiretap channel with a transmitter, a legitimate receiver, an eavesdropper and an external helper, each equipped with multiple antennas. The transmitter sends confidential messages to its intended receiver, while the helper transmits jamming signals independent of the source message to confuse the eavesdropper. The jamming signal is assumed to be treated as noise at both the intended receiver and the eavesdropper. We obtain a closed-form expression for the structure of the artificial noise covariance matrix that guarantees no decrease in the secrecy capacity of the wiretap channel. We also describe how to find specific realizations of this covariance matrix expression that provide good secrecy rate performance, even when there is no non-trivial null space between the helper and the intended receiver. Unlike prior work, our approach considers the general MIMO case, and is not restricted to SISO or MISO scenarios.
2003.04094
Jacek Dabrowski
Mikolaj Wieczorek (1), Andrzej Michalowski (1), Anna Wroblewska (1 and 2), Jacek Dabrowski (1) ((1) Synerise, (2) Warsaw University of Technology)
A Strong Baseline for Fashion Retrieval with Person Re-Identification Models
33 pages, 14 figures
short paper in Neural Information Processing, Communications in Computer and Information Science, 2020
10.1007/978-3-030-63820-7_33
null
cs.CV cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fashion retrieval is the challenging task of finding an exact match for fashion items contained within an image. Difficulties arise from the fine-grained nature of clothing items and very large intra-class and inter-class variance. Additionally, query and source images for the task usually come from different domains - street photos and catalogue photos respectively. Due to these differences, a significant gap in quality, lighting, contrast, background clutter and item presentation exists between domains. As a result, fashion retrieval is an active field of research in both academia and industry. Inspired by recent advancements in Person Re-Identification research, we adapt leading ReID models to be used in fashion retrieval tasks. We introduce a simple baseline model for fashion retrieval, significantly outperforming previous state-of-the-art results despite a much simpler architecture. We conduct in-depth experiments on Street2Shop and DeepFashion datasets and validate our results. Finally, we propose a cross-domain (cross-dataset) evaluation method to test the robustness of fashion retrieval models.
[ { "created": "Mon, 9 Mar 2020 12:50:15 GMT", "version": "v1" } ]
2022-11-24
[ [ "Wieczorek", "Mikolaj", "", "Synerise" ], [ "Michalowski", "Andrzej", "", "Synerise" ], [ "Wroblewska", "Anna", "", "1 and\n 2" ], [ "Dabrowski", "Jacek", "", "Synerise" ] ]
Fashion retrieval is the challenging task of finding an exact match for fashion items contained within an image. Difficulties arise from the fine-grained nature of clothing items and very large intra-class and inter-class variance. Additionally, query and source images for the task usually come from different domains - street photos and catalogue photos respectively. Due to these differences, a significant gap in quality, lighting, contrast, background clutter and item presentation exists between domains. As a result, fashion retrieval is an active field of research in both academia and industry. Inspired by recent advancements in Person Re-Identification research, we adapt leading ReID models to be used in fashion retrieval tasks. We introduce a simple baseline model for fashion retrieval, significantly outperforming previous state-of-the-art results despite a much simpler architecture. We conduct in-depth experiments on Street2Shop and DeepFashion datasets and validate our results. Finally, we propose a cross-domain (cross-dataset) evaluation method to test the robustness of fashion retrieval models.
1711.06333
Jason Phipps Morgan
J. M. Taram\'on, J. P. Morgan, C. Shi, and J. Hasenclever
Generation of unstructured meshes in 2-D, 3-D, and spherical geometries with embedded high resolution sub-regions
20 pages + supplement, submitted to SIAM J. Sci. Comp
null
null
null
cs.CG cs.CE math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present 2-D, 3-D, and spherical mesh generators for the Finite Element Method (FEM) using triangular and tetrahedral elements. The mesh nodes are treated as if they were linked by virtual springs that obey Hooke's law. Given the desired length for the springs, the FEM is used to solve for the optimal nodal positions for the static equilibrium of this spring system. A 'guide-mesh' approach allows the user to create embedded high resolution sub-regions within a coarser mesh. The method converges rapidly. For example, in 3-D, the algorithm is able to refine a specific region within an unstructured tetrahedral spherical shell so that the edge-length factor $l_{0r}/l_{0c} = 1/33$ within a few iterations, where $l_{0r}$ and $l_{0c}$ are the desired spring lengths for elements inside the refined and coarse regions respectively. One use for this type of mesh is to model regional problems as a fine region within a global mesh that has no fictitious boundaries, at only a small additional computational cost. The algorithm also includes routines to locally improve the quality of the mesh and to avoid badly shaped 'sliver-like' tetrahedra.
[ { "created": "Tue, 14 Nov 2017 20:01:38 GMT", "version": "v1" } ]
2017-11-20
[ [ "Taramón", "J. M.", "" ], [ "Morgan", "J. P.", "" ], [ "Shi", "C.", "" ], [ "Hasenclever", "J.", "" ] ]
We present 2-D, 3-D, and spherical mesh generators for the Finite Element Method (FEM) using triangular and tetrahedral elements. The mesh nodes are treated as if they were linked by virtual springs that obey Hooke's law. Given the desired length for the springs, the FEM is used to solve for the optimal nodal positions for the static equilibrium of this spring system. A 'guide-mesh' approach allows the user to create embedded high resolution sub-regions within a coarser mesh. The method converges rapidly. For example, in 3-D, the algorithm is able to refine a specific region within an unstructured tetrahedral spherical shell so that the edge-length factor $l_{0r}/l_{0c} = 1/33$ within a few iterations, where $l_{0r}$ and $l_{0c}$ are the desired spring lengths for elements inside the refined and coarse regions respectively. One use for this type of mesh is to model regional problems as a fine region within a global mesh that has no fictitious boundaries, at only a small additional computational cost. The algorithm also includes routines to locally improve the quality of the mesh and to avoid badly shaped 'sliver-like' tetrahedra.
1805.06334
Lukas Liebel
Lukas Liebel and Marco K\"orner
Auxiliary Tasks in Multi-task Learning
fixed minor typesetting issue
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-task convolutional neural networks (CNNs) have shown impressive results for certain combinations of tasks, such as single-image depth estimation (SIDE) and semantic segmentation. This is achieved by pushing the network towards learning a robust representation that generalizes well to different atomic tasks. We extend this concept by adding auxiliary tasks, which are of minor relevance for the application, to the set of learned tasks. As a kind of additional regularization, they are expected to boost the performance of the ultimately desired main tasks. To study the proposed approach, we picked vision-based road scene understanding (RSU) as an exemplary application. Since multi-task learning requires specialized datasets, particularly when using extensive sets of tasks, we provide a multi-modal dataset for multi-task RSU, called synMT. More than 2.5 $\cdot$ 10^5 synthetic images, annotated with 21 different labels, were acquired from the video game Grand Theft Auto V (GTA V). Our proposed deep multi-task CNN architecture was trained on various combinations of tasks using synMT. The experiments confirmed that auxiliary tasks can indeed boost network performance, both in terms of final results and training time.
[ { "created": "Wed, 16 May 2018 13:59:20 GMT", "version": "v1" }, { "created": "Thu, 17 May 2018 16:12:16 GMT", "version": "v2" } ]
2018-11-05
[ [ "Liebel", "Lukas", "" ], [ "Körner", "Marco", "" ] ]
Multi-task convolutional neural networks (CNNs) have shown impressive results for certain combinations of tasks, such as single-image depth estimation (SIDE) and semantic segmentation. This is achieved by pushing the network towards learning a robust representation that generalizes well to different atomic tasks. We extend this concept by adding auxiliary tasks, which are of minor relevance for the application, to the set of learned tasks. As a kind of additional regularization, they are expected to boost the performance of the ultimately desired main tasks. To study the proposed approach, we picked vision-based road scene understanding (RSU) as an exemplary application. Since multi-task learning requires specialized datasets, particularly when using extensive sets of tasks, we provide a multi-modal dataset for multi-task RSU, called synMT. More than 2.5 $\cdot$ 10^5 synthetic images, annotated with 21 different labels, were acquired from the video game Grand Theft Auto V (GTA V). Our proposed deep multi-task CNN architecture was trained on various combinations of tasks using synMT. The experiments confirmed that auxiliary tasks can indeed boost network performance, both in terms of final results and training time.
2306.03197
Martin Schr\"oder
Martin Schroder
AutoScrum: Automating Project Planning Using Large Language Models
25 pages, 3 figures, demo: https://github.com/autoscrum/autoscrum
null
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Recent advancements in the field of large language models have made it possible to use language models for advanced reasoning. In this paper we leverage this ability for designing complex project plans based only on knowing the current state and the desired state. Two approaches are demonstrated - a scrum based approach and a shortcut plan approach. The scrum based approach executes an automated process of requirements gathering, user story mapping, feature identification, task decomposition and finally generates questions and search terms for seeking out domain specific information to assist with task completion. The shortcut approach looks at the most recent snapshot of the current and desired state and generates the next most reasonable task to do in order to get to the desired state as quickly as possible. In this paper we automate everything using a novel concept of "Language Programs". These are programs written in natural language designed to process input data through the language model. Guidance language is used for all LLM programs. All demo source code for this paper is available at https://github.com/autoscrum/autoscrum
[ { "created": "Mon, 5 Jun 2023 19:16:37 GMT", "version": "v1" } ]
2023-06-07
[ [ "Schroder", "Martin", "" ] ]
Recent advancements in the field of large language models have made it possible to use language models for advanced reasoning. In this paper we leverage this ability for designing complex project plans based only on knowing the current state and the desired state. Two approaches are demonstrated: a scrum-based approach and a shortcut plan approach. The scrum-based approach executes an automated process of requirements gathering, user story mapping, feature identification, task decomposition and finally generates questions and search terms for seeking out domain specific information to assist with task completion. The shortcut approach looks at the most recent snapshot of the current and desired state and generates the next most reasonable task to do in order to get to the desired state as quickly as possible. In this paper we automate everything using a novel concept of "Language Programs". These are programs written in natural language designed to process input data through the language model. Guidance language is used for all LLM programs. All demo source code for this paper is available at https://github.com/autoscrum/autoscrum
1802.05004
Khoa Nguyen
Khoa Nguyen and Benjamin Hong Meng Tan and Huaxiong Wang
Zero-Knowledge Password Policy Check from Lattices
null
ISC 2017
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Passwords are ubiquitous and most commonly used to authenticate users when logging into online services. Using high entropy passwords is critical to prevent unauthorized access and password policies emerged to enforce this requirement on passwords. However, with current methods of password storage, poor practices and server breaches have leaked many passwords to the public. To protect one's sensitive information in case of such events, passwords should be hidden from servers. Verifier-based password authenticated key exchange, proposed by Bellovin and Merrit (IEEE S\&P, 1992), allows authenticated secure channels to be established with a hash of a password (verifier). Unfortunately, this restricts password policies as passwords cannot be checked from their verifier. To address this issue, Kiefer and Manulis (ESORICS 2014) proposed zero-knowledge password policy check (ZKPPC). A ZKPPC protocol allows users to prove in zero knowledge that a hash of the user's password satisfies the password policy required by the server. Unfortunately, their proposal is not quantum resistant with the use of discrete logarithm-based cryptographic tools and there are currently no other viable alternatives. In this work, we construct the first post-quantum ZKPPC using lattice-based tools. To this end, we introduce a new randomised password hashing scheme for ASCII-based passwords and design an accompanying zero-knowledge protocol for policy compliance. Interestingly, our proposal does not follow the framework established by Kiefer and Manulis and offers an alternate construction without homomorphic commitments. Although our protocol is not ready to be used in practice, we think it is an important first step towards a quantum-resistant privacy-preserving password-based authentication and key exchange system.
[ { "created": "Wed, 14 Feb 2018 09:39:10 GMT", "version": "v1" } ]
2018-02-15
[ [ "Nguyen", "Khoa", "" ], [ "Tan", "Benjamin Hong Meng", "" ], [ "Wang", "Huaxiong", "" ] ]
Passwords are ubiquitous and most commonly used to authenticate users when logging into online services. Using high entropy passwords is critical to prevent unauthorized access and password policies emerged to enforce this requirement on passwords. However, with current methods of password storage, poor practices and server breaches have leaked many passwords to the public. To protect one's sensitive information in case of such events, passwords should be hidden from servers. Verifier-based password authenticated key exchange, proposed by Bellovin and Merritt (IEEE S\&P, 1992), allows authenticated secure channels to be established with a hash of a password (verifier). Unfortunately, this restricts password policies as passwords cannot be checked from their verifier. To address this issue, Kiefer and Manulis (ESORICS 2014) proposed zero-knowledge password policy check (ZKPPC). A ZKPPC protocol allows users to prove in zero knowledge that a hash of the user's password satisfies the password policy required by the server. Unfortunately, their proposal is not quantum resistant with the use of discrete logarithm-based cryptographic tools and there are currently no other viable alternatives. In this work, we construct the first post-quantum ZKPPC using lattice-based tools. To this end, we introduce a new randomised password hashing scheme for ASCII-based passwords and design an accompanying zero-knowledge protocol for policy compliance. Interestingly, our proposal does not follow the framework established by Kiefer and Manulis and offers an alternate construction without homomorphic commitments. Although our protocol is not ready to be used in practice, we think it is an important first step towards a quantum-resistant privacy-preserving password-based authentication and key exchange system.
1808.09170
Math\'e Zeegers
Math\'e Zeegers, Felix Lucka and Kees Joost Batenburg
A Multi-channel DART Algorithm
16 pages. 17 figures. Paper for IWCIA 2018 conference
null
10.1007/978-3-030-05288-1_13
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tomography deals with the reconstruction of objects from their projections, acquired along a range of angles. Discrete tomography is concerned with objects that consist of a small number of materials, which makes it possible to compute accurate reconstructions from highly limited projection data. For cases where the allowed intensity values in the reconstruction are known a priori, the discrete algebraic reconstruction technique (DART) has shown to yield accurate reconstructions from few projections. However, a key limitation is that the benefit of DART diminishes as the number of different materials increases. Many tomographic imaging techniques can simultaneously record tomographic data at multiple channels, each corresponding to a different weighting of the materials in the object. Whenever projection data from more than one channel is available, this additional information can potentially be exploited by the reconstruction algorithm. In this paper we present Multi-Channel DART (MC-DART), which deals effectively with multi-channel data. This class of algorithms is a generalization of DART to multiple channels and combines the information for each separate channel-reconstruction in a multi-channel segmentation step. We demonstrate that in a range of simulation experiments, MC-DART is capable of producing more accurate reconstructions compared to single-channel DART.
[ { "created": "Tue, 28 Aug 2018 08:41:55 GMT", "version": "v1" } ]
2020-09-07
[ [ "Zeegers", "Mathé", "" ], [ "Lucka", "Felix", "" ], [ "Batenburg", "Kees Joost", "" ] ]
Tomography deals with the reconstruction of objects from their projections, acquired along a range of angles. Discrete tomography is concerned with objects that consist of a small number of materials, which makes it possible to compute accurate reconstructions from highly limited projection data. For cases where the allowed intensity values in the reconstruction are known a priori, the discrete algebraic reconstruction technique (DART) has been shown to yield accurate reconstructions from few projections. However, a key limitation is that the benefit of DART diminishes as the number of different materials increases. Many tomographic imaging techniques can simultaneously record tomographic data at multiple channels, each corresponding to a different weighting of the materials in the object. Whenever projection data from more than one channel is available, this additional information can potentially be exploited by the reconstruction algorithm. In this paper we present Multi-Channel DART (MC-DART), which deals effectively with multi-channel data. This class of algorithms is a generalization of DART to multiple channels and combines the information for each separate channel-reconstruction in a multi-channel segmentation step. We demonstrate that in a range of simulation experiments, MC-DART is capable of producing more accurate reconstructions compared to single-channel DART.
1809.02681
Zhedong Zheng
Zhedong Zheng, Liang Zheng, Yi Yang, Fei Wu
Query Attack via Opposite-Direction Feature:Towards Robust Image Retrieval
12 pages, 9 figures, 3 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing works of adversarial samples focus on attacking image recognition models, while little attention is paid to the image retrieval task. In this paper, we identify two inherent challenges in applying prevailing image recognition attack methods to image retrieval. First, image retrieval demands discriminative visual features, which is significantly different from the one-hot class prediction in image recognition. Second, due to the disjoint and potentially unrelated classes between the training and test set in image retrieval, predicting the query category from predefined training classes is not accurate and leads to a sub-optimal adversarial gradient. To address these limitations, we propose a new white-box attack approach, Opposite-Direction Feature Attack (ODFA), to generate adversarial queries. Opposite-Direction Feature Attack (ODFA) effectively exploits feature-level adversarial gradients and takes advantage of feature distance in the representation space. To our knowledge, we are among the early attempts to design an attack method specifically for image retrieval. When we deploy an attacked image as the query, the true matches are prone to receive low ranks. We demonstrate through extensive experiments that (1) only crafting adversarial queries is sufficient to fool the state-of-the-art retrieval systems; (2) the proposed attack method, ODFA, leads to a higher attack success rate than classification attack methods, validating the necessity of leveraging characteristics of image retrieval; (3) the adversarial queries generated by our method have good transferability to other retrieval models without accessing their parameters, i.e.,the black-box setting.
[ { "created": "Fri, 7 Sep 2018 21:29:32 GMT", "version": "v1" }, { "created": "Tue, 20 Oct 2020 04:04:01 GMT", "version": "v2" } ]
2020-10-21
[ [ "Zheng", "Zhedong", "" ], [ "Zheng", "Liang", "" ], [ "Yang", "Yi", "" ], [ "Wu", "Fei", "" ] ]
Most existing works on adversarial samples focus on attacking image recognition models, while little attention is paid to the image retrieval task. In this paper, we identify two inherent challenges in applying prevailing image recognition attack methods to image retrieval. First, image retrieval demands discriminative visual features, which is significantly different from the one-hot class prediction in image recognition. Second, due to the disjoint and potentially unrelated classes between the training and test set in image retrieval, predicting the query category from predefined training classes is not accurate and leads to a sub-optimal adversarial gradient. To address these limitations, we propose a new white-box attack approach, Opposite-Direction Feature Attack (ODFA), to generate adversarial queries. Opposite-Direction Feature Attack (ODFA) effectively exploits feature-level adversarial gradients and takes advantage of feature distance in the representation space. To our knowledge, we are among the early attempts to design an attack method specifically for image retrieval. When we deploy an attacked image as the query, the true matches are prone to receive low ranks. We demonstrate through extensive experiments that (1) only crafting adversarial queries is sufficient to fool the state-of-the-art retrieval systems; (2) the proposed attack method, ODFA, leads to a higher attack success rate than classification attack methods, validating the necessity of leveraging characteristics of image retrieval; (3) the adversarial queries generated by our method have good transferability to other retrieval models without accessing their parameters, i.e., the black-box setting.
2009.00296
Alessandro Betti
Alessandro Betti, Marco Gori, Simone Marullo, Stefano Melacci
Developing Constrained Neural Units Over Time
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a foundational study on a constrained method that defines learning problems with Neural Networks in the context of the principle of least cognitive action, which very much resembles the principle of least action in mechanics. Starting from a general approach to enforce constraints into the dynamical laws of learning, this work focuses on an alternative way of defining Neural Networks, that is different from the majority of existing approaches. In particular, the structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data, leading to "architectural" and "input-related" constraints, respectively. The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner, that makes this study an important step toward alternative ways of processing continuous streams of data with Neural Networks. The connection with the classic Backpropagation-based update rule of the weights of networks is discussed, showing that there are conditions under which our approach degenerates to Backpropagation. Moreover, the theory is experimentally evaluated on a simple problem that allows us to deeply study several aspects of the theory itself and to show the soundness of the model.
[ { "created": "Tue, 1 Sep 2020 09:07:25 GMT", "version": "v1" } ]
2020-09-02
[ [ "Betti", "Alessandro", "" ], [ "Gori", "Marco", "" ], [ "Marullo", "Simone", "" ], [ "Melacci", "Stefano", "" ] ]
In this paper we present a foundational study on a constrained method that defines learning problems with Neural Networks in the context of the principle of least cognitive action, which very much resembles the principle of least action in mechanics. Starting from a general approach to enforce constraints into the dynamical laws of learning, this work focuses on an alternative way of defining Neural Networks, that is different from the majority of existing approaches. In particular, the structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data, leading to "architectural" and "input-related" constraints, respectively. The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner, which makes this study an important step toward alternative ways of processing continuous streams of data with Neural Networks. The connection with the classic Backpropagation-based update rule of the weights of networks is discussed, showing that there are conditions under which our approach degenerates to Backpropagation. Moreover, the theory is experimentally evaluated on a simple problem that allows us to deeply study several aspects of the theory itself and to show the soundness of the model.
2102.01397
Simon Scherrer
Simon Scherrer, Che-Yu Wu, Yu-Hsi Chiang, Benjamin Rothenberger, Daniele E. Asoni, Arish Sateesan, Jo Vliegen, Nele Mentens, Hsu-Chun Hsiao, Adrian Perrig
Low-Rate Overuse Flow Tracer (LOFT): An Efficient and Scalable Algorithm for Detecting Overuse Flows
null
null
null
null
cs.NI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current probabilistic flow-size monitoring can only detect heavy hitters (e.g., flows utilizing 10 times their permitted bandwidth), but cannot detect smaller overuse (e.g., flows utilizing 50-100% more than their permitted bandwidth). Thus, these systems lack accuracy in the challenging environment of high-throughput packet processing, where fast-memory resources are scarce. Nevertheless, many applications rely on accurate flow-size estimation, e.g. for network monitoring, anomaly detection and Quality of Service. We design, analyze, implement, and evaluate LOFT, a new approach for efficiently detecting overuse flows that achieves dramatically better properties than prior work. LOFT can detect 1.5x overuse flows in one second, whereas prior approaches fail to detect 2x overuse flows within a timeout of 300 seconds. We demonstrate LOFT's suitability for high-speed packet processing with implementations in the DPDK framework and on an FPGA.
[ { "created": "Tue, 2 Feb 2021 09:33:14 GMT", "version": "v1" } ]
2021-02-03
[ [ "Scherrer", "Simon", "" ], [ "Wu", "Che-Yu", "" ], [ "Chiang", "Yu-Hsi", "" ], [ "Rothenberger", "Benjamin", "" ], [ "Asoni", "Daniele E.", "" ], [ "Sateesan", "Arish", "" ], [ "Vliegen", "Jo", "" ], [ "Mentens", "Nele", "" ], [ "Hsiao", "Hsu-Chun", "" ], [ "Perrig", "Adrian", "" ] ]
Current probabilistic flow-size monitoring can only detect heavy hitters (e.g., flows utilizing 10 times their permitted bandwidth), but cannot detect smaller overuse (e.g., flows utilizing 50-100% more than their permitted bandwidth). Thus, these systems lack accuracy in the challenging environment of high-throughput packet processing, where fast-memory resources are scarce. Nevertheless, many applications rely on accurate flow-size estimation, e.g. for network monitoring, anomaly detection and Quality of Service. We design, analyze, implement, and evaluate LOFT, a new approach for efficiently detecting overuse flows that achieves dramatically better properties than prior work. LOFT can detect 1.5x overuse flows in one second, whereas prior approaches fail to detect 2x overuse flows within a timeout of 300 seconds. We demonstrate LOFT's suitability for high-speed packet processing with implementations in the DPDK framework and on an FPGA.
2305.10444
Wadii Boulila Prof.
Mahdi Jemmali, Loai Kayed B.Melhim, Wadii Boulila, Hajer Amdouni, Mafawez T. Alharbi
Optimizing Forest Fire Prevention: Intelligent Scheduling Algorithms for Drone-Based Surveillance System
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Given the importance of forests and their role in maintaining the ecological balance, which directly affects the planet, the climate, and the life on this planet, this research presents the problem of forest fire monitoring using drones. The forest monitoring process is performed continuously to track any changes in the monitored region within the forest. During fires, drones' capture data is used to increase the follow-up speed and enhance the control process of these fires to prevent their spread. The time factor in such problems determines the success rate of the fire extinguishing process, as appropriate data at the right time may be the decisive factor in controlling fires, preventing their spread, extinguishing them, and limiting their losses. Therefore, this research presented the problem of monitoring task scheduling for drones in the forest monitoring system. This problem is solved by developing several algorithms with the aim of minimizing the total completion time required to carry out all the drones' assigned tasks. System performance is measured by using 990 instances of three different classes. The performed experimental results indicated the effectiveness of the proposed algorithms and their ability to act efficiently to achieve the desired goal. The algorithm $RID$ achieved the best performance with a percentage rate of up to 90.3% with a time of 0.088 seconds.
[ { "created": "Sun, 14 May 2023 13:52:43 GMT", "version": "v1" } ]
2023-05-19
[ [ "Jemmali", "Mahdi", "" ], [ "Melhim", "Loai Kayed B.", "" ], [ "Boulila", "Wadii", "" ], [ "Amdouni", "Hajer", "" ], [ "Alharbi", "Mafawez T.", "" ] ]
Given the importance of forests and their role in maintaining the ecological balance, which directly affects the planet, the climate, and the life on this planet, this research presents the problem of forest fire monitoring using drones. The forest monitoring process is performed continuously to track any changes in the monitored region within the forest. During fires, data captured by drones is used to increase the follow-up speed and enhance the control process of these fires to prevent their spread. The time factor in such problems determines the success rate of the fire extinguishing process, as appropriate data at the right time may be the decisive factor in controlling fires, preventing their spread, extinguishing them, and limiting their losses. Therefore, this research presents the problem of monitoring task scheduling for drones in the forest monitoring system. This problem is solved by developing several algorithms with the aim of minimizing the total completion time required to carry out all the drones' assigned tasks. System performance is measured by using 990 instances of three different classes. The experimental results indicated the effectiveness of the proposed algorithms and their ability to act efficiently to achieve the desired goal. The algorithm $RID$ achieved the best performance with a percentage rate of up to 90.3% with a time of 0.088 seconds.
1603.02604
Ralf Steinberger
Ralf Steinberger, Aldo Podavini, Alexandra Balahur, Guillaume Jacquet, Hristo Tanev, Jens Linge, Martin Atkinson, Michele Chinosi, Vanni Zavarella, Yaniv Steiner, Erik van der Goot
Observing Trends in Automated Multilingual Media Analysis
Proceedings of the Symposium on New Frontiers of Automated Content Analysis in the Social Sciences (ACA'2015), Z\"urich, Switzerland, 1-3 July 2015 (20 pages)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Any large organisation, be it public or private, monitors the media for information to keep abreast of developments in their field of interest, and usually also to become aware of positive or negative opinions expressed towards them. At least for the written media, computer programs have become very efficient at helping the human analysts significantly in their monitoring task by gathering media reports, analysing them, detecting trends and - in some cases - even to issue early warnings or to make predictions of likely future developments. We present here trend recognition-related functionality of the Europe Media Monitor (EMM) system, which was developed by the European Commission's Joint Research Centre (JRC) for public administrations in the European Union (EU) and beyond. EMM performs large-scale media analysis in up to seventy languages and recognises various types of trends, some of them combining information from news articles written in different languages and from social media posts. EMM also lets users explore the huge amount of multilingual media data through interactive maps and graphs, allowing them to examine the data from various view points and according to multiple criteria. A lot of EMM's functionality is accessibly freely over the internet or via apps for hand-held devices.
[ { "created": "Tue, 8 Mar 2016 17:43:48 GMT", "version": "v1" } ]
2016-03-09
[ [ "Steinberger", "Ralf", "" ], [ "Podavini", "Aldo", "" ], [ "Balahur", "Alexandra", "" ], [ "Jacquet", "Guillaume", "" ], [ "Tanev", "Hristo", "" ], [ "Linge", "Jens", "" ], [ "Atkinson", "Martin", "" ], [ "Chinosi", "Michele", "" ], [ "Zavarella", "Vanni", "" ], [ "Steiner", "Yaniv", "" ], [ "van der Goot", "Erik", "" ] ]
Any large organisation, be it public or private, monitors the media for information to keep abreast of developments in their field of interest, and usually also to become aware of positive or negative opinions expressed towards them. At least for the written media, computer programs have become very efficient at helping the human analysts significantly in their monitoring task by gathering media reports, analysing them, detecting trends and - in some cases - even to issue early warnings or to make predictions of likely future developments. We present here trend recognition-related functionality of the Europe Media Monitor (EMM) system, which was developed by the European Commission's Joint Research Centre (JRC) for public administrations in the European Union (EU) and beyond. EMM performs large-scale media analysis in up to seventy languages and recognises various types of trends, some of them combining information from news articles written in different languages and from social media posts. EMM also lets users explore the huge amount of multilingual media data through interactive maps and graphs, allowing them to examine the data from various view points and according to multiple criteria. A lot of EMM's functionality is freely accessible over the internet or via apps for hand-held devices.
2211.04545
Karl-Dieter Crisman
Karl-Dieter Crisman, Abraham Holleran, Micah Martin, and Josephine Noonan
Voting on Cyclic Orders, Group Theory, and Ballots
29 pages, to be published in conference proceedings from AMS Special Session on The Mathematics of Decisions, Elections and Games, 2022
null
10.1090/conm/795/15967
null
cs.GT math.RT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A cyclic order may be thought of informally as a way to seat people around a table, perhaps for a game of chance or for dinner. Given a set of agents such as $\{A,B,C\}$, we can formalize this by defining a cyclic order as a permutation or linear order on this finite set, under the equivalence relation where $A\succ B\succ C$ is identified with both $B\succ C\succ A$ and $C\succ A\succ B$. As with other collections of sets with some structure, we might want to aggregate preferences of a (possibly different) set of voters on the set of possible ways to choose a cyclic order. However, given the combinatorial explosion of the number of full rankings of cyclic orders, one may not wish to use the usual voting machinery. This raises the question of what sort of ballots may be appropriate; a single cyclic order, a set of them, or some other ballot type? Further, there is a natural action of the group of permutations on the set of agents. A reasonable requirement for a choice procedure would be to respect this symmetry (the equivalent of neutrality in normal voting theory). In this paper we will exploit the representation theory of the symmetric group to analyze several natural types of ballots for voting on cyclic orders, and points-based procedures using such ballots. We provide a full characterization of such procedures for two quite different ballot types for $n=4$, along with the most important observations for $n=5$.
[ { "created": "Tue, 8 Nov 2022 20:32:14 GMT", "version": "v1" } ]
2024-07-23
[ [ "Crisman", "Karl-Dieter", "" ], [ "Holleran", "Abraham", "" ], [ "Martin", "Micah", "" ], [ "Noonan", "Josephine", "" ] ]
A cyclic order may be thought of informally as a way to seat people around a table, perhaps for a game of chance or for dinner. Given a set of agents such as $\{A,B,C\}$, we can formalize this by defining a cyclic order as a permutation or linear order on this finite set, under the equivalence relation where $A\succ B\succ C$ is identified with both $B\succ C\succ A$ and $C\succ A\succ B$. As with other collections of sets with some structure, we might want to aggregate preferences of a (possibly different) set of voters on the set of possible ways to choose a cyclic order. However, given the combinatorial explosion of the number of full rankings of cyclic orders, one may not wish to use the usual voting machinery. This raises the question of what sort of ballots may be appropriate; a single cyclic order, a set of them, or some other ballot type? Further, there is a natural action of the group of permutations on the set of agents. A reasonable requirement for a choice procedure would be to respect this symmetry (the equivalent of neutrality in normal voting theory). In this paper we will exploit the representation theory of the symmetric group to analyze several natural types of ballots for voting on cyclic orders, and points-based procedures using such ballots. We provide a full characterization of such procedures for two quite different ballot types for $n=4$, along with the most important observations for $n=5$.
2003.02609
Mirco Theile
Mirco Theile, Harald Bayerlein, Richard Nai, David Gesbert and Marco Caccamo
UAV Coverage Path Planning under Varying Power Constraints using Deep Reinforcement Learning
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
null
10.1109/IROS45743.2020.9340934
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coverage path planning (CPP) is the task of designing a trajectory that enables a mobile agent to travel over every point of an area of interest. We propose a new method to control an unmanned aerial vehicle (UAV) carrying a camera on a CPP mission with random start positions and multiple options for landing positions in an environment containing no-fly zones. While numerous approaches have been proposed to solve similar CPP problems, we leverage end-to-end reinforcement learning (RL) to learn a control policy that generalizes over varying power constraints for the UAV. Despite recent improvements in battery technology, the maximum flying range of small UAVs is still a severe constraint, which is exacerbated by variations in the UAV's power consumption that are hard to predict. By using map-like input channels to feed spatial information through convolutional network layers to the agent, we are able to train a double deep Q-network (DDQN) to make control decisions for the UAV, balancing limited power budget and coverage goal. The proposed method can be applied to a wide variety of environments and harmonizes complex goal structures with system constraints.
[ { "created": "Thu, 5 Mar 2020 13:43:47 GMT", "version": "v1" }, { "created": "Fri, 12 Feb 2021 13:22:45 GMT", "version": "v2" } ]
2021-02-15
[ [ "Theile", "Mirco", "" ], [ "Bayerlein", "Harald", "" ], [ "Nai", "Richard", "" ], [ "Gesbert", "David", "" ], [ "Caccamo", "Marco", "" ] ]
Coverage path planning (CPP) is the task of designing a trajectory that enables a mobile agent to travel over every point of an area of interest. We propose a new method to control an unmanned aerial vehicle (UAV) carrying a camera on a CPP mission with random start positions and multiple options for landing positions in an environment containing no-fly zones. While numerous approaches have been proposed to solve similar CPP problems, we leverage end-to-end reinforcement learning (RL) to learn a control policy that generalizes over varying power constraints for the UAV. Despite recent improvements in battery technology, the maximum flying range of small UAVs is still a severe constraint, which is exacerbated by variations in the UAV's power consumption that are hard to predict. By using map-like input channels to feed spatial information through convolutional network layers to the agent, we are able to train a double deep Q-network (DDQN) to make control decisions for the UAV, balancing limited power budget and coverage goal. The proposed method can be applied to a wide variety of environments and harmonizes complex goal structures with system constraints.
1202.3898
Peter Bro Miltersen
Kristoffer Arnsfelt Hansen, Michal Koucky, Niels Lauritzen, Peter Bro Miltersen, Elias Tsigaridas
Exact Algorithms for Solving Stochastic Games
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shapley's discounted stochastic games, Everett's recursive games and Gillette's undiscounted stochastic games are classical models of game theory describing two-player zero-sum games of potentially infinite duration. We describe algorithms for exactly solving these games.
[ { "created": "Fri, 17 Feb 2012 12:37:42 GMT", "version": "v1" } ]
2012-02-20
[ [ "Hansen", "Kristoffer Arnsfelt", "" ], [ "Koucky", "Michal", "" ], [ "Lauritzen", "Niels", "" ], [ "Miltersen", "Peter Bro", "" ], [ "Tsigaridas", "Elias", "" ] ]
Shapley's discounted stochastic games, Everett's recursive games and Gillette's undiscounted stochastic games are classical models of game theory describing two-player zero-sum games of potentially infinite duration. We describe algorithms for exactly solving these games.
2012.13744
Ryoichi Takase
Ryoichi Takase, Nobuyuki Yoshikawa, Toshisada Mariyama, and Takeshi Tsuchiya
Stability-Certified Reinforcement Learning via Spectral Normalization
null
null
null
null
cs.AI cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, two types of methods from different perspectives based on spectral normalization are described for ensuring the stability of the system controlled by a neural network. The first one is that the L2 gain of the feedback system is bounded less than 1 to satisfy the stability condition derived from the small-gain theorem. While explicitly including the stability condition, the first method may provide an insufficient performance on the neural network controller due to its strict stability condition. To overcome this difficulty, the second one is proposed, which improves the performance while ensuring the local stability with a larger region of attraction. In the second method, the stability is ensured by solving linear matrix inequalities after training the neural network controller. The spectral normalization proposed in this article improves the feasibility of the a-posteriori stability test by constructing tighter local sectors. The numerical experiments show that the second method provides enough performance compared with the first one while ensuring enough stability compared with the existing reinforcement learning algorithms.
[ { "created": "Sat, 26 Dec 2020 14:26:24 GMT", "version": "v1" } ]
2020-12-29
[ [ "Takase", "Ryoichi", "" ], [ "Yoshikawa", "Nobuyuki", "" ], [ "Mariyama", "Toshisada", "" ], [ "Tsuchiya", "Takeshi", "" ] ]
In this article, two types of methods from different perspectives based on spectral normalization are described for ensuring the stability of the system controlled by a neural network. The first one is that the L2 gain of the feedback system is bounded less than 1 to satisfy the stability condition derived from the small-gain theorem. While explicitly including the stability condition, the first method may provide an insufficient performance on the neural network controller due to its strict stability condition. To overcome this difficulty, the second one is proposed, which improves the performance while ensuring the local stability with a larger region of attraction. In the second method, the stability is ensured by solving linear matrix inequalities after training the neural network controller. The spectral normalization proposed in this article improves the feasibility of the a-posteriori stability test by constructing tighter local sectors. The numerical experiments show that the second method provides enough performance compared with the first one while ensuring enough stability compared with the existing reinforcement learning algorithms.
cs/0504036
Richard Belew
Richard K. Belew
Scientific impact quantity and quality: Analysis of two sources of bibliographic data
12 pages, 1 table, 6 figures
null
null
null
cs.IR cs.DL
null
Attempts to understand the consequence of any individual scientist's activity within the long-term trajectory of science is one of the most difficult questions within the philosophy of science. Because scientific publications play such as central role in the modern enterprise of science, bibliometric techniques which measure the ``impact'' of an individual publication as a function of the number of citations it receives from subsequent authors have provided some of the most useful empirical data on this question. Until recently, Thompson/ISI has provided the only source of large-scale ``inverted'' bibliographic data of the sort required for impact analysis. In the end of 2004, Google introduced a new service, GoogleScholar, making much of this same data available. Here we analyze 203 publications, collectively cited by more than 4000 other publications. We show surprisingly good agreement between data citation counts provided by the two services. Data quality across the systems is analyzed, and potentially useful complementarities between are considered. The additional robustness offered by multiple sources of such data promises to increase the utility of these measurements as open citation protocols and open access increase their impact on electronic scientific publication practices.
[ { "created": "Mon, 11 Apr 2005 13:52:55 GMT", "version": "v1" } ]
2007-05-23
[ [ "Belew", "Richard K.", "" ] ]
Attempts to understand the consequence of any individual scientist's activity within the long-term trajectory of science is one of the most difficult questions within the philosophy of science. Because scientific publications play such a central role in the modern enterprise of science, bibliometric techniques which measure the ``impact'' of an individual publication as a function of the number of citations it receives from subsequent authors have provided some of the most useful empirical data on this question. Until recently, Thomson/ISI has provided the only source of large-scale ``inverted'' bibliographic data of the sort required for impact analysis. In the end of 2004, Google introduced a new service, GoogleScholar, making much of this same data available. Here we analyze 203 publications, collectively cited by more than 4000 other publications. We show surprisingly good agreement between data citation counts provided by the two services. Data quality across the systems is analyzed, and potentially useful complementarities between them are considered. The additional robustness offered by multiple sources of such data promises to increase the utility of these measurements as open citation protocols and open access increase their impact on electronic scientific publication practices.
2004.12213
Wenying Ji
Yitong Li, Wenying Ji, Simaan M. AbouRizk
Automated Abstraction of Operation Processes from Unstructured Text for Simulation Modeling
null
null
null
null
cs.IR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstraction of operation processes is a fundamental step for simulation modeling. To reliably abstract an operation process, modelers rely on text information to study and understand details of operations. Aiming at reducing modelers' interpretation load and ensuring the reliability of the abstracted information, this research proposes a systematic methodology to automate the abstraction of operation processes. The methodology applies rule-based information extraction to automatically extract operation process-related information from unstructured text and creates graphical representations of operation processes using the extracted information. To demonstrate the applicability and feasibility of the proposed methodology, a text description of an earthmoving operation is used to create its corresponding graphical representation. Overall, this research enhances the state-of-the-art simulation modeling through achieving automated abstraction of operation processes, which largely reduces modelers' interpretation load and ensures the reliability of the abstracted operation processes.
[ { "created": "Sat, 25 Apr 2020 19:18:23 GMT", "version": "v1" }, { "created": "Thu, 18 Jun 2020 22:47:02 GMT", "version": "v2" }, { "created": "Sat, 4 Jul 2020 14:34:44 GMT", "version": "v3" } ]
2020-07-07
[ [ "Li", "Yitong", "" ], [ "Ji", "Wenying", "" ], [ "AbouRizk", "Simaan M.", "" ] ]
Abstraction of operation processes is a fundamental step for simulation modeling. To reliably abstract an operation process, modelers rely on text information to study and understand details of operations. Aiming at reducing modelers' interpretation load and ensuring the reliability of the abstracted information, this research proposes a systematic methodology to automate the abstraction of operation processes. The methodology applies rule-based information extraction to automatically extract operation process-related information from unstructured text and creates graphical representations of operation processes using the extracted information. To demonstrate the applicability and feasibility of the proposed methodology, a text description of an earthmoving operation is used to create its corresponding graphical representation. Overall, this research enhances the state-of-the-art simulation modeling through achieving automated abstraction of operation processes, which largely reduces modelers' interpretation load and ensures the reliability of the abstracted operation processes.
2305.14327
Da Yin
Da Yin, Xiao Liu, Fan Yin, Ming Zhong, Hritik Bansal, Jiawei Han, Kai-Wei Chang
Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation
EMNLP 2023. Code and data are available at https://github.com/WadeYin9712/Dynosaur
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Instruction tuning has emerged to enhance the capabilities of large language models (LLMs) to comprehend instructions and generate appropriate responses. Existing methods either manually annotate or employ LLM (e.g., GPT-series) to generate data for instruction tuning. However, they often overlook associating instructions with existing annotated datasets. In this paper, we propose Dynosaur, a dynamic growth paradigm for the automatic curation of instruction-tuning data. Based on the metadata of existing datasets, we use LLMs to automatically construct instruction-tuning data by identifying relevant data fields and generating appropriate instructions. By leveraging the existing annotated datasets, Dynosaur offers several advantages: 1) it reduces the API cost for generating instructions (e.g., it costs less than $12 USD by calling GPT-3.5-turbo for generating 800K instruction tuning samples; 2) it provides high-quality data for instruction tuning (e.g., it performs better than Alpaca and Flan on Super-NI and Longform with comparable data sizes); and 3) it supports the continuous improvement of models by generating instruction-tuning data when a new annotated dataset becomes available. We further investigate a continual learning scheme for learning with the ever-growing instruction-tuning dataset, and demonstrate that replaying tasks with diverse instruction embeddings not only helps mitigate forgetting issues but generalizes to unseen tasks better. Code and data are available at https://github.com/WadeYin9712/Dynosaur.
[ { "created": "Tue, 23 May 2023 17:56:26 GMT", "version": "v1" }, { "created": "Thu, 26 Oct 2023 05:10:18 GMT", "version": "v2" } ]
2023-10-27
[ [ "Yin", "Da", "" ], [ "Liu", "Xiao", "" ], [ "Yin", "Fan", "" ], [ "Zhong", "Ming", "" ], [ "Bansal", "Hritik", "" ], [ "Han", "Jiawei", "" ], [ "Chang", "Kai-Wei", "" ] ]
Instruction tuning has emerged to enhance the capabilities of large language models (LLMs) to comprehend instructions and generate appropriate responses. Existing methods either manually annotate or employ LLM (e.g., GPT-series) to generate data for instruction tuning. However, they often overlook associating instructions with existing annotated datasets. In this paper, we propose Dynosaur, a dynamic growth paradigm for the automatic curation of instruction-tuning data. Based on the metadata of existing datasets, we use LLMs to automatically construct instruction-tuning data by identifying relevant data fields and generating appropriate instructions. By leveraging the existing annotated datasets, Dynosaur offers several advantages: 1) it reduces the API cost for generating instructions (e.g., it costs less than $12 USD by calling GPT-3.5-turbo for generating 800K instruction tuning samples); 2) it provides high-quality data for instruction tuning (e.g., it performs better than Alpaca and Flan on Super-NI and Longform with comparable data sizes); and 3) it supports the continuous improvement of models by generating instruction-tuning data when a new annotated dataset becomes available. We further investigate a continual learning scheme for learning with the ever-growing instruction-tuning dataset, and demonstrate that replaying tasks with diverse instruction embeddings not only helps mitigate forgetting issues but generalizes to unseen tasks better. Code and data are available at https://github.com/WadeYin9712/Dynosaur.
2310.08967
Maxime Bouthors
Maxime Bouthors, Josep Crego and Fran\c{c}ois Yvon
Towards Example-Based NMT with Multi-Levenshtein Transformers
17 pages, EMNLP 2023 submission
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Retrieval-Augmented Machine Translation (RAMT) is attracting growing attention. This is because RAMT not only improves translation metrics, but is also assumed to implement some form of domain adaptation. In this contribution, we study another salient trait of RAMT, its ability to make translation decisions more transparent by allowing users to go back to examples that contributed to these decisions. For this, we propose a novel architecture aiming to increase this transparency. This model adapts a retrieval-augmented version of the Levenshtein Transformer and makes it amenable to simultaneously edit multiple fuzzy matches found in memory. We discuss how to perform training and inference in this model, based on multi-way alignment algorithms and imitation learning. Our experiments show that editing several examples positively impacts translation scores, notably increasing the number of target spans that are copied from existing instances.
[ { "created": "Fri, 13 Oct 2023 09:18:57 GMT", "version": "v1" } ]
2023-10-16
[ [ "Bouthors", "Maxime", "" ], [ "Crego", "Josep", "" ], [ "Yvon", "François", "" ] ]
Retrieval-Augmented Machine Translation (RAMT) is attracting growing attention. This is because RAMT not only improves translation metrics, but is also assumed to implement some form of domain adaptation. In this contribution, we study another salient trait of RAMT, its ability to make translation decisions more transparent by allowing users to go back to examples that contributed to these decisions. For this, we propose a novel architecture aiming to increase this transparency. This model adapts a retrieval-augmented version of the Levenshtein Transformer and makes it amenable to simultaneously edit multiple fuzzy matches found in memory. We discuss how to perform training and inference in this model, based on multi-way alignment algorithms and imitation learning. Our experiments show that editing several examples positively impacts translation scores, notably increasing the number of target spans that are copied from existing instances.
2403.17831
Thomas Wolgast
Thomas Wolgast and Astrid Nie{\ss}e
Learning the Optimal Power Flow: Environment Design Matters
null
null
null
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
To solve the optimal power flow (OPF) problem, reinforcement learning (RL) emerges as a promising new approach. However, the RL-OPF literature is strongly divided regarding the exact formulation of the OPF problem as an RL environment. In this work, we collect and implement diverse environment design decisions from the literature regarding training data, observation space, episode definition, and reward function choice. In an experimental analysis, we show the significant impact of these environment design options on RL-OPF training performance. Further, we derive some first recommendations regarding the choice of these design decisions. The created environment framework is fully open-source and can serve as a benchmark for future research in the RL-OPF field.
[ { "created": "Tue, 26 Mar 2024 16:13:55 GMT", "version": "v1" } ]
2024-03-27
[ [ "Wolgast", "Thomas", "" ], [ "Nieße", "Astrid", "" ] ]
To solve the optimal power flow (OPF) problem, reinforcement learning (RL) emerges as a promising new approach. However, the RL-OPF literature is strongly divided regarding the exact formulation of the OPF problem as an RL environment. In this work, we collect and implement diverse environment design decisions from the literature regarding training data, observation space, episode definition, and reward function choice. In an experimental analysis, we show the significant impact of these environment design options on RL-OPF training performance. Further, we derive some first recommendations regarding the choice of these design decisions. The created environment framework is fully open-source and can serve as a benchmark for future research in the RL-OPF field.
1504.05727
Xiaofu Wu Dr
Xiaofu Wu and Zhen Yang
Coding vs. Spreading for Narrow-Band Interference Suppression
13 pages, 9 figures, accepted for publication in IEEE Trans. Veh. Tech.
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of active narrow-band interference (NBI) suppression in direct-sequence spread-spectrum (DS-SS) communications has been extensively studied. In this paper, we address the problem of optimum coding-spreading tradeoff for NBI suppression. With maximum likelihood decoding, we first derive upper bounds on the error probability of coded systems in the presence of a special class of NBI, namely, multi-tone interference with orthogonal signatures. By employing the well-developed bounding techniques, we show there is no advantage in spreading, and hence a low-rate full coding approach is always preferred. Then, we propose a practical low-rate turbo-Hadamard coding approach, in which the NBI suppression is naturally achieved through iterative decoding. The proposed turbo-Hadamard coding approach employs a kind of coded spread-spectrum signalling with time-varying spreading sequences, which is sharply compared with the code-aided DS-SS approach. With a spreading sequence of length 32 and a fixed bandwidth allocated for both approaches, it is shown through extensive simulations that the proposed turbo-Hadamard coding approach outperforms the code-aided DS-SS approach for three types of NBI, even when its transmission information rate is about 5 times higher than that of the code-aided DS-SS approach. The use of the proposed turbo-Hadmard coding approach in multipath fading channels is also discussed.
[ { "created": "Wed, 22 Apr 2015 10:54:06 GMT", "version": "v1" } ]
2015-04-23
[ [ "Wu", "Xiaofu", "" ], [ "Yang", "Zhen", "" ] ]
The use of active narrow-band interference (NBI) suppression in direct-sequence spread-spectrum (DS-SS) communications has been extensively studied. In this paper, we address the problem of optimum coding-spreading tradeoff for NBI suppression. With maximum likelihood decoding, we first derive upper bounds on the error probability of coded systems in the presence of a special class of NBI, namely, multi-tone interference with orthogonal signatures. By employing the well-developed bounding techniques, we show there is no advantage in spreading, and hence a low-rate full coding approach is always preferred. Then, we propose a practical low-rate turbo-Hadamard coding approach, in which the NBI suppression is naturally achieved through iterative decoding. The proposed turbo-Hadamard coding approach employs a kind of coded spread-spectrum signalling with time-varying spreading sequences, which is sharply compared with the code-aided DS-SS approach. With a spreading sequence of length 32 and a fixed bandwidth allocated for both approaches, it is shown through extensive simulations that the proposed turbo-Hadamard coding approach outperforms the code-aided DS-SS approach for three types of NBI, even when its transmission information rate is about 5 times higher than that of the code-aided DS-SS approach. The use of the proposed turbo-Hadamard coding approach in multipath fading channels is also discussed.
2109.10582
Tamara Mueller
Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Friederike Jungmann, Daniel Rueckert, Georgios Kaissis
Partial sensitivity analysis in differential privacy
null
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Differential privacy (DP) allows the quantification of privacy loss when the data of individuals is subjected to algorithmic processing such as machine learning, as well as the provision of objective privacy guarantees. However, while techniques such as individual R\'enyi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss. Here we extend the view of individual RDP by introducing a new concept we call partial sensitivity, which leverages symbolic automatic differentiation to determine the influence of each input feature on the gradient norm of a function. We experimentally evaluate our approach on queries over private databases, where we obtain a feature-level contribution of private attributes to the DP guarantee of individuals. Furthermore, we explore our findings in the context of neural network training on synthetic data by investigating the partial sensitivity of input pixels on an image classification task.
[ { "created": "Wed, 22 Sep 2021 08:29:16 GMT", "version": "v1" }, { "created": "Sun, 28 Nov 2021 08:29:30 GMT", "version": "v2" } ]
2021-11-30
[ [ "Mueller", "Tamara T.", "" ], [ "Ziller", "Alexander", "" ], [ "Usynin", "Dmitrii", "" ], [ "Knolle", "Moritz", "" ], [ "Jungmann", "Friederike", "" ], [ "Rueckert", "Daniel", "" ], [ "Kaissis", "Georgios", "" ] ]
Differential privacy (DP) allows the quantification of privacy loss when the data of individuals is subjected to algorithmic processing such as machine learning, as well as the provision of objective privacy guarantees. However, while techniques such as individual R\'enyi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss. Here we extend the view of individual RDP by introducing a new concept we call partial sensitivity, which leverages symbolic automatic differentiation to determine the influence of each input feature on the gradient norm of a function. We experimentally evaluate our approach on queries over private databases, where we obtain a feature-level contribution of private attributes to the DP guarantee of individuals. Furthermore, we explore our findings in the context of neural network training on synthetic data by investigating the partial sensitivity of input pixels on an image classification task.
1904.05717
Tyler Smith PhD
Tyler M. Smith and Robert A. van de Geijn
The MOMMS Family of Matrix Multiplication Algorithms
null
null
null
null
cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the ratio between the rate of computation and rate with which data can be retrieved from various layers of memory continues to deteriorate, a question arises: Will the current best algorithms for computing matrix-matrix multiplication on future CPUs continue to be (near) optimal? This paper provides compelling analytical and empirical evidence that the answer is "no". The analytical results guide us to a new family of algorithms of which the current state-of-the-art "Goto's algorithm" is but one member. The empirical results, on architectures that were custom built to reduce the amount of bandwidth to main memory, show that under different circumstances, different and particular members of the family become more superior. Thus, this family will likely start playing a prominent role going forward.
[ { "created": "Thu, 11 Apr 2019 14:25:27 GMT", "version": "v1" } ]
2019-04-12
[ [ "Smith", "Tyler M.", "" ], [ "van de Geijn", "Robert A.", "" ] ]
As the ratio between the rate of computation and rate with which data can be retrieved from various layers of memory continues to deteriorate, a question arises: Will the current best algorithms for computing matrix-matrix multiplication on future CPUs continue to be (near) optimal? This paper provides compelling analytical and empirical evidence that the answer is "no". The analytical results guide us to a new family of algorithms of which the current state-of-the-art "Goto's algorithm" is but one member. The empirical results, on architectures that were custom built to reduce the amount of bandwidth to main memory, show that under different circumstances, different and particular members of the family become more superior. Thus, this family will likely start playing a prominent role going forward.
2102.01771
Praneeth Kumar Vippathalla
Praneeth Kumar Vippathalla, Chung Chan, Navin Kashyap and Qiaoqiao Zhou
Secret Key Agreement and Secure Omniscience of Tree-PIN Source with Linear Wiretapper
13 pages, 2 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While the wiretap secret key capacity remains unknown for general source models even in the two-user case, we obtained a single-letter characterization for a large class of multi-user source models with a linear wiretapper who can observe any linear combinations of the source. We introduced the idea of irreducible sources to show existence of an optimal communication scheme that achieves perfect omniscience with minimum leakage of information to the wiretapper. This implies a duality between the problems of wiretap secret key agreement and secure omniscience, and such duality potentially holds for more general sources.
[ { "created": "Tue, 2 Feb 2021 21:46:39 GMT", "version": "v1" } ]
2021-02-04
[ [ "Vippathalla", "Praneeth Kumar", "" ], [ "Chan", "Chung", "" ], [ "Kashyap", "Navin", "" ], [ "Zhou", "Qiaoqiao", "" ] ]
While the wiretap secret key capacity remains unknown for general source models even in the two-user case, we obtained a single-letter characterization for a large class of multi-user source models with a linear wiretapper who can observe any linear combinations of the source. We introduced the idea of irreducible sources to show existence of an optimal communication scheme that achieves perfect omniscience with minimum leakage of information to the wiretapper. This implies a duality between the problems of wiretap secret key agreement and secure omniscience, and such duality potentially holds for more general sources.
1812.09887
Stefan Schmid
Harald R\"acke and Stefan Schmid
Compact Oblivious Routing
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Oblivious routing is an attractive paradigm for large distributed systems in which centralized control and frequent reconfigurations are infeasible or undesired (e.g., costly). Over the last almost 20 years, much progress has been made in devising oblivious routing schemes that guarantee close to optimal load and also algorithms for constructing such schemes efficiently have been designed. However, a common drawback of existing oblivious routing schemes is that they are not compact: they require large routing tables (of polynomial size), which does not scale. This paper presents the first oblivious routing scheme which guarantees close to optimal load and is compact at the same time -- requiring routing tables of polylogarithmic size. Our algorithm maintains the polynomial runtime and polylogarithmic competitive ratio of existing algorithms, and is hence particularly well-suited for emerging large-scale networks.
[ { "created": "Mon, 24 Dec 2018 10:48:44 GMT", "version": "v1" } ]
2018-12-27
[ [ "Räcke", "Harald", "" ], [ "Schmid", "Stefan", "" ] ]
Oblivious routing is an attractive paradigm for large distributed systems in which centralized control and frequent reconfigurations are infeasible or undesired (e.g., costly). Over the last almost 20 years, much progress has been made in devising oblivious routing schemes that guarantee close to optimal load and also algorithms for constructing such schemes efficiently have been designed. However, a common drawback of existing oblivious routing schemes is that they are not compact: they require large routing tables (of polynomial size), which does not scale. This paper presents the first oblivious routing scheme which guarantees close to optimal load and is compact at the same time -- requiring routing tables of polylogarithmic size. Our algorithm maintains the polynomial runtime and polylogarithmic competitive ratio of existing algorithms, and is hence particularly well-suited for emerging large-scale networks.
1704.01045
Christoph Martin
Christoph Martin and Peter Niemeyer
Estimating the sensitivity of centrality measures w.r.t. measurement errors
null
Network Science. 2019;7(2):180-195
10.1017/nws.2019.12
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most network studies rely on an observed network that differs from the underlying network which is obfuscated by measurement errors. It is well known that such errors can have a severe impact on the reliability of network metrics, especially on centrality measures: a more central node in the observed network might be less central in the underlying network. We introduce a metric for the reliability of centrality measures -- called sensitivity. Given two randomly chosen nodes, the sensitivity means the probability that the more central node in the observed network is also more central in the underlying network. The sensitivity concept relies on the underlying network which is usually not accessible. Therefore, we propose two methods to approximate the sensitivity. The iterative method, which simulates possible underlying networks for the estimation and the imputation method, which uses the sensitivity of the observed network for the estimation. Both methods rely on the observed network and assumptions about the underlying type of measurement error (e.g., the percentage of missing edges or nodes). Our experiments on real-world networks and random graphs show that the iterative method performs well in many cases. In contrast, the imputation method does not yield useful estimations for networks other than Erd\H{o}s-R\'enyi graphs.
[ { "created": "Tue, 4 Apr 2017 15:00:10 GMT", "version": "v1" }, { "created": "Wed, 20 Sep 2017 13:20:26 GMT", "version": "v2" } ]
2020-01-09
[ [ "Martin", "Christoph", "" ], [ "Niemeyer", "Peter", "" ] ]
Most network studies rely on an observed network that differs from the underlying network which is obfuscated by measurement errors. It is well known that such errors can have a severe impact on the reliability of network metrics, especially on centrality measures: a more central node in the observed network might be less central in the underlying network. We introduce a metric for the reliability of centrality measures -- called sensitivity. Given two randomly chosen nodes, the sensitivity means the probability that the more central node in the observed network is also more central in the underlying network. The sensitivity concept relies on the underlying network which is usually not accessible. Therefore, we propose two methods to approximate the sensitivity. The iterative method, which simulates possible underlying networks for the estimation and the imputation method, which uses the sensitivity of the observed network for the estimation. Both methods rely on the observed network and assumptions about the underlying type of measurement error (e.g., the percentage of missing edges or nodes). Our experiments on real-world networks and random graphs show that the iterative method performs well in many cases. In contrast, the imputation method does not yield useful estimations for networks other than Erd\H{o}s-R\'enyi graphs.
1904.02767
Reno Kriz
Reno Kriz, Jo\~ao Sedoc, Marianna Apidianaki, Carolina Zheng, Gaurav Kumar, Eleni Miltsakaki and Chris Callison-Burch
Complexity-Weighted Loss and Diverse Reranking for Sentence Simplification
11 pages, North American Association of Computational Linguistics (NAACL 2019)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sentence simplification is the task of rewriting texts so they are easier to understand. Recent research has applied sequence-to-sequence (Seq2Seq) models to this task, focusing largely on training-time improvements via reinforcement learning and memory augmentation. One of the main problems with applying generic Seq2Seq models for simplification is that these models tend to copy directly from the original sentence, resulting in outputs that are relatively long and complex. We aim to alleviate this issue through the use of two main techniques. First, we incorporate content word complexities, as predicted with a leveled word complexity model, into our loss function during training. Second, we generate a large set of diverse candidate simplifications at test time, and rerank these to promote fluency, adequacy, and simplicity. Here, we measure simplicity through a novel sentence complexity model. These extensions allow our models to perform competitively with state-of-the-art systems while generating simpler sentences. We report standard automatic and human evaluation metrics.
[ { "created": "Thu, 4 Apr 2019 19:47:17 GMT", "version": "v1" } ]
2019-04-08
[ [ "Kriz", "Reno", "" ], [ "Sedoc", "João", "" ], [ "Apidianaki", "Marianna", "" ], [ "Zheng", "Carolina", "" ], [ "Kumar", "Gaurav", "" ], [ "Miltsakaki", "Eleni", "" ], [ "Callison-Burch", "Chris", "" ] ]
Sentence simplification is the task of rewriting texts so they are easier to understand. Recent research has applied sequence-to-sequence (Seq2Seq) models to this task, focusing largely on training-time improvements via reinforcement learning and memory augmentation. One of the main problems with applying generic Seq2Seq models for simplification is that these models tend to copy directly from the original sentence, resulting in outputs that are relatively long and complex. We aim to alleviate this issue through the use of two main techniques. First, we incorporate content word complexities, as predicted with a leveled word complexity model, into our loss function during training. Second, we generate a large set of diverse candidate simplifications at test time, and rerank these to promote fluency, adequacy, and simplicity. Here, we measure simplicity through a novel sentence complexity model. These extensions allow our models to perform competitively with state-of-the-art systems while generating simpler sentences. We report standard automatic and human evaluation metrics.
1702.08199
Shenghui Wang
Rob Koopman, Shenghui Wang
Mutual Information based labelling and comparing clusters
Special Issue of Scientometrics: Same data - different results? Towards a comparative approach to the identification of thematic structures in science
null
10.1007/s11192-017-2305-2
null
cs.IR cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
After a clustering solution is generated automatically, labelling these clusters becomes important to help understand the results. In this paper, we propose to use a Mutual Information based method to label clusters of journal articles. Topical terms which have the highest Normalised Mutual Information (NMI) with a certain cluster are selected to be the labels of the cluster. Discussion of the labelling technique with a domain expert was used as a check that the labels are discriminating not only lexically but also semantically. Based on a common set of topical terms, we also propose to generate lexical fingerprints as a representation of individual clusters. Eventually, we visualise and compare these fingerprints of different clusters from either one clustering solution or different ones.
[ { "created": "Mon, 27 Feb 2017 09:23:46 GMT", "version": "v1" } ]
2017-02-28
[ [ "Koopman", "Rob", "" ], [ "Wang", "Shenghui", "" ] ]
After a clustering solution is generated automatically, labelling these clusters becomes important to help understand the results. In this paper, we propose to use a Mutual Information based method to label clusters of journal articles. Topical terms which have the highest Normalised Mutual Information (NMI) with a certain cluster are selected to be the labels of the cluster. Discussion of the labelling technique with a domain expert was used as a check that the labels are discriminating not only lexically but also semantically. Based on a common set of topical terms, we also propose to generate lexical fingerprints as a representation of individual clusters. Eventually, we visualise and compare these fingerprints of different clusters from either one clustering solution or different ones.
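The NMI selection step above can be sketched as follows: score each candidate term by the normalized mutual information between "term occurs in the document" and "document belongs to the cluster", then pick the highest-scoring terms as labels. The normalization by the mean entropy is an assumption here; the paper may use a different variant.

```python
import math
from collections import Counter

def nmi(x, y):
    """Normalized mutual information between two discrete label
    sequences: MI divided by the arithmetic mean of the entropies
    (one of several common normalizations -- an assumption here)."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    mi = sum(c / n * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
             for (a, b), c in pxy.items())
    hx = -sum(c / n * math.log(c / n) for c in px.values())
    hy = -sum(c / n * math.log(c / n) for c in py.values())
    denom = (hx + hy) / 2
    return mi / denom if denom else 0.0

# Toy corpus: the term occurs exactly in the documents of cluster 0,
# so it is a perfectly discriminating label candidate.
docs_have_term = [1, 1, 1, 0, 0, 0]
doc_cluster = [0, 0, 0, 1, 1, 1]
best_label_score = nmi(docs_have_term, doc_cluster)
```

Ranking all candidate terms by this score and keeping the top few per cluster yields the labels described in the abstract.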
1903.07788
Jianxin Wu
Kun Yi and Jianxin Wu
Probabilistic End-to-end Noise Correction for Learning with Noisy Labels
CVPR 2019
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has achieved excellent performance in various computer vision tasks, but requires a lot of training examples with clean labels. It is easy to collect a dataset with noisy labels, but such noise makes networks overfit seriously and accuracies drop dramatically. To address this problem, we propose an end-to-end framework called PENCIL, which can update both network parameters and label estimations as label distributions. PENCIL is independent of the backbone network structure and does not need an auxiliary clean dataset or prior information about noise, thus it is more general and robust than existing methods and is easy to apply. PENCIL outperforms previous state-of-the-art methods by large margins on both synthetic and real-world datasets with different noise types and noise rates. Experiments show that PENCIL is robust on clean datasets, too.
[ { "created": "Tue, 19 Mar 2019 01:38:08 GMT", "version": "v1" } ]
2019-03-20
[ [ "Yi", "Kun", "" ], [ "Wu", "Jianxin", "" ] ]
Deep learning has achieved excellent performance in various computer vision tasks, but requires a lot of training examples with clean labels. It is easy to collect a dataset with noisy labels, but such noise makes networks overfit seriously and accuracies drop dramatically. To address this problem, we propose an end-to-end framework called PENCIL, which can update both network parameters and label estimations as label distributions. PENCIL is independent of the backbone network structure and does not need an auxiliary clean dataset or prior information about noise, thus it is more general and robust than existing methods and is easy to apply. PENCIL outperforms previous state-of-the-art methods by large margins on both synthetic and real-world datasets with different noise types and noise rates. Experiments show that PENCIL is robust on clean datasets, too.
2212.00616
Tao Ge
Tao Ge, Jing Hu, Li Dong, Shaoguang Mao, Yan Xia, Xun Wang, Si-Qing Chen, Furu Wei
Extensible Prompts for Language Models on Zero-shot Language Style Customization
Accepted by NeurIPS 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We propose eXtensible Prompt (X-Prompt) for prompting a large language model (LLM) beyond natural language (NL). X-Prompt instructs an LLM with not only NL but also an extensible vocabulary of imaginary words. Registering new imaginary words allows us to instruct the LLM to comprehend concepts that are difficult to describe with NL words, thereby making a prompt more descriptive. Also, these imaginary words are designed to be out-of-distribution (OOD) robust so that they can be (re)used like NL words in various prompts, distinguishing X-Prompt from soft prompts, which are designed to fit in-distribution data. We propose context-augmented learning (CAL) to learn imaginary words for general usability, enabling them to work properly in OOD (unseen) prompts. We experiment with X-Prompt on zero-shot language style customization as a case study. The promising results of X-Prompt demonstrate its potential to facilitate advanced interaction beyond the natural language interface, bridging the communication gap between humans and LLMs.
[ { "created": "Thu, 1 Dec 2022 16:11:56 GMT", "version": "v1" }, { "created": "Thu, 30 Nov 2023 20:11:14 GMT", "version": "v2" } ]
2023-12-04
[ [ "Ge", "Tao", "" ], [ "Hu", "Jing", "" ], [ "Dong", "Li", "" ], [ "Mao", "Shaoguang", "" ], [ "Xia", "Yan", "" ], [ "Wang", "Xun", "" ], [ "Chen", "Si-Qing", "" ], [ "Wei", "Furu", "" ] ]
We propose eXtensible Prompt (X-Prompt) for prompting a large language model (LLM) beyond natural language (NL). X-Prompt instructs an LLM with not only NL but also an extensible vocabulary of imaginary words. Registering new imaginary words allows us to instruct the LLM to comprehend concepts that are difficult to describe with NL words, thereby making a prompt more descriptive. Also, these imaginary words are designed to be out-of-distribution (OOD) robust so that they can be (re)used like NL words in various prompts, distinguishing X-Prompt from soft prompts, which are designed to fit in-distribution data. We propose context-augmented learning (CAL) to learn imaginary words for general usability, enabling them to work properly in OOD (unseen) prompts. We experiment with X-Prompt on zero-shot language style customization as a case study. The promising results of X-Prompt demonstrate its potential to facilitate advanced interaction beyond the natural language interface, bridging the communication gap between humans and LLMs.
2005.03566
Bo Zhang
Xiangxiang Chu and Bo Zhang
Noisy Differentiable Architecture Search
BMVC 2021
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simplicity is the ultimate sophistication. Differentiable Architecture Search (DARTS) has now become one of the mainstream paradigms of neural architecture search. However, it largely suffers from the well-known performance collapse issue due to the aggregation of skip connections. It is thought to have overly benefited from the residual structure which accelerates the information flow. To weaken this impact, we propose to inject unbiased random noise to impede the flow. We name this novel approach NoisyDARTS. In effect, a network optimizer should perceive this difficulty at each training step and refrain from overshooting, especially on skip connections. In the long run, since we add no bias to the gradient in terms of expectation, it is still likely to converge to the right solution area. We also prove that the injected noise plays a role in smoothing the loss landscape, which makes the optimization easier. Our method features extreme simplicity and acts as a new strong baseline. We perform extensive experiments across various search spaces, datasets, and tasks, where we robustly achieve state-of-the-art results. Our code is available at https://github.com/xiaomi-automl/NoisyDARTS.
[ { "created": "Thu, 7 May 2020 15:53:52 GMT", "version": "v1" }, { "created": "Tue, 19 May 2020 14:42:33 GMT", "version": "v2" }, { "created": "Sun, 17 Oct 2021 14:57:46 GMT", "version": "v3" } ]
2021-10-19
[ [ "Chu", "Xiangxiang", "" ], [ "Zhang", "Bo", "" ] ]
Simplicity is the ultimate sophistication. Differentiable Architecture Search (DARTS) has now become one of the mainstream paradigms of neural architecture search. However, it largely suffers from the well-known performance collapse issue due to the aggregation of skip connections. It is thought to have overly benefited from the residual structure which accelerates the information flow. To weaken this impact, we propose to inject unbiased random noise to impede the flow. We name this novel approach NoisyDARTS. In effect, a network optimizer should perceive this difficulty at each training step and refrain from overshooting, especially on skip connections. In the long run, since we add no bias to the gradient in terms of expectation, it is still likely to converge to the right solution area. We also prove that the injected noise plays a role in smoothing the loss landscape, which makes the optimization easier. Our method features extreme simplicity and acts as a new strong baseline. We perform extensive experiments across various search spaces, datasets, and tasks, where we robustly achieve state-of-the-art results. Our code is available at https://github.com/xiaomi-automl/NoisyDARTS.
1611.03899
Richard Mann
Richard P. Mann and Dirk Helbing
Optimal incentives for collective intelligence
null
PNAS 2017 114 (20) 5077-5082
10.1073/pnas.1618722114
null
cs.GT math.DS stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collective intelligence is the ability of a group to perform more effectively than any individual alone. Diversity among group members is a key condition for the emergence of collective intelligence, but maintaining diversity is challenging in the face of social pressure to imitate one's peers. We investigate the role incentives play in maintaining useful diversity through an evolutionary game-theoretic model of collective prediction. We show that market-based incentive systems produce herding effects, reduce information available to the group and suppress collective intelligence. In response, we propose a new incentive scheme that rewards accurate minority predictions, and show that this produces optimal diversity and collective predictive accuracy. We conclude that real-world systems should reward those who have demonstrated accuracy when majority opinion has been in error.
[ { "created": "Fri, 11 Nov 2016 22:21:24 GMT", "version": "v1" }, { "created": "Tue, 17 Oct 2017 12:18:41 GMT", "version": "v2" } ]
2017-10-18
[ [ "Mann", "Richard P.", "" ], [ "Helbing", "Dirk", "" ] ]
Collective intelligence is the ability of a group to perform more effectively than any individual alone. Diversity among group members is a key condition for the emergence of collective intelligence, but maintaining diversity is challenging in the face of social pressure to imitate one's peers. We investigate the role incentives play in maintaining useful diversity through an evolutionary game-theoretic model of collective prediction. We show that market-based incentive systems produce herding effects, reduce information available to the group and suppress collective intelligence. In response, we propose a new incentive scheme that rewards accurate minority predictions, and show that this produces optimal diversity and collective predictive accuracy. We conclude that real-world systems should reward those who have demonstrated accuracy when majority opinion has been in error.
1910.11547
Yiheng Liu
Yiheng Liu, Wengang Zhou, Jianzhuang Liu, Guojun Qi, Qi Tian, Houqiang Li
An End-to-End Foreground-Aware Network for Person Re-Identification
Accepted to IEEE Transactions on Image Processing (TIP), 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Person re-identification is a crucial task of identifying pedestrians of interest across multiple surveillance camera views. In person re-identification, a pedestrian is usually represented with features extracted from a rectangular image region that inevitably contains the scene background, which incurs ambiguity to distinguish different pedestrians and degrades the accuracy. To this end, we propose an end-to-end foreground-aware network to discriminate foreground from background by learning a soft mask for person re-identification. In our method, in addition to the pedestrian ID as supervision for foreground, we introduce the camera ID of each pedestrian image for background modeling. The foreground branch and the background branch are optimized collaboratively. By presenting a target attention loss, the pedestrian features extracted from the foreground branch become more insensitive to the backgrounds, which greatly reduces the negative impacts of changing backgrounds on matching an identical pedestrian across different camera views. Notably, in contrast to existing methods, our approach does not require any additional dataset to train a human landmark detector or a segmentation model for locating the background regions. The experimental results conducted on three challenging datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, demonstrate the effectiveness of our approach.
[ { "created": "Fri, 25 Oct 2019 06:43:19 GMT", "version": "v1" }, { "created": "Sat, 6 Mar 2021 09:42:43 GMT", "version": "v2" } ]
2021-03-09
[ [ "Liu", "Yiheng", "" ], [ "Zhou", "Wengang", "" ], [ "Liu", "Jianzhuang", "" ], [ "Qi", "Guojun", "" ], [ "Tian", "Qi", "" ], [ "Li", "Houqiang", "" ] ]
Person re-identification is a crucial task of identifying pedestrians of interest across multiple surveillance camera views. In person re-identification, a pedestrian is usually represented with features extracted from a rectangular image region that inevitably contains the scene background, which incurs ambiguity to distinguish different pedestrians and degrades the accuracy. To this end, we propose an end-to-end foreground-aware network to discriminate foreground from background by learning a soft mask for person re-identification. In our method, in addition to the pedestrian ID as supervision for foreground, we introduce the camera ID of each pedestrian image for background modeling. The foreground branch and the background branch are optimized collaboratively. By presenting a target attention loss, the pedestrian features extracted from the foreground branch become more insensitive to the backgrounds, which greatly reduces the negative impacts of changing backgrounds on matching an identical pedestrian across different camera views. Notably, in contrast to existing methods, our approach does not require any additional dataset to train a human landmark detector or a segmentation model for locating the background regions. The experimental results conducted on three challenging datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, demonstrate the effectiveness of our approach.
1710.00675
Martin Chmel\'ik
Krishnendu Chatterjee, Martin Chmelik, Ufuk Topcu
Sensor Synthesis for POMDPs with Reachability Objectives
arXiv admin note: text overlap with arXiv:1511.08456
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Partially observable Markov decision processes (POMDPs) are widely used in probabilistic planning problems in which an agent interacts with an environment using noisy and imprecise sensors. We study a setting in which the sensors are only partially defined and the goal is to synthesize "weakest" additional sensors, such that in the resulting POMDP, there is a small-memory policy for the agent that almost-surely (with probability~1) satisfies a reachability objective. We show that the problem is NP-complete, and present a symbolic algorithm by encoding the problem into SAT instances. We illustrate trade-offs between the amount of memory of the policy and the number of additional sensors on a simple example. We have implemented our approach and consider three classical POMDP examples from the literature, and show that in all the examples the number of sensors can be significantly decreased (as compared to the existing solutions in the literature) without increasing the complexity of the policies.
[ { "created": "Fri, 29 Sep 2017 08:27:24 GMT", "version": "v1" } ]
2017-10-03
[ [ "Chatterjee", "Krishnendu", "" ], [ "Chmelik", "Martin", "" ], [ "Topcu", "Ufuk", "" ] ]
Partially observable Markov decision processes (POMDPs) are widely used in probabilistic planning problems in which an agent interacts with an environment using noisy and imprecise sensors. We study a setting in which the sensors are only partially defined and the goal is to synthesize "weakest" additional sensors, such that in the resulting POMDP, there is a small-memory policy for the agent that almost-surely (with probability~1) satisfies a reachability objective. We show that the problem is NP-complete, and present a symbolic algorithm by encoding the problem into SAT instances. We illustrate trade-offs between the amount of memory of the policy and the number of additional sensors on a simple example. We have implemented our approach and consider three classical POMDP examples from the literature, and show that in all the examples the number of sensors can be significantly decreased (as compared to the existing solutions in the literature) without increasing the complexity of the policies.
2211.10193
Ibrahim Alabdulmohsin
Amr Khalifa, Michael C. Mozer, Hanie Sedghi, Behnam Neyshabur, Ibrahim Alabdulmohsin
Layer-Stack Temperature Scaling
10 pages, 7 figures, 3 tables
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent works demonstrate that early layers in a neural network contain useful information for prediction. Inspired by this, we show that extending temperature scaling across all layers improves both calibration and accuracy. We call this procedure "layer-stack temperature scaling" (LATES). Informally, LATES grants each layer a weighted vote during inference. We evaluate it on five popular convolutional neural network architectures both in- and out-of-distribution and observe a consistent improvement over temperature scaling in terms of accuracy, calibration, and AUC. All conclusions are supported by comprehensive statistical analyses. Since LATES neither retrains the architecture nor introduces many more parameters, its advantages can be reaped without requiring additional data beyond what is used in temperature scaling. Finally, we show that combining LATES with Monte Carlo Dropout matches state-of-the-art results on CIFAR10/100.
[ { "created": "Fri, 18 Nov 2022 12:34:00 GMT", "version": "v1" } ]
2022-11-21
[ [ "Khalifa", "Amr", "" ], [ "Mozer", "Michael C.", "" ], [ "Sedghi", "Hanie", "" ], [ "Neyshabur", "Behnam", "" ], [ "Alabdulmohsin", "Ibrahim", "" ] ]
Recent works demonstrate that early layers in a neural network contain useful information for prediction. Inspired by this, we show that extending temperature scaling across all layers improves both calibration and accuracy. We call this procedure "layer-stack temperature scaling" (LATES). Informally, LATES grants each layer a weighted vote during inference. We evaluate it on five popular convolutional neural network architectures both in- and out-of-distribution and observe a consistent improvement over temperature scaling in terms of accuracy, calibration, and AUC. All conclusions are supported by comprehensive statistical analyses. Since LATES neither retrains the architecture nor introduces many more parameters, its advantages can be reaped without requiring additional data beyond what is used in temperature scaling. Finally, we show that combining LATES with Monte Carlo Dropout matches state-of-the-art results on CIFAR10/100.
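The "weighted vote during inference" idea from the LATES abstract above can be sketched as follows: each layer's logits are temperature-scaled into a probability vector, and the vectors are combined with per-layer weights. The weight normalization and the concrete temperatures/weights below are illustrative assumptions, not the paper's exact parameterization.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def layer_stack_predict(layer_logits, temperatures, weights):
    """Weighted vote over per-layer temperature-scaled predictions:
    a sketch of granting each layer a vote at inference time."""
    n_classes = len(layer_logits[0])
    probs = [0.0] * n_classes
    wsum = sum(weights)
    for logits, t, w in zip(layer_logits, temperatures, weights):
        p = softmax(logits, t)
        probs = [acc + (w / wsum) * pi for acc, pi in zip(probs, p)]
    return probs

# Two layers voting over three classes; values are purely illustrative.
out = layer_stack_predict([[2.0, 1.0, 0.1], [1.5, 1.4, 0.2]],
                          temperatures=[1.5, 1.0], weights=[0.3, 0.7])
```

Standard temperature scaling is the special case with a single layer (the final one) and a single learned temperature.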
2310.10106
Can Cui
Can Cui (MULTISPEECH), Imran Ahamad Sheikh, Mostafa Sadeghi (MULTISPEECH), Emmanuel Vincent (MULTISPEECH)
End-to-end Multichannel Speaker-Attributed ASR: Speaker Guided Decoder and Input Feature Analysis
2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023), Dec 2023, Taipei, Taiwan
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an end-to-end multichannel speaker-attributed automatic speech recognition (MC-SA-ASR) system that combines a Conformer-based encoder with multi-frame crosschannel attention and a speaker-attributed Transformer-based decoder. To the best of our knowledge, this is the first model that efficiently integrates ASR and speaker identification modules in a multichannel setting. On simulated mixtures of LibriSpeech data, our system reduces the word error rate (WER) by up to 12% and 16% relative compared to previously proposed single-channel and multichannel approaches, respectively. Furthermore, we investigate the impact of different input features, including multichannel magnitude and phase information, on the ASR performance. Finally, our experiments on the AMI corpus confirm the effectiveness of our system for real-world multichannel meeting transcription.
[ { "created": "Mon, 16 Oct 2023 06:40:18 GMT", "version": "v1" } ]
2023-10-17
[ [ "Cui", "Can", "", "MULTISPEECH" ], [ "Sheikh", "Imran Ahamad", "", "MULTISPEECH" ], [ "Sadeghi", "Mostafa", "", "MULTISPEECH" ], [ "Vincent", "Emmanuel", "", "MULTISPEECH" ] ]
We present an end-to-end multichannel speaker-attributed automatic speech recognition (MC-SA-ASR) system that combines a Conformer-based encoder with multi-frame crosschannel attention and a speaker-attributed Transformer-based decoder. To the best of our knowledge, this is the first model that efficiently integrates ASR and speaker identification modules in a multichannel setting. On simulated mixtures of LibriSpeech data, our system reduces the word error rate (WER) by up to 12% and 16% relative compared to previously proposed single-channel and multichannel approaches, respectively. Furthermore, we investigate the impact of different input features, including multichannel magnitude and phase information, on the ASR performance. Finally, our experiments on the AMI corpus confirm the effectiveness of our system for real-world multichannel meeting transcription.
1908.09072
Zhaobing Kang
Zhaobing Kang, Wei Zou and Zheng Zhu
Camera Pose Correction in SLAM Based on Bias Values of Map Points
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate camera pose estimation is essential for visual SLAM (VSLAM). This paper presents a novel pose correction method to improve the accuracy of the VSLAM system. Firstly, the relationship between the camera pose estimation error and the bias values of map points is derived based on the optimized function in VSLAM. Secondly, the bias value of each map point is calculated by a statistical method. Finally, the camera pose estimation error is compensated according to the previously derived relationship. After the pose correction, procedures of the original system, such as the bundle adjustment (BA) optimization, can be executed as before. Compared with existing methods, our algorithm is compact and effective and can be easily generalized to different VSLAM systems. Additionally, the robustness to system noise of our method is better than that of feature selection methods, because all original system information is preserved in our algorithm while only a subset is employed in the latter. Experimental results on benchmark datasets show that our approach leads to considerable improvements over state-of-the-art algorithms for absolute pose estimation.
[ { "created": "Sat, 24 Aug 2019 02:07:51 GMT", "version": "v1" } ]
2019-08-27
[ [ "Kang", "Zhaobing", "" ], [ "Zou", "Wei", "" ], [ "Zhu", "Zheng", "" ] ]
Accurate camera pose estimation is essential for visual SLAM (VSLAM). This paper presents a novel pose correction method to improve the accuracy of the VSLAM system. Firstly, the relationship between the camera pose estimation error and the bias values of map points is derived based on the optimized function in VSLAM. Secondly, the bias value of each map point is calculated by a statistical method. Finally, the camera pose estimation error is compensated according to the previously derived relationship. After the pose correction, procedures of the original system, such as the bundle adjustment (BA) optimization, can be executed as before. Compared with existing methods, our algorithm is compact and effective and can be easily generalized to different VSLAM systems. Additionally, the robustness to system noise of our method is better than that of feature selection methods, because all original system information is preserved in our algorithm while only a subset is employed in the latter. Experimental results on benchmark datasets show that our approach leads to considerable improvements over state-of-the-art algorithms for absolute pose estimation.
1403.1974
Jaspinder Pal
Jaspinder Pal Singh
Designing an FPGA Synthesizable Computer Vision Algorithm to Detect the Greening of Potatoes
5 pages, 8 figures, 2 tables, "Published with International Journal of Engineering Trends and Technology (IJETT)" ISSN:2231-5381. http://www.ijettjournal.org. published by seventh sense research group
International Journal of Engineering Trends and Technology(IJETT), V8(8),438-442 February 2014
10.14445/22315381/IJETT-V8P275
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Potato quality control has improved in recent years thanks to automation techniques like machine vision, mainly making the classification task between different quality degrees faster, safer and less subjective. In our study we design a computer vision algorithm for grading potatoes according to the greening of the surface colour of the potato. The ratio of green pixels to the total number of pixels of the potato surface is found; the higher the ratio, the worse the potato. First the image is converted into serial data, and then processing is done in RGB colour space. The green part of the potato is also shown by de-serialising the output. The same algorithm is then synthesised on an FPGA, and the result shows a thousand-fold speed improvement in the case of hardware synthesis.
[ { "created": "Sat, 8 Mar 2014 14:59:46 GMT", "version": "v1" } ]
2014-03-11
[ [ "Singh", "Jaspinder Pal", "" ] ]
Potato quality control has improved in recent years thanks to automation techniques like machine vision, mainly making the classification task between different quality degrees faster, safer and less subjective. In our study we design a computer vision algorithm for grading potatoes according to the greening of the surface colour of the potato. The ratio of green pixels to the total number of pixels of the potato surface is found; the higher the ratio, the worse the potato. First the image is converted into serial data, and then processing is done in RGB colour space. The green part of the potato is also shown by de-serialising the output. The same algorithm is then synthesised on an FPGA, and the result shows a thousand-fold speed improvement in the case of hardware synthesis.
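The grading criterion above -- ratio of green pixels to total surface pixels -- can be sketched in a few lines. The green test used here (G channel strictly dominating R and B) is a simple stand-in assumption; the paper's actual RGB threshold is not specified in the abstract.

```python
def greening_ratio(pixels):
    """Ratio of green pixels to total potato-surface pixels.

    A pixel counts as green when its G channel dominates both R and B
    (an illustrative threshold, not the paper's exact rule)."""
    green = sum(1 for r, g, b in pixels if g > r and g > b)
    return green / len(pixels)

# Four surface pixels as (R, G, B) triples; two are green-dominant.
surface = [(120, 200, 90), (180, 60, 40), (90, 160, 80), (200, 180, 170)]
ratio = greening_ratio(surface)
```

On hardware, the same per-pixel test maps naturally onto the serialized pixel stream the abstract describes, which is what makes the algorithm FPGA-synthesizable.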
2301.10874
Rohitash Chandra
Tianyi Wang, Rodney Beard, John Hawkins, Rohitash Chandra
Recursive deep learning framework for forecasting the decadal world economic outlook
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Gross domestic product (GDP) is the most widely used indicator in macroeconomics and the main tool for measuring a country's economic output. Due to the diversity and complexity of the world economy, a wide range of models have been used, but there are challenges in making decadal GDP forecasts given unexpected changes such as pandemics and wars. Deep learning models are well suited for modeling temporal sequences and have been applied to time series forecasting. In this paper, we develop a deep learning framework to forecast the GDP growth rate of the world economy over a decade. We use the Penn World Table as the source of our data, taking data from 1980 to 2019 across 13 countries, including Australia, China, India and the United States. We test multiple deep learning models (LSTM, BD-LSTM, ED-LSTM and CNN) and compare their results with traditional time series models (ARIMA, VAR). Our results indicate that ED-LSTM is the best performing model. We present a recursive deep learning framework to predict the GDP growth rate for the next ten years. We predict that most countries will experience economic growth slowdown, stagnation or even recession within five years; only China, France and India are predicted to experience stable, or increasing, GDP growth.
[ { "created": "Wed, 25 Jan 2023 23:47:34 GMT", "version": "v1" } ]
2023-01-30
[ [ "Wang", "Tianyi", "" ], [ "Beard", "Rodney", "" ], [ "Hawkins", "John", "" ], [ "Chandra", "Rohitash", "" ] ]
Gross domestic product (GDP) is the most widely used indicator in macroeconomics and the main tool for measuring a country's economic output. Due to the diversity and complexity of the world economy, a wide range of models have been used, but there are challenges in making decadal GDP forecasts given unexpected changes such as pandemics and wars. Deep learning models are well suited for modeling temporal sequences and have been applied to time series forecasting. In this paper, we develop a deep learning framework to forecast the GDP growth rate of the world economy over a decade. We use the Penn World Table as the source of our data, taking data from 1980 to 2019 across 13 countries, including Australia, China, India and the United States. We test multiple deep learning models (LSTM, BD-LSTM, ED-LSTM and CNN) and compare their results with traditional time series models (ARIMA, VAR). Our results indicate that ED-LSTM is the best performing model. We present a recursive deep learning framework to predict the GDP growth rate for the next ten years. We predict that most countries will experience economic growth slowdown, stagnation or even recession within five years; only China, France and India are predicted to experience stable, or increasing, GDP growth.
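The "recursive" part of the framework above is the standard multi-step rollout: each prediction is fed back as an input for the next step. Here is a minimal model-agnostic sketch; the `recursive_forecast` helper and the window-mean toy model are assumptions used only to exercise the rollout, not the paper's trained networks.

```python
def recursive_forecast(history, model, steps):
    """Recursive multi-step forecasting: slide a fixed-size window
    forward by appending each prediction as the newest observation."""
    window = list(history)
    preds = []
    for _ in range(steps):
        nxt = model(window)
        preds.append(nxt)
        window = window[1:] + [nxt]  # drop oldest, append prediction
    return preds

# Toy 'model': the mean of the window, standing in for an LSTM/ED-LSTM.
mean_model = lambda w: sum(w) / len(w)
preds = recursive_forecast([1.0, 2.0, 3.0], mean_model, 2)
```

In the paper's setting the same rollout would be driven by the trained ED-LSTM to extend annual GDP growth forecasts out to a decade.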
2004.03688
Juan Banda
Juan M. Banda, Ramya Tekumalla, Guanyu Wang, Jingyuan Yu, Tuo Liu, Yuning Ding, Katya Artemova, Elena Tutubalina, Gerardo Chowell
A large-scale COVID-19 Twitter chatter dataset for open scientific research -- an international collaboration
8 pages, 1 figure 2 table. Update: new version of paper with up-to-date statistics and new co-authors
null
10.3390/epidemiologia2030024
null
cs.SI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the COVID-19 pandemic continues its march around the world, an unprecedented amount of open data is being generated for genetics and epidemiological research. The unparalleled rate at which many research groups around the world are releasing data and publications on the ongoing pandemic is allowing other scientists to learn from local experiences and data generated in the front lines of the COVID-19 pandemic. However, there is a need to integrate additional data sources that map and measure the role of social dynamics of such a unique world-wide event into biomedical, biological, and epidemiological analyses. For this purpose, we present a large-scale curated dataset of over 152 million tweets, growing daily, related to COVID-19 chatter generated from January 1st to April 4th at the time of writing. This open dataset will allow researchers to conduct a number of research projects relating to the emotional and mental responses to social distancing measures, the identification of sources of misinformation, and the stratified measurement of sentiment towards the pandemic in near real time.
[ { "created": "Tue, 7 Apr 2020 20:25:26 GMT", "version": "v1" }, { "created": "Fri, 13 Nov 2020 16:20:38 GMT", "version": "v2" } ]
2021-08-10
[ [ "Banda", "Juan M.", "" ], [ "Tekumalla", "Ramya", "" ], [ "Wang", "Guanyu", "" ], [ "Yu", "Jingyuan", "" ], [ "Liu", "Tuo", "" ], [ "Ding", "Yuning", "" ], [ "Artemova", "Katya", "" ], [ "Tutubalina", "Elena", "" ], [ "Chowell", "Gerardo", "" ] ]
As the COVID-19 pandemic continues its march around the world, an unprecedented amount of open data is being generated for genetics and epidemiological research. The unparalleled rate at which many research groups around the world are releasing data and publications on the ongoing pandemic is allowing other scientists to learn from local experiences and data generated in the front lines of the COVID-19 pandemic. However, there is a need to integrate additional data sources that map and measure the role of social dynamics of such a unique world-wide event into biomedical, biological, and epidemiological analyses. For this purpose, we present a large-scale curated dataset of over 152 million tweets, growing daily, related to COVID-19 chatter generated from January 1st to April 4th at the time of writing. This open dataset will allow researchers to conduct a number of research projects relating to the emotional and mental responses to social distancing measures, the identification of sources of misinformation, and the stratified measurement of sentiment towards the pandemic in near real time.
1101.5687
Alex Bronstein
Jonathan Pokrass, Alexander M. Bronstein, Michael M. Bronstein
A correspondence-less approach to matching of deformable shapes
Preprint submitted to Intl. Conference on Scale Space and Variational Methods (SSVM'11)
null
null
null
cs.CV cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding a match between partially available deformable shapes is a challenging problem with numerous applications. The problem is usually approached by computing local descriptors on a pair of shapes and then establishing a point-wise correspondence between the two. In this paper, we introduce an alternative correspondence-less approach to matching fragments to an entire shape undergoing a non-rigid deformation. We use diffusion geometric descriptors and optimize over the integration domains on which the integral descriptors of the two parts match. The problem is regularized using the Mumford-Shah functional. We show an efficient discretization based on the Ambrosio-Tortorelli approximation generalized to triangular meshes. Experiments demonstrating the success of the proposed method are presented.
[ { "created": "Sat, 29 Jan 2011 10:37:23 GMT", "version": "v1" } ]
2011-02-01
[ [ "Pokrass", "Jonathan", "" ], [ "Bronstein", "Alexander M.", "" ], [ "Bronstein", "Michael M.", "" ] ]
Finding a match between partially available deformable shapes is a challenging problem with numerous applications. The problem is usually approached by computing local descriptors on a pair of shapes and then establishing a point-wise correspondence between the two. In this paper, we introduce an alternative correspondence-less approach to matching fragments to an entire shape undergoing a non-rigid deformation. We use diffusion geometric descriptors and optimize over the integration domains on which the integral descriptors of the two parts match. The problem is regularized using the Mumford-Shah functional. We show an efficient discretization based on the Ambrosio-Tortorelli approximation generalized to triangular meshes. Experiments demonstrating the success of the proposed method are presented.
1008.0047
Yong Cheng
Yong Cheng (Student Member, IEEE), Vincent K. N. Lau (Senior Member, IEEE), and Yi Long
A Scalable Limited Feedback Design for Network MIMO using Per-Cell Product Codebook
11 pages, 5 figures, Accepted to the IEEE transactions on Wireless Communication
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/3.0/
In network MIMO systems, channel state information is required at the transmitter side to multiplex users in the spatial domain. Since perfect channel knowledge is difficult to obtain in practice, \emph{limited feedback} is a widely accepted solution. The {\em dynamic number of cooperating BSs} and {\em heterogeneous path loss effects} of network MIMO systems pose new challenges for limited feedback design. In this paper, we propose a scalable limited feedback design for network MIMO systems with multiple base stations, multiple users and multiple data streams for each user. We propose a {\em limited feedback framework using per-cell product codebooks}, along with a {\em low-complexity feedback indices selection algorithm}. We show that the proposed per-cell product codebook limited feedback design can asymptotically achieve the same performance as the joint-cell codebook approach. We also derive an asymptotic \emph{per-user throughput loss} due to limited feedback with per-cell product codebooks. Based on that, we show that when the number of per-user feedback-bits $B_{k}$ is $\mathcal{O}\big( Nn_{T}n_{R}\log_{2}(\rho g_{k}^{sum})\big)$, the system operates in the \emph{noise-limited} regime in which the per-user throughput is $\mathcal{O} \left( n_{R} \log_{2} \big( \frac{n_{R}\rho g_{k}^{sum}}{Nn_{T}} \big) \right)$. On the other hand, when the number of per-user feedback-bits $B_{k}$ does not scale with the \emph{system SNR} $\rho$, the system operates in the \emph{interference-limited} regime where the per-user throughput is $\mathcal{O}\left( \frac{n_{R}B_{k}}{(Nn_{T})^{2}} \right)$. Numerical results show that the proposed design is very flexible in accommodating a dynamic number of cooperating BSs and achieves much better performance compared with other baselines (such as the Givens rotation approach).
[ { "created": "Sat, 31 Jul 2010 03:49:42 GMT", "version": "v1" } ]
2012-07-07
[ [ "Cheng", "Yong", "", "Student Member, IEEE" ], [ "Lau", "Vincent K. N.", "", "Senior Member,\n IEEE" ], [ "Long", "Yi", "" ] ]
In network MIMO systems, channel state information is required at the transmitter side to multiplex users in the spatial domain. Since perfect channel knowledge is difficult to obtain in practice, \emph{limited feedback} is a widely accepted solution. The {\em dynamic number of cooperating BSs} and {\em heterogeneous path loss effects} of network MIMO systems pose new challenges for limited feedback design. In this paper, we propose a scalable limited feedback design for network MIMO systems with multiple base stations, multiple users and multiple data streams for each user. We propose a {\em limited feedback framework using per-cell product codebooks}, along with a {\em low-complexity feedback indices selection algorithm}. We show that the proposed per-cell product codebook limited feedback design can asymptotically achieve the same performance as the joint-cell codebook approach. We also derive an asymptotic \emph{per-user throughput loss} due to limited feedback with per-cell product codebooks. Based on that, we show that when the number of per-user feedback-bits $B_{k}$ is $\mathcal{O}\big( Nn_{T}n_{R}\log_{2}(\rho g_{k}^{sum})\big)$, the system operates in the \emph{noise-limited} regime in which the per-user throughput is $\mathcal{O} \left( n_{R} \log_{2} \big( \frac{n_{R}\rho g_{k}^{sum}}{Nn_{T}} \big) \right)$. On the other hand, when the number of per-user feedback-bits $B_{k}$ does not scale with the \emph{system SNR} $\rho$, the system operates in the \emph{interference-limited} regime where the per-user throughput is $\mathcal{O}\left( \frac{n_{R}B_{k}}{(Nn_{T})^{2}} \right)$. Numerical results show that the proposed design is very flexible in accommodating a dynamic number of cooperating BSs and achieves much better performance compared with other baselines (such as the Givens rotation approach).
1811.00602
Leonhard Spiegelberg
Lorenzo De Stefani, Leonhard F. Spiegelberg, Tim Kraska, Eli Upfal
VizRec: A framework for secure data exploration via visual representation
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual representations of data (visualizations) are tools of great importance and widespread use in data analytics, as they provide users visual insight into patterns in the observed data in a simple and effective way. However, since visualization tools are applied to sample data, there is a risk of visualizing random fluctuations in the sample rather than a true pattern in the data. This problem is even more significant when visualization is used to identify interesting patterns among many possibilities, or to identify an interesting deviation in a pair of observations among many possible pairs, as commonly done in visual recommendation systems. We present VizRec, a framework for improving the performance of visual recommendation systems by quantifying the statistical significance of recommended visualizations. The proposed methodology makes it possible to control the probability of misleading visual recommendations using both classical statistical testing procedures and a novel application of the Vapnik-Chervonenkis (VC) dimension method, a fundamental concept in statistical learning theory.
[ { "created": "Thu, 1 Nov 2018 19:35:11 GMT", "version": "v1" } ]
2018-11-05
[ [ "De Stefani", "Lorenzo", "" ], [ "Spiegelberg", "Leonhard F.", "" ], [ "Kraska", "Tim", "" ], [ "Upfal", "Eli", "" ] ]
Visual representations of data (visualizations) are tools of great importance and widespread use in data analytics, as they provide users visual insight into patterns in the observed data in a simple and effective way. However, since visualization tools are applied to sample data, there is a risk of visualizing random fluctuations in the sample rather than a true pattern in the data. This problem is even more significant when visualization is used to identify interesting patterns among many possibilities, or to identify an interesting deviation in a pair of observations among many possible pairs, as commonly done in visual recommendation systems. We present VizRec, a framework for improving the performance of visual recommendation systems by quantifying the statistical significance of recommended visualizations. The proposed methodology makes it possible to control the probability of misleading visual recommendations using both classical statistical testing procedures and a novel application of the Vapnik-Chervonenkis (VC) dimension method, a fundamental concept in statistical learning theory.
1603.07410
Erkang Zhu
Erkang Zhu, Fatemeh Nargesian, Ken Q. Pu, Ren\'ee J. Miller
LSH Ensemble: Internet-Scale Domain Search
To appear in VLDB 2016
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of domain search where a domain is a set of distinct values from an unspecified universe. We use Jaccard set containment, defined as $|Q \cap X|/|Q|$, as the relevance measure of a domain $X$ to a query domain $Q$. Our choice of Jaccard set containment over Jaccard similarity makes our work particularly suitable for searching Open Data and data on the web, as Jaccard similarity is known to have poor performance over sets with large differences in their domain sizes. We demonstrate that the domains found in several real-life Open Data and web data repositories show a power-law distribution over their domain sizes. We present a new index structure, Locality Sensitive Hashing (LSH) Ensemble, that solves the domain search problem using set containment at Internet scale. Our index structure and search algorithm cope with the data volume and skew by means of data sketches (MinHash) and domain partitioning. Our index structure does not assume a prescribed set of values. We construct a cost model that describes the accuracy of LSH Ensemble with any given partitioning. This allows us to formulate the partitioning for LSH Ensemble as an optimization problem. We prove that there exists an optimal partitioning for any distribution. Furthermore, for datasets following a power-law distribution, as observed in Open Data and Web data corpora, we show that the optimal partitioning can be approximated using equi-depth, making it efficient to use in practice. We evaluate our algorithm using real data (Canadian Open Data and WDC Web Tables) containing over 262 M domains. The experiments demonstrate that our index consistently outperforms other leading alternatives in accuracy and performance. The improvements are most dramatic for data with large skew in the domain sizes. Even at 262 M domains, our index sustains query performance with response times under 3 seconds.
[ { "created": "Thu, 24 Mar 2016 01:43:28 GMT", "version": "v1" }, { "created": "Wed, 30 Mar 2016 00:52:45 GMT", "version": "v2" }, { "created": "Mon, 4 Apr 2016 18:54:13 GMT", "version": "v3" }, { "created": "Sat, 23 Jul 2016 04:47:58 GMT", "version": "v4" } ]
2016-07-26
[ [ "Zhu", "Erkang", "" ], [ "Nargesian", "Fatemeh", "" ], [ "Pu", "Ken Q.", "" ], [ "Miller", "Renée J.", "" ] ]
We study the problem of domain search where a domain is a set of distinct values from an unspecified universe. We use Jaccard set containment, defined as $|Q \cap X|/|Q|$, as the relevance measure of a domain $X$ to a query domain $Q$. Our choice of Jaccard set containment over Jaccard similarity makes our work particularly suitable for searching Open Data and data on the web, as Jaccard similarity is known to have poor performance over sets with large differences in their domain sizes. We demonstrate that the domains found in several real-life Open Data and web data repositories show a power-law distribution over their domain sizes. We present a new index structure, Locality Sensitive Hashing (LSH) Ensemble, that solves the domain search problem using set containment at Internet scale. Our index structure and search algorithm cope with the data volume and skew by means of data sketches (MinHash) and domain partitioning. Our index structure does not assume a prescribed set of values. We construct a cost model that describes the accuracy of LSH Ensemble with any given partitioning. This allows us to formulate the partitioning for LSH Ensemble as an optimization problem. We prove that there exists an optimal partitioning for any distribution. Furthermore, for datasets following a power-law distribution, as observed in Open Data and Web data corpora, we show that the optimal partitioning can be approximated using equi-depth, making it efficient to use in practice. We evaluate our algorithm using real data (Canadian Open Data and WDC Web Tables) containing over 262 M domains. The experiments demonstrate that our index consistently outperforms other leading alternatives in accuracy and performance. The improvements are most dramatic for data with large skew in the domain sizes. Even at 262 M domains, our index sustains query performance with response times under 3 seconds.
1101.6038
Jose Antonio Martin H
Jose Antonio Martin H
A polynomial 3-colorability algorithm with automatic generation of NO 3-colorability (i.e. Co-NP) short proofs
null
null
null
null
cs.DM cs.CC cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, an algorithm for determining 3-colorability, i.e. the decision problem (YES/NO), in planar graphs is presented. The algorithm, although not exact (it could produce false positives), has two very important features: (i) it has polynomial complexity and (ii) for every "NO" answer, a "short" proof is generated, which is of much interest since 3-colorability is an NP-complete problem and thus its complementary problem is in Co-NP. Hence the algorithm is exact when it determines that a given planar graph is not 3-colorable, since this is verifiable via the automatic generation of short formal proofs (also human-readable).
[ { "created": "Mon, 31 Jan 2011 17:47:39 GMT", "version": "v1" } ]
2011-02-01
[ [ "H", "Jose Antonio Martin", "" ] ]
In this paper, an algorithm for determining 3-colorability, i.e. the decision problem (YES/NO), in planar graphs is presented. The algorithm, although not exact (it could produce false positives), has two very important features: (i) it has polynomial complexity and (ii) for every "NO" answer, a "short" proof is generated, which is of much interest since 3-colorability is an NP-complete problem and thus its complementary problem is in Co-NP. Hence the algorithm is exact when it determines that a given planar graph is not 3-colorable, since this is verifiable via the automatic generation of short formal proofs (also human-readable).
1504.01782
Abbas Kiani
Abbas Kiani and Nirwan Ansari
Profit Maximization for Geographical Dispersed Green Data Centers
null
IEEE Transactions on Smart Grid, 2016
10.1109/TSG.2016.2562565
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper aims at maximizing the profit associated with running geographically dispersed green data centers, which offer multiple classes of service. To this end, we formulate an optimization framework which relies on the accuracy of the G/D/1 queue in characterizing the workload distribution, and taps into the merits of decomposing the workload into green and brown workloads served by green and brown energy resources. Moreover, we take into account not only the Service Level Agreements (SLAs) between the data centers and clients but also the different deregulated electricity markets of data centers located in different regions. We prove the convexity of our optimization problem, and the performance of the proposed workload distribution strategy is evaluated via simulations.
[ { "created": "Tue, 7 Apr 2015 23:39:59 GMT", "version": "v1" }, { "created": "Tue, 25 Aug 2015 00:05:21 GMT", "version": "v2" }, { "created": "Wed, 25 Nov 2015 20:21:43 GMT", "version": "v3" } ]
2018-02-07
[ [ "Kiani", "Abbas", "" ], [ "Ansari", "Nirwan", "" ] ]
This paper aims at maximizing the profit associated with running geographically dispersed green data centers, which offer multiple classes of service. To this end, we formulate an optimization framework which relies on the accuracy of the G/D/1 queue in characterizing the workload distribution, and taps into the merits of decomposing the workload into green and brown workloads served by green and brown energy resources. Moreover, we take into account not only the Service Level Agreements (SLAs) between the data centers and clients but also the different deregulated electricity markets of data centers located in different regions. We prove the convexity of our optimization problem, and the performance of the proposed workload distribution strategy is evaluated via simulations.
2008.00017
Kevin Fu
Kevin Fu, Tadayoshi Kohno, Daniel Lopresti, Elizabeth Mynatt, Klara Nahrstedt, Shwetak Patel, Debra Richardson, and Ben Zorn
Safety, Security, and Privacy Threats Posed by Accelerating Trends in the Internet of Things
A Computing Community Consortium (CCC) white paper, 9 pages
null
null
null
cs.CY cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Internet of Things (IoT) is already transforming industries, cities, and homes. The economic value of this transformation across all industries is estimated to be trillions of dollars, and the societal impacts on energy efficiency, health, and productivity are enormous. Alongside the potential benefits of interconnected smart devices comes increased risk and potential for abuse when embedding sensing and intelligence into every device. One of the core problems with the increasing number of IoT devices is the increased complexity that is required to operate them safely and securely. This increased complexity creates new safety, security, privacy, and usability challenges far beyond the difficult challenges individuals face just securing a single device. We highlight some of the negative trends that smart devices and collections of devices cause and we argue that issues related to security, physical safety, privacy, and usability are tightly interconnected and solutions that address all four simultaneously are needed. Tight safety and security standards for individual devices based on existing technology are needed. Likewise, research that determines the best way for individuals to confidently manage collections of devices must guide future deployments of such systems.
[ { "created": "Fri, 31 Jul 2020 18:04:20 GMT", "version": "v1" } ]
2020-08-04
[ [ "Fu", "Kevin", "" ], [ "Kohno", "Tadayoshi", "" ], [ "Lopresti", "Daniel", "" ], [ "Mynatt", "Elizabeth", "" ], [ "Nahrstedt", "Klara", "" ], [ "Patel", "Shwetak", "" ], [ "Richardson", "Debra", "" ], [ "Zorn", "Ben", "" ] ]
The Internet of Things (IoT) is already transforming industries, cities, and homes. The economic value of this transformation across all industries is estimated to be trillions of dollars, and the societal impacts on energy efficiency, health, and productivity are enormous. Alongside the potential benefits of interconnected smart devices comes increased risk and potential for abuse when embedding sensing and intelligence into every device. One of the core problems with the increasing number of IoT devices is the increased complexity that is required to operate them safely and securely. This increased complexity creates new safety, security, privacy, and usability challenges far beyond the difficult challenges individuals face just securing a single device. We highlight some of the negative trends that smart devices and collections of devices cause and we argue that issues related to security, physical safety, privacy, and usability are tightly interconnected and solutions that address all four simultaneously are needed. Tight safety and security standards for individual devices based on existing technology are needed. Likewise, research that determines the best way for individuals to confidently manage collections of devices must guide future deployments of such systems.
2406.15396
Pi Wei Chen
Jerry Chun-Wei Lin, Pi-Wei Chen, Chao-Chun Chen
Feature Purified Transformer With Cross-level Feature Guiding Decoder For Multi-class OOD and Anomaly Detection
12 pages
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstruction networks are prevalently used in unsupervised anomaly and Out-of-Distribution (OOD) detection due to their independence from labeled anomaly data. However, in multi-class datasets, the effectiveness of anomaly detection is often compromised by the models' generalized reconstruction capabilities, which allow anomalies to blend within the expanded boundaries of normality resulting from the added categories, thereby reducing detection accuracy. We introduce the FUTUREG framework, which incorporates two innovative modules: the Feature Purification Module (FPM) and the Cross-level Feature Guiding (CFG) Decoder. The FPM constrains the normality boundary within the latent space to effectively filter out anomalous features, while the CFG Decoder uses layer-wise encoder representations to guide the reconstruction of filtered features, preserving fine-grained details. Together, these modules enhance the reconstruction error for anomalies, ensuring high-quality reconstructions for normal samples. Our results demonstrate that FUTUREG achieves state-of-the-art performance in multi-class OOD settings and remains competitive in industrial anomaly detection scenarios.
[ { "created": "Tue, 30 Apr 2024 16:45:51 GMT", "version": "v1" } ]
2024-06-25
[ [ "Lin", "Jerry Chun-Wei", "" ], [ "Chen", "Pi-Wei", "" ], [ "Chen", "Chao-Chun", "" ] ]
Reconstruction networks are prevalently used in unsupervised anomaly and Out-of-Distribution (OOD) detection due to their independence from labeled anomaly data. However, in multi-class datasets, the effectiveness of anomaly detection is often compromised by the models' generalized reconstruction capabilities, which allow anomalies to blend within the expanded boundaries of normality resulting from the added categories, thereby reducing detection accuracy. We introduce the FUTUREG framework, which incorporates two innovative modules: the Feature Purification Module (FPM) and the Cross-level Feature Guiding (CFG) Decoder. The FPM constrains the normality boundary within the latent space to effectively filter out anomalous features, while the CFG Decoder uses layer-wise encoder representations to guide the reconstruction of filtered features, preserving fine-grained details. Together, these modules enhance the reconstruction error for anomalies, ensuring high-quality reconstructions for normal samples. Our results demonstrate that FUTUREG achieves state-of-the-art performance in multi-class OOD settings and remains competitive in industrial anomaly detection scenarios.
2107.12566
Nicholas Springer
Nicholas Springer (1), Wu-chang Feng (1) ((1) Portland State University)
Thunder CTF: Learning Cloud Security on a Dime
5 pages, 3 figures
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Organizations have rapidly shifted infrastructure and applications over to public cloud computing services such as AWS (Amazon Web Services), Google Cloud Platform, and Azure. Unfortunately, such services have security models that are substantially different and more complex than traditional enterprise security models. As a result, misconfiguration errors in cloud deployments have led to dozens of well-publicized breaches. This paper describes Thunder CTF, a scaffolded, scenario-based CTF (Capture-the-Flag) for helping students learn about and practice cloud security skills. Thunder CTF is easily deployed at minimal cost and is highly extensible to allow for crowd-sourced development of new levels as security issues evolve in the cloud.
[ { "created": "Tue, 27 Jul 2021 03:01:47 GMT", "version": "v1" } ]
2021-07-28
[ [ "Springer", "Nicholas", "" ], [ "Feng", "Wu-chang", "" ] ]
Organizations have rapidly shifted infrastructure and applications over to public cloud computing services such as AWS (Amazon Web Services), Google Cloud Platform, and Azure. Unfortunately, such services have security models that are substantially different and more complex than traditional enterprise security models. As a result, misconfiguration errors in cloud deployments have led to dozens of well-publicized breaches. This paper describes Thunder CTF, a scaffolded, scenario-based CTF (Capture-the-Flag) for helping students learn about and practice cloud security skills. Thunder CTF is easily deployed at minimal cost and is highly extensible to allow for crowd-sourced development of new levels as security issues evolve in the cloud.
1804.11207
Pei Li
Pei Li, Bingyu Shen, Weishan Dong
An Anti-fraud System for Car Insurance Claim Based on Visual Evidence
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic scene understanding using machine learning algorithms has been widely applied in different industries to reduce the cost of manual labor. Nowadays, insurance companies offer express vehicle insurance claims and settlement by allowing customers to upload pictures taken by mobile devices. This kind of insurance claim is treated as a small claim and can be processed quickly, either manually or automatically. However, due to the increasing number of claims every day, systems or people are likely to be fooled by repeated claims for an identical case, leading to big losses for insurance companies. Thus, an anti-fraud check before processing a claim is necessary. We create the first dataset of car damage images collected from the internet and local parking lots. In addition, we propose an approach to generate robust deep features by locating the damage accurately and efficiently in the images. The state-of-the-art real-time object detector YOLO \cite{redmon2016you} is modified to train and discover damage regions as an important part of the pipeline. Both local and global deep features are extracted using the VGG model \cite{Simonyan14c}, which are fused later for more robust system performance. Experiments show our approach is effective in preventing fraudulent claims and meets the requirement to speed up insurance claim processing.
[ { "created": "Mon, 30 Apr 2018 14:03:22 GMT", "version": "v1" } ]
2018-05-01
[ [ "Li", "Pei", "" ], [ "Shen", "Bingyu", "" ], [ "Dong", "Weishan", "" ] ]
Automatic scene understanding using machine learning algorithms has been widely applied in different industries to reduce the cost of manual labor. Nowadays, insurance companies offer express vehicle insurance claims and settlement by allowing customers to upload pictures taken by mobile devices. This kind of insurance claim is treated as a small claim and can be processed quickly, either manually or automatically. However, due to the increasing number of claims every day, systems or people are likely to be fooled by repeated claims for an identical case, leading to big losses for insurance companies. Thus, an anti-fraud check before processing a claim is necessary. We create the first dataset of car damage images collected from the internet and local parking lots. In addition, we propose an approach to generate robust deep features by locating the damage accurately and efficiently in the images. The state-of-the-art real-time object detector YOLO \cite{redmon2016you} is modified to train and discover damage regions as an important part of the pipeline. Both local and global deep features are extracted using the VGG model \cite{Simonyan14c}, which are fused later for more robust system performance. Experiments show our approach is effective in preventing fraudulent claims and meets the requirement to speed up insurance claim processing.
1511.01306
J\'er\'emy Emile Cohen
Jeremy E. Cohen
About Notations in Multiway Array Processing
null
null
null
null
cs.NA
http://creativecommons.org/licenses/by-sa/4.0/
This paper gives an overview of notations used in multiway array processing. We redefine the vectorization and matricization operators to comply with some properties of the Kronecker product. The tensor product and Kronecker product are also represented with two different symbols, and it is shown how these notations lead to clearer expressions for multiway array operations. Finally, the paper recalls the useful yet widely unknown properties of the array normal law with suggested notations.
[ { "created": "Wed, 4 Nov 2015 12:38:56 GMT", "version": "v1" }, { "created": "Wed, 3 Feb 2016 16:46:59 GMT", "version": "v2" } ]
2016-02-04
[ [ "Cohen", "Jeremy E.", "" ] ]
This paper gives an overview of notations used in multiway array processing. We redefine the vectorization and matricization operators to comply with some properties of the Kronecker product. The tensor product and Kronecker product are also represented with two different symbols, and it is shown how these notations lead to clearer expressions for multiway array operations. Finally, the paper recalls the useful yet widely unknown properties of the array normal law with suggested notations.
2002.11616
Xiaoyu Xiang
Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach, Chenliang Xu
Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution
This work is accepted in CVPR 2020. The source code and pre-trained model are available on https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020. 12 pages, 10 figures
null
null
null
cs.CV cs.MM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we explore the space-time video super-resolution task, which aims to generate a high-resolution (HR) slow-motion video from a low frame rate (LFR), low-resolution (LR) video. A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). However, temporal interpolation and spatial super-resolution are intra-related in this task. Two-stage methods cannot fully take advantage of the natural property. In addition, state-of-the-art VFI or VSR networks require a large frame-synthesis or reconstruction module for predicting high-quality video frames, which makes the two-stage methods have large model sizes and thus be time-consuming. To overcome the problems, we propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video. Rather than synthesizing missing LR video frames as VFI networks do, we firstly temporally interpolate LR frame features in missing LR video frames capturing local temporal contexts by the proposed feature temporal interpolation network. Then, we propose a deformable ConvLSTM to align and aggregate temporal information simultaneously for better leveraging global temporal contexts. Finally, a deep reconstruction network is adopted to predict HR slow-motion video frames. Extensive experiments on benchmark datasets demonstrate that the proposed method not only achieves better quantitative and qualitative performance but also is more than three times faster than recent two-stage state-of-the-art methods, e.g., DAIN+EDVR and DAIN+RBPN.
[ { "created": "Wed, 26 Feb 2020 16:59:48 GMT", "version": "v1" } ]
2020-02-27
[ [ "Xiang", "Xiaoyu", "" ], [ "Tian", "Yapeng", "" ], [ "Zhang", "Yulun", "" ], [ "Fu", "Yun", "" ], [ "Allebach", "Jan P.", "" ], [ "Xu", "Chenliang", "" ] ]
In this paper, we explore the space-time video super-resolution task, which aims to generate a high-resolution (HR) slow-motion video from a low frame rate (LFR), low-resolution (LR) video. A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). However, temporal interpolation and spatial super-resolution are intra-related in this task. Two-stage methods cannot fully take advantage of the natural property. In addition, state-of-the-art VFI or VSR networks require a large frame-synthesis or reconstruction module for predicting high-quality video frames, which makes the two-stage methods have large model sizes and thus be time-consuming. To overcome the problems, we propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video. Rather than synthesizing missing LR video frames as VFI networks do, we firstly temporally interpolate LR frame features in missing LR video frames capturing local temporal contexts by the proposed feature temporal interpolation network. Then, we propose a deformable ConvLSTM to align and aggregate temporal information simultaneously for better leveraging global temporal contexts. Finally, a deep reconstruction network is adopted to predict HR slow-motion video frames. Extensive experiments on benchmark datasets demonstrate that the proposed method not only achieves better quantitative and qualitative performance but also is more than three times faster than recent two-stage state-of-the-art methods, e.g., DAIN+EDVR and DAIN+RBPN.
2011.09563
Hanqing Chao
Jiajin Zhang, Hanqing Chao, Pingkun Yan
Robustified Domain Adaptation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain with a different data distribution. While extensive studies attested that deep learning models are vulnerable to adversarial attacks, the adversarial robustness of models in domain adaptation applications has largely been overlooked. This paper points out that the inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain. To address the problem, we propose a novel Class-consistent Unsupervised Robust Domain Adaptation (CURDA) framework for training robust UDA models. With the introduced contrastive robust training and source anchored adversarial contrastive losses, our proposed CURDA framework can effectively robustify UDA models by simultaneously minimizing the data distribution deviation and the distance between target domain clean-adversarial pairs without creating classification confusion. Experiments on several public benchmarks show that CURDA can significantly improve model robustness in the target domain with only a minor cost of accuracy on the clean samples.
[ { "created": "Wed, 18 Nov 2020 22:21:54 GMT", "version": "v1" }, { "created": "Wed, 24 Mar 2021 21:18:25 GMT", "version": "v2" } ]
2021-03-26
[ [ "Zhang", "Jiajin", "" ], [ "Chao", "Hanqing", "" ], [ "Yan", "Pingkun", "" ] ]
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain with a different data distribution. While extensive studies attested that deep learning models are vulnerable to adversarial attacks, the adversarial robustness of models in domain adaptation applications has largely been overlooked. This paper points out that the inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain. To address the problem, we propose a novel Class-consistent Unsupervised Robust Domain Adaptation (CURDA) framework for training robust UDA models. With the introduced contrastive robust training and source anchored adversarial contrastive losses, our proposed CURDA framework can effectively robustify UDA models by simultaneously minimizing the data distribution deviation and the distance between target domain clean-adversarial pairs without creating classification confusion. Experiments on several public benchmarks show that CURDA can significantly improve model robustness in the target domain with only a minor cost of accuracy on the clean samples.
1702.01844
Tianyi Pan
Tianyi Pan, Alan Kuhnle, Xiang Li and My T. Thai
Popular Topics Spread Faster: New Dimension for Influence Propagation in Online Social Networks
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information can propagate among Online Social Network (OSN) users at a high speed, which makes OSNs important platforms for viral marketing. Although viral marketing related problems in OSNs have been extensively studied in the past decade, the existing works all assume known propagation rates and are not able to handle the scenario where the rates may dynamically increase for popular topics. In this paper, we propose a novel model, Dynamic Influence Propagation (DIP), which allows propagation rates to change during the diffusion and can be used for describing information propagation in OSNs more realistically. Based on DIP, we define a new research problem: Threshold Activation Problem under DIP (TAP-DIP). TAP-DIP is more general than TAP and can be used for studying the DIP model. However, it adds another layer of complexity over the already \#P-hard TAP problem. Despite its hardness, we are able to approximate TAP-DIP with $O(\log|V|)$ ratio. Our solution consists of two major parts: 1) the Lipschitz optimization technique and 2) a novel solution to the general version of TAP, the Multi-TAP problem. We experimentally test our solution using various real OSN datasets, and demonstrate that our solution not only generates high-quality yet much smaller seed sets when being aware of the rate increase, but also is scalable. In addition, considering DIP or not makes a significant difference in seed set selection.
[ { "created": "Tue, 7 Feb 2017 01:59:03 GMT", "version": "v1" }, { "created": "Fri, 25 Aug 2017 01:48:39 GMT", "version": "v2" } ]
2017-08-28
[ [ "Pan", "Tianyi", "" ], [ "Kuhnle", "Alan", "" ], [ "Li", "Xiang", "" ], [ "Thai", "My T.", "" ] ]
Information can propagate among Online Social Network (OSN) users at a high speed, which makes OSNs important platforms for viral marketing. Although viral marketing related problems in OSNs have been extensively studied in the past decade, the existing works all assume known propagation rates and are not able to handle the scenario where the rates may dynamically increase for popular topics. In this paper, we propose a novel model, Dynamic Influence Propagation (DIP), which allows propagation rates to change during the diffusion and can be used for describing information propagation in OSNs more realistically. Based on DIP, we define a new research problem: Threshold Activation Problem under DIP (TAP-DIP). TAP-DIP is more general than TAP and can be used for studying the DIP model. However, it adds another layer of complexity over the already \#P-hard TAP problem. Despite its hardness, we are able to approximate TAP-DIP with $O(\log|V|)$ ratio. Our solution consists of two major parts: 1) the Lipschitz optimization technique and 2) a novel solution to the general version of TAP, the Multi-TAP problem. We experimentally test our solution using various real OSN datasets, and demonstrate that our solution not only generates high-quality yet much smaller seed sets when being aware of the rate increase, but also is scalable. In addition, considering DIP or not makes a significant difference in seed set selection.
2110.05371
Pablo Moriano
Jonathan Bryan and Pablo Moriano
Graph-Based Machine Learning Improves Just-in-Time Defect Prediction
22 pages, 2 figures, 4 tables; references added; expanded results to match baseline conditions
PLoS ONE 18(4): e0284077, 2023
10.1371/journal.pone.0284077
null
cs.SE cs.LG cs.SI stat.ML
http://creativecommons.org/licenses/by/4.0/
The increasing complexity of today's software requires the contribution of thousands of developers. This complex collaboration structure makes developers more likely to introduce defect-prone changes that lead to software faults. Determining when these defect-prone changes are introduced has proven challenging, and using traditional machine learning (ML) methods to make these determinations seems to have reached a plateau. In this work, we build contribution graphs consisting of developers and source files to capture the nuanced complexity of changes required to build software. By leveraging these contribution graphs, our research shows the potential of using graph-based ML to improve Just-In-Time (JIT) defect prediction. We hypothesize that features extracted from the contribution graphs may be better predictors of defect-prone changes than intrinsic features derived from software characteristics. We corroborate our hypothesis using graph-based ML for classifying edges that represent defect-prone changes. This new framing of the JIT defect prediction problem leads to remarkably better results. We test our approach on 14 open-source projects and show that our best model can predict whether or not a code change will lead to a defect with an F1 score as high as 77.55% and a Matthews correlation coefficient (MCC) as high as 53.16%. This represents a 152% higher F1 score and a 3% higher MCC over the state-of-the-art JIT defect prediction. We describe limitations, open challenges, and how this method can be used for operational JIT defect prediction.
[ { "created": "Mon, 11 Oct 2021 16:00:02 GMT", "version": "v1" }, { "created": "Tue, 28 Jun 2022 13:39:57 GMT", "version": "v2" }, { "created": "Fri, 14 Apr 2023 16:02:35 GMT", "version": "v3" } ]
2023-04-17
[ [ "Bryan", "Jonathan", "" ], [ "Moriano", "Pablo", "" ] ]
The increasing complexity of today's software requires the contribution of thousands of developers. This complex collaboration structure makes developers more likely to introduce defect-prone changes that lead to software faults. Determining when these defect-prone changes are introduced has proven challenging, and using traditional machine learning (ML) methods to make these determinations seems to have reached a plateau. In this work, we build contribution graphs consisting of developers and source files to capture the nuanced complexity of changes required to build software. By leveraging these contribution graphs, our research shows the potential of using graph-based ML to improve Just-In-Time (JIT) defect prediction. We hypothesize that features extracted from the contribution graphs may be better predictors of defect-prone changes than intrinsic features derived from software characteristics. We corroborate our hypothesis using graph-based ML for classifying edges that represent defect-prone changes. This new framing of the JIT defect prediction problem leads to remarkably better results. We test our approach on 14 open-source projects and show that our best model can predict whether or not a code change will lead to a defect with an F1 score as high as 77.55% and a Matthews correlation coefficient (MCC) as high as 53.16%. This represents a 152% higher F1 score and a 3% higher MCC over the state-of-the-art JIT defect prediction. We describe limitations, open challenges, and how this method can be used for operational JIT defect prediction.
2306.15644
Chiori Hori Ph.D.
Chiori Hori, Puyuan Peng, David Harwath, Xinyu Liu, Kei Ota, Siddarth Jain, Radu Corcodel, Devesh Jha, Diego Romeres, Jonathan Le Roux
Style-transfer based Speech and Audio-visual Scene Understanding for Robot Action Sequence Acquisition from Videos
Accepted to Interspeech2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To realize human-robot collaboration, robots need to execute actions for new tasks according to human instructions given finite prior knowledge. Human experts can share their knowledge of how to perform a task with a robot through multi-modal instructions in their demonstrations, showing a sequence of short-horizon steps to achieve a long-horizon goal. This paper introduces a method for robot action sequence generation from instruction videos using (1) an audio-visual Transformer that converts audio-visual features and instruction speech to a sequence of robot actions called dynamic movement primitives (DMPs) and (2) style-transfer-based training that employs multi-task learning with video captioning and weakly-supervised learning with a semantic classifier to exploit unpaired video-action data. We built a system that accomplishes various cooking actions, where an arm robot executes a DMP sequence acquired from a cooking video using the audio-visual Transformer. Experiments with Epic-Kitchen-100, YouCookII, QuerYD, and in-house instruction video datasets show that the proposed method improves the quality of DMP sequences by 2.3 times the METEOR score obtained with a baseline video-to-action Transformer. The model achieved 32% of the task success rate with the task knowledge of the object.
[ { "created": "Tue, 27 Jun 2023 17:37:53 GMT", "version": "v1" } ]
2023-06-28
[ [ "Hori", "Chiori", "" ], [ "Peng", "Puyuan", "" ], [ "Harwath", "David", "" ], [ "Liu", "Xinyu", "" ], [ "Ota", "Kei", "" ], [ "Jain", "Siddarth", "" ], [ "Corcodel", "Radu", "" ], [ "Jha", "Devesh", "" ], [ "Romeres", "Diego", "" ], [ "Roux", "Jonathan Le", "" ] ]
To realize human-robot collaboration, robots need to execute actions for new tasks according to human instructions given finite prior knowledge. Human experts can share their knowledge of how to perform a task with a robot through multi-modal instructions in their demonstrations, showing a sequence of short-horizon steps to achieve a long-horizon goal. This paper introduces a method for robot action sequence generation from instruction videos using (1) an audio-visual Transformer that converts audio-visual features and instruction speech to a sequence of robot actions called dynamic movement primitives (DMPs) and (2) style-transfer-based training that employs multi-task learning with video captioning and weakly-supervised learning with a semantic classifier to exploit unpaired video-action data. We built a system that accomplishes various cooking actions, where an arm robot executes a DMP sequence acquired from a cooking video using the audio-visual Transformer. Experiments with Epic-Kitchen-100, YouCookII, QuerYD, and in-house instruction video datasets show that the proposed method improves the quality of DMP sequences by 2.3 times the METEOR score obtained with a baseline video-to-action Transformer. The model achieved 32% of the task success rate with the task knowledge of the object.
0805.0893
EDA Publishing Association
T. Veijola, Giorgio De Pasquale, Aurelio Som\`a
Comparison Between Damping Coefficients of Measured Perforated Micromechanical Test Structures and Compact Models
Submitted on behalf of EDA Publishing Association (http://irevues.inist.fr/handle/2042/16838)
Dans Symposium on Design, Test, Integration and Packaging of MEMS/MOEMS - DTIP 2008, Nice : France (2008)
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Measured damping coefficients of six different perforated micromechanical test structures are compared with damping coefficients given by published compact models. The motion of the perforated plates is almost translational, the surface shape is rectangular, and the perforation is uniform validating the assumptions made for compact models. In the structures, the perforation ratio varies from 24% - 59%. The study of the structure shows that the compressibility and inertia do not contribute to the damping at the frequencies used (130kHz - 220kHz). The damping coefficients given by all four compact models underestimate the measured damping coefficient by approximately 20%. The reasons for this underestimation are discussed by studying the various flow components in the models.
[ { "created": "Wed, 7 May 2008 09:17:47 GMT", "version": "v1" } ]
2008-12-18
[ [ "Veijola", "T.", "" ], [ "De Pasquale", "Giorgio", "" ], [ "Somà", "Aurelio", "" ] ]
Measured damping coefficients of six different perforated micromechanical test structures are compared with damping coefficients given by published compact models. The motion of the perforated plates is almost translational, the surface shape is rectangular, and the perforation is uniform validating the assumptions made for compact models. In the structures, the perforation ratio varies from 24% - 59%. The study of the structure shows that the compressibility and inertia do not contribute to the damping at the frequencies used (130kHz - 220kHz). The damping coefficients given by all four compact models underestimate the measured damping coefficient by approximately 20%. The reasons for this underestimation are discussed by studying the various flow components in the models.
2111.04703
Tajuddin Manhar Mohammed
Tajuddin Manhar Mohammed, Lakshmanan Nataraj, Satish Chikkagoudar, Shivkumar Chandrasekaran, B.S. Manjunath
HAPSSA: Holistic Approach to PDF Malware Detection Using Signal and Statistical Analysis
Submitted version - MILCOM 2021 IEEE Military Communications Conference
null
null
null
cs.CR cs.LG eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Malicious PDF documents present a serious threat to various security organizations that require modern threat intelligence platforms to effectively analyze and characterize the identity and behavior of PDF malware. State-of-the-art approaches use machine learning (ML) to learn features that characterize PDF malware. However, ML models are often susceptible to evasion attacks, in which an adversary obfuscates the malware code to avoid being detected by an Antivirus. In this paper, we derive a simple yet effective holistic approach to PDF malware detection that leverages signal and statistical analysis of malware binaries. This includes combining orthogonal feature space models from various static and dynamic malware detection methods to enable generalized robustness when faced with code obfuscations. Using a dataset of nearly 30,000 PDF files containing both malware and benign samples, we show that our holistic approach maintains a high detection rate (99.92%) of PDF malware and even detects new malicious files created by simple methods that remove the obfuscation conducted by malware authors to hide their malware, which are undetected by most antiviruses.
[ { "created": "Mon, 8 Nov 2021 18:32:47 GMT", "version": "v1" } ]
2021-11-09
[ [ "Mohammed", "Tajuddin Manhar", "" ], [ "Nataraj", "Lakshmanan", "" ], [ "Chikkagoudar", "Satish", "" ], [ "Chandrasekaran", "Shivkumar", "" ], [ "Manjunath", "B. S.", "" ] ]
Malicious PDF documents present a serious threat to various security organizations that require modern threat intelligence platforms to effectively analyze and characterize the identity and behavior of PDF malware. State-of-the-art approaches use machine learning (ML) to learn features that characterize PDF malware. However, ML models are often susceptible to evasion attacks, in which an adversary obfuscates the malware code to avoid being detected by an Antivirus. In this paper, we derive a simple yet effective holistic approach to PDF malware detection that leverages signal and statistical analysis of malware binaries. This includes combining orthogonal feature space models from various static and dynamic malware detection methods to enable generalized robustness when faced with code obfuscations. Using a dataset of nearly 30,000 PDF files containing both malware and benign samples, we show that our holistic approach maintains a high detection rate (99.92%) of PDF malware and even detects new malicious files created by simple methods that remove the obfuscation conducted by malware authors to hide their malware, which are undetected by most antiviruses.
1804.09650
Dinh-Luan Nguyen
Dinh-Luan Nguyen, Kai Cao and Anil K. Jain
Automatic Latent Fingerprint Segmentation
Accepted (Oral) in BTAS 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a simple but effective method for automatic latent fingerprint segmentation, called SegFinNet. SegFinNet takes a latent image as an input and outputs a binary mask highlighting the friction ridge pattern. Our algorithm combines fully convolutional neural network and detection-based approaches to process the entire input latent image in one shot instead of using latent patches. Experimental results on three different latent databases (i.e. NIST SD27, WVU, and an operational forensic database) show that SegFinNet outperforms both human markup for latents and the state-of-the-art latent segmentation algorithms. We further show that this improved cropping boosts the hit rate of a latent fingerprint matcher.
[ { "created": "Wed, 25 Apr 2018 16:09:02 GMT", "version": "v1" }, { "created": "Tue, 4 Sep 2018 02:20:28 GMT", "version": "v2" } ]
2018-09-05
[ [ "Nguyen", "Dinh-Luan", "" ], [ "Cao", "Kai", "" ], [ "Jain", "Anil K.", "" ] ]
We present a simple but effective method for automatic latent fingerprint segmentation, called SegFinNet. SegFinNet takes a latent image as an input and outputs a binary mask highlighting the friction ridge pattern. Our algorithm combines fully convolutional neural network and detection-based approaches to process the entire input latent image in one shot instead of using latent patches. Experimental results on three different latent databases (i.e. NIST SD27, WVU, and an operational forensic database) show that SegFinNet outperforms both human markup for latents and the state-of-the-art latent segmentation algorithms. We further show that this improved cropping boosts the hit rate of a latent fingerprint matcher.
1511.01568
EPTCS
Jaap Boender (Middlesex University), Florian Kamm\"uller (Middlesex University), Rajagopal Nagarajan (Middlesex University)
Formalization of Quantum Protocols using Coq
In Proceedings QPL 2015, arXiv:1511.01181
EPTCS 195, 2015, pp. 71-83
10.4204/EPTCS.195.6
null
cs.LO cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantum Information Processing, which is an exciting area of research at the intersection of physics and computer science, has great potential for influencing the future development of information processing systems. The building of practical, general purpose Quantum Computers may be some years into the future. However, Quantum Communication and Quantum Cryptography are well developed. Commercial Quantum Key Distribution systems are easily available and several QKD networks have been built in various parts of the world. The security of the protocols used in these implementations relies on information-theoretic proofs, which may or may not reflect actual system behaviour. Moreover, testing of implementations cannot guarantee the absence of bugs and errors. This paper presents a novel framework for modelling and verifying quantum protocols and their implementations using the proof assistant Coq. We provide a Coq library for quantum bits (qubits), quantum gates, and quantum measurement. As a step towards verifying practical quantum communication and security protocols such as Quantum Key Distribution, we support multiple qubits, communication and entanglement. We illustrate these concepts by modelling the Quantum Teleportation Protocol, which communicates the state of an unknown quantum bit using only a classical channel.
[ { "created": "Thu, 5 Nov 2015 01:42:01 GMT", "version": "v1" } ]
2015-11-06
[ [ "Boender", "Jaap", "", "Middlesex University" ], [ "Kammüller", "Florian", "", "Middlesex\n University" ], [ "Nagarajan", "Rajagopal", "", "Middlesex University" ] ]
Quantum Information Processing, which is an exciting area of research at the intersection of physics and computer science, has great potential for influencing the future development of information processing systems. The building of practical, general purpose Quantum Computers may be some years into the future. However, Quantum Communication and Quantum Cryptography are well developed. Commercial Quantum Key Distribution systems are easily available and several QKD networks have been built in various parts of the world. The security of the protocols used in these implementations relies on information-theoretic proofs, which may or may not reflect actual system behaviour. Moreover, testing of implementations cannot guarantee the absence of bugs and errors. This paper presents a novel framework for modelling and verifying quantum protocols and their implementations using the proof assistant Coq. We provide a Coq library for quantum bits (qubits), quantum gates, and quantum measurement. As a step towards verifying practical quantum communication and security protocols such as Quantum Key Distribution, we support multiple qubits, communication and entanglement. We illustrate these concepts by modelling the Quantum Teleportation Protocol, which communicates the state of an unknown quantum bit using only a classical channel.
1902.05052
Peeter Laud
Aivo Toots and Reedik Tuuling and Maksym Yerokhin and Marlon Dumas and Luciano Garc\'ia-Ba\~nuelos and Peeter Laud and Raimundas Matulevi\v{c}ius and Alisa Pankova and Martin Pettai and Pille Pullonen and Jake Tom
Business Process Privacy Analysis in Pleak
Appears at 22nd International Conference on Fundamental Approaches to Software Engineering (FASE), April 2019
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pleak is a tool to capture and analyze privacy-enhanced business process models to characterize and quantify to what extent the outputs of a process leak information about its inputs. Pleak incorporates an extensible set of analysis plugins, which enable users to inspect potential leakages at multiple levels of detail.
[ { "created": "Wed, 13 Feb 2019 18:32:01 GMT", "version": "v1" } ]
2019-02-14
[ [ "Toots", "Aivo", "" ], [ "Tuuling", "Reedik", "" ], [ "Yerokhin", "Maksym", "" ], [ "Dumas", "Marlon", "" ], [ "García-Bañuelos", "Luciano", "" ], [ "Laud", "Peeter", "" ], [ "Matulevičius", "Raimundas", "" ], [ "Pankova", "Alisa", "" ], [ "Pettai", "Martin", "" ], [ "Pullonen", "Pille", "" ], [ "Tom", "Jake", "" ] ]
Pleak is a tool to capture and analyze privacy-enhanced business process models to characterize and quantify to what extent the outputs of a process leak information about its inputs. Pleak incorporates an extensible set of analysis plugins, which enable users to inspect potential leakages at multiple levels of detail.
2310.11626
Brenda Praggastis PhD
Brenda Praggastis, Sinan Aksoy, Dustin Arendt, Mark Bonicillo, Cliff Joslyn, Emilie Purvine, Madelyn Shapiro, Ji Young Yun
HyperNetX: A Python package for modeling complex network data as hypergraphs
3 pages, 2 figures
null
null
null
cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
HyperNetX (HNX) is an open source Python library for the analysis and visualization of complex network data modeled as hypergraphs. Initially released in 2019, HNX facilitates exploratory data analysis of complex networks using algebraic topology, combinatorics, and generalized hypergraph and graph theoretical methods on structured data inputs. With its 2023 release, the library supports attaching metadata, numerical and categorical, to nodes (vertices) and hyperedges, as well as to node-hyperedge pairings (incidences). HNX has a customizable Matplotlib-based visualization module as well as HypernetX-Widget, its JavaScript addon for interactive exploration and visualization of hypergraphs within Jupyter Notebooks. Both packages are available on GitHub and PyPI. With a growing community of users and collaborators, HNX has become a preeminent tool for hypergraph analysis.
[ { "created": "Tue, 17 Oct 2023 23:24:11 GMT", "version": "v1" } ]
2023-10-19
[ [ "Praggastis", "Brenda", "" ], [ "Aksoy", "Sinan", "" ], [ "Arendt", "Dustin", "" ], [ "Bonicillo", "Mark", "" ], [ "Joslyn", "Cliff", "" ], [ "Purvine", "Emilie", "" ], [ "Shapiro", "Madelyn", "" ], [ "Yun", "Ji Young", "" ] ]
HyperNetX (HNX) is an open source Python library for the analysis and visualization of complex network data modeled as hypergraphs. Initially released in 2019, HNX facilitates exploratory data analysis of complex networks using algebraic topology, combinatorics, and generalized hypergraph and graph theoretical methods on structured data inputs. With its 2023 release, the library supports attaching metadata, numerical and categorical, to nodes (vertices) and hyperedges, as well as to node-hyperedge pairings (incidences). HNX has a customizable Matplotlib-based visualization module as well as HypernetX-Widget, its JavaScript addon for interactive exploration and visualization of hypergraphs within Jupyter Notebooks. Both packages are available on GitHub and PyPI. With a growing community of users and collaborators, HNX has become a preeminent tool for hypergraph analysis.
2311.13764
Christoph Grunau
Mohsen Ghaffari and Christoph Grunau and V\'aclav Rozho\v{n}
Work-Efficient Parallel Derandomization I: Chernoff-like Concentrations via Pairwise Independence
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
We present a novel technique for work-efficient parallel derandomization, for algorithms that rely on the concentration of measure bounds such as Chernoff, Hoeffding, and Bernstein inequalities. Our method increases the algorithm's computational work and depth by only polylogarithmic factors. Before our work, the only known method to obtain parallel derandomization with such strong concentrations was by the results of [Motwani, Naor, and Naor FOCS'89; Berger and Rompel FOCS'89], which perform a binary search in a $k$-wise independent space for $k=poly(\log n)$. However, that method blows up the computational work by a high $poly(n)$ factor and does not yield work-efficient parallel algorithms. Their method was an extension of the approach of [Luby FOCS'88], which gave a work-efficient derandomization but was limited to algorithms analyzed with only pairwise independence. Pushing the method from pairwise to the higher $k$-wise analysis resulted in the $poly(n)$ factor computational work blow-up. Our work can be viewed as an alternative extension from the pairwise case, which yields the desired strong concentrations while retaining work efficiency up to logarithmic factors. Our approach works by casting the problem of determining the random variables as an iterative process with $poly(\log n)$ iterations, where different iterations have independent randomness. This is done so that for the desired concentrations, we need only pairwise independence inside each iteration. In particular, we model each binary random variable as a result of a gradual random walk, and our method shows that the desired Chernoff-like concentrations about the endpoints of these walks can be boiled down to some pairwise analysis on the steps of these random walks in each iteration (while having independence across iterations).
[ { "created": "Thu, 23 Nov 2023 01:30:25 GMT", "version": "v1" } ]
2023-11-27
[ [ "Ghaffari", "Mohsen", "" ], [ "Grunau", "Christoph", "" ], [ "Rozhoň", "Václav", "" ] ]
We present a novel technique for work-efficient parallel derandomization, for algorithms that rely on the concentration of measure bounds such as Chernoff, Hoeffding, and Bernstein inequalities. Our method increases the algorithm's computational work and depth by only polylogarithmic factors. Before our work, the only known method to obtain parallel derandomization with such strong concentrations was by the results of [Motwani, Naor, and Naor FOCS'89; Berger and Rompel FOCS'89], which perform a binary search in a $k$-wise independent space for $k=poly(\log n)$. However, that method blows up the computational work by a high $poly(n)$ factor and does not yield work-efficient parallel algorithms. Their method was an extension of the approach of [Luby FOCS'88], which gave a work-efficient derandomization but was limited to algorithms analyzed with only pairwise independence. Pushing the method from pairwise to the higher $k$-wise analysis resulted in the $poly(n)$ factor computational work blow-up. Our work can be viewed as an alternative extension from the pairwise case, which yields the desired strong concentrations while retaining work efficiency up to logarithmic factors. Our approach works by casting the problem of determining the random variables as an iterative process with $poly(\log n)$ iterations, where different iterations have independent randomness. This is done so that for the desired concentrations, we need only pairwise independence inside each iteration. In particular, we model each binary random variable as a result of a gradual random walk, and our method shows that the desired Chernoff-like concentrations about the endpoints of these walks can be boiled down to some pairwise analysis on the steps of these random walks in each iteration (while having independence across iterations).
1607.07570
Mark Newman
Xiao Zhang, Cristopher Moore, and M. E. J. Newman
Random graph models for dynamic networks
15 pages, four figures
Eur. Phys. J. B 90, 200 (2017)
10.1140/epjb/e2017-80122-8
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose generalizations of a number of standard network models, including the classic random graph, the configuration model, and the stochastic block model, to the case of time-varying networks. We assume that the presence and absence of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. In addition to computing equilibrium properties of these models, we demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data. This allows us, for instance, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate our methods with a selection of applications, both to computer-generated test networks and real-world examples.
[ { "created": "Tue, 26 Jul 2016 07:37:25 GMT", "version": "v1" } ]
2018-05-02
[ [ "Zhang", "Xiao", "" ], [ "Moore", "Cristopher", "" ], [ "Newman", "M. E. J.", "" ] ]
We propose generalizations of a number of standard network models, including the classic random graph, the configuration model, and the stochastic block model, to the case of time-varying networks. We assume that the presence and absence of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. In addition to computing equilibrium properties of these models, we demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data. This allows us, for instance, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate our methods with a selection of applications, both to computer-generated test networks and real-world examples.
2303.01377
Daniel Sens
Daniel Sens and Ario Sadafi, Francesco Paolo Casale, Nassir Navab, Carsten Marr
BEL: A Bag Embedding Loss for Transformer enhances Multiple Instance Whole Slide Image Classification
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple Instance Learning (MIL) has become the predominant approach for classification tasks on gigapixel histopathology whole slide images (WSIs). Within the MIL framework, single WSIs (bags) are decomposed into patches (instances), with only WSI-level annotation available. Recent MIL approaches produce highly informative bag level representations by utilizing the transformer architecture's ability to model the dependencies between instances. However, when applied to high magnification datasets, problems emerge due to the large number of instances and the weak supervisory learning signal. To address this problem, we propose to additionally train transformers with a novel Bag Embedding Loss (BEL). BEL forces the model to learn a discriminative bag-level representation by minimizing the distance between bag embeddings of the same class and maximizing the distance between different classes. We evaluate BEL with the Transformer architecture TransMIL on two publicly available histopathology datasets, BRACS and CAMELYON17. We show that with BEL, TransMIL outperforms the baseline models on both datasets, thus contributing to the clinically highly relevant AI-based tumor classification of histological patient material.
[ { "created": "Thu, 2 Mar 2023 16:02:55 GMT", "version": "v1" } ]
2023-03-03
[ [ "Sens", "Daniel", "" ], [ "Sadafi", "Ario", "" ], [ "Casale", "Francesco Paolo", "" ], [ "Navab", "Nassir", "" ], [ "Marr", "Carsten", "" ] ]
Multiple Instance Learning (MIL) has become the predominant approach for classification tasks on gigapixel histopathology whole slide images (WSIs). Within the MIL framework, single WSIs (bags) are decomposed into patches (instances), with only WSI-level annotation available. Recent MIL approaches produce highly informative bag level representations by utilizing the transformer architecture's ability to model the dependencies between instances. However, when applied to high magnification datasets, problems emerge due to the large number of instances and the weak supervisory learning signal. To address this problem, we propose to additionally train transformers with a novel Bag Embedding Loss (BEL). BEL forces the model to learn a discriminative bag-level representation by minimizing the distance between bag embeddings of the same class and maximizing the distance between different classes. We evaluate BEL with the Transformer architecture TransMIL on two publicly available histopathology datasets, BRACS and CAMELYON17. We show that with BEL, TransMIL outperforms the baseline models on both datasets, thus contributing to the clinically highly relevant AI-based tumor classification of histological patient material.
2305.07299
Yanmin Wu
Yanmin Wu, Yunzhou Zhang, Delong Zhu, Zhiqiang Deng, Wenkai Sun, Xin Chen, Jian Zhang
An Object SLAM Framework for Association, Mapping, and High-Level Tasks
Accepted by IEEE Transactions on Robotics(T-RO)
IEEE Transactions on Robotics, vol. 39, no. 4, pp. 2912-2932, Aug. 2023
10.1109/TRO.2023.3273180
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object SLAM is considered increasingly significant for robot high-level perception and decision-making. Existing studies fall short in terms of data association, object representation, and semantic mapping and frequently rely on additional assumptions, limiting their performance. In this paper, we present a comprehensive object SLAM framework that focuses on object-based perception and object-oriented robot tasks. First, we propose an ensemble data association approach for associating objects in complicated conditions by incorporating parametric and nonparametric statistical testing. In addition, we suggest an outlier-robust centroid and scale estimation algorithm for modeling objects based on the iForest and line alignment. Then a lightweight and object-oriented map is represented by estimated general object models. Taking into consideration the semantic invariance of objects, we convert the object map to a topological map to provide semantic descriptors to enable multi-map matching. Finally, we suggest an object-driven active exploration strategy to achieve autonomous mapping in the grasping scenario. A range of public datasets and real-world results in mapping, augmented reality, scene matching, relocalization, and robotic manipulation have been used to evaluate the proposed object SLAM framework for its efficient performance.
[ { "created": "Fri, 12 May 2023 08:10:14 GMT", "version": "v1" } ]
2023-10-09
[ [ "Wu", "Yanmin", "" ], [ "Zhang", "Yunzhou", "" ], [ "Zhu", "Delong", "" ], [ "Deng", "Zhiqiang", "" ], [ "Sun", "Wenkai", "" ], [ "Chen", "Xin", "" ], [ "Zhang", "Jian", "" ] ]
Object SLAM is considered increasingly significant for robot high-level perception and decision-making. Existing studies fall short in terms of data association, object representation, and semantic mapping and frequently rely on additional assumptions, limiting their performance. In this paper, we present a comprehensive object SLAM framework that focuses on object-based perception and object-oriented robot tasks. First, we propose an ensemble data association approach for associating objects in complicated conditions by incorporating parametric and nonparametric statistical testing. In addition, we suggest an outlier-robust centroid and scale estimation algorithm for modeling objects based on the iForest and line alignment. Then a lightweight and object-oriented map is represented by estimated general object models. Taking into consideration the semantic invariance of objects, we convert the object map to a topological map to provide semantic descriptors to enable multi-map matching. Finally, we suggest an object-driven active exploration strategy to achieve autonomous mapping in the grasping scenario. A range of public datasets and real-world results in mapping, augmented reality, scene matching, relocalization, and robotic manipulation have been used to evaluate the proposed object SLAM framework for its efficient performance.
1911.07104
Farzaneh Khoshnevisan
Farzaneh Khoshnevisan, Zhewen Fan
RSM-GAN: A Convolutional Recurrent GAN for Anomaly Detection in Contaminated Seasonal Multivariate Time Series
8 pages, 5 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust anomaly detection is a requirement for monitoring complex modern systems with applications such as cyber-security, fraud prevention, and maintenance. These systems generate multiple correlated time series that are highly seasonal and noisy. This paper presents a novel unsupervised deep learning architecture for multivariate time series anomaly detection, called Robust Seasonal Multivariate Generative Adversarial Network (RSM-GAN). It extends recent advancements in GANs with adoption of convolutional-LSTM layers and an attention mechanism to produce state-of-the-art performance. We conduct extensive experiments to demonstrate the strength of our architecture in adjusting for complex seasonality patterns and handling severe levels of training data contamination. We also propose a novel anomaly score assignment and causal inference framework. We compare RSM-GAN with existing classical and deep-learning based anomaly detection models, and the results show that our architecture is associated with the lowest false positive rate and improves precision by 30% and 16% in real-world and synthetic data, respectively. Furthermore, we report the superiority of RSM-GAN regarding accurate root cause identification and NAB scores in all data settings.
[ { "created": "Sat, 16 Nov 2019 21:45:38 GMT", "version": "v1" } ]
2019-11-19
[ [ "Khoshnevisan", "Farzaneh", "" ], [ "Fan", "Zhewen", "" ] ]
Robust anomaly detection is a requirement for monitoring complex modern systems with applications such as cyber-security, fraud prevention, and maintenance. These systems generate multiple correlated time series that are highly seasonal and noisy. This paper presents a novel unsupervised deep learning architecture for multivariate time series anomaly detection, called Robust Seasonal Multivariate Generative Adversarial Network (RSM-GAN). It extends recent advancements in GANs with adoption of convolutional-LSTM layers and an attention mechanism to produce state-of-the-art performance. We conduct extensive experiments to demonstrate the strength of our architecture in adjusting for complex seasonality patterns and handling severe levels of training data contamination. We also propose a novel anomaly score assignment and causal inference framework. We compare RSM-GAN with existing classical and deep-learning based anomaly detection models, and the results show that our architecture is associated with the lowest false positive rate and improves precision by 30% and 16% in real-world and synthetic data, respectively. Furthermore, we report the superiority of RSM-GAN regarding accurate root cause identification and NAB scores in all data settings.
1908.02548
Will Nash
W.T. Nash, C.J. Powell, T. Drummond, N. Birbilis
Automated Corrosion Detection Using Crowd Sourced Training for Deep Learning
presubmission, computer vision, deep learning
null
null
null
cs.HC eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The automated detection of corrosion from images (i.e., photographs) or video (i.e., drone footage) presents significant advantages in terms of corrosion monitoring. Such advantages include access to remote locations, mitigation of risk to inspectors, cost savings and monitoring speed. The automated detection of corrosion requires deep learning to approach human level artificial intelligence (A.I.). The training of a deep learning model requires intensive image labelling, and in order to generate a large database of labelled images, crowd sourced labelling via a dedicated website was sought. The website (corrosiondetector.com) permits any user to label images, with such labelling then contributing to the training of a cloud based A.I. model - with such a cloud-based model then capable of assessing any fresh (or uploaded) image for the presence of corrosion. In other words, the website includes both the crowd sourced training process, but also the end use of the evolving model. Herein, the results and findings from the website (corrosiondetector.com) over the period of approximately one month, are reported.
[ { "created": "Sun, 4 Aug 2019 00:44:15 GMT", "version": "v1" } ]
2019-08-08
[ [ "Nash", "W. T.", "" ], [ "Powell", "C. J.", "" ], [ "Drummond", "T.", "" ], [ "Birbilis", "N.", "" ] ]
The automated detection of corrosion from images (i.e., photographs) or video (i.e., drone footage) presents significant advantages in terms of corrosion monitoring. Such advantages include access to remote locations, mitigation of risk to inspectors, cost savings and monitoring speed. The automated detection of corrosion requires deep learning to approach human level artificial intelligence (A.I.). The training of a deep learning model requires intensive image labelling, and in order to generate a large database of labelled images, crowd sourced labelling via a dedicated website was sought. The website (corrosiondetector.com) permits any user to label images, with such labelling then contributing to the training of a cloud based A.I. model - with such a cloud-based model then capable of assessing any fresh (or uploaded) image for the presence of corrosion. In other words, the website includes both the crowd sourced training process, but also the end use of the evolving model. Herein, the results and findings from the website (corrosiondetector.com) over the period of approximately one month, are reported.
2205.01906
Xue Bin Peng
Xue Bin Peng, Yunrong Guo, Lina Halper, Sergey Levine, Sanja Fidler
ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters
null
null
10.1145/3528223.3530110
null
cs.GR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The incredible feats of athleticism demonstrated by humans are made possible in part by a vast repertoire of general-purpose motor skills, acquired through years of practice and experience. These skills not only enable humans to perform complex tasks, but also provide powerful priors for guiding their behaviors when learning new tasks. This is in stark contrast to what is common practice in physics-based character animation, where control policies are most typically trained from scratch for each task. In this work, we present a large-scale data-driven framework for learning versatile and reusable skill embeddings for physically simulated characters. Our approach combines techniques from adversarial imitation learning and unsupervised reinforcement learning to develop skill embeddings that produce life-like behaviors, while also providing an easy to control representation for use on new downstream tasks. Our models can be trained using large datasets of unstructured motion clips, without requiring any task-specific annotation or segmentation of the motion data. By leveraging a massively parallel GPU-based simulator, we are able to train skill embeddings using over a decade of simulated experiences, enabling our model to learn a rich and versatile repertoire of skills. We show that a single pre-trained model can be effectively applied to perform a diverse set of new tasks. Our system also allows users to specify tasks through simple reward functions, and the skill embedding then enables the character to automatically synthesize complex and naturalistic strategies in order to achieve the task objectives.
[ { "created": "Wed, 4 May 2022 06:13:28 GMT", "version": "v1" }, { "created": "Thu, 5 May 2022 17:25:14 GMT", "version": "v2" } ]
2022-05-06
[ [ "Peng", "Xue Bin", "" ], [ "Guo", "Yunrong", "" ], [ "Halper", "Lina", "" ], [ "Levine", "Sergey", "" ], [ "Fidler", "Sanja", "" ] ]
The incredible feats of athleticism demonstrated by humans are made possible in part by a vast repertoire of general-purpose motor skills, acquired through years of practice and experience. These skills not only enable humans to perform complex tasks, but also provide powerful priors for guiding their behaviors when learning new tasks. This is in stark contrast to what is common practice in physics-based character animation, where control policies are most typically trained from scratch for each task. In this work, we present a large-scale data-driven framework for learning versatile and reusable skill embeddings for physically simulated characters. Our approach combines techniques from adversarial imitation learning and unsupervised reinforcement learning to develop skill embeddings that produce life-like behaviors, while also providing an easy to control representation for use on new downstream tasks. Our models can be trained using large datasets of unstructured motion clips, without requiring any task-specific annotation or segmentation of the motion data. By leveraging a massively parallel GPU-based simulator, we are able to train skill embeddings using over a decade of simulated experiences, enabling our model to learn a rich and versatile repertoire of skills. We show that a single pre-trained model can be effectively applied to perform a diverse set of new tasks. Our system also allows users to specify tasks through simple reward functions, and the skill embedding then enables the character to automatically synthesize complex and naturalistic strategies in order to achieve the task objectives.
1411.4972
An Zeng
Hao Liao, An Zeng, Yi-Cheng Zhang
Towards an objective ranking in online reputation systems: the effect of the rating projection
6 pages, 4 figures, 3 tables
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online reputation systems are commonly used by e-commerce providers nowadays. In order to generate an objective ranking of online items' quality according to users' ratings, many sophisticated algorithms have been proposed in the literature. In this paper, instead of proposing new algorithms we focus on a more fundamental problem: the rating projection. The basic idea is that even though the rating values given by users are linearly separated, the real preference of users for items across different rating values is nonlinear. We thus design an approach to project the original ratings of users to more representative values. This approach can be regarded as a data pretreatment method. Simulations on both artificial and real networks show that the performance of the ranking algorithms can be improved when the projected ratings are used.
[ { "created": "Sat, 15 Nov 2014 15:22:08 GMT", "version": "v1" } ]
2014-11-19
[ [ "Liao", "Hao", "" ], [ "Zeng", "An", "" ], [ "Zhang", "Yi-Cheng", "" ] ]
Online reputation systems are commonly used by e-commerce providers nowadays. In order to generate an objective ranking of online items' quality according to users' ratings, many sophisticated algorithms have been proposed in the literature. In this paper, instead of proposing new algorithms we focus on a more fundamental problem: the rating projection. The basic idea is that even though the rating values given by users are linearly separated, the real preference of users for items across different rating values is nonlinear. We thus design an approach to project the original ratings of users to more representative values. This approach can be regarded as a data pretreatment method. Simulations on both artificial and real networks show that the performance of the ranking algorithms can be improved when the projected ratings are used.
1910.03227
Soumyabrata Dev
Soumyabrata Dev, Hossein Javidnia, Murhaf Hossari, Matthew Nicholson, Killian McCabe, Atul Nautiyal, Clare Conran, Jian Tang, Wei Xu, and Fran\c{c}ois Piti\'e
Identifying Candidate Spaces for Advert Implantation
Published in Proc. IEEE 7th International Conference on Computer Science and Network Technology, 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virtual advertising is an important and promising feature in the area of online advertising. It involves integrating adverts onto live or recorded videos for product placements and targeted advertisements. Such integration of adverts is primarily done by video editors in the post-production stage, which is cumbersome and time-consuming. Therefore, it is important to automatically identify candidate spaces in a video frame, wherein new adverts can be implanted. The candidate space should match the scene perspective, and also have a high quality of experience according to human subjective judgment. In this paper, we propose the use of a bespoke neural net that can assist the video editors in identifying candidate spaces. We benchmark our approach against several deep-learning architectures on a large-scale image dataset of candidate spaces of outdoor scenes. Our work is the first of its kind in this area of multimedia and augmented reality applications, and achieves the best results.
[ { "created": "Tue, 8 Oct 2019 06:12:53 GMT", "version": "v1" } ]
2019-10-09
[ [ "Dev", "Soumyabrata", "" ], [ "Javidnia", "Hossein", "" ], [ "Hossari", "Murhaf", "" ], [ "Nicholson", "Matthew", "" ], [ "McCabe", "Killian", "" ], [ "Nautiyal", "Atul", "" ], [ "Conran", "Clare", "" ], [ "Tang", "Jian", "" ], [ "Xu", "Wei", "" ], [ "Pitié", "François", "" ] ]
Virtual advertising is an important and promising feature in the area of online advertising. It involves integrating adverts onto live or recorded videos for product placements and targeted advertisements. Such integration of adverts is primarily done by video editors in the post-production stage, which is cumbersome and time-consuming. Therefore, it is important to automatically identify candidate spaces in a video frame, wherein new adverts can be implanted. The candidate space should match the scene perspective, and also have a high quality of experience according to human subjective judgment. In this paper, we propose the use of a bespoke neural net that can assist the video editors in identifying candidate spaces. We benchmark our approach against several deep-learning architectures on a large-scale image dataset of candidate spaces of outdoor scenes. Our work is the first of its kind in this area of multimedia and augmented reality applications, and achieves the best results.
1708.03421
Leila Kosseim
Andre Cianflone and Leila Kosseim
N-gram and Neural Language Models for Discriminating Similar Languages
8 pages
Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3). A workshop of the 26th International Conference on Computational Linguistics (COLING 2016, Osaka, Japan), pp 243-250 (2016)
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes our submission (named clac) to the 2016 Discriminating Similar Languages (DSL) shared task. We participated in the closed Sub-task 1 (Set A) with two separate machine learning techniques. The first approach is a character-based Convolutional Neural Network with a bidirectional long short-term memory (BiLSTM) layer (CLSTM), which achieved an accuracy of 78.45% with minimal tuning. The second approach is a character-based n-gram model. This last approach achieved an accuracy of 88.45% which is close to the accuracy of 89.38% achieved by the best submission, and allowed us to rank #7 overall.
[ { "created": "Fri, 11 Aug 2017 02:27:26 GMT", "version": "v1" } ]
2017-08-14
[ [ "Cianflone", "Andre", "" ], [ "Kosseim", "Leila", "" ] ]
This paper describes our submission (named clac) to the 2016 Discriminating Similar Languages (DSL) shared task. We participated in the closed Sub-task 1 (Set A) with two separate machine learning techniques. The first approach is a character-based Convolutional Neural Network with a bidirectional long short-term memory (BiLSTM) layer (CLSTM), which achieved an accuracy of 78.45% with minimal tuning. The second approach is a character-based n-gram model. This last approach achieved an accuracy of 88.45% which is close to the accuracy of 89.38% achieved by the best submission, and allowed us to rank #7 overall.
2404.14435
Shixuan Gu
Shixuan Gu, Jason Ken Adhinarta, Mikhail Bessmeltsev, Jiancheng Yang, Jessica Zhang, Daniel Berger, Jeff W. Lichtman, Hanspeter Pfister, and Donglai Wei
FreSeg: Frenet-Frame-based Part Segmentation for 3D Curvilinear Structures
10 pages, 4 figures
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Part segmentation is a crucial task for 3D curvilinear structures like neuron dendrites and blood vessels, enabling the analysis of dendritic spines and aneurysms with scientific and clinical significance. However, their diversely winded morphology poses a generalization challenge to existing deep learning methods, which leads to labor-intensive manual correction. In this work, we propose FreSeg, a framework of part segmentation tasks for 3D curvilinear structures. With Frenet-Frame-based point cloud transformation, it enables the models to learn more generalizable features and have significant performance improvements on tasks involving elongated and curvy geometries. We evaluate FreSeg on 2 datasets: 1) DenSpineEM, an in-house dataset for dendritic spine segmentation, and 2) IntrA, a public 3D dataset for intracranial aneurysm segmentation. Further, we will release the DenSpineEM dataset, which includes roughly 6,000 spines from 69 dendrites from 3 public electron microscopy (EM) datasets, to foster the development of effective dendritic spine instance extraction methods and, consequently, large-scale connectivity analysis to better understand mammalian brains.
[ { "created": "Fri, 19 Apr 2024 16:40:24 GMT", "version": "v1" } ]
2024-04-24
[ [ "Gu", "Shixuan", "" ], [ "Adhinarta", "Jason Ken", "" ], [ "Bessmeltsev", "Mikhail", "" ], [ "Yang", "Jiancheng", "" ], [ "Zhang", "Jessica", "" ], [ "Berger", "Daniel", "" ], [ "Lichtman", "Jeff W.", "" ], [ "Pfister", "Hanspeter", "" ], [ "Wei", "Donglai", "" ] ]
Part segmentation is a crucial task for 3D curvilinear structures like neuron dendrites and blood vessels, enabling the analysis of dendritic spines and aneurysms with scientific and clinical significance. However, their diversely winded morphology poses a generalization challenge to existing deep learning methods, which leads to labor-intensive manual correction. In this work, we propose FreSeg, a framework of part segmentation tasks for 3D curvilinear structures. With Frenet-Frame-based point cloud transformation, it enables the models to learn more generalizable features and have significant performance improvements on tasks involving elongated and curvy geometries. We evaluate FreSeg on 2 datasets: 1) DenSpineEM, an in-house dataset for dendritic spine segmentation, and 2) IntrA, a public 3D dataset for intracranial aneurysm segmentation. Further, we will release the DenSpineEM dataset, which includes roughly 6,000 spines from 69 dendrites from 3 public electron microscopy (EM) datasets, to foster the development of effective dendritic spine instance extraction methods and, consequently, large-scale connectivity analysis to better understand mammalian brains.
2407.20463
Rakesh Mundlamuri
Rakesh Mundlamuri, Rajeev Gangula, Florian Kaltenberger, Raymond Knopp
5G NR Positioning with OpenAirInterface: Tools and Methodologies
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
The fifth-generation new radio (5G NR) technology is expected to provide precise and reliable positioning capabilities along with high data rates. The Third Generation Partnership Project (3GPP) has introduced positioning techniques starting from Release-16, based on time, angle, and signal strength using reference signals. However, validating these techniques with experimental prototypes is crucial before successful real-world deployment. This work provides useful tools and implementation details that are required for performing 5G positioning experiments with OpenAirInterface (OAI). As an example use case, we present a round trip time (RTT) estimation test-bed based on OAI and discuss the real-world experiment and measurement process.
[ { "created": "Mon, 29 Jul 2024 23:42:12 GMT", "version": "v1" } ]
2024-07-31
[ [ "Mundlamuri", "Rakesh", "" ], [ "Gangula", "Rajeev", "" ], [ "Kaltenberger", "Florian", "" ], [ "Knopp", "Raymond", "" ] ]
The fifth-generation new radio (5G NR) technology is expected to provide precise and reliable positioning capabilities along with high data rates. The Third Generation Partnership Project (3GPP) has introduced positioning techniques starting from Release-16, based on time, angle, and signal strength using reference signals. However, validating these techniques with experimental prototypes is crucial before successful real-world deployment. This work provides useful tools and implementation details that are required for performing 5G positioning experiments with OpenAirInterface (OAI). As an example use case, we present a round trip time (RTT) estimation test-bed based on OAI and discuss the real-world experiment and measurement process.
2310.17164
Mohana Prasad Sathya Moorthy
Long-Huei Chen, Mohana Prasad Sathya Moorthy, and Pratyaksh Sharma
Bridging Phylogeny and Taxonomy with Protein-protein Interaction Networks
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The protein-protein interaction (PPI) network provides an overview of the complex biological reactions vital to an organism's metabolism and survival. Even though PPI networks have been compared across organisms in detail in the past, there has not been large-scale research on how individual PPI networks reflect species relationships. In this study we aim to increase our understanding of the tree of life and taxonomy by gleaning information from PPI networks. We successfully created (1) a predictor of network statistics based on known traits of existing species in the phylogeny, and (2) a taxonomic classifier of organisms using the known protein network statistics, whether experimentally determined or predicted de novo. With the knowledge of protein interactions at its core, our two models effectively connect two fields with widely diverging methodologies - the phylogeny and taxonomy of species.
[ { "created": "Thu, 26 Oct 2023 05:32:33 GMT", "version": "v1" } ]
2023-10-27
[ [ "Chen", "Long-Huei", "" ], [ "Moorthy", "Mohana Prasad Sathya", "" ], [ "Sharma", "Pratyaksh", "" ] ]
The protein-protein interaction (PPI) network provides an overview of the complex biological reactions vital to an organism's metabolism and survival. Even though PPI networks have been compared across organisms in detail in the past, there has not been large-scale research on how individual PPI networks reflect species relationships. In this study we aim to increase our understanding of the tree of life and taxonomy by gleaning information from PPI networks. We successfully created (1) a predictor of network statistics based on known traits of existing species in the phylogeny, and (2) a taxonomic classifier of organisms using the known protein network statistics, whether experimentally determined or predicted de novo. With the knowledge of protein interactions at its core, our two models effectively connect two fields with widely diverging methodologies - the phylogeny and taxonomy of species.
1712.01640
Dawit Mureja Argaw
Malinda Vania and Dawit Mureja and Deukhee Lee
Automatic Spine Segmentation using Convolutional Neural Network via Redundant Generation of Class Labels for 3D Spine Modeling
18 pages, 5 figures, 3 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been a significant increase from 2010 to 2016 in the number of people suffering from spine problems. The automatic image segmentation of the spine obtained from a computed tomography (CT) image is important for diagnosing spine conditions and for performing surgery with computer-assisted surgery systems. The spine has a complex anatomy that consists of 33 vertebrae, 23 intervertebral disks, the spinal cord, and connecting ribs. As a result, the spinal surgeon is faced with the challenge of needing a robust algorithm to segment and create a model of the spine. In this study, we developed an automatic segmentation method to segment the spine, and we compared our segmentation results with reference segmentations obtained by experts. We developed a fully automatic approach for spine segmentation from CT based on a hybrid method. This method combines the convolutional neural network (CNN) and fully convolutional network (FCN), and utilizes class redundancy as a soft constraint to greatly improve the segmentation results. The proposed method was found to significantly enhance the accuracy of the segmentation results and the system processing time. Our comparison was based on 12 measurements: the Dice coefficient (94%), Jaccard index (93%), volumetric similarity (96%), sensitivity (97%), specificity (99%), precision (over-segmentation: 8.3; under-segmentation: 2.6), accuracy (99%), Matthews correlation coefficient (0.93), mean surface distance (0.16 mm), Hausdorff distance (7.4 mm), and global consistency error (0.02). We experimented with CT images from 32 patients, and the experimental results demonstrated the efficiency of the proposed method.
[ { "created": "Wed, 29 Nov 2017 16:19:07 GMT", "version": "v1" } ]
2017-12-06
[ [ "Vania", "Malinda", "" ], [ "Mureja", "Dawit", "" ], [ "Lee", "Deukhee", "" ] ]
There has been a significant increase from 2010 to 2016 in the number of people suffering from spine problems. The automatic image segmentation of the spine obtained from a computed tomography (CT) image is important for diagnosing spine conditions and for performing surgery with computer-assisted surgery systems. The spine has a complex anatomy that consists of 33 vertebrae, 23 intervertebral disks, the spinal cord, and connecting ribs. As a result, the spinal surgeon is faced with the challenge of needing a robust algorithm to segment and create a model of the spine. In this study, we developed an automatic segmentation method to segment the spine, and we compared our segmentation results with reference segmentations obtained by experts. We developed a fully automatic approach for spine segmentation from CT based on a hybrid method. This method combines the convolutional neural network (CNN) and fully convolutional network (FCN), and utilizes class redundancy as a soft constraint to greatly improve the segmentation results. The proposed method was found to significantly enhance the accuracy of the segmentation results and the system processing time. Our comparison was based on 12 measurements: the Dice coefficient (94%), Jaccard index (93%), volumetric similarity (96%), sensitivity (97%), specificity (99%), precision (over-segmentation: 8.3; under-segmentation: 2.6), accuracy (99%), Matthews correlation coefficient (0.93), mean surface distance (0.16 mm), Hausdorff distance (7.4 mm), and global consistency error (0.02). We experimented with CT images from 32 patients, and the experimental results demonstrated the efficiency of the proposed method.
1802.10303
Abolfazl Asudeh
Abolfazl Asudeh and Azade Nazi and Nan Zhang and Gautam Das and H. V. Jagadish
RRR: Rank-Regret Representative
null
null
10.1145/3299869.3300080
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selecting the best items in a dataset is a common task in data exploration. However, the concept of "best" lies in the eyes of the beholder: different users may consider different attributes more important, and hence arrive at different rankings. Nevertheless, one can remove "dominated" items and create a "representative" subset of the data set, comprising the "best items" in it. A Pareto-optimal representative is guaranteed to contain the best item of each possible ranking, but it can be almost as big as the full data. A smaller representative can be found if we relax the requirement to include the best item for every possible user, and instead just limit the users' "regret". Existing work defines regret as the loss in score by limiting consideration to the representative instead of the full data set, for any chosen ranking function. However, the score is often not a meaningful number and users may not understand its absolute value. Sometimes small ranges in score can include large fractions of the data set. In contrast, users do understand the notion of rank ordering. Therefore, alternatively, we consider the position of the items in the ranked list for defining the regret and propose the {\em rank-regret representative} as the minimal subset of the data containing at least one of the top-$k$ of any possible ranking function. This problem is NP-complete. We use the geometric interpretation of items to bound their ranks on ranges of functions and to utilize combinatorial geometry notions for developing effective and efficient approximation algorithms for the problem. Experiments on real datasets demonstrate that we can efficiently find small subsets with small rank-regrets.
[ { "created": "Wed, 28 Feb 2018 08:24:02 GMT", "version": "v1" }, { "created": "Sat, 3 Mar 2018 17:14:31 GMT", "version": "v2" } ]
2023-04-27
[ [ "Asudeh", "Abolfazl", "" ], [ "Nazi", "Azade", "" ], [ "Zhang", "Nan", "" ], [ "Das", "Gautam", "" ], [ "Jagadish", "H. V.", "" ] ]
Selecting the best items in a dataset is a common task in data exploration. However, the concept of "best" lies in the eyes of the beholder: different users may consider different attributes more important, and hence arrive at different rankings. Nevertheless, one can remove "dominated" items and create a "representative" subset of the data set, comprising the "best items" in it. A Pareto-optimal representative is guaranteed to contain the best item of each possible ranking, but it can be almost as big as the full data. A smaller representative can be found if we relax the requirement to include the best item for every possible user, and instead just limit the users' "regret". Existing work defines regret as the loss in score by limiting consideration to the representative instead of the full data set, for any chosen ranking function. However, the score is often not a meaningful number and users may not understand its absolute value. Sometimes small ranges in score can include large fractions of the data set. In contrast, users do understand the notion of rank ordering. Therefore, alternatively, we consider the position of the items in the ranked list for defining the regret and propose the {\em rank-regret representative} as the minimal subset of the data containing at least one of the top-$k$ of any possible ranking function. This problem is NP-complete. We use the geometric interpretation of items to bound their ranks on ranges of functions and to utilize combinatorial geometry notions for developing effective and efficient approximation algorithms for the problem. Experiments on real datasets demonstrate that we can efficiently find small subsets with small rank-regrets.
2106.01135
Abdellah Aznag
Abdellah Aznag, Vineet Goyal and Noemie Perivier
MNL-Bandit with Knapsacks: a near-optimal algorithm
Improved the regret bound/assumptions. Corrected the abstract
null
null
null
cs.LG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a dynamic assortment selection problem where a seller has a fixed inventory of $N$ substitutable products and faces an unknown demand that arrives sequentially over $T$ periods. In each period, the seller needs to decide on the assortment of products (satisfying certain constraints) to offer to the customers. The customer's response follows an unknown multinomial logit model (MNL) with parameter $\boldsymbol{v}$. If a customer selects product $i \in [N]$, the seller receives revenue $r_i$. The goal of the seller is to maximize the total expected revenue from the $T$ customers given the fixed initial inventory of $N$ products. We present MNLwK-UCB, a UCB-based algorithm, and characterize its regret under different regimes of inventory size. We show that when the inventory size grows quasi-linearly in time, MNLwK-UCB achieves a $\tilde{O}(N + \sqrt{NT})$ regret bound. We also show that for a smaller inventory (with growth $\sim T^{\alpha}$, $\alpha < 1$), MNLwK-UCB achieves a $\tilde{O}(N(1 + T^{\frac{1 - \alpha}{2}}) + \sqrt{NT})$. In particular, over a long time horizon $T$, the rate $\tilde{O}(\sqrt{NT})$ is always achieved regardless of the constraints and the size of the inventory.
[ { "created": "Wed, 2 Jun 2021 13:05:34 GMT", "version": "v1" }, { "created": "Fri, 17 Feb 2023 22:18:42 GMT", "version": "v2" }, { "created": "Sat, 20 Jan 2024 17:23:05 GMT", "version": "v3" }, { "created": "Tue, 23 Jan 2024 16:52:16 GMT", "version": "v4" }, { "created": "Wed, 24 Jan 2024 14:56:44 GMT", "version": "v5" } ]
2024-01-25
[ [ "Aznag", "Abdellah", "" ], [ "Goyal", "Vineet", "" ], [ "Perivier", "Noemie", "" ] ]
We consider a dynamic assortment selection problem where a seller has a fixed inventory of $N$ substitutable products and faces an unknown demand that arrives sequentially over $T$ periods. In each period, the seller needs to decide on the assortment of products (satisfying certain constraints) to offer to the customers. The customer's response follows an unknown multinomial logit model (MNL) with parameter $\boldsymbol{v}$. If a customer selects product $i \in [N]$, the seller receives revenue $r_i$. The goal of the seller is to maximize the total expected revenue from the $T$ customers given the fixed initial inventory of $N$ products. We present MNLwK-UCB, a UCB-based algorithm, and characterize its regret under different regimes of inventory size. We show that when the inventory size grows quasi-linearly in time, MNLwK-UCB achieves a $\tilde{O}(N + \sqrt{NT})$ regret bound. We also show that for a smaller inventory (with growth $\sim T^{\alpha}$, $\alpha < 1$), MNLwK-UCB achieves a $\tilde{O}(N(1 + T^{\frac{1 - \alpha}{2}}) + \sqrt{NT})$. In particular, over a long time horizon $T$, the rate $\tilde{O}(\sqrt{NT})$ is always achieved regardless of the constraints and the size of the inventory.
1602.05256
Wichai Shanklin
Wichai Shanklin
2D SEM images turn into 3D object models
null
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scanning electron microscopy (SEM) is probably one of the most fascinating examination approaches, having been used for more than two decades for the detailed inspection of micro-scale objects. Most scanning electron microscopes can only produce 2D images, which cannot support operational analysis of microscopic surface properties. Computer vision algorithms combined with advanced geometric and mathematical approaches can turn any SEM into a full 3D measurement device. This work presents a methodical literature review of automatic 3D surface reconstruction from scanning electron microscope images.
[ { "created": "Wed, 17 Feb 2016 00:41:58 GMT", "version": "v1" } ]
2016-02-18
[ [ "Shanklin", "Wichai", "" ] ]
Scanning electron microscopy (SEM) is probably one of the most fascinating examination approaches, having been used for more than two decades for the detailed inspection of micro-scale objects. Most scanning electron microscopes can only produce 2D images, which cannot support operational analysis of microscopic surface properties. Computer vision algorithms combined with advanced geometric and mathematical approaches can turn any SEM into a full 3D measurement device. This work presents a methodical literature review of automatic 3D surface reconstruction from scanning electron microscope images.