Dataset schema (field, type, observed size range):

id              string  9-10 chars
submitter       string  1-64 chars
authors         string  4-20.7k chars
title           string  4-246 chars
comments        string  1-523 chars
journal-ref     string  4-404 chars
doi             string  11-153 chars
report-no       string  2-254 chars
categories      string  5-98 chars
license         string  9 classes
orig_abstract   string  14-3.35k chars
versions        list    1-60 items
update_date     string  10-10 chars
authors_parsed  list    1-1.35k items
abstract        string  11-3.34k chars
2312.02051
Shuhuai Ren
Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, Lu Hou
TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding
CVPR 2024 camera-ready version, code is available at https://github.com/RenShuhuai-Andy/TimeChat
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes TimeChat, a time-sensitive multimodal large language model specifically designed for long video understanding. Our model incorporates two key architectural contributions: (1) a timestamp-aware frame encoder that binds visual content with the timestamp of each frame, and (2) a sliding video Q-Former that produces a video token sequence of varying lengths to accommodate videos of various durations. Additionally, we construct an instruction-tuning dataset, encompassing 6 tasks and a total of 125K instances, to further enhance TimeChat's instruction-following performance. Experimental results across various video understanding tasks, such as dense captioning, temporal grounding, and highlight detection, demonstrate TimeChat's strong zero-shot temporal localization and reasoning capabilities. For example, it achieves +9.2 F1 score and +2.8 CIDEr on YouCook2, +5.8 HIT@1 on QVHighlights, and +27.5 R@1 (IoU=0.5) on Charades-STA, compared to state-of-the-art video large language models. TimeChat thus holds the potential to serve as a versatile video assistant for long-form video comprehension tasks and to satisfy realistic user requirements.
[ { "created": "Mon, 4 Dec 2023 17:09:52 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2024 12:41:14 GMT", "version": "v2" } ]
2024-03-29
[ [ "Ren", "Shuhuai", "" ], [ "Yao", "Linli", "" ], [ "Li", "Shicheng", "" ], [ "Sun", "Xu", "" ], [ "Hou", "Lu", "" ] ]
This work proposes TimeChat, a time-sensitive multimodal large language model specifically designed for long video understanding. Our model incorporates two key architectural contributions: (1) a timestamp-aware frame encoder that binds visual content with the timestamp of each frame, and (2) a sliding video Q-Former that produces a video token sequence of varying lengths to accommodate videos of various durations. Additionally, we construct an instruction-tuning dataset, encompassing 6 tasks and a total of 125K instances, to further enhance TimeChat's instruction-following performance. Experimental results across various video understanding tasks, such as dense captioning, temporal grounding, and highlight detection, demonstrate TimeChat's strong zero-shot temporal localization and reasoning capabilities. For example, it achieves +9.2 F1 score and +2.8 CIDEr on YouCook2, +5.8 HIT@1 on QVHighlights, and +27.5 R@1 (IoU=0.5) on Charades-STA, compared to state-of-the-art video large language models. TimeChat thus holds the potential to serve as a versatile video assistant for long-form video comprehension tasks and to satisfy realistic user requirements.
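The two architectural ideas in the abstract above lend themselves to a compact illustration. The following is a minimal sketch, not the authors' TimeChat code: it binds each frame feature to its timestamp and emits a token count that grows with video duration. All dimensions, the window sizes, and the mean-pooling stand-in for the Q-Former are assumptions.

```python
# Minimal sketch of the two ideas named in the abstract, NOT the authors'
# TimeChat implementation: (1) bind each frame feature to its timestamp,
# and (2) slide a window over frames so the output token count scales
# with video duration. All shapes and module choices are assumptions.
import torch
import torch.nn as nn

class TimestampAwareEncoder(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.time_proj = nn.Linear(1, feat_dim)   # embed the scalar timestamp
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, frame_feats, timestamps):
        # frame_feats: (T, D) per-frame visual features
        # timestamps:  (T,)  frame times in seconds
        t = self.time_proj(timestamps.unsqueeze(-1))          # (T, D)
        return self.fuse(torch.cat([frame_feats, t], dim=-1)) # (T, D)

def sliding_tokens(feats, window=32, stride=32, tokens_per_window=8):
    """Compress each window to a fixed token budget; total tokens grow
    linearly with duration, mimicking the 'sliding video Q-Former'."""
    out = []
    for s in range(0, feats.shape[0], stride):
        w = feats[s:s + window]
        out.append(w.mean(dim=0, keepdim=True).repeat(tokens_per_window, 1))
    return torch.cat(out, dim=0)

feats = TimestampAwareEncoder()(torch.randn(96, 256), torch.arange(96.0))
print(sliding_tokens(feats).shape)  # (24, 256) for a 96-frame clip
```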
2202.04270
Kenjiro Tadakuma
Tomoya Takahashi, Masahiro Watanabe, Kenjiro Tadakuma, Naoto Saiki, Kazuki Abe, Masashi Konyo and Satoshi Tadokoro
Inflated Bendable Eversion Cantilever Mechanism with Inner Skeleton for Increased Payload Holding
This article consists of 8 pages and 15 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inflatable structures used in soft robotics applications exhibit unique characteristics. In particular, the tip-extension structure, which grows from the tip, can grow without friction against the environment. However, these inflatable structures are inferior to rigid mechanisms in terms of their load-bearing capacity. The stiffness of the tip-extension structure can be increased by pressurization, but the structure then cannot maintain its curved shape and compliance. In this study, we propose a mechanism that combines the membrane with a skeleton structure consisting of multi-joint links, which increases rigidity while keeping the pressure low and preserves the functions of bending and shape fixation. We devised a design method for the rigid articulated links and combined them with a membrane structure that retains the advantages of the tip-extension structure. The experimental results show that the payload of the designed structure increases compared to that of the membrane-only structure. The findings of this research can be applied to long robots that can be extended in the air without drooping and to mechanisms that can wrap around the human body.
[ { "created": "Wed, 9 Feb 2022 04:35:40 GMT", "version": "v1" } ]
2022-02-10
[ [ "Takahashi", "Tomoya", "" ], [ "Watanabe", "Masahiro", "" ], [ "Tadakuma", "Kenjiro", "" ], [ "Saiki", "Naoto", "" ], [ "Konyo", "Kazuki Abe Masashi", "" ], [ "Tadokoro", "Satoshi", "" ] ]
Inflatable structures used in soft robotics applications exhibit unique characteristics. In particular, the tip-extension structure, which grows from the tip, can grow without friction against the environment. However, these inflatable structures are inferior to rigid mechanisms in terms of their load-bearing capacity. The stiffness of the tip-extension structure can be increased by pressurization, but the structure then cannot maintain its curved shape and compliance. In this study, we propose a mechanism that combines the membrane with a skeleton structure consisting of multi-joint links, which increases rigidity while keeping the pressure low and preserves the functions of bending and shape fixation. We devised a design method for the rigid articulated links and combined them with a membrane structure that retains the advantages of the tip-extension structure. The experimental results show that the payload of the designed structure increases compared to that of the membrane-only structure. The findings of this research can be applied to long robots that can be extended in the air without drooping and to mechanisms that can wrap around the human body.
2306.09848
Ege Gursoy
Ege Gursoy, Sonny Tarbouriech, Andrea Cherubini
Can robots mold soft plastic materials by shaping depth images?
Accepted to IEEE Transactions on Robotics (T-RO)
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Can robots mold soft plastic materials by shaping depth images? The short answer is no: current day robots can't. In this article, we address the problem of shaping plastic material with an anthropomorphic arm/hand robot, which observes the material with a fixed depth camera. Robots capable of molding could assist humans in many tasks, such as cooking, scooping or gardening. Yet, the problem is complex, due to its high-dimensionality at both perception and control levels. To address it, we design three alternative data-based methods for predicting the effect of robot actions on the material. Then, the robot can plan the sequence of actions and their positions, to mold the material into a desired shape. To make the prediction problem tractable, we rely on two original ideas. First, we prove that under reasonable assumptions, the shaping problem can be mapped from point cloud to depth image space, with many benefits (simpler processing, no need for registration, lower computation time and memory requirements). Second, we design a novel, simple metric for quickly measuring the distance between two depth images. The metric is based on the inherent point cloud representation of depth images, which enables direct and consistent comparison of image pairs through a non-uniform scaling approach, and therefore opens promising perspectives for designing \textit{depth image -- based} robot controllers. We assess our approach in a series of unprecedented experiments, where a robotic arm/hand molds flour from initial to final shapes, either with its own dataset, or by transfer learning from a human dataset. We conclude the article by discussing the limitations of our framework and those of current day hardware, which make human-like robot molding a challenging open research problem.
[ { "created": "Fri, 16 Jun 2023 13:46:15 GMT", "version": "v1" } ]
2023-06-19
[ [ "Gursoy", "Ege", "" ], [ "Tarbouriech", "Sonny", "" ], [ "Cherubini", "Andrea", "" ] ]
Can robots mold soft plastic materials by shaping depth images? The short answer is no: current day robots can't. In this article, we address the problem of shaping plastic material with an anthropomorphic arm/hand robot, which observes the material with a fixed depth camera. Robots capable of molding could assist humans in many tasks, such as cooking, scooping or gardening. Yet, the problem is complex, due to its high-dimensionality at both perception and control levels. To address it, we design three alternative data-based methods for predicting the effect of robot actions on the material. Then, the robot can plan the sequence of actions and their positions, to mold the material into a desired shape. To make the prediction problem tractable, we rely on two original ideas. First, we prove that under reasonable assumptions, the shaping problem can be mapped from point cloud to depth image space, with many benefits (simpler processing, no need for registration, lower computation time and memory requirements). Second, we design a novel, simple metric for quickly measuring the distance between two depth images. The metric is based on the inherent point cloud representation of depth images, which enables direct and consistent comparison of image pairs through a non-uniform scaling approach, and therefore opens promising perspectives for designing \textit{depth image -- based} robot controllers. We assess our approach in a series of unprecedented experiments, where a robotic arm/hand molds flour from initial to final shapes, either with its own dataset, or by transfer learning from a human dataset. We conclude the article by discussing the limitations of our framework and those of current day hardware, which make human-like robot molding a challenging open research problem.
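The abstract above hinges on backprojecting depth images to point clouds before comparing them. The paper's own metric is not spelled out here, so the sketch below only illustrates the general pattern, with invented camera intrinsics and a standard symmetric Chamfer distance standing in for the authors' non-uniform-scaling metric.

```python
# Illustration only: backproject two depth images to point clouds and
# compare them with a symmetric Chamfer distance. The intrinsics
# (fx, fy, cx, cy) are made-up values, and the paper's metric differs.
import numpy as np
from scipy.spatial import cKDTree

def backproject(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    h, w = depth.shape
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]            # drop invalid (zero-depth) pixels

def chamfer(a, b):
    da, _ = cKDTree(b).query(a)  # each point in a to its nearest in b
    db, _ = cKDTree(a).query(b)
    return da.mean() + db.mean()

d1 = np.random.rand(48, 64) + 0.5
d2 = d1 + 0.01 * np.random.randn(48, 64)
print(chamfer(backproject(d1), backproject(d2)))
```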
1305.3102
Bart M. P. Jansen
Michael R. Fellows and Bart M. P. Jansen
FPT is Characterized by Useful Obstruction Sets
Extended abstract with appendix, as accepted to WG 2013
null
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many graph problems were first shown to be fixed-parameter tractable using the results of Robertson and Seymour on graph minors. We show that the combination of finite, computable obstruction sets and efficient order tests is not just one way of obtaining strongly uniform FPT algorithms, but that all of FPT may be captured in this way. Our new characterization of FPT has a strong connection to the theory of kernelization, as we prove that problems with polynomial kernels can be characterized by obstruction sets whose elements have polynomial size. Consequently, we investigate the interplay between the sizes of problem kernels and the sizes of the elements of such obstruction sets, obtaining several examples of how results in one area yield new insights in the other. We show how exponential-size minor-minimal obstructions for pathwidth k form the crucial ingredient in a novel OR-cross-composition for k-Pathwidth, complementing the trivial AND-composition that is known for this problem. In the other direction, we show that OR-cross-compositions into a parameterized problem can be used to rule out the existence of efficiently generated quasi-orders on its instances that characterize the NO-instances by polynomial-size obstructions.
[ { "created": "Tue, 14 May 2013 10:43:00 GMT", "version": "v1" } ]
2013-05-15
[ [ "Fellows", "Michael R.", "" ], [ "Jansen", "Bart M. P.", "" ] ]
Many graph problems were first shown to be fixed-parameter tractable using the results of Robertson and Seymour on graph minors. We show that the combination of finite, computable obstruction sets and efficient order tests is not just one way of obtaining strongly uniform FPT algorithms, but that all of FPT may be captured in this way. Our new characterization of FPT has a strong connection to the theory of kernelization, as we prove that problems with polynomial kernels can be characterized by obstruction sets whose elements have polynomial size. Consequently, we investigate the interplay between the sizes of problem kernels and the sizes of the elements of such obstruction sets, obtaining several examples of how results in one area yield new insights in the other. We show how exponential-size minor-minimal obstructions for pathwidth k form the crucial ingredient in a novel OR-cross-composition for k-Pathwidth, complementing the trivial AND-composition that is known for this problem. In the other direction, we show that OR-cross-compositions into a parameterized problem can be used to rule out the existence of efficiently generated quasi-orders on its instances that characterize the NO-instances by polynomial-size obstructions.
2205.08820
Antonio Longa
Antonio Longa, Giulia Cencetti, Sune Lehmann, Andrea Passerini and Bruno Lepri
Generating fine-grained surrogate temporal networks
null
null
null
null
cs.SI cs.CY physics.data-an physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Temporal networks are essential for modeling and understanding systems whose behavior varies in time, from social interactions to biological systems. Often, however, real-world data are prohibitively expensive to collect at a large scale or unshareable due to privacy concerns. A promising way to bypass the problem consists in generating arbitrarily large and anonymized synthetic graphs with the properties of real-world networks, namely `surrogate networks'. Until now, the generation of realistic surrogate temporal networks has remained an open problem, due to the difficulty of capturing both the temporal and topological properties of the input network, as well as their correlations, in a scalable model. Here, we propose a novel and simple method for generating surrogate temporal networks. Our method decomposes the input network into star-like structures evolving in time. These structures are then used as building blocks to generate a surrogate temporal network. Our model vastly outperforms current methods across multiple examples of temporal networks in terms of both topological and dynamical similarity. We further show that beyond generating realistic interaction patterns, our method is able to capture intrinsic temporal periodicity of temporal networks, all with an execution time lower than competing methods by multiple orders of magnitude. The simplicity of our algorithm makes it easily interpretable, extendable and algorithmically scalable.
[ { "created": "Wed, 18 May 2022 09:38:22 GMT", "version": "v1" }, { "created": "Tue, 22 Aug 2023 17:35:58 GMT", "version": "v2" } ]
2023-08-23
[ [ "Longa", "Antonio", "" ], [ "Cencetti", "Giulia", "" ], [ "Lehmann", "Sune", "" ], [ "Passerini", "Andrea", "" ], [ "Lepri", "Bruno", "" ] ]
Temporal networks are essential for modeling and understanding systems whose behavior varies in time, from social interactions to biological systems. Often, however, real-world data are prohibitively expensive to collect at a large scale or unshareable due to privacy concerns. A promising way to bypass the problem consists in generating arbitrarily large and anonymized synthetic graphs with the properties of real-world networks, namely `surrogate networks'. Until now, the generation of realistic surrogate temporal networks has remained an open problem, due to the difficulty of capturing both the temporal and topological properties of the input network, as well as their correlations, in a scalable model. Here, we propose a novel and simple method for generating surrogate temporal networks. Our method decomposes the input network into star-like structures evolving in time. These structures are then used as building blocks to generate a surrogate temporal network. Our model vastly outperforms current methods across multiple examples of temporal networks in terms of both topological and dynamical similarity. We further show that beyond generating realistic interaction patterns, our method is able to capture intrinsic temporal periodicity of temporal networks, all with an execution time lower than competing methods by multiple orders of magnitude. The simplicity of our algorithm makes it easily interpretable, extendable and algorithmically scalable.
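As a toy caricature of the decompose-and-reassemble pipeline described above (and nothing more: the paper's sampling and matching logic is certainly richer), one can split each snapshot of a temporal edge list into stars and resample them into a surrogate sequence:

```python
# Toy caricature of the idea in the abstract: split each snapshot of a
# temporal edge list into star-like structures (a hub plus its
# neighbors), then sample stars to stitch together a surrogate sequence
# of snapshots. The real method's sampling is more involved.
import random
from collections import defaultdict

def to_stars(temporal_edges):
    """temporal_edges: iterable of (t, u, v). Returns stars[t] = {hub: set(neighbors)}."""
    stars = defaultdict(lambda: defaultdict(set))
    for t, u, v in temporal_edges:
        stars[t][u].add(v)
        stars[t][v].add(u)
    return stars

def surrogate(stars, n_steps, stars_per_step=2, seed=0):
    rng = random.Random(seed)
    pool = [(hub, frozenset(nbrs)) for snap in stars.values()
            for hub, nbrs in snap.items()]
    out = []
    for t in range(n_steps):
        for hub, nbrs in rng.sample(pool, min(stars_per_step, len(pool))):
            out.extend((t, hub, v) for v in nbrs)
    return out

edges = [(0, 'a', 'b'), (0, 'a', 'c'), (1, 'b', 'c'), (2, 'a', 'd')]
print(surrogate(to_stars(edges), n_steps=3))
```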
2309.05352
Rohan V Kashyap
Pavan Karjol, Rohan Kashyap, Prathosh A P
Neural Discovery of Permutation Subgroups
null
In International Conference on Artificial Intelligence and Statistics, pp. 4668-4678. Volume 206. PMLR, 2023
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We consider the problem of discovering a subgroup $H$ of the permutation group $S_{n}$. Unlike traditional $H$-invariant networks, wherein $H$ is assumed to be known, we present a method to discover the underlying subgroup, given that it satisfies certain conditions. Our results show that one could discover any subgroup of type $S_{k}$ ($k \leq n$) by learning an $S_{n}$-invariant function and a linear transformation. We also prove similar results for cyclic and dihedral subgroups. Finally, we provide a general theorem that can be extended to discover other subgroups of $S_{n}$. We also demonstrate the applicability of our results through numerical experiments on image-digit sum and symmetric polynomial regression tasks.
[ { "created": "Mon, 11 Sep 2023 09:53:28 GMT", "version": "v1" } ]
2023-09-12
[ [ "Karjol", "Pavan", "" ], [ "Kashyap", "Rohan", "" ], [ "P", "Prathosh A", "" ] ]
We consider the problem of discovering a subgroup $H$ of the permutation group $S_{n}$. Unlike traditional $H$-invariant networks, wherein $H$ is assumed to be known, we present a method to discover the underlying subgroup, given that it satisfies certain conditions. Our results show that one could discover any subgroup of type $S_{k}$ ($k \leq n$) by learning an $S_{n}$-invariant function and a linear transformation. We also prove similar results for cyclic and dihedral subgroups. Finally, we provide a general theorem that can be extended to discover other subgroups of $S_{n}$. We also demonstrate the applicability of our results through numerical experiments on image-digit sum and symmetric polynomial regression tasks.
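The architectural family the abstract refers to, a learned linear map composed with an $S_n$-invariant function, can be sketched concretely. The snippet below is a minimal DeepSets-style stand-in with invented dimensions; how such a model provably recovers the hidden subgroup is the paper's contribution and is not shown here.

```python
# Minimal sketch of the architecture family named in the abstract: a
# learned linear map followed by an S_n-invariant (sum-pooled,
# DeepSets-style) network. Dimensions and layers are assumptions.
import torch
import torch.nn as nn

class LinearThenInvariant(nn.Module):
    def __init__(self, n=6, hidden=32):
        super().__init__()
        self.W = nn.Linear(n, n, bias=False)       # learned linear transformation
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())
        self.rho = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, n)
        z = self.W(x)
        h = self.phi(z.unsqueeze(-1)).sum(dim=1)   # sum over positions
        return self.rho(h)

m = LinearThenInvariant()
x = torch.randn(4, 6)
print(m(x).shape)  # (4, 1)
# Note: sum pooling makes the network invariant to permutations of W(x),
# so invariance in x holds exactly for permutations that W intertwines.
```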
2303.10991
Jinyoung Jun
Jinyoung Jun, Jae-Han Lee, and Chang-Su Kim
Versatile Depth Estimator Based on Common Relative Depth Estimation and Camera-Specific Relative-to-Metric Depth Conversion
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A typical monocular depth estimator is trained for a single camera, so its performance drops severely on images taken with different cameras. To address this issue, we propose a versatile depth estimator (VDE), composed of a common relative depth estimator (CRDE) and multiple relative-to-metric converters (R2MCs). The CRDE extracts relative depth information, and each R2MC converts the relative information to predict metric depths for a specific camera. The proposed VDE can cope with diverse scenes, including both indoor and outdoor scenes, with only a 1.12\% parameter increase per camera. Experimental results demonstrate that VDE supports multiple cameras effectively and efficiently and also achieves state-of-the-art performance in the conventional single-camera scenario.
[ { "created": "Mon, 20 Mar 2023 10:19:50 GMT", "version": "v1" } ]
2023-03-21
[ [ "Jun", "Jinyoung", "" ], [ "Lee", "Jae-Han", "" ], [ "Kim", "Chang-Su", "" ] ]
A typical monocular depth estimator is trained for a single camera, so its performance drops severely on images taken with different cameras. To address this issue, we propose a versatile depth estimator (VDE), composed of a common relative depth estimator (CRDE) and multiple relative-to-metric converters (R2MCs). The CRDE extracts relative depth information, and each R2MC converts the relative information to predict metric depths for a specific camera. The proposed VDE can cope with diverse scenes, including both indoor and outdoor scenes, with only a 1.12\% parameter increase per camera. Experimental results demonstrate that VDE supports multiple cameras effectively and efficiently and also achieves state-of-the-art performance in the conventional single-camera scenario.
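A rough sketch of the decomposition described above: a shared relative-depth trunk (the CRDE) with one tiny per-camera conversion head (an R2MC). The layer choices are placeholders rather than the paper's network, but they show where the small per-camera parameter budget comes from.

```python
# Sketch of the VDE decomposition: one shared relative-depth backbone
# plus small per-camera relative-to-metric heads. The actual CRDE is a
# full depth network; a toy conv stack stands in here.
import torch
import torch.nn as nn

class VDE(nn.Module):
    def __init__(self, cameras):
        super().__init__()
        self.crde = nn.Sequential(                 # shared relative-depth trunk
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
        # one tiny converter per camera (cf. the "1.12% parameters per camera" idea)
        self.r2mc = nn.ModuleDict({c: nn.Conv2d(1, 1, 1) for c in cameras})

    def forward(self, img, camera):
        rel = self.crde(img)                       # relative depth, camera-agnostic
        return self.r2mc[camera](rel)              # metric depth for this camera

vde = VDE(["kinect", "phone"])
print(vde(torch.randn(1, 3, 32, 32), "phone").shape)  # (1, 1, 32, 32)
```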
2309.04782
Feng Zhou
Feng Zhou, Antonio Cicone, Haomin Zhou
RRCNN$^{+}$: An Enhanced Residual Recursive Convolutional Neural Network for Non-stationary Signal Decomposition
8 pages, 4 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time-frequency analysis is an important and challenging task in many applications. Fourier and wavelet analysis are two classic methods that have achieved remarkable success in many fields. They also exhibit limitations when applied to nonlinear and non-stationary signals. To address this challenge, a series of nonlinear and adaptive methods, pioneered by the empirical mode decomposition method, have been proposed. Their aim is to decompose a non-stationary signal into quasi-stationary components which reveal better features in the time-frequency analysis. Recently, inspired by deep learning, we proposed a novel method called residual recursive convolutional neural network (RRCNN). Not only can RRCNN achieve more stable decomposition than existing methods while batch processing large-scale signals with low computational cost, but deep learning also provides a unique perspective for non-stationary signal decomposition. In this study, we further improve RRCNN with the help of several nimble techniques from deep learning and optimization, overcoming some of the limitations of the original method.
[ { "created": "Sat, 9 Sep 2023 13:00:30 GMT", "version": "v1" } ]
2023-09-12
[ [ "Zhou", "Feng", "" ], [ "Cicone", "Antonio", "" ], [ "Zhou", "Haomin", "" ] ]
Time-frequency analysis is an important and challenging task in many applications. Fourier and wavelet analysis are two classic methods that have achieved remarkable success in many fields. They also exhibit limitations when applied to nonlinear and non-stationary signals. To address this challenge, a series of nonlinear and adaptive methods, pioneered by the empirical mode decomposition method, have been proposed. Their aim is to decompose a non-stationary signal into quasi-stationary components which reveal better features in the time-frequency analysis. Recently, inspired by deep learning, we proposed a novel method called residual recursive convolutional neural network (RRCNN). Not only can RRCNN achieve more stable decomposition than existing methods while batch processing large-scale signals with low computational cost, but deep learning also provides a unique perspective for non-stationary signal decomposition. In this study, we further improve RRCNN with the help of several nimble techniques from deep learning and optimization, overcoming some of the limitations of the original method.
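The abstract does not describe the RRCNN architecture itself, so the following is only a minimal illustration of the residual-recursive idea it names: repeatedly estimate a slowly varying component with a learnable smoothing convolution and pass the residual to the next stage.

```python
# Not the RRCNN architecture; just a tiny illustration of the
# residual-recursive idea: each stage estimates a slow trend with a
# learnable smoothing convolution and hands the residual onward.
import torch
import torch.nn as nn

class ResidualRecursiveDecomposer(nn.Module):
    def __init__(self, n_components=3, kernel=21):
        super().__init__()
        self.smooth = nn.ModuleList(
            nn.Conv1d(1, 1, kernel, padding=kernel // 2, bias=False)
            for _ in range(n_components))

    def forward(self, x):                 # x: (batch, 1, length)
        comps, residual = [], x
        for conv in self.smooth:
            trend = conv(residual)        # slowly varying part
            comps.append(residual - trend)  # fast, quasi-stationary part
            residual = trend
        return comps, residual

t = torch.linspace(0, 1, 512).reshape(1, 1, -1)
sig = torch.sin(40 * t) + torch.sin(5 * t)
comps, res = ResidualRecursiveDecomposer()(sig)
print(len(comps), res.shape)  # 3 components plus a final trend
```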
2311.18495
Avery Ma
Avery Ma, Amir-massoud Farahmand, Yangchen Pan, Philip Torr, Jindong Gu
Improving Adversarial Transferability via Model Alignment
Accepted at the European Conference on Computer Vision (ECCV) 2024. Code: https://github.com/averyma/model-alignment
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Neural networks are susceptible to adversarial perturbations that are transferable across different models. In this paper, we introduce a novel model alignment technique aimed at improving a given source model's ability to generate transferable adversarial perturbations. During the alignment process, the parameters of the source model are fine-tuned to minimize an alignment loss. This loss measures the divergence in the predictions between the source model and another, independently trained model, referred to as the witness model. To understand the effect of model alignment, we conduct a geometric analysis of the resulting changes in the loss landscape. Extensive experiments on the ImageNet dataset, using a variety of model architectures, demonstrate that perturbations generated from aligned source models exhibit significantly higher transferability than those from the original source model.
[ { "created": "Thu, 30 Nov 2023 12:15:49 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2024 11:45:09 GMT", "version": "v2" } ]
2024-07-18
[ [ "Ma", "Avery", "" ], [ "Farahmand", "Amir-massoud", "" ], [ "Pan", "Yangchen", "" ], [ "Torr", "Philip", "" ], [ "Gu", "Jindong", "" ] ]
Neural networks are susceptible to adversarial perturbations that are transferable across different models. In this paper, we introduce a novel model alignment technique aimed at improving a given source model's ability to generate transferable adversarial perturbations. During the alignment process, the parameters of the source model are fine-tuned to minimize an alignment loss. This loss measures the divergence in the predictions between the source model and another, independently trained model, referred to as the witness model. To understand the effect of model alignment, we conduct a geometric analysis of the resulting changes in the loss landscape. Extensive experiments on the ImageNet dataset, using a variety of model architectures, demonstrate that perturbations generated from aligned source models exhibit significantly higher transferability than those from the original source model.
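The fine-tuning objective described above is easy to sketch. The abstract only says the loss measures prediction divergence between source and witness; KL divergence is one natural instantiation and an assumption here.

```python
# Sketch of the alignment step: fine-tune the source model so its
# predictions match a frozen, independently trained witness model.
# KL divergence is an assumed choice of the divergence measure.
import torch
import torch.nn.functional as F

def alignment_loss(source_logits, witness_logits, temperature=1.0):
    p_witness = F.softmax(witness_logits / temperature, dim=-1)
    log_p_src = F.log_softmax(source_logits / temperature, dim=-1)
    return F.kl_div(log_p_src, p_witness, reduction="batchmean")

source = torch.nn.Linear(10, 5)         # stands in for the source network
witness = torch.nn.Linear(10, 5)        # stands in for the witness network
opt = torch.optim.SGD(source.parameters(), lr=0.1)

x = torch.randn(8, 10)
loss = alignment_loss(source(x), witness(x).detach())  # witness is frozen
loss.backward()
opt.step()
print(float(loss))
```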
2102.08628
Essam Rashed
Essam A. Rashed, Sachiko Kodera, Hidenobu Shirakami, Ryotetsu Kawaguchi, Kazuhiro Watanabe, Akimasa Hirata
Knowledge discovery from emergency ambulance dispatch during COVID-19: A case study of Nagoya City, Japan
15 pages, 12 figures, 2 tables
Journal of Biomedical Informatics, 2021
10.1016/j.jbi.2021.103743
null
cs.AI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate forecasting of medical service requirements is an important big data problem that is crucial for resource management in critical times such as natural disasters and pandemics. With the global spread of coronavirus disease 2019 (COVID-19), several concerns have been raised regarding the ability of medical systems to handle sudden changes in the daily routines of healthcare providers. One significant problem is the management of ambulance dispatch and control during a pandemic. To help address this problem, we first analyze ambulance dispatch data records from April 2014 to August 2020 for Nagoya City, Japan. Significant changes were observed in the data during the pandemic, including the state of emergency (SoE) declared across Japan. In this study, we propose a deep learning framework based on recurrent neural networks to estimate the number of emergency ambulance dispatches (EADs) during a SoE. The fusion of data includes environmental factors, the localization data of mobile phone users, and the past history of EADs, thereby providing a general framework for knowledge discovery and better resource management. The results indicate that the proposed blend of training data can be used efficiently in a real-world estimation of EAD requirements during periods of high uncertainties such as pandemics.
[ { "created": "Wed, 17 Feb 2021 08:37:05 GMT", "version": "v1" } ]
2021-03-23
[ [ "Rashed", "Essam A.", "" ], [ "Kodera", "Sachiko", "" ], [ "Shirakami", "Hidenobu", "" ], [ "Kawaguchi", "Ryotetsu", "" ], [ "Watanabe", "Kazuhiro", "" ], [ "Hirata", "Akimasa", "" ] ]
Accurate forecasting of medical service requirements is an important big data problem that is crucial for resource management in critical times such as natural disasters and pandemics. With the global spread of coronavirus disease 2019 (COVID-19), several concerns have been raised regarding the ability of medical systems to handle sudden changes in the daily routines of healthcare providers. One significant problem is the management of ambulance dispatch and control during a pandemic. To help address this problem, we first analyze ambulance dispatch data records from April 2014 to August 2020 for Nagoya City, Japan. Significant changes were observed in the data during the pandemic, including the state of emergency (SoE) declared across Japan. In this study, we propose a deep learning framework based on recurrent neural networks to estimate the number of emergency ambulance dispatches (EADs) during a SoE. The fusion of data includes environmental factors, the localization data of mobile phone users, and the past history of EADs, thereby providing a general framework for knowledge discovery and better resource management. The results indicate that the proposed blend of training data can be used efficiently in a real-world estimation of EAD requirements during periods of high uncertainties such as pandemics.
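As a minimal sketch of the data-fusion forecaster described above (feature sets, sizes, and the single-LSTM layout are invented; the paper's framework is richer), one can concatenate environmental, mobility, and past-dispatch features per day and regress the next day's count:

```python
# Minimal sketch of the fusion idea: per-day weather, mobility, and
# past-dispatch features feed an LSTM that predicts the next day's
# number of emergency ambulance dispatches. Feature sizes are invented.
import torch
import torch.nn as nn

class EADForecaster(nn.Module):
    def __init__(self, n_env=4, n_mobility=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_env + n_mobility + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, env, mobility, past_eads):
        x = torch.cat([env, mobility, past_eads.unsqueeze(-1)], dim=-1)
        out, _ = self.lstm(x)              # (batch, days, hidden)
        return self.head(out[:, -1])       # next-day EAD estimate

model = EADForecaster()
pred = model(torch.randn(2, 14, 4), torch.randn(2, 14, 3), torch.rand(2, 14))
print(pred.shape)  # (2, 1)
```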
2312.10371
Wei Chen
Wei Chen, Gang Zhao, Xiaojin Zhang, Xiang Bai, Xuanjing Huang, Zhongyu Wei
K-ESConv: Knowledge Injection for Emotional Support Dialogue Systems via Prompt Learning
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic psychological counseling requires a wealth of professional knowledge that can be found in online counseling forums. Motivated by this, we propose K-ESConv, a novel prompt-learning-based knowledge injection method for emotional support dialogue systems, transferring forum knowledge to response generation. We evaluate our model on the emotional support dataset ESConv, where the model retrieves and incorporates knowledge from an external professional emotional Q\&A forum. Experimental results show that the proposed method outperforms existing baselines on both automatic and human evaluation, which shows that our approach significantly improves the correlation and diversity of responses and provides more comfort and better suggestions for the seeker.
[ { "created": "Sat, 16 Dec 2023 08:10:10 GMT", "version": "v1" } ]
2023-12-19
[ [ "Chen", "Wei", "" ], [ "Zhao", "Gang", "" ], [ "Zhang", "Xiaojin", "" ], [ "Bai", "Xiang", "" ], [ "Huang", "Xuanjing", "" ], [ "Wei", "Zhongyu", "" ] ]
Automatic psychological counseling requires a wealth of professional knowledge that can be found in online counseling forums. Motivated by this, we propose K-ESConv, a novel prompt-learning-based knowledge injection method for emotional support dialogue systems, transferring forum knowledge to response generation. We evaluate our model on the emotional support dataset ESConv, where the model retrieves and incorporates knowledge from an external professional emotional Q\&A forum. Experimental results show that the proposed method outperforms existing baselines on both automatic and human evaluation, which shows that our approach significantly improves the correlation and diversity of responses and provides more comfort and better suggestions for the seeker.
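The retrieve-then-prompt pattern the abstract describes can be shown in miniature. Everything below (the toy forum, the lexical scorer, the prompt format) is a placeholder, not K-ESConv's retrieval or prompt-learning setup:

```python
# Toy sketch of retrieve-then-prompt: pull the most similar forum Q&A
# pair and prepend it to the dialogue before generation. Scoring, prompt
# format, and forum data are placeholders, not the paper's.
forum = [
    ("I can't sleep from stress", "Try a wind-down routine and ..."),
    ("I feel lonely at work", "Reaching out to one colleague ..."),
]

def retrieve(query, k=1):
    def overlap(q):                  # crude lexical similarity
        return len(set(query.lower().split()) & set(q.lower().split()))
    return sorted(forum, key=lambda qa: overlap(qa[0]), reverse=True)[:k]

def build_prompt(dialogue):
    knowledge = retrieve(dialogue[-1])
    lines = [f"[forum] Q: {q} A: {a}" for q, a in knowledge]
    lines += [f"[seeker] {u}" for u in dialogue]
    lines.append("[supporter]")      # generation would continue from here
    return "\n".join(lines)

print(build_prompt(["Work stress keeps me up, I can't sleep"]))
```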
2405.00154
Aleksandr Katrutsa
Alexander Demin, Yuriy Dorn, Aleksandr Katrutsa, Daniil Kazantsev, Ilgam Latypov, Yulia Maximlyuk, Denis Ponomaryov
EEvA: Fast Expert-Based Algorithms for Buffer Page Replacement
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Optimal page replacement is an important problem in efficient buffer management. The range of replacement strategies known in the literature varies from simple but efficient FIFO-based algorithms to more accurate but potentially costly methods tailored to specific data access patterns. The principal issue in adopting a pattern-specific replacement logic in a DB buffer manager is to guarantee non-degradation in general high-load regimes. In this paper, we propose a new family of page replacement algorithms for DB buffer managers that demonstrate superior performance over competitors on custom data access patterns and incur low computational overhead on TPC-C. We provide theoretical foundations and an extensive experimental study of the proposed algorithms, covering synthetic benchmarks and an implementation in an open-source DB kernel evaluated on TPC-C.
[ { "created": "Tue, 30 Apr 2024 19:04:53 GMT", "version": "v1" } ]
2024-05-02
[ [ "Demin", "Alexander", "" ], [ "Dorn", "Yuriy", "" ], [ "Katrutsa", "Aleksandr", "" ], [ "Kazantsev", "Daniil", "" ], [ "Latypov", "Ilgam", "" ], [ "Maximlyuk", "Yulia", "" ], [ "Ponomaryov", "Denis", "" ] ]
Optimal page replacement is an important problem in efficient buffer management. The range of replacement strategies known in the literature varies from simple but efficient FIFO-based algorithms to more accurate but potentially costly methods tailored to specific data access patterns. The principal issue in adopting a pattern-specific replacement logic in a DB buffer manager is to guarantee non-degradation in general high-load regimes. In this paper, we propose a new family of page replacement algorithms for DB buffer managers that demonstrate superior performance over competitors on custom data access patterns and incur low computational overhead on TPC-C. We provide theoretical foundations and an extensive experimental study of the proposed algorithms, covering synthetic benchmarks and an implementation in an open-source DB kernel evaluated on TPC-C.
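The abstract does not reveal EEvA's experts or how their advice is combined, so the skeleton below only unpacks the phrase "expert-based" in the most literal way: several candidate policies nominate a victim page and a trivial rule arbitrates.

```python
# Skeleton of an expert-advice eviction loop, offered only to unpack the
# phrase "expert-based": two toy experts (LRU and LFU) each nominate a
# victim and a trivial rule picks one. EEvA's actual experts and
# weighting are not described in the abstract.
from collections import OrderedDict

class ExpertBuffer:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.pages = OrderedDict()    # page -> access count, in LRU order

    def _victim(self):
        lru = next(iter(self.pages))                  # expert 1: LRU
        lfu = min(self.pages, key=self.pages.get)     # expert 2: LFU
        return lru if lru == lfu else lfu             # trivial tie rule

    def access(self, page):
        if page in self.pages:
            self.pages[page] += 1
            self.pages.move_to_end(page)
            return "hit"
        if len(self.pages) >= self.capacity:
            self.pages.pop(self._victim())
        self.pages[page] = 1
        return "miss"

buf = ExpertBuffer()
print([buf.access(p) for p in [1, 2, 3, 1, 4, 1]])
```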
2403.10338
Priyanka Sukumaran
Priyanka Sukumaran, Conor Houghton, Nina Kazanina
Investigating grammatical abstraction in language models using few-shot learning of novel noun gender
EACL 2024; Findings of the Association for Computational Linguistics
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Humans can learn a new word and infer its grammatical properties from very few examples. They have an abstract notion of linguistic properties like grammatical gender and agreement rules that can be applied to novel syntactic contexts and words. Drawing inspiration from psycholinguistics, we conduct a noun learning experiment to assess whether an LSTM and a decoder-only transformer can achieve human-like abstraction of grammatical gender in French. Language models were tasked with learning the gender of a novel noun embedding from a few examples in one grammatical agreement context and predicting agreement in another, unseen context. We find that both language models effectively generalise novel noun gender from one to two learning examples and apply the learnt gender across agreement contexts, albeit with a bias for the masculine gender category. Importantly, the few-shot updates were only applied to the embedding layers, demonstrating that models encode sufficient gender information within the word embedding space. While the generalisation behaviour of models suggests that they represent grammatical gender as an abstract category, like humans, further work is needed to explore the details of how exactly this is implemented. For a comparative perspective with human behaviour, we conducted an analogous one-shot novel noun gender learning experiment, which revealed that native French speakers, like language models, also exhibited a masculine gender bias and are not excellent one-shot learners either.
[ { "created": "Fri, 15 Mar 2024 14:25:59 GMT", "version": "v1" } ]
2024-03-18
[ [ "Sukumaran", "Priyanka", "" ], [ "Houghton", "Conor", "" ], [ "Kazanina", "Nina", "" ] ]
Humans can learn a new word and infer its grammatical properties from very few examples. They have an abstract notion of linguistic properties like grammatical gender and agreement rules that can be applied to novel syntactic contexts and words. Drawing inspiration from psycholinguistics, we conduct a noun learning experiment to assess whether an LSTM and a decoder-only transformer can achieve human-like abstraction of grammatical gender in French. Language models were tasked with learning the gender of a novel noun embedding from a few examples in one grammatical agreement context and predicting agreement in another, unseen context. We find that both language models effectively generalise novel noun gender from one to two learning examples and apply the learnt gender across agreement contexts, albeit with a bias for the masculine gender category. Importantly, the few-shot updates were only applied to the embedding layers, demonstrating that models encode sufficient gender information within the word embedding space. While the generalisation behaviour of models suggests that they represent grammatical gender as an abstract category, like humans, further work is needed to explore the details of how exactly this is implemented. For a comparative perspective with human behaviour, we conducted an analogous one-shot novel noun gender learning experiment, which revealed that native French speakers, like language models, also exhibited a masculine gender bias and are not excellent one-shot learners either.
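One detail above is directly codeable: during few-shot learning of the novel noun, only the embedding layer receives gradient updates. The model below is a stand-in (the paper uses an LSTM and a decoder-only transformer trained on French), but the freezing pattern is the point:

```python
# Sketch of the experimental trick highlighted above: update only the
# embedding layer during few-shot learning and freeze everything else.
# Model, vocabulary, and loss are stand-ins.
import torch
import torch.nn as nn

model = nn.ModuleDict({
    "embed": nn.Embedding(100, 16),
    "lstm": nn.LSTM(16, 32, batch_first=True),
    "head": nn.Linear(32, 100),
})

for name, p in model.named_parameters():
    p.requires_grad = name.startswith("embed")   # embeddings only

opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
tokens = torch.tensor([[1, 99, 5]])              # 99 = novel noun id (made up)
out, _ = model["lstm"](model["embed"](tokens))
loss = model["head"](out[:, -1]).logsumexp(-1).mean()  # dummy loss
loss.backward()
opt.step()
print(model["embed"].weight.grad is not None,    # True: embeddings updated
      model["head"].weight.grad)                 # None: head frozen
```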
2003.10670
Xuesong Li
Xuesong Li, Jose Guivant, Subhan Khan
Real-time 3D object proposal generation and classification under limited processing resources
null
Robotics and Autonomous Systems, 130 (2020) 103557
10.1016/j.robot.2020.103557
2-s2.0-85084829367
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of detecting 3D objects is important to various robotic applications. The existing deep learning-based detection techniques have achieved impressive performance. However, these techniques require a graphics processing unit (GPU) to run in real time. To achieve real-time 3D object detection with limited computational resources for robots, we propose an efficient detection method consisting of 3D proposal generation and classification. The proposal generation is mainly based on point segmentation, while the proposal classification is performed by a lightweight convolutional neural network (CNN) model. To validate our method, the KITTI dataset is utilized. The experimental results demonstrate the capability of the proposed real-time 3D object detection method on point clouds, with competitive object recall and classification performance.
[ { "created": "Tue, 24 Mar 2020 05:36:53 GMT", "version": "v1" } ]
2020-08-14
[ [ "Li", "Xuesong", "" ], [ "Guivant", "Jose", "" ], [ "Khan", "Subhan", "" ] ]
The task of detecting 3D objects is important to various robotic applications. The existing deep learning-based detection techniques have achieved impressive performance. However, these techniques require a graphics processing unit (GPU) to run in real time. To achieve real-time 3D object detection with limited computational resources for robots, we propose an efficient detection method consisting of 3D proposal generation and classification. The proposal generation is mainly based on point segmentation, while the proposal classification is performed by a lightweight convolutional neural network (CNN) model. To validate our method, the KITTI dataset is utilized. The experimental results demonstrate the capability of the proposed real-time 3D object detection method on point clouds, with competitive object recall and classification performance.
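A rough sketch of the proposal-generation stage named above: remove near-ground points and cluster what remains, treating each cluster as a candidate 3D object for the lightweight classifier. The thresholds and the DBSCAN choice are assumptions; the paper's point segmentation is more elaborate.

```python
# Rough sketch of segmentation-based 3D proposal generation: drop
# near-ground points, cluster the rest, and treat each cluster as a
# proposal. Thresholds are arbitrary stand-ins.
import numpy as np
from sklearn.cluster import DBSCAN

def proposals(points, ground_z=0.2, eps=0.8, min_pts=10):
    above = points[points[:, 2] > ground_z]        # crude ground removal
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(above)
    return [above[labels == k] for k in set(labels) if k != -1]

cloud = np.vstack([
    np.random.randn(200, 3) * 0.05,                     # ground-level noise
    np.random.randn(50, 3) * 0.3 + [5.0, 0.0, 1.0],     # an elevated object
])
print([len(p) for p in proposals(cloud)])
```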
2310.15642
Andr\'e Silva
Nuno Saavedra, Andr\'e Silva, Martin Monperrus
GitBug-Actions: Building Reproducible Bug-Fix Benchmarks with GitHub Actions
Accepted to ICSE 2024 Demo
Proceedings of ICSE Tool, 2024
10.1145/3639478.3640023
null
cs.SE
http://creativecommons.org/licenses/by-sa/4.0/
Bug-fix benchmarks are fundamental in advancing various sub-fields of software engineering such as automatic program repair (APR) and fault localization (FL). A good benchmark must include recent examples that accurately reflect technologies and development practices of today. To be executable in the long term, a benchmark must feature test suites that do not degrade over time due to, for example, dependencies that are no longer available. Existing benchmarks fail to meet both criteria. For instance, Defects4J, one of the foremost Java benchmarks, last received an update in 2020. Moreover, full reproducibility has been neglected by the majority of existing benchmarks. In this paper, we present GitBug-Actions: a novel tool for building bug-fix benchmarks with modern and fully-reproducible bug-fixes. GitBug-Actions relies on the most popular CI platform, GitHub Actions, to detect bug-fixes and execute the CI pipeline locally in a controlled and reproducible environment. To the best of our knowledge, we are the first to rely on GitHub Actions to collect bug-fixes. To demonstrate our toolchain, we deploy GitBug-Actions to build a proof-of-concept Go bug-fix benchmark containing executable, fully-reproducible bug-fixes from different repositories. A video demonstrating GitBug-Actions is available at: https://youtu.be/aBWwa1sJYBs.
[ { "created": "Tue, 24 Oct 2023 09:04:14 GMT", "version": "v1" }, { "created": "Tue, 7 Nov 2023 13:25:08 GMT", "version": "v2" }, { "created": "Sun, 21 Jan 2024 12:01:33 GMT", "version": "v3" } ]
2024-03-15
[ [ "Saavedra", "Nuno", "" ], [ "Silva", "André", "" ], [ "Monperrus", "Martin", "" ] ]
Bug-fix benchmarks are fundamental in advancing various sub-fields of software engineering such as automatic program repair (APR) and fault localization (FL). A good benchmark must include recent examples that accurately reflect technologies and development practices of today. To be executable in the long term, a benchmark must feature test suites that do not degrade over time due to, for example, dependencies that are no longer available. Existing benchmarks fail to meet both criteria. For instance, Defects4J, one of the foremost Java benchmarks, last received an update in 2020. Moreover, full reproducibility has been neglected by the majority of existing benchmarks. In this paper, we present GitBug-Actions: a novel tool for building bug-fix benchmarks with modern and fully-reproducible bug-fixes. GitBug-Actions relies on the most popular CI platform, GitHub Actions, to detect bug-fixes and execute the CI pipeline locally in a controlled and reproducible environment. To the best of our knowledge, we are the first to rely on GitHub Actions to collect bug-fixes. To demonstrate our toolchain, we deploy GitBug-Actions to build a proof-of-concept Go bug-fix benchmark containing executable, fully-reproducible bug-fixes from different repositories. A video demonstrating GitBug-Actions is available at: https://youtu.be/aBWwa1sJYBs.
1507.02531
Robert Koenighofer
Roderick Bloem and Ruediger Ehlers and Robert Koenighofer
Cooperative Reactive Synthesis
18 pages, 3 figures. This is an extended version of [7], featuring an additional appendix
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A modern approach to engineering correct-by-construction systems is to synthesize them automatically from formal specifications. Oftentimes, a system can only satisfy its guarantees if certain environment assumptions hold, which motivates their inclusion in the system specification. Experience with modern synthesis approaches shows that synthesized systems tend to satisfy their specifications by actively working towards the violation of the assumptions rather than satisfying assumptions and guarantees together. Such uncooperative behavior is undesirable because it violates the aim of synthesis: the system should try to satisfy its guarantees and use the assumptions only when needed. Also, the assumptions often describe the valid behavior of other components in a bigger system, which should not be obstructed unnecessarily. In this paper, we present a hierarchy of cooperation levels between system and environment. Each level describes how well the system enforces both the assumptions and guarantees. We show how to synthesize systems that achieve the highest possible cooperation level for a given specification in Linear Temporal Logic (LTL). The synthesized systems can also exploit cooperative environment behavior during operation to reach a higher cooperation level that is not enforceable by the system initially. The worst-case time complexity of our synthesis procedure is doubly-exponential, which matches the complexity of standard LTL synthesis. This is an extended version of [7] that features an additional appendix.
[ { "created": "Thu, 9 Jul 2015 14:39:25 GMT", "version": "v1" } ]
2015-07-10
[ [ "Bloem", "Roderick", "" ], [ "Ehlers", "Ruediger", "" ], [ "Koenighofer", "Robert", "" ] ]
A modern approach to engineering correct-by-construction systems is to synthesize them automatically from formal specifications. Oftentimes, a system can only satisfy its guarantees if certain environment assumptions hold, which motivates their inclusion in the system specification. Experience with modern synthesis approaches shows that synthesized systems tend to satisfy their specifications by actively working towards the violation of the assumptions rather than satisfying assumptions and guarantees together. Such uncooperative behavior is undesirable because it violates the aim of synthesis: the system should try to satisfy its guarantees and use the assumptions only when needed. Also, the assumptions often describe the valid behavior of other components in a bigger system, which should not be obstructed unnecessarily. In this paper, we present a hierarchy of cooperation levels between system and environment. Each level describes how well the system enforces both the assumptions and guarantees. We show how to synthesize systems that achieve the highest possible cooperation level for a given specification in Linear Temporal Logic (LTL). The synthesized systems can also exploit cooperative environment behavior during operation to reach a higher cooperation level that is not enforceable by the system initially. The worst-case time complexity of our synthesis procedure is doubly-exponential, which matches the complexity of standard LTL synthesis. This is an extended version of [7] that features an additional appendix.
2404.17094
Yufeng Li
Yufeng Li, Yiwei Ci, Qiusong Yang
TIUP: Effective Processor Verification with Tautology-Induced Universal Properties
Accepted by ASP-DAC 2024; note that this is not the final camera-ready version
null
10.1109/ASP-DAC58780.2024.10473912
null
cs.LO cs.AR cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Design verification is a complex and costly task, especially for large and intricate processor projects. Formal verification techniques provide advantages by thoroughly examining design behaviors, but they require extensive labor and expertise in property formulation. Recent research focuses on verifying designs using the self-consistency universal property, reducing verification difficulty as it is design-independent. However, the single self-consistency property faces false positives and scalability issues due to exponential state space growth. To tackle these challenges, this paper introduces TIUP, a technique using tautologies as universal properties. We show how TIUP effectively uses tautologies as abstract specifications, covering processor data and control paths. TIUP simplifies and streamlines verification for engineers, enabling efficient formal processor verification.
[ { "created": "Fri, 26 Apr 2024 01:05:36 GMT", "version": "v1" } ]
2024-04-29
[ [ "Li", "Yufeng", "" ], [ "Ci", "Yiwei", "" ], [ "Yang", "Qiusong", "" ] ]
Design verification is a complex and costly task, especially for large and intricate processor projects. Formal verification techniques provide advantages by thoroughly examining design behaviors, but they require extensive labor and expertise in property formulation. Recent research focuses on verifying designs using the self-consistency universal property, reducing verification difficulty as it is design-independent. However, the single self-consistency property faces false positives and scalability issues due to exponential state space growth. To tackle these challenges, this paper introduces TIUP, a technique using tautologies as universal properties. We show how TIUP effectively uses tautologies as abstract specifications, covering processor data and control paths. TIUP simplifies and streamlines verification for engineers, enabling efficient formal processor verification.
2007.04118
Xiao Yang
Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, Jun Zhu
RobFR: Benchmarking Adversarial Robustness on Face Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face recognition (FR) has recently made substantial progress and achieved high accuracy on standard benchmarks. However, it has raised security concerns in numerous FR applications because deep CNNs are unusually vulnerable to adversarial examples, and a comprehensive robustness evaluation is still lacking before an FR model is deployed in safety-critical scenarios. To facilitate a better understanding of the adversarial vulnerability on FR, we develop an adversarial robustness evaluation library on FR named \textbf{RobFR}, which serves as a reference for evaluating the robustness of downstream tasks. Specifically, RobFR involves 15 popular naturally trained FR models, 9 models with representative defense mechanisms and 2 commercial FR API services, to perform the robustness evaluation by using various adversarial attacks as an important surrogate. The evaluations are conducted under diverse adversarial settings in terms of dodging and impersonation, $\ell_2$ and $\ell_\infty$, as well as white-box and black-box attacks. We further propose a landmark-guided cutout (LGC) attack method to improve the transferability of adversarial examples for black-box attacks by considering the special characteristics of FR. Based on large-scale evaluations, the commercial FR API services fail to exhibit acceptable performance on robustness evaluation, and we also draw several important conclusions for understanding the adversarial robustness of FR models and providing insights for the design of robust FR models. RobFR is open-source and maintains all extendable modules, i.e., \emph{Datasets}, \emph{FR Models}, \emph{Attacks\&Defenses}, and \emph{Evaluations} at \url{https://github.com/ShawnXYang/Face-Robustness-Benchmark}, which will be continuously updated to promote future research on robust FR.
[ { "created": "Wed, 8 Jul 2020 13:39:22 GMT", "version": "v1" }, { "created": "Wed, 29 Sep 2021 08:01:13 GMT", "version": "v2" } ]
2021-09-30
[ [ "Yang", "Xiao", "" ], [ "Yang", "Dingcheng", "" ], [ "Dong", "Yinpeng", "" ], [ "Su", "Hang", "" ], [ "Yu", "Wenjian", "" ], [ "Zhu", "Jun", "" ] ]
Face recognition (FR) has recently made substantial progress and achieved high accuracy on standard benchmarks. However, it has raised security concerns in numerous FR applications because deep CNNs are unusually vulnerable to adversarial examples, and a comprehensive robustness evaluation is still lacking before an FR model is deployed in safety-critical scenarios. To facilitate a better understanding of the adversarial vulnerability on FR, we develop an adversarial robustness evaluation library on FR named \textbf{RobFR}, which serves as a reference for evaluating the robustness of downstream tasks. Specifically, RobFR involves 15 popular naturally trained FR models, 9 models with representative defense mechanisms and 2 commercial FR API services, to perform the robustness evaluation by using various adversarial attacks as an important surrogate. The evaluations are conducted under diverse adversarial settings in terms of dodging and impersonation, $\ell_2$ and $\ell_\infty$, as well as white-box and black-box attacks. We further propose a landmark-guided cutout (LGC) attack method to improve the transferability of adversarial examples for black-box attacks by considering the special characteristics of FR. Based on large-scale evaluations, the commercial FR API services fail to exhibit acceptable performance on robustness evaluation, and we also draw several important conclusions for understanding the adversarial robustness of FR models and providing insights for the design of robust FR models. RobFR is open-source and maintains all extendable modules, i.e., \emph{Datasets}, \emph{FR Models}, \emph{Attacks\&Defenses}, and \emph{Evaluations} at \url{https://github.com/ShawnXYang/Face-Robustness-Benchmark}, which will be continuously updated to promote future research on robust FR.
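The landmark-guided cutout (LGC) step mentioned above can be sketched in isolation: occlude small patches around facial landmarks to exploit FR-specific structure before computing attack gradients. The coordinates and patch size below are invented, and the full attack would compose this with a standard gradient-based method.

```python
# Sketch of the landmark-guided cutout idea: zero out small patches
# around facial landmarks before a transfer attack. Landmark positions
# and patch size are invented; the paper's attack composes this with a
# standard gradient method.
import torch

def landmark_cutout(img, landmarks, size=8):
    out = img.clone()
    half = size // 2
    for (x, y) in landmarks:
        out[..., max(0, y - half):y + half, max(0, x - half):x + half] = 0.0
    return out

face = torch.rand(1, 3, 112, 112)
lmks = [(36, 48), (76, 48), (56, 80)]    # eyes-and-mouth-ish positions
masked = landmark_cutout(face, lmks)
print(masked[0, 0, 48, 36].item())       # 0.0 inside a cutout patch
```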
1703.07387
Tamal Dey
Tamal K. Dey, Facundo Memoli, Yusu Wang
Topological Analysis of Nerves, Reeb Spaces, Mappers, and Multiscale Mappers
Full version of the paper appearing in International Symposium on Computational Geometry, 2017
null
null
null
cs.CG math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data analysis often concerns not only the space where data come from, but also various types of maps attached to data. In recent years, several related structures have been used to study maps on data, including Reeb spaces, mappers and multiscale mappers. The construction of these structures also relies on the so-called \emph{nerve} of a cover of the domain. In this paper, we aim to analyze the topological information encoded in these structures in order to provide better understanding of these structures and facilitate their practical usage. More specifically, we show that the one-dimensional homology of the nerve complex $N(\mathcal{U})$ of a path-connected cover $\mathcal{U}$ of a domain $X$ cannot be richer than that of the domain $X$ itself. Intuitively, this result means that no new $H_1$-homology class can be "created" under a natural map from $X$ to the nerve complex $N(\mathcal{U})$. Equipping $X$ with a pseudometric $d$, we further refine this result and characterize the classes of $H_1(X)$ that may survive in the nerve complex using the notion of \emph{size} of the covering elements in $\mathcal{U}$. These fundamental results about nerve complexes then lead to an analysis of the $H_1$-homology of Reeb spaces, mappers and multiscale mappers. The analysis of $H_1$-homology groups unfortunately does not extend to higher dimensions. Nevertheless, by using a map-induced metric, establishing a Gromov-Hausdorff convergence result between mappers and the domain, and interleaving relevant modules, we can still analyze the persistent homology groups of (multiscale) mappers to establish a connection to Reeb spaces.
[ { "created": "Tue, 21 Mar 2017 18:50:24 GMT", "version": "v1" } ]
2017-03-23
[ [ "Dey", "Tamal K.", "" ], [ "Memoli", "Facundo", "" ], [ "Wang", "Yusu", "" ] ]
Data analysis often concerns not only the space where data come from, but also various types of maps attached to data. In recent years, several related structures have been used to study maps on data, including Reeb spaces, mappers and multiscale mappers. The construction of these structures also relies on the so-called \emph{nerve} of a cover of the domain. In this paper, we aim to analyze the topological information encoded in these structures in order to provide better understanding of these structures and facilitate their practical usage. More specifically, we show that the one-dimensional homology of the nerve complex $N(\mathcal{U})$ of a path-connected cover $\mathcal{U}$ of a domain $X$ cannot be richer than that of the domain $X$ itself. Intuitively, this result means that no new $H_1$-homology class can be "created" under a natural map from $X$ to the nerve complex $N(\mathcal{U})$. Equipping $X$ with a pseudometric $d$, we further refine this result and characterize the classes of $H_1(X)$ that may survive in the nerve complex using the notion of \emph{size} of the covering elements in $\mathcal{U}$. These fundamental results about nerve complexes then lead to an analysis of the $H_1$-homology of Reeb spaces, mappers and multiscale mappers. The analysis of $H_1$-homology groups unfortunately does not extend to higher dimensions. Nevertheless, by using a map-induced metric, establishing a Gromov-Hausdorff convergence result between mappers and the domain, and interleaving relevant modules, we can still analyze the persistent homology groups of (multiscale) mappers to establish a connection to Reeb spaces.
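One compact way to phrase the first result above, writing $\phi: X \to N(\mathcal{U})$ for the natural map from the domain to the nerve (this phrasing is our paraphrase of the abstract; the paper states the precise hypotheses): the induced map $\phi_*\colon H_1(X) \to H_1(N(\mathcal{U}))$ is surjective, so $\mathrm{rank}\, H_1(N(\mathcal{U})) \leq \mathrm{rank}\, H_1(X)$ and no new one-dimensional homology classes appear in the nerve.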
1707.02000
Kamesh Madduri
Humayun Kabir, Kamesh Madduri
Shared-memory Graph Truss Decomposition
10 pages, conference submission
null
null
null
cs.DC cs.DS cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present PKT, a new shared-memory parallel algorithm and OpenMP implementation for the truss decomposition of large sparse graphs. A k-truss is a dense subgraph definition that can be considered a relaxation of a clique. Truss decomposition refers to a partitioning of all the edges in the graph based on their k-truss membership. The truss decomposition of a graph has many applications. We show that our new approach PKT consistently outperforms other truss decomposition approaches for a collection of large sparse graphs and on a 24-core shared-memory server. PKT is based on a recently proposed algorithm for k-core decomposition.
[ { "created": "Fri, 7 Jul 2017 00:09:09 GMT", "version": "v1" } ]
2017-07-10
[ [ "Kabir", "Humayun", "" ], [ "Madduri", "Kamesh", "" ] ]
We present PKT, a new shared-memory parallel algorithm and OpenMP implementation for the truss decomposition of large sparse graphs. A k-truss is a dense subgraph definition that can be considered a relaxation of a clique. Truss decomposition refers to a partitioning of all the edges in the graph based on their k-truss membership. The truss decomposition of a graph has many applications. We show that our new approach PKT consistently outperforms other truss decomposition approaches for a collection of large sparse graphs and on a 24-core shared-memory server. PKT is based on a recently proposed algorithm for k-core decomposition.
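The sketch below shows the sequential peeling baseline that a parallel truss decomposition such as PKT builds on; the OpenMP scheduling and the k-core-style optimizations of the actual paper are omitted, and the toy graph is ours.

from collections import defaultdict

def truss_decomposition(edges):
    # Serial peeling: repeatedly remove edges whose triangle support cannot
    # sustain a (k+1)-truss, assigning each removed edge truss number k.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    support = {}  # support(e) = number of triangles containing edge e
    for u, v in edges:
        support[frozenset((u, v))] = len(adj[u] & adj[v])
    truss, remaining, k = {}, set(support), 2
    while remaining:
        queue = [e for e in remaining if support[e] <= k - 2]
        while queue:
            e = queue.pop()
            if e not in remaining:
                continue
            remaining.discard(e)
            truss[e] = k
            u, v = tuple(e)
            adj[u].discard(v); adj[v].discard(u)
            for w in adj[u] & adj[v]:  # triangles broken by removing e
                for f in (frozenset((u, w)), frozenset((v, w))):
                    if f in remaining:
                        support[f] -= 1
                        if support[f] <= k - 2:
                            queue.append(f)
        k += 1
    return truss

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
print(truss_decomposition(edges))  # pendant edge (3,4) -> 2, triangle edges -> 3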
1907.07388
Samarth Manoj Brahmbhatt
Samarth Brahmbhatt, Charles C. Kemp and James Hays
Towards Markerless Grasp Capture
Third Workshop on Computer Vision for AR/VR, CVPR 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans excel at grasping objects and manipulating them. Capturing human grasps is important for understanding grasping behavior and reconstructing it realistically in Virtual Reality (VR). However, grasp capture - capturing the pose of a hand grasping an object, and orienting it w.r.t. the object - is difficult because of the complexity and diversity of the human hand, and occlusion. Reflective markers and magnetic trackers traditionally used to mitigate this difficulty introduce undesirable artifacts in images and can interfere with natural grasping behavior. We present preliminary work on a completely marker-less algorithm for grasp capture from a video depicting a grasp. We show how recent advances in 2D hand pose estimation can be used with well-established optimization techniques. Uniquely, our algorithm can also capture hand-object contact in detail and integrate it in the grasp capture process. This is work in progress; more details are available at https://contactdb.cc.gatech.edu/grasp_capture.html.
[ { "created": "Wed, 17 Jul 2019 08:41:21 GMT", "version": "v1" } ]
2019-07-18
[ [ "Brahmbhatt", "Samarth", "" ], [ "Kemp", "Charles C.", "" ], [ "Hays", "James", "" ] ]
Humans excel at grasping objects and manipulating them. Capturing human grasps is important for understanding grasping behavior and reconstructing it realistically in Virtual Reality (VR). However, grasp capture - capturing the pose of a hand grasping an object, and orienting it w.r.t. the object - is difficult because of the complexity and diversity of the human hand, and occlusion. Reflective markers and magnetic trackers traditionally used to mitigate this difficulty introduce undesirable artifacts in images and can interfere with natural grasping behavior. We present preliminary work on a completely marker-less algorithm for grasp capture from a video depicting a grasp. We show how recent advances in 2D hand pose estimation can be used with well-established optimization techniques. Uniquely, our algorithm can also capture hand-object contact in detail and integrate it in the grasp capture process. This is work in progress; more details are available at https://contactdb.cc.gatech.edu/grasp_capture.html.
2201.09536
Thang X. Vu
Thang X. Vu, Nicola Maturo, Symeon Chatzinotas, Joel Grotz, Tom Christophory, Bj\"orn Ottersten
Dynamic Bandwidth Allocation and Edge Caching Optimization for Nonlinear Content Delivery through Flexible Multibeam Satellites
Accepted to IEEE ICC 2022
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Next-generation multibeam satellites open up a new way to design satellite communication channels, with full flexibility in bandwidth, transmit power, and beam coverage management. In this paper, we exploit the flexible multibeam satellite capabilities and the geographical distribution of users to improve the performance of satellite-assisted edge caching systems. Our aim is to jointly optimize the multibeam bandwidth allocation and the caching decisions at the edge nodes to address two important problems: i) cache feeding time minimization and ii) cache hits maximization. To tackle the non-convexity of the joint optimization problem, we transform the original problem into a difference-of-convex (DC) form, which is then solved by the proposed iterative algorithm, whose convergence to at least a local optimum is theoretically guaranteed. Furthermore, the effectiveness of the proposed design is evaluated under the realistic beam coverage of the SES-14 satellite and the MovieLens data set. Numerical results show that our proposed joint design can reduce the cache feeding time by 50\% and increase the cache hit ratio (CHR) by 10\% to 20\% compared to existing solutions. Furthermore, we examine the impact of multispot beams and multicarrier wide beams on the joint design and discuss potential research directions.
[ { "created": "Mon, 24 Jan 2022 09:07:41 GMT", "version": "v1" } ]
2022-01-25
[ [ "Vu", "Thang X.", "" ], [ "Maturo", "Nicola", "" ], [ "Chatzinotas", "Symeon", "" ], [ "Grotz", "Joel", "" ], [ "Christophory", "Tom", "" ], [ "Ottersten", "Björn", "" ] ]
Next-generation multibeam satellites open up a new way to design satellite communication channels, with full flexibility in bandwidth, transmit power, and beam coverage management. In this paper, we exploit the flexible multibeam satellite capabilities and the geographical distribution of users to improve the performance of satellite-assisted edge caching systems. Our aim is to jointly optimize the multibeam bandwidth allocation and the caching decisions at the edge nodes to address two important problems: i) cache feeding time minimization and ii) cache hits maximization. To tackle the non-convexity of the joint optimization problem, we transform the original problem into a difference-of-convex (DC) form, which is then solved by the proposed iterative algorithm, whose convergence to at least a local optimum is theoretically guaranteed. Furthermore, the effectiveness of the proposed design is evaluated under the realistic beam coverage of the SES-14 satellite and the MovieLens data set. Numerical results show that our proposed joint design can reduce the cache feeding time by 50\% and increase the cache hit ratio (CHR) by 10\% to 20\% compared to existing solutions. Furthermore, we examine the impact of multispot beams and multicarrier wide beams on the joint design and discuss potential research directions.
2206.13773
Max Koster
Max Koster
On Relaxation of Dominant Sets
null
null
null
null
cs.DS cs.DM math.CO
http://creativecommons.org/licenses/by/4.0/
In a graph $G = (V,E)$, a $k$-ruling set $S$ is one in which all vertices in $V \setminus S$ are at distance at most $k$ from $S$. Finding a minimum $k$-ruling set is intrinsically linked to the minimum dominating set problem and the maximal independent set problem, which have been extensively studied in graph theory. This paper presents the first known algorithm for solving all $k$-ruling set problems in conjunction with known minimum dominating set algorithms, at only a polynomial additional time cost compared to a minimum dominating set. The algorithm further succeeds for $(\alpha, \alpha - 1)$-ruling sets with $\alpha > 1$, in which constraints exist on the proximity of vertices $v \in S$. This secondary application instead works in conjunction with maximal independent set algorithms.
[ { "created": "Tue, 28 Jun 2022 05:59:51 GMT", "version": "v1" } ]
2022-06-29
[ [ "Koster", "Max", "" ] ]
In a graph $G = (V,E)$, a $k$-ruling set $S$ is one in which all vertices in $V \setminus S$ are at distance at most $k$ from $S$. Finding a minimum $k$-ruling set is intrinsically linked to the minimum dominating set problem and the maximal independent set problem, which have been extensively studied in graph theory. This paper presents the first known algorithm for solving all $k$-ruling set problems in conjunction with known minimum dominating set algorithms, at only a polynomial additional time cost compared to a minimum dominating set. The algorithm further succeeds for $(\alpha, \alpha - 1)$-ruling sets with $\alpha > 1$, in which constraints exist on the proximity of vertices $v \in S$. This secondary application instead works in conjunction with maximal independent set algorithms.
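A small sketch of the relationship the abstract relies on: a dominating set is exactly a 1-ruling set, and the k-ruling property can be verified by multi-source BFS. The greedy dominating-set heuristic below is a stand-in for the exact algorithms the paper plugs in; the toy graph is ours.

from collections import deque

def greedy_dominating_set(adj):
    # Greedy heuristic: repeatedly pick the vertex that dominates the most
    # still-undominated vertices (a stand-in for exact algorithms).
    undominated = set(adj)
    dom = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom

def is_k_ruling(adj, s, k):
    # Multi-source BFS from S; every vertex must lie within distance k of S.
    dist = {v: 0 for v in s}
    q = deque(s)
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return all(dist.get(v, float("inf")) <= k for v in adj)

# Path graph 0-1-2-3-4-5: a dominating set is exactly a 1-ruling set.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
d = greedy_dominating_set(adj)
print(d, is_k_ruling(adj, d, 1))  # {1, 4} True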
2004.11992
Bram Wallace
Bram Wallace, Bharath Hariharan
Extending and Analyzing Self-Supervised Learning Across Domains
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-supervised representation learning has achieved impressive results in recent years, with experiments primarily conducted on ImageNet or other similarly large internet imagery datasets. There has been little to no work with these methods on other, smaller domains, such as satellite, textural, or biological imagery. We experiment with several popular methods on an unprecedented variety of domains. We discover, among other findings, that Rotation is by far the most semantically meaningful task, with much of the performance of Jigsaw and Instance Discrimination being attributable to the nature of their induced distribution rather than to semantic understanding. Additionally, there are several areas, such as fine-grained classification, where all tasks underperform. We quantitatively and qualitatively diagnose the reasons for these failures and successes via novel experiments studying pretext generalization, random labelings, and implicit dimensionality. Code and models are available at https://github.com/BramSW/Extending_SSRL_Across_Domains/.
[ { "created": "Fri, 24 Apr 2020 21:18:02 GMT", "version": "v1" }, { "created": "Mon, 17 Aug 2020 16:13:46 GMT", "version": "v2" } ]
2020-08-18
[ [ "Wallace", "Bram", "" ], [ "Hariharan", "Bharath", "" ] ]
Self-supervised representation learning has achieved impressive results in recent years, with experiments primarily conducted on ImageNet or other similarly large internet imagery datasets. There has been little to no work with these methods on other, smaller domains, such as satellite, textural, or biological imagery. We experiment with several popular methods on an unprecedented variety of domains. We discover, among other findings, that Rotation is by far the most semantically meaningful task, with much of the performance of Jigsaw and Instance Discrimination being attributable to the nature of their induced distribution rather than to semantic understanding. Additionally, there are several areas, such as fine-grained classification, where all tasks underperform. We quantitatively and qualitatively diagnose the reasons for these failures and successes via novel experiments studying pretext generalization, random labelings, and implicit dimensionality. Code and models are available at https://github.com/BramSW/Extending_SSRL_Across_Domains/.
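For reference, the Rotation pretext task the abstract singles out amounts to the following label construction (a minimal NumPy sketch with a toy batch; the actual experiments train a network to predict these labels):

import numpy as np

def rotation_pretext_batch(images, rng):
    # Each image is rotated by 0/90/180/270 degrees; the pretext label
    # (0..3) records which rotation was applied, and the network predicts it.
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels

rng = np.random.default_rng(0)
images = rng.random((8, 32, 32, 3))          # toy batch of 8 square RGB images
x, y = rotation_pretext_batch(images, rng)
print(x.shape, y)                            # (8, 32, 32, 3) plus the labels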
2110.00821
Elisa Claire Alem\'an Carre\'on
Elisa Claire Alem\'an Carre\'on and Hirofumi Nonaka and Toru Hiraoka
Relation Analysis between Hotel Review Rating Scores and Sentiment Analysis of Reviews by Chinese Tourists Visiting Japan
Translation of the original in Japanese
The Japanese Journal of the Institute of Industrial Applications Engineers (JJIIAE), 2018, Vol. 6, No. 2. pp. 95-99
10.12792/jjiiae.6.2.95
null
cs.IR
http://creativecommons.org/licenses/by-nc-sa/4.0/
The importance of online hotel review sites has become more and more apparent. The reviews that users consult on these sites strongly influence their purchase behavior, and as such, reviews are important to companies and researchers alike. The majority of review sites offer both text reviews and numerical hotel ratings, and both information sources are widely used by researchers as representations of a customer's sentiment and opinion. However, an opinion is a difficult concept to measure, and the relation between these two sources determines whether or not it is safe to treat them interchangeably in research. In this study we utilize an entropy-based Support Vector Machine to classify positive and negative sentiment in hotel reviews from the site Ctrip, calculate the ratio of positive to negative sentiment in each review, and examine its correlation with the review's rating score using the Spearman and Kendall correlation coefficients and the Maximal Information Coefficient (MIC).
[ { "created": "Sat, 2 Oct 2021 15:07:46 GMT", "version": "v1" } ]
2021-10-05
[ [ "Carreón", "Elisa Claire Alemán", "" ], [ "Nonaka", "Hirofumi", "" ], [ "Hiraoka", "Toru", "" ] ]
The importance of online hotel review sites has become more and more apparent. The reviews that users consult on these sites strongly influence their purchase behavior, and as such, reviews are important to companies and researchers alike. The majority of review sites offer both text reviews and numerical hotel ratings, and both information sources are widely used by researchers as representations of a customer's sentiment and opinion. However, an opinion is a difficult concept to measure, and the relation between these two sources determines whether or not it is safe to treat them interchangeably in research. In this study we utilize an entropy-based Support Vector Machine to classify positive and negative sentiment in hotel reviews from the site Ctrip, calculate the ratio of positive to negative sentiment in each review, and examine its correlation with the review's rating score using the Spearman and Kendall correlation coefficients and the Maximal Information Coefficient (MIC).
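The correlation step of the analysis can be reproduced in outline with SciPy. In this sketch, synthetic positive-sentiment ratios stand in for the output of the entropy-based SVM classifier, and MIC is only noted to avoid an extra dependency:

import numpy as np
from scipy.stats import spearmanr, kendalltau

# Synthetic stand-ins: per-review positive-sentiment ratios and rating scores.
rng = np.random.default_rng(1)
rating = rng.integers(1, 6, size=200).astype(float)
positive_ratio = np.clip(rating / 5 + rng.normal(0, 0.15, size=200), 0, 1)

rho, p_rho = spearmanr(positive_ratio, rating)
tau, p_tau = kendalltau(positive_ratio, rating)
print(f"Spearman rho={rho:.2f} (p={p_rho:.1e}), Kendall tau={tau:.2f} (p={p_tau:.1e})")
# MIC would come from an external package such as minepy, omitted here.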
2107.07771
Yajing Sun
Yajing Sun, Yue Hu, Luxi Xing, Yuqiang Xie, Xiangpeng Wei
Know Deeper: Knowledge-Conversation Cyclic Utilization Mechanism for Open-domain Dialogue Generation
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
End-to-end intelligent neural dialogue systems suffer from the problems of generating inconsistent and repetitive responses. Existing dialogue models focus on unilaterally incorporating personal knowledge into the dialogue, while ignoring the fact that incorporating personality-related conversation information back into personal knowledge, as a bilateral information flow, boosts the quality of the subsequent conversation. Besides, it is indispensable to control personal knowledge utilization at the conversation level. In this paper, we propose a conversation-adaptive multi-view persona-aware response generation model that aims at enhancing conversation consistency and alleviating repetition in two respects. First, we consider conversation consistency from multiple views. From the view of the persona profile, we design a novel interaction module that not only iteratively incorporates personalized knowledge into each conversation turn, but also captures personality-related information from the conversation to enhance the semantic representation of personalized knowledge. From the view of speaking style, we introduce a speaking style vector and feed it into the decoder to maintain speaking-style consistency. To avoid conversation repetition, we devise a coverage mechanism to keep track of the activation of personal knowledge utilization. Experiments with both automatic and human evaluation verify the superiority of our model over previous models.
[ { "created": "Fri, 16 Jul 2021 08:59:06 GMT", "version": "v1" } ]
2021-07-19
[ [ "Sun", "Yajing", "" ], [ "Hu", "Yue", "" ], [ "Xing", "Luxi", "" ], [ "Xie", "Yuqiang", "" ], [ "Wei", "Xiangpeng", "" ] ]
End-to-end intelligent neural dialogue systems suffer from the problems of generating inconsistent and repetitive responses. Existing dialogue models focus on unilaterally incorporating personal knowledge into the dialogue, while ignoring the fact that incorporating personality-related conversation information back into personal knowledge, as a bilateral information flow, boosts the quality of the subsequent conversation. Besides, it is indispensable to control personal knowledge utilization at the conversation level. In this paper, we propose a conversation-adaptive multi-view persona-aware response generation model that aims at enhancing conversation consistency and alleviating repetition in two respects. First, we consider conversation consistency from multiple views. From the view of the persona profile, we design a novel interaction module that not only iteratively incorporates personalized knowledge into each conversation turn, but also captures personality-related information from the conversation to enhance the semantic representation of personalized knowledge. From the view of speaking style, we introduce a speaking style vector and feed it into the decoder to maintain speaking-style consistency. To avoid conversation repetition, we devise a coverage mechanism to keep track of the activation of personal knowledge utilization. Experiments with both automatic and human evaluation verify the superiority of our model over previous models.
1708.05125
Feiyun Zhu
Feiyun Zhu
Hyperspectral Unmixing: Ground Truth Labeling, Datasets, Benchmark Performances and Survey
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hyperspectral unmixing (HU) is a very useful and increasingly popular preprocessing step for a wide range of hyperspectral applications. However, HU research has been considerably constrained by three factors: (a) the number of hyperspectral images (especially those with ground truths) is very limited; (b) the ground truths of most hyperspectral images are not shared on the web, which may cause a lot of unnecessary trouble for researchers evaluating their algorithms; (c) the codes of most state-of-the-art methods are not shared, which may also delay the testing of new methods. Accordingly, this paper deals with the above issues from the following three perspectives: (1) as a profound contribution, we provide a general labeling method for HU. With it, we labeled up to 15 hyperspectral images, providing 18 versions of ground truths. To the best of our knowledge, this is the first paper to summarize and share up to 15 hyperspectral images and their 18 versions of ground truths for HU. Observing that hyperspectral classification (HyC) has many more standard datasets (whose ground truths are generally publicly shared) than HU, we propose an interesting method to transform HyC datasets for HU research. (2) To further facilitate the evaluation of HU methods under different conditions, we reviewed and implemented an algorithm to generate a complex synthetic hyperspectral image. By tuning the hyper-parameters in the code, HU methods can be verified from four perspectives. The code will also be shared on the web. (3) To provide a standard comparison, we reviewed up to 10 state-of-the-art HU algorithms, then selected the 5 most representative as benchmark HU algorithms, and compared them on the 15 real hyperspectral datasets. The experiment results are fully reproducible; the implemented codes will be shared on the web.
[ { "created": "Thu, 17 Aug 2017 03:35:02 GMT", "version": "v1" }, { "created": "Wed, 11 Oct 2017 16:22:06 GMT", "version": "v2" } ]
2017-10-12
[ [ "Zhu", "Feiyun", "" ] ]
Hyperspectral unmixing (HU) is a very useful and increasingly popular preprocessing step for a wide range of hyperspectral applications. However, HU research has been considerably constrained by three factors: (a) the number of hyperspectral images (especially those with ground truths) is very limited; (b) the ground truths of most hyperspectral images are not shared on the web, which may cause a lot of unnecessary trouble for researchers evaluating their algorithms; (c) the codes of most state-of-the-art methods are not shared, which may also delay the testing of new methods. Accordingly, this paper deals with the above issues from the following three perspectives: (1) as a profound contribution, we provide a general labeling method for HU. With it, we labeled up to 15 hyperspectral images, providing 18 versions of ground truths. To the best of our knowledge, this is the first paper to summarize and share up to 15 hyperspectral images and their 18 versions of ground truths for HU. Observing that hyperspectral classification (HyC) has many more standard datasets (whose ground truths are generally publicly shared) than HU, we propose an interesting method to transform HyC datasets for HU research. (2) To further facilitate the evaluation of HU methods under different conditions, we reviewed and implemented an algorithm to generate a complex synthetic hyperspectral image. By tuning the hyper-parameters in the code, HU methods can be verified from four perspectives. The code will also be shared on the web. (3) To provide a standard comparison, we reviewed up to 10 state-of-the-art HU algorithms, then selected the 5 most representative as benchmark HU algorithms, and compared them on the 15 real hyperspectral datasets. The experiment results are fully reproducible; the implemented codes will be shared on the web.
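As a rough illustration of what such a synthetic-image generator produces at its core, here is a toy linear-mixing-model sketch; the paper's generator is considerably more complex, with hyper-parameters covering four evaluation perspectives, and all names here are ours.

import numpy as np

def synthetic_hsi(n_bands=50, n_endmembers=3, n_pixels=400, snr_db=30, seed=0):
    # Linear mixing model Y = M A + noise, with abundances on the simplex.
    rng = np.random.default_rng(seed)
    m = rng.random((n_bands, n_endmembers))               # endmember spectra
    a = rng.dirichlet(np.ones(n_endmembers), n_pixels).T  # sum-to-one abundances
    y = m @ a
    noise_power = y.var() / 10 ** (snr_db / 10)
    y += rng.normal(0, np.sqrt(noise_power), y.shape)
    return y, m, a

y, m, a = synthetic_hsi()
print(y.shape, a.sum(axis=0)[:5])  # (50, 400); abundance columns sum to 1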
2308.14404
Forouzan Farzinnejad
Forouzan Farzinnejad, Javad Rasti, Navid Khezrian, Jens Grubert
The Effect of an Exergame on the Shadow Play Skill Based on Muscle Memory for Young Female Participants: The Case of Forehand Drive in Table Tennis
9 pages, 6 figures, The 22nd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning and practicing table tennis with traditional methods is a long, tedious process and may even lead to the internalization of incorrect techniques if not supervised by a coach. To overcome these issues, the presented study proposes an exergame with the aim of enhancing young female novice players' performance by boosting muscle memory, making practice more interesting, and decreasing the probability of faulty training. Specifically, we propose an exergame based on skeleton tracking and a virtual avatar to support correct shadow practice for learning the forehand drive technique without the presence of a coach. We recruited 44 schoolgirls aged between 8 and 12 years without a background in playing table tennis and divided them into control and experimental groups. We examined their stroke skills (via the Mott-Lockhart test) and the error coefficient of their forehand drives (using a ball machine) in the pretest, post-test, and follow-up tests (10 days after the post-test). Our results showed that the experimental group made progress in both the short and the long term, while the control group improved only in the short term. Further, the scale of improvement in the experimental group was significantly higher than in the control group. Given that the early stages of learning, particularly for young girls, are important in the internalization of individual skills in would-be athletes, this method could help promote correct training for young females.
[ { "created": "Mon, 28 Aug 2023 08:39:26 GMT", "version": "v1" } ]
2023-08-29
[ [ "Farzinnejad", "Forouzan", "" ], [ "Rasti", "Javad", "" ], [ "Khezrian", "Navid", "" ], [ "Grubert", "Jens", "" ] ]
Learning and practicing table tennis with traditional methods is a long, tedious process and may even lead to the internalization of incorrect techniques if not supervised by a coach. To overcome these issues, the presented study proposes an exergame with the aim of enhancing young female novice players' performance by boosting muscle memory, making practice more interesting, and decreasing the probability of faulty training. Specifically, we propose an exergame based on skeleton tracking and a virtual avatar to support correct shadow practice for learning the forehand drive technique without the presence of a coach. We recruited 44 schoolgirls aged between 8 and 12 years without a background in playing table tennis and divided them into control and experimental groups. We examined their stroke skills (via the Mott-Lockhart test) and the error coefficient of their forehand drives (using a ball machine) in the pretest, post-test, and follow-up tests (10 days after the post-test). Our results showed that the experimental group made progress in both the short and the long term, while the control group improved only in the short term. Further, the scale of improvement in the experimental group was significantly higher than in the control group. Given that the early stages of learning, particularly for young girls, are important in the internalization of individual skills in would-be athletes, this method could help promote correct training for young females.
2204.03503
Nicola Strisciuglio
Stefan Haller, Adina Aldea, Christin Seifert, Nicola Strisciuglio
Survey on Automated Short Answer Grading with Deep Learning: from Word Embeddings to Transformers
Under review
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Automated short answer grading (ASAG) has gained attention in education as a means to scale educational tasks to the growing number of students. Recent progress in Natural Language Processing and Machine Learning has largely influenced the field of ASAG, whose recent research advancements we survey. We complement previous surveys by providing a comprehensive analysis of recently published methods that deploy deep learning approaches. In particular, we focus our analysis on the transition from hand-engineered features to representation learning approaches, which learn representative features for the task at hand automatically from large corpora of data. We structure our analysis of deep learning methods along three categories: word embeddings, sequential models, and attention-based methods. Deep learning impacted ASAG differently than other fields of NLP, as we noticed that the learned representations alone do not suffice to achieve the best results, but rather work in a complementary way with hand-engineered features. The best performance is indeed achieved by methods that combine carefully hand-engineered features with the power of the semantic descriptions provided by the latest models, such as transformer architectures. We identify challenges and provide an outlook on research directions that can be addressed in the future.
[ { "created": "Fri, 11 Mar 2022 13:47:08 GMT", "version": "v1" } ]
2022-04-08
[ [ "Haller", "Stefan", "" ], [ "Aldea", "Adina", "" ], [ "Seifert", "Christin", "" ], [ "Strisciuglio", "Nicola", "" ] ]
Automated short answer grading (ASAG) has gained attention in education as a means to scale educational tasks to the growing number of students. Recent progress in Natural Language Processing and Machine Learning has largely influenced the field of ASAG, whose recent research advancements we survey. We complement previous surveys by providing a comprehensive analysis of recently published methods that deploy deep learning approaches. In particular, we focus our analysis on the transition from hand-engineered features to representation learning approaches, which learn representative features for the task at hand automatically from large corpora of data. We structure our analysis of deep learning methods along three categories: word embeddings, sequential models, and attention-based methods. Deep learning impacted ASAG differently than other fields of NLP, as we noticed that the learned representations alone do not suffice to achieve the best results, but rather work in a complementary way with hand-engineered features. The best performance is indeed achieved by methods that combine carefully hand-engineered features with the power of the semantic descriptions provided by the latest models, such as transformer architectures. We identify challenges and provide an outlook on research directions that can be addressed in the future.
2202.05433
Jieyu Zhang
Jieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, Alexander Ratner
A Survey on Programmatic Weak Supervision
8 pages
null
null
null
cs.LG cs.AI stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Labeling training data has become one of the major roadblocks to using machine learning. Among various weak supervision paradigms, programmatic weak supervision (PWS) has achieved remarkable success in easing the manual labeling bottleneck by programmatically synthesizing training labels from multiple potentially noisy supervision sources. This paper presents a comprehensive survey of recent advances in PWS. In particular, we give a brief introduction to the PWS learning paradigm and review representative approaches for each component within PWS's learning workflow. In addition, we discuss complementary learning paradigms for tackling limited labeled data scenarios and how these related approaches can be used in conjunction with PWS. Finally, we identify several critical challenges that remain under-explored in the area to hopefully inspire future research directions in the field.
[ { "created": "Fri, 11 Feb 2022 04:05:38 GMT", "version": "v1" }, { "created": "Mon, 14 Feb 2022 05:45:58 GMT", "version": "v2" } ]
2022-02-15
[ [ "Zhang", "Jieyu", "" ], [ "Hsieh", "Cheng-Yu", "" ], [ "Yu", "Yue", "" ], [ "Zhang", "Chao", "" ], [ "Ratner", "Alexander", "" ] ]
Labeling training data has become one of the major roadblocks to using machine learning. Among various weak supervision paradigms, programmatic weak supervision (PWS) has achieved remarkable success in easing the manual labeling bottleneck by programmatically synthesizing training labels from multiple potentially noisy supervision sources. This paper presents a comprehensive survey of recent advances in PWS. In particular, we give a brief introduction to the PWS learning paradigm and review representative approaches for each component within PWS's learning workflow. In addition, we discuss complementary learning paradigms for tackling limited labeled data scenarios and how these related approaches can be used in conjunction with PWS. Finally, we identify several critical challenges that remain under-explored in the area to hopefully inspire future research directions in the field.
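The simplest label model in the PWS workflow sketched above is a majority vote over labeling-function outputs; practical label models (e.g. Snorkel's) additionally estimate per-source accuracies. A minimal sketch, with -1 denoting abstention:

import numpy as np

def majority_vote(label_matrix, n_classes, abstain=-1):
    # Aggregate noisy labeling-function votes per data point by majority,
    # ignoring abstentions; rows where every LF abstains stay unlabeled.
    out = np.empty(label_matrix.shape[0], dtype=int)
    for i, row in enumerate(label_matrix):
        votes = row[row != abstain]
        if votes.size == 0:
            out[i] = abstain
        else:
            out[i] = np.bincount(votes, minlength=n_classes).argmax()
    return out

# 4 data points x 3 labeling functions
L = np.array([[1, 1, -1],
              [0, -1, 0],
              [-1, -1, -1],
              [1, 0, 0]])
print(majority_vote(L, n_classes=2))  # [ 1  0 -1  0]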
1607.03408
Gabriel Martins Dias
Gabriel Martins Dias
Performance Optimization of WSNs using External Information
Published in: IEEE 14th International Symposium and Workshops on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2013 (copyright has been transferred to IEEE)
null
10.1109/WoWMoM.2013.6583430
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this work is to describe a self-management system that correlates data sensed by different Wireless Sensor Networks (WSNs) and adjusts the number of active nodes in each network to provide an appropriate number of measurements. The architecture considers the factors that make the external data relevant to the local network, such as the distance between covered areas, the relation between the types of sensed data, and the reliability of the measurements. As a result, the operation of each network will be tuned to trade off the accuracy of the measurements against the power consumption.
[ { "created": "Tue, 12 Jul 2016 15:35:42 GMT", "version": "v1" } ]
2016-07-13
[ [ "Dias", "Gabriel Martins", "" ] ]
The goal of this work is to describe a self-management system that correlates data sensed by different Wireless Sensor Networks (WSNs) and adjusts the number of active nodes in each network to provide an appropriate number of measurements. The architecture considers the factors that make the external data relevant to the local network, such as the distance between covered areas, the relation between the types of sensed data, and the reliability of the measurements. As a result, the operation of each network will be tuned to trade off the accuracy of the measurements against the power consumption.
1907.04592
Alexey Potapov
Alexey Potapov, Anatoly Belikov, Vitaly Bogdanov, Alexander Scherbatiy
Differentiable Probabilistic Logic Networks
null
null
null
null
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic logic reasoning is a central component of such cognitive architectures as OpenCog. However, as an integrative architecture, OpenCog facilitates cognitive synergy via hybridization of different inference methods. In this paper, we introduce a differentiable version of Probabilistic Logic Networks, whose rules operate over tensor truth values in such a way that a chain of reasoning steps constructs a computation graph over tensors that accepts truth values of premises from the knowledge base as input and produces truth values of conclusions as output. This allows both the truth values of premises and the formulas for rules (specified in a form with trainable weights) to be learned by backpropagation, combining subsymbolic optimization and symbolic reasoning.
[ { "created": "Wed, 10 Jul 2019 09:44:10 GMT", "version": "v1" } ]
2019-07-11
[ [ "Potapov", "Alexey", "" ], [ "Belikov", "Anatoly", "" ], [ "Bogdanov", "Vitaly", "" ], [ "Scherbatiy", "Alexander", "" ] ]
Probabilistic logic reasoning is a central component of such cognitive architectures as OpenCog. However, as an integrative architecture, OpenCog facilitates cognitive synergy via hybridization of different inference methods. In this paper, we introduce a differentiable version of Probabilistic Logic Networks, whose rules operate over tensor truth values in such a way that a chain of reasoning steps constructs a computation graph over tensors that accepts truth values of premises from the knowledge base as input and produces truth values of conclusions as output. This allows both the truth values of premises and the formulas for rules (specified in a form with trainable weights) to be learned by backpropagation, combining subsymbolic optimization and symbolic reasoning.
2104.04996
Ran Tamir (Averbuch)
Ran Tamir (Averbuch), Ariel Livshits, and Yonatan Shadmi
Simple Majority Consensus in Networks with Unreliable Communication
null
null
10.3390/e24030333
null
cs.IT cs.DC math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we analyze the performance of a simple majority-rule protocol solving a fundamental coordination problem in distributed systems - \emph{binary majority consensus} - in the presence of probabilistic message loss. Using probabilistic analysis for a large-scale, fully-connected network of $2n$ agents, we prove that the Simple Majority Protocol (SMP) reaches consensus in only three communication rounds with probability approaching $1$ as $n$ grows to infinity. Moreover, if the difference between the numbers of agents that hold different opinions grows at a rate of $\sqrt{n}$, then the SMP with only two communication rounds attains consensus on the majority opinion of the network, and if this difference grows faster than $\sqrt{n}$, then the SMP reaches consensus on the majority opinion of the network in a single round, with probability converging to $1$ exponentially fast as $n \rightarrow \infty$. We also provide some converse results, showing that these requirements are not only sufficient, but also necessary.
[ { "created": "Sun, 11 Apr 2021 11:36:21 GMT", "version": "v1" } ]
2022-03-09
[ [ "Tamir", "Ran", "", "Averbuch" ], [ "Livshits", "Ariel", "" ], [ "Shadmi", "Yonatan", "" ] ]
In this work, we analyze the performance of a simple majority-rule protocol solving a fundamental coordination problem in distributed systems - \emph{binary majority consensus} - in the presence of probabilistic message loss. Using probabilistic analysis for a large-scale, fully-connected network of $2n$ agents, we prove that the Simple Majority Protocol (SMP) reaches consensus in only three communication rounds with probability approaching $1$ as $n$ grows to infinity. Moreover, if the difference between the numbers of agents that hold different opinions grows at a rate of $\sqrt{n}$, then the SMP with only two communication rounds attains consensus on the majority opinion of the network, and if this difference grows faster than $\sqrt{n}$, then the SMP reaches consensus on the majority opinion of the network in a single round, with probability converging to $1$ exponentially fast as $n \rightarrow \infty$. We also provide some converse results, showing that these requirements are not only sufficient, but also necessary.
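A quick simulation of the protocol analyzed above. The tie-breaking rule (an agent keeps its own opinion on a tie) and the parameter choices are our assumptions; the paper's exact model may differ.

import numpy as np

def smp_round(opinions, loss_prob, rng):
    # One round: every agent broadcasts its binary opinion, each message is
    # dropped independently, and agents adopt the majority of what they heard.
    n = opinions.size
    received = rng.random((n, n)) >= loss_prob  # received[i, j]: j's message reaches i
    np.fill_diagonal(received, True)            # an agent always knows its own opinion
    ones = received @ opinions                  # number of 1-opinions each agent hears
    totals = received.sum(axis=1)
    return np.where(ones * 2 > totals, 1, np.where(ones * 2 < totals, 0, opinions))

rng = np.random.default_rng(2)
n = 1000
opinions = (np.arange(n) < n // 2 + int(np.sqrt(n))).astype(int)  # sqrt(n)-size gap
for _ in range(3):
    opinions = smp_round(opinions, loss_prob=0.3, rng=rng)
print(opinions.mean())  # typically 1.0: consensus on the initial majority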
2011.00616
Alexander Wolpert
Evgeny Dantsin and Alexander Wolpert
Similarity Between Points in Metric Measure Spaces
10 pages, 2 figures. In: Proceedings of the 13th International Conference on Similarity Search and Applications, SISAP 2020. Vol. 12440. Lecture Notes in Computer Science. Springer, 2020, pp. 177-184
null
null
null
cs.DM
http://creativecommons.org/licenses/by/4.0/
This paper is about similarity between objects that can be represented as points in metric measure spaces. A metric measure space is a metric space that is also equipped with a measure. For example, a network with distances between its nodes and weights assigned to its nodes is a metric measure space. Given points x and y in different metric measure spaces or in the same space, how similar are they? A well-known approach is to consider x and y similar if their neighborhoods are similar. For metric measure spaces, similarity between neighborhoods is well captured by the Gromov-Hausdorff-Prokhorov distance, but it is NP-hard to compute this distance even in quite simple cases. We propose a tractable alternative: the radial distribution distance between the neighborhoods of x and y. The similarity measure based on the radial distribution distance is coarser than the similarity based on the Gromov-Hausdorff-Prokhorov distance but much easier to compute.
[ { "created": "Sun, 1 Nov 2020 19:52:54 GMT", "version": "v1" } ]
2020-11-03
[ [ "Dantsin", "Evgeny", "" ], [ "Wolpert", "Alexander", "" ] ]
This paper is about similarity between objects that can be represented as points in metric measure spaces. A metric measure space is a metric space that is also equipped with a measure. For example, a network with distances between its nodes and weights assigned to its nodes is a metric measure space. Given points x and y in different metric measure spaces or in the same space, how similar are they? A well-known approach is to consider x and y similar if their neighborhoods are similar. For metric measure spaces, similarity between neighborhoods is well captured by the Gromov-Hausdorff-Prokhorov distance, but it is NP-hard to compute this distance even in quite simple cases. We propose a tractable alternative: the radial distribution distance between the neighborhoods of x and y. The similarity measure based on the radial distribution distance is coarser than the similarity based on the Gromov-Hausdorff-Prokhorov distance but much easier to compute.
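The abstract does not spell out the definition of the radial distribution distance; under one plausible reading, comparing how much measure sits within each radius of the two points, a sketch looks like this (all names and the sup-norm choice are our assumptions):

import numpy as np

def radial_distribution(dist_to_x, weights, radii):
    # F_x(r) = total measure of the ball of radius r around x.
    return np.array([weights[dist_to_x <= r].sum() for r in radii])

def radial_distribution_distance(d1, w1, d2, w2, radii):
    # Sup-norm gap between the two radial distributions.
    return np.abs(radial_distribution(d1, w1, radii)
                  - radial_distribution(d2, w2, radii)).max()

# Two weighted neighborhoods, given as distances from x and from y.
d_x = np.array([0.0, 1.0, 1.0, 2.0]); w_x = np.ones(4)
d_y = np.array([0.0, 1.0, 2.0, 2.0]); w_y = np.ones(4)
radii = np.linspace(0, 2, 21)
print(radial_distribution_distance(d_x, w_x, d_y, w_y, radii))  # 1.0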
2007.15652
Peyman Moghadam
Thomas Lowe, Peyman Moghadam, Everard Edwards, Jason Williams
Canopy Density Estimation in Perennial Horticulture Crops Using 3D Spinning Lidar SLAM
Accepted to Journal of Field Robotics. More information at https://github.com/csiro-robotics/agscan3d
null
10.1002/rob.22006
null
cs.RO eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel canopy density estimation solution using a 3D ray cloud representation for perennial horticultural crops at the field scale. To attain high spatial and temporal fidelity in field conditions, we propose the application of continuous-time 3D SLAM (Simultaneous Localisation and Mapping) to a spinning lidar payload (AgScan3D) mounted on a moving farm vehicle. The AgScan3D data is processed through a Continuous-Time SLAM algorithm into a globally registered 3D ray cloud. The global ray cloud is a canonical data format (a digital twin) from which we can compare vineyard snapshots at multiple times within a season and across seasons. Then, the vineyard rows are automatically extracted from the ray cloud and a novel density calculation is performed to estimate the maximum likelihood canopy densities of the vineyard. This combination of digital twinning, together with the accurate extraction of canopy structure information, allows entire vineyards to be analysed and compared, across the growing season and from year to year. The proposed method is evaluated both in simulation and field experiments. Field experiments were performed at four sites, which varied in vineyard structure and vine management, over two growing seasons and 64 data collection campaigns, resulting in a total traversal of 160 kilometres, 42.4 scanned hectares of vines with a combined total of approximately 93,000 scanned vines. Our experiments show canopy density repeatability of 3.8% (Relative RMSE) per vineyard panel, for acquisition speeds of 5-6 km/h, and under half the standard deviation in estimated densities when compared to an industry standard gap-fraction based solution. The code and field datasets are available at https://github.com/csiro-robotics/agscan3d.
[ { "created": "Thu, 30 Jul 2020 05:51:38 GMT", "version": "v1" }, { "created": "Tue, 15 Dec 2020 00:56:20 GMT", "version": "v2" } ]
2020-12-16
[ [ "Lowe", "Thomas", "" ], [ "Moghadam", "Peyman", "" ], [ "Edwards", "Everard", "" ], [ "Williams", "Jason", "" ] ]
We propose a novel canopy density estimation solution using a 3D ray cloud representation for perennial horticultural crops at the field scale. To attain high spatial and temporal fidelity in field conditions, we propose the application of continuous-time 3D SLAM (Simultaneous Localisation and Mapping) to a spinning lidar payload (AgScan3D) mounted on a moving farm vehicle. The AgScan3D data is processed through a Continuous-Time SLAM algorithm into a globally registered 3D ray cloud. The global ray cloud is a canonical data format (a digital twin) from which we can compare vineyard snapshots at multiple times within a season and across seasons. Then, the vineyard rows are automatically extracted from the ray cloud and a novel density calculation is performed to estimate the maximum likelihood canopy densities of the vineyard. This combination of digital twinning, together with the accurate extraction of canopy structure information, allows entire vineyards to be analysed and compared, across the growing season and from year to year. The proposed method is evaluated both in simulation and field experiments. Field experiments were performed at four sites, which varied in vineyard structure and vine management, over two growing seasons and 64 data collection campaigns, resulting in a total traversal of 160 kilometres, 42.4 scanned hectares of vines with a combined total of approximately 93,000 scanned vines. Our experiments show canopy density repeatability of 3.8% (Relative RMSE) per vineyard panel, for acquisition speeds of 5-6 km/h, and under half the standard deviation in estimated densities when compared to an industry standard gap-fraction based solution. The code and field datasets are available at https://github.com/csiro-robotics/agscan3d.
1803.07724
Jasdeep Singh
Jasdeep Singh, Vincent Ying, Alex Nutkiewicz
Attention on Attention: Architectures for Visual Question Answering (VQA)
Visual Question Answering Project
null
null
null
cs.CL cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual Question Answering (VQA) is an increasingly popular topic in deep learning research, requiring coordination of natural language processing and computer vision modules into a single architecture. We build upon the model which placed first in the VQA Challenge by developing thirteen new attention mechanisms and introducing a simplified classifier. We performed 300 GPU hours of extensive hyperparameter and architecture searches and were able to achieve an evaluation score of 64.78%, outperforming the existing state-of-the-art single model's validation score of 63.15%.
[ { "created": "Wed, 21 Mar 2018 03:05:58 GMT", "version": "v1" } ]
2018-03-22
[ [ "Singh", "Jasdeep", "" ], [ "Ying", "Vincent", "" ], [ "Nutkiewicz", "Alex", "" ] ]
Visual Question Answering (VQA) is an increasingly popular topic in deep learning research, requiring coordination of natural language processing and computer vision modules into a single architecture. We build upon the model which placed first in the VQA Challenge by developing thirteen new attention mechanisms and introducing a simplified classifier. We performed 300 GPU hours of extensive hyperparameter and architecture searches and were able to achieve an evaluation score of 64.78%, outperforming the existing state-of-the-art single model's validation score of 63.15%.
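As context for the thirteen attention mechanisms mentioned above, here is the generic single-glimpse soft-attention template that such VQA architectures vary (a NumPy sketch with random toy weights, not the paper's specific mechanism):

import numpy as np

def soft_attention(image_feats, question_vec, w_img, w_q):
    # Score each image region against the question, softmax over regions,
    # and pool a question-conditioned weighted summary of the image.
    scores = np.tanh(image_feats @ w_img + question_vec @ w_q)  # (regions, hidden)
    logits = scores.sum(axis=1)                                 # (regions,)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ image_feats, weights                       # attended feature

rng = np.random.default_rng(3)
regions, d_img, d_q, hidden = 36, 2048, 512, 256
img = rng.normal(size=(regions, d_img))
q = rng.normal(size=d_q)
attended, w = soft_attention(img, q,
                             rng.normal(size=(d_img, hidden)) * 0.01,
                             rng.normal(size=(d_q, hidden)) * 0.01)
print(attended.shape, w.sum())  # (2048,) 1.0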
2408.04232
Wei Zhang
Wei Zhang, Peng Tang
Enhanced Traffic Flow Prediction with Multi-Segment Fusion Tensor Graph Convolutional Networks
null
null
null
null
cs.LG cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate traffic flow prediction can assist in traffic management, route planning, and congestion mitigation, and it holds significant importance in enhancing the efficiency and reliability of intelligent transportation systems (ITS). However, existing traffic flow prediction models suffer from limitations in capturing the complex spatial-temporal dependencies within traffic networks. In order to address this issue, this study proposes a multi-segment fusion tensor graph convolutional network (MS-FTGCN) for traffic flow prediction with the following three-fold ideas: a) building a unified spatial-temporal graph convolutional framework based on the tensor M-product, which captures spatial-temporal patterns simultaneously; b) incorporating hourly, daily, and weekly components to model the multiple temporal properties of traffic flows; c) fusing the outputs of the three components by an attention mechanism to obtain the final traffic flow prediction results. The results of experiments conducted on two traffic flow datasets demonstrate that the proposed MS-FTGCN outperforms the state-of-the-art models.
[ { "created": "Thu, 8 Aug 2024 05:37:17 GMT", "version": "v1" } ]
2024-08-09
[ [ "Zhang", "Wei", "" ], [ "Tang", "Peng", "" ] ]
Accurate traffic flow prediction can assist in traffic management, route planning, and congestion mitigation, and it holds significant importance in enhancing the efficiency and reliability of intelligent transportation systems (ITS). However, existing traffic flow prediction models suffer from limitations in capturing the complex spatial-temporal dependencies within traffic networks. In order to address this issue, this study proposes a multi-segment fusion tensor graph convolutional network (MS-FTGCN) for traffic flow prediction with the following three-fold ideas: a) building a unified spatial-temporal graph convolutional framework based on the tensor M-product, which captures spatial-temporal patterns simultaneously; b) incorporating hourly, daily, and weekly components to model the multiple temporal properties of traffic flows; c) fusing the outputs of the three components by an attention mechanism to obtain the final traffic flow prediction results. The results of experiments conducted on two traffic flow datasets demonstrate that the proposed MS-FTGCN outperforms the state-of-the-art models.
2211.03128
Giuseppe Vietri
Travis Dick, Cynthia Dwork, Michael Kearns, Terrance Liu, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
Confidence-Ranked Reconstruction of Census Microdata from Published Statistics
null
null
10.1073/pnas.2218605120
null
cs.CY cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
A reconstruction attack on a private dataset $D$ takes as input some publicly accessible information about the dataset and produces a list of candidate elements of $D$. We introduce a new class of data reconstruction attacks based on randomized methods for non-convex optimization. We empirically demonstrate that our attacks can not only reconstruct full rows of $D$ from aggregate query statistics $Q(D)\in \mathbb{R}^m$, but can do so in a way that reliably ranks reconstructed rows by their odds of appearing in the private data, providing a signature that could be used for prioritizing reconstructed rows for further actions such as identity theft or hate crime. We also design a sequence of baselines for evaluating reconstruction attacks. Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution. In other words, the queries $Q(D)$ are permitting reconstruction of elements of this dataset, not the distribution from which $D$ was drawn. These findings are established both on 2010 U.S. decennial Census data and queries and Census-derived American Community Survey datasets. Taken together, our methods and experiments illustrate the risks in releasing numerically precise aggregate statistics of a large dataset, and provide further motivation for the careful application of provably private techniques such as differential privacy.
[ { "created": "Sun, 6 Nov 2022 14:08:43 GMT", "version": "v1" }, { "created": "Mon, 6 Feb 2023 17:32:02 GMT", "version": "v2" } ]
2023-03-29
[ [ "Dick", "Travis", "" ], [ "Dwork", "Cynthia", "" ], [ "Kearns", "Michael", "" ], [ "Liu", "Terrance", "" ], [ "Roth", "Aaron", "" ], [ "Vietri", "Giuseppe", "" ], [ "Wu", "Zhiwei Steven", "" ] ]
A reconstruction attack on a private dataset $D$ takes as input some publicly accessible information about the dataset and produces a list of candidate elements of $D$. We introduce a new class of data reconstruction attacks based on randomized methods for non-convex optimization. We empirically demonstrate that our attacks can not only reconstruct full rows of $D$ from aggregate query statistics $Q(D)\in \mathbb{R}^m$, but can do so in a way that reliably ranks reconstructed rows by their odds of appearing in the private data, providing a signature that could be used for prioritizing reconstructed rows for further actions such as identity theft or hate crime. We also design a sequence of baselines for evaluating reconstruction attacks. Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution. In other words, the queries $Q(D)$ are permitting reconstruction of elements of this dataset, not the distribution from which $D$ was drawn. These findings are established both on 2010 U.S. decennial Census data and queries and Census-derived American Community Survey datasets. Taken together, our methods and experiments illustrate the risks in releasing numerically precise aggregate statistics of a large dataset, and provide further motivation for the careful application of provably private techniques such as differential privacy.
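A toy sketch of reconstruction as randomized non-convex optimization, the idea underlying the attack class above; the paper's attack is far more sophisticated, and its confidence-ranking signal is omitted here. All names and the released statistics are illustrative.

import numpy as np

def reconstruct(query_fn, target_stats, n_rows, n_cols, iters, rng):
    # Randomized local search: flip single bits of a candidate 0/1 dataset
    # to shrink the gap to the published aggregate statistics.
    cand = rng.integers(0, 2, size=(n_rows, n_cols))
    err = np.abs(query_fn(cand) - target_stats).sum()
    for _ in range(iters):
        i, j = rng.integers(n_rows), rng.integers(n_cols)
        cand[i, j] ^= 1                      # propose a single-bit flip
        new_err = np.abs(query_fn(cand) - target_stats).sum()
        if new_err <= err:
            err = new_err                    # keep improving (or equal) moves
        else:
            cand[i, j] ^= 1                  # revert worsening moves
    return cand, err

rng = np.random.default_rng(4)
private = rng.integers(0, 2, size=(30, 5))
query = lambda d: d.mean(axis=0)             # released stats: column means
recon, err = reconstruct(query, query(private), 30, 5, iters=5000, rng=rng)
print(err)  # near 0: the candidate matches the published statistics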
2302.14543
Himanshu .
Himanshu, Jinraj V Pushpangathan and Harikumar Kandath
RRT and Velocity Obstacles-based motion planning for Unmanned Aircraft Systems Traffic Management (UTM)
Currently under review in The 2023 International Conference On Unmanned Aircraft Systems
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In this paper, an algorithm for Unmanned Aircraft Systems Traffic Management (UTM) for a finite number of unmanned aerial vehicles (UAVs) is proposed. This algorithm is developed by combining the Rapidly-Exploring Random Trees (RRT) and Velocity Obstacle (VO) algorithms and is referred to as the RRT-VO UTM algorithm. Here, the RRT algorithm works offline to generate obstacle-free waypoints in a given environment with known static obstacles. The VO algorithm, on the other hand, operates online to avoid collisions with other UAVs and the known static obstacles. The boundaries of the static obstacles are approximated by small circles to facilitate the formulation of the VO algorithm. The proposed algorithm's performance is evaluated using numerical simulation and then compared to the well-known artificial potential field (APF) algorithm for collision avoidance. The advantages of the proposed method are clearly shown in terms of lower path length and collision avoidance capabilities for a challenging scenario.
[ { "created": "Tue, 28 Feb 2023 13:08:11 GMT", "version": "v1" } ]
2023-03-01
[ [ "Himanshu", "", "" ], [ "Pushpangathan", "Jinraj V", "" ], [ "Kandath", "Harikumar", "" ] ]
In this paper, an algorithm for Unmanned Aircraft Systems Traffic Management (UTM) for a finite number of unmanned aerial vehicles (UAVs) is proposed. This algorithm is developed by combining the Rapidly-Exploring Random Trees (RRT) and Velocity Obstacle (VO) algorithms and is referred to as the RRT-VO UTM algorithm. Here, the RRT algorithm works offline to generate obstacle-free waypoints in a given environment with known static obstacles. The VO algorithm, on the other hand, operates online to avoid collisions with other UAVs and the known static obstacles. The boundaries of the static obstacles are approximated by small circles to facilitate the formulation of the VO algorithm. The proposed algorithm's performance is evaluated using numerical simulation and then compared to the well-known artificial potential field (APF) algorithm for collision avoidance. The advantages of the proposed method are clearly shown in terms of lower path length and collision avoidance capabilities for a challenging scenario.
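A minimal velocity-obstacle membership test of the kind the online stage relies on, using the circle (disc) approximation of obstacle boundaries mentioned in the abstract; the function and parameter names are ours, and this time-unbounded 2D test is only a sketch of the general VO formulation.

import numpy as np

def in_velocity_obstacle(p_a, v_a, p_b, v_b, r):
    # True if A's velocity lies inside the velocity obstacle induced by B
    # with combined radius r, i.e. the relative-velocity ray from A ever
    # enters the disc of radius r around B.
    rel_p = p_b - p_a
    rel_v = v_a - v_b
    denom = rel_v @ rel_v
    if denom == 0.0:
        return bool(np.linalg.norm(rel_p) <= r)
    t = max(0.0, (rel_p @ rel_v) / denom)     # time of closest approach, t >= 0
    closest = rel_p - rel_v * t
    return bool(np.linalg.norm(closest) <= r)

p_a, v_a = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p_b, v_b = np.array([5.0, 0.5]), np.array([0.0, 0.0])  # static obstacle disc
print(in_velocity_obstacle(p_a, v_a, p_b, v_b, r=1.0))  # True: collision course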
1710.10800
Bharath Ramesh
Bharath Ramesh, Hong Yang, Garrick Orchard, Ngoc Anh Le Thi, Shihao Zhang and Cheng Xiang
DART: Distribution Aware Retinal Transform for Event-based Cameras
12 pages, revision submitted to TPAMI in Nov 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a generic visual descriptor, termed the distribution aware retinal transform (DART), that encodes the structural context using log-polar grids for event cameras. The DART descriptor is applied to four different problems, namely object classification, tracking, detection and feature matching: (1) The DART features are directly employed as local descriptors in a bag-of-features classification framework and testing is carried out on four standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS, NCaltech-101). (2) Extending the classification system, tracking is demonstrated using two key novelties: (i) For overcoming the low-sample problem for the one-shot learning of a binary classifier, statistical bootstrapping is leveraged with online learning; (ii) To achieve tracker robustness, the scale and rotation equivariance property of the DART descriptors is exploited for the one-shot learning. (3) To solve the long-term object tracking problem, an object detector is designed using the principle of cluster majority voting. The detection scheme is then combined with the tracker to result in a high intersection-over-union score with augmented ground truth annotations on the publicly available event camera dataset. (4) Finally, the event context encoded by DART greatly simplifies the feature correspondence problem, especially for spatio-temporal slices far apart in time, which has not been explicitly tackled in the event-based vision domain.
[ { "created": "Mon, 30 Oct 2017 08:08:57 GMT", "version": "v1" }, { "created": "Tue, 13 Nov 2018 02:37:41 GMT", "version": "v2" }, { "created": "Wed, 14 Nov 2018 07:40:55 GMT", "version": "v3" } ]
2018-11-15
[ [ "Ramesh", "Bharath", "" ], [ "Yang", "Hong", "" ], [ "Orchard", "Garrick", "" ], [ "Thi", "Ngoc Anh Le", "" ], [ "Zhang", "Shihao", "" ], [ "Xiang", "Cheng", "" ] ]
We introduce a generic visual descriptor, termed the distribution aware retinal transform (DART), that encodes the structural context using log-polar grids for event cameras. The DART descriptor is applied to four different problems, namely object classification, tracking, detection and feature matching: (1) The DART features are directly employed as local descriptors in a bag-of-features classification framework and testing is carried out on four standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS, NCaltech-101). (2) Extending the classification system, tracking is demonstrated using two key novelties: (i) For overcoming the low-sample problem for the one-shot learning of a binary classifier, statistical bootstrapping is leveraged with online learning; (ii) To achieve tracker robustness, the scale and rotation equivariance property of the DART descriptors is exploited for the one-shot learning. (3) To solve the long-term object tracking problem, an object detector is designed using the principle of cluster majority voting. The detection scheme is then combined with the tracker to result in a high intersection-over-union score with augmented ground truth annotations on the publicly available event camera dataset. (4) Finally, the event context encoded by DART greatly simplifies the feature correspondence problem, especially for spatio-temporal slices far apart in time, which has not been explicitly tackled in the event-based vision domain.
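For intuition about the log-polar grids underpinning DART, here is a toy binning of events into a normalized log-polar histogram; the real descriptor's distribution-aware construction is more involved, and all parameters below are our own choices.

import numpy as np

def log_polar_histogram(events, center, n_rings=4, n_wedges=8, r_max=32.0):
    # Accumulate event coordinates (x, y) into a log-polar grid around
    # `center`: rings are log-spaced in radius, wedges are uniform in angle.
    d = events - center
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0])                  # in [-pi, pi)
    keep = (r > 0) & (r <= r_max)
    ring = np.clip((np.log1p(r[keep]) / np.log1p(r_max) * n_rings).astype(int),
                   0, n_rings - 1)
    wedge = ((theta[keep] + np.pi) / (2 * np.pi) * n_wedges).astype(int) % n_wedges
    hist = np.zeros((n_rings, n_wedges))
    np.add.at(hist, (ring, wedge), 1)
    return hist / max(1, keep.sum())                      # normalized descriptor

rng = np.random.default_rng(5)
events = rng.normal(0, 8, size=(500, 2))                  # toy event coordinates
print(log_polar_histogram(events, center=np.zeros(2)).round(2))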
1008.5189
Anastasia Paparrizou Ms
Thanasis Balafoutis, Anastasia Paparrizou, Kostas Stergiou and Toby Walsh
Improving the Performance of maxRPC
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Max Restricted Path Consistency (maxRPC) is a local consistency for binary constraints that can achieve considerably stronger pruning than arc consistency. However, existing maxRPC algorithms suffer from overheads and redundancies as they can repeatedly perform many constraint checks without triggering any value deletions. In this paper we propose techniques that can boost the performance of maxRPC algorithms. These include the combined use of two data structures to avoid many redundant constraint checks, and heuristics for the efficient ordering and execution of certain operations. Based on these, we propose two closely related algorithms. The first one, a maxRPC algorithm with optimal O(end^3) time complexity, displays good performance when used stand-alone, but is expensive to apply during search. The second one approximates maxRPC and has O(en^2d^4) time complexity, but a restricted version with O(end^4) complexity can be very efficient when used during search. Both algorithms have O(ed) space complexity. Experimental results demonstrate that the resulting methods consistently outperform previous algorithms for maxRPC, often by large margins, and constitute a more than viable alternative to arc consistency on many problems.
[ { "created": "Mon, 30 Aug 2010 23:50:33 GMT", "version": "v1" } ]
2010-09-01
[ [ "Balafoutis", "Thanasis", "" ], [ "Paparrizou", "Anastasia", "" ], [ "Stergiou", "Kostas", "" ], [ "Walsh", "Toby", "" ] ]
Max Restricted Path Consistency (maxRPC) is a local consistency for binary constraints that can achieve considerably stronger pruning than arc consistency. However, existing maxRPC algorithms suffer from overheads and redundancies as they can repeatedly perform many constraint checks without triggering any value deletions. In this paper we propose techniques that can boost the performance of maxRPC algorithms. These include the combined use of two data structures to avoid many redundant constraint checks, and heuristics for the efficient ordering and execution of certain operations. Based on these, we propose two closely related algorithms. The first one, a maxRPC algorithm with optimal O(end^3) time complexity, displays good performance when used stand-alone, but is expensive to apply during search. The second one approximates maxRPC and has O(en^2d^4) time complexity, but a restricted version with O(end^4) complexity can be very efficient when used during search. Both algorithms have O(ed) space complexity. Experimental results demonstrate that the resulting methods consistently outperform previous algorithms for maxRPC, often by large margins, and constitute a more than viable alternative to arc consistency on many problems.
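A minimal Python sketch of the residue idea behind such redundancy-avoiding data structures: cache the last support found so a constraint check is repeated only when the cached support has been deleted. This is generic residue-based support checking over assumed toy domains, not the paper's full maxRPC algorithm.

def has_support(xi, vi, xj, domains, allowed, residues):
    # Return True if value vi of variable xi still has a support in xj's domain.
    # `allowed` maps (xi, xj) to the set of compatible value pairs.
    cached = residues.get((xi, vi, xj))
    if cached is not None and cached in domains[xj]:
        return True  # residue still valid: no constraint checks needed
    for vj in domains[xj]:  # fall back to scanning the domain
        if (vi, vj) in allowed[(xi, xj)]:
            residues[(xi, vi, xj)] = vj  # cache the new support
            return True
    return False

domains = {"x": {1, 2}, "y": {2, 3}}
allowed = {("x", "y"): {(1, 2), (2, 3)}}
residues = {}
print(has_support("x", 1, "y", domains, allowed, residues))  # True, caches y=2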
2110.11155
Luca Traini PhD
Luca Traini, Vittorio Cortellessa
DeLag: Using Multi-Objective Optimization to Enhance the Detection of Latency Degradation Patterns in Service-based Systems
Accepted for publication in IEEE Transactions on Software Engineering (TSE)
null
10.1109/TSE.2023.3266041
null
cs.SE cs.LG cs.PF
http://creativecommons.org/licenses/by/4.0/
Performance debugging in production is a fundamental activity in modern service-based systems. The diagnosis of performance issues is often time-consuming, since it requires thorough inspection of large volumes of traces and performance indices. In this paper we present DeLag, a novel automated search-based approach for diagnosing performance issues in service-based systems. DeLag identifies subsets of requests that show, in the combination of their Remote Procedure Call execution times, symptoms of potentially relevant performance issues. We call such symptoms Latency Degradation Patterns. DeLag simultaneously searches for multiple latency degradation patterns while optimizing precision, recall and latency dissimilarity. Experimentation on 700 datasets of requests generated from two microservice-based systems shows that our approach provides better and more stable effectiveness than three state-of-the-art approaches and general-purpose machine learning clustering algorithms. DeLag is more effective than all baseline techniques in at least one case study (with p $\leq$ 0.05 and non-negligible effect size). Moreover, DeLag outperforms the second and third most effective baseline techniques in terms of efficiency on the largest datasets used in our evaluation (up to 22%).
[ { "created": "Thu, 21 Oct 2021 13:59:32 GMT", "version": "v1" }, { "created": "Fri, 30 Sep 2022 10:58:53 GMT", "version": "v2" }, { "created": "Thu, 29 Dec 2022 18:53:22 GMT", "version": "v3" }, { "created": "Fri, 7 Apr 2023 14:09:42 GMT", "version": "v4" } ]
2023-04-10
[ [ "Traini", "Luca", "" ], [ "Cortellessa", "Vittorio", "" ] ]
Performance debugging in production is a fundamental activity in modern service-based systems. The diagnosis of performance issues is often time-consuming, since it requires thorough inspection of large volumes of traces and performance indices. In this paper we present DeLag, a novel automated search-based approach for diagnosing performance issues in service-based systems. DeLag identifies subsets of requests that show, in the combination of their Remote Procedure Call execution times, symptoms of potentially relevant performance issues. We call such symptoms Latency Degradation Patterns. DeLag simultaneously searches for multiple latency degradation patterns while optimizing precision, recall and latency dissimilarity. Experimentation on 700 datasets of requests generated from two microservice-based systems shows that our approach provides better and more stable effectiveness than three state-of-the-art approaches and general-purpose machine learning clustering algorithms. DeLag is more effective than all baseline techniques in at least one case study (with p $\leq$ 0.05 and non-negligible effect size). Moreover, DeLag outperforms the second and third most effective baseline techniques in terms of efficiency on the largest datasets used in our evaluation (up to 22%).
2209.03496
Allen Chang
Allen Chang, Lauren Klein, Marcelo R. Rosales, Weiyang Deng, Beth A. Smith, Maja J. Matari\'c
Evaluating Temporal Patterns in Applied Infant Affect Recognition
8 pages, 6 figures, 10th International Conference on Affective Computing and Intelligent Interaction (ACII 2022)
null
null
null
cs.HC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Agents must monitor their partners' affective states continuously in order to understand and engage in social interactions. However, methods for evaluating affect recognition do not account for changes in classification performance that may occur during occlusions or transitions between affective states. This paper addresses temporal patterns in affect classification performance in the context of an infant-robot interaction, where infants' affective states contribute to their ability to participate in a therapeutic leg movement activity. To support robustness to facial occlusions in video recordings, we trained infant affect recognition classifiers using both facial and body features. Next, we conducted an in-depth analysis of our best-performing models to evaluate how performance changed over time as the models encountered missing data and changing infant affect. During time windows when features were extracted with high confidence, a unimodal model trained on facial features achieved the same optimal performance as multimodal models trained on both facial and body features. However, multimodal models outperformed unimodal models when evaluated on the entire dataset. Additionally, model performance was weakest when predicting an affective state transition and improved after multiple predictions of the same affective state. These findings emphasize the benefits of incorporating body features in continuous affect recognition for infants. Our work highlights the importance of evaluating variability in model performance both over time and in the presence of missing data when applying affect recognition to social interactions.
[ { "created": "Wed, 7 Sep 2022 23:29:15 GMT", "version": "v1" } ]
2022-09-09
[ [ "Chang", "Allen", "" ], [ "Klein", "Lauren", "" ], [ "Rosales", "Marcelo R.", "" ], [ "Deng", "Weiyang", "" ], [ "Smith", "Beth A.", "" ], [ "Matarić", "Maja J.", "" ] ]
Agents must monitor their partners' affective states continuously in order to understand and engage in social interactions. However, methods for evaluating affect recognition do not account for changes in classification performance that may occur during occlusions or transitions between affective states. This paper addresses temporal patterns in affect classification performance in the context of an infant-robot interaction, where infants' affective states contribute to their ability to participate in a therapeutic leg movement activity. To support robustness to facial occlusions in video recordings, we trained infant affect recognition classifiers using both facial and body features. Next, we conducted an in-depth analysis of our best-performing models to evaluate how performance changed over time as the models encountered missing data and changing infant affect. During time windows when features were extracted with high confidence, a unimodal model trained on facial features achieved the same optimal performance as multimodal models trained on both facial and body features. However, multimodal models outperformed unimodal models when evaluated on the entire dataset. Additionally, model performance was weakest when predicting an affective state transition and improved after multiple predictions of the same affective state. These findings emphasize the benefits of incorporating body features in continuous affect recognition for infants. Our work highlights the importance of evaluating variability in model performance both over time and in the presence of missing data when applying affect recognition to social interactions.
2007.15415
Luca Reggio
Mai Gehrke, Tomas Jakl, Luca Reggio
A Cook's tour of duality in logic: from quantifiers, through Vietoris, to measures
29 pages
null
null
null
cs.LO math.CT math.GN math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We identify and highlight certain landmark results in Samson Abramsky's work which we believe are fundamental to current developments and future trends. In particular, we focus on the use of (i) topological duality methods to solve problems in logic and computer science; (ii) category theory and, more particularly, free (and co-free) constructions; (iii) these tools to unify the `power' and `structure' strands in computer science.
[ { "created": "Thu, 30 Jul 2020 12:22:10 GMT", "version": "v1" } ]
2020-07-31
[ [ "Gehrke", "Mai", "" ], [ "Jakl", "Tomas", "" ], [ "Reggio", "Luca", "" ] ]
We identify and highlight certain landmark results in Samson Abramsky's work which we believe are fundamental to current developments and future trends. In particular, we focus on the use of (i) topological duality methods to solve problems in logic and computer science; (ii) category theory and, more particularly, free (and co-free) constructions; (iii) these tools to unify the `power' and `structure' strands in computer science.
1809.05515
Marko Angjelichinoski
Marko Angjelichinoski, Kasper Fl{\o}e Trillingsgaard and Petar Popovski
A Statistical Learning Approach to Ultra-Reliable Low Latency Communication
Submitted for publication
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mission-critical applications require Ultra-Reliable Low Latency (URLLC) wireless connections, where the packet error rate (PER) goes down to $10^{-9}$. Fulfillment of these bold reliability figures becomes meaningful only if it can be related to a statistical model in which the URLLC system operates. However, this model is generally not known and needs to be learned by sampling the wireless environment. In this paper we treat this fundamental problem in the simplest possible communication-theoretic setting: selecting a transmission rate over a dynamic wireless channel in order to guarantee high transmission reliability. We introduce a novel statistical framework for the design and assessment of URLLC systems, consisting of three key components: (i) channel model selection; (ii) learning the model using training; (iii) selecting the transmission rate to satisfy the required reliability. As it is insufficient to specify the URLLC requirements only through the PER, two types of statistical constraints are introduced, Averaged Reliability (AR) and Probably Correct Reliability (PCR). The analysis and the evaluations show that adequate model selection and learning are indispensable for designing a consistent physical layer that asymptotically behaves as if the channel were known perfectly, while maintaining the reliability requirements in URLLC systems.
[ { "created": "Fri, 14 Sep 2018 17:30:58 GMT", "version": "v1" } ]
2018-09-17
[ [ "Angjelichinoski", "Marko", "" ], [ "Trillingsgaard", "Kasper Fløe", "" ], [ "Popovski", "Petar", "" ] ]
Mission-critical applications require Ultra-Reliable Low Latency (URLLC) wireless connections, where the packet error rate (PER) goes down to $10^{-9}$. Fulfillment of these bold reliability figures becomes meaningful only if it can be related to a statistical model in which the URLLC system operates. However, this model is generally not known and needs to be learned by sampling the wireless environment. In this paper we treat this fundamental problem in the simplest possible communication-theoretic setting: selecting a transmission rate over a dynamic wireless channel in order to guarantee high transmission reliability. We introduce a novel statistical framework for the design and assessment of URLLC systems, consisting of three key components: (i) channel model selection; (ii) learning the model using training; (iii) selecting the transmission rate to satisfy the required reliability. As it is insufficient to specify the URLLC requirements only through the PER, two types of statistical constraints are introduced, Averaged Reliability (AR) and Probably Correct Reliability (PCR). The analysis and the evaluations show that adequate model selection and learning are indispensable for designing a consistent physical layer that asymptotically behaves as if the channel were known perfectly, while maintaining the reliability requirements in URLLC systems.
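A hedged Python sketch of the rate-selection step under a reliability constraint: pick the largest rate whose empirical outage probability over sampled channel capacities stays below the target. The Gaussian capacity samples are purely illustrative, and the paper's AR and PCR constraints would replace this plain quantile with confidence-adjusted statistics.

import numpy as np

def select_rate(capacity_samples, epsilon=1e-3):
    # Largest rate r with empirical P(capacity < r) <= epsilon:
    # transmitting at r then fails on at most an epsilon fraction of samples.
    sorted_c = np.sort(capacity_samples)
    k = int(np.floor(epsilon * len(sorted_c)))  # number of tolerated outage samples
    return sorted_c[k]

rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=0.3, size=10_000)  # assumed capacity samples (bits/use)
rate = select_rate(samples)
print(f"rate={rate:.3f}, empirical outage={np.mean(samples < rate):.4f}")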
2406.11081
Sara Ahmadi
Sara Ahmadi, Peter Desain, Jordy Thielen
A Bayesian dynamic stopping method for evoked response brain-computer interfacing
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
As brain-computer interfacing (BCI) systems transition from assistive technology to more diverse applications, their speed, reliability, and user experience become increasingly important. Dynamic stopping methods enhance BCI system speed by deciding at any moment whether to output a result or wait for more information. Such an approach leverages trial variance, allowing good trials to be detected earlier, thereby speeding up the process without significantly compromising accuracy. Existing dynamic stopping algorithms typically optimize measures such as symbols per minute (SPM) and information transfer rate (ITR). However, these metrics may not accurately reflect system performance for specific applications or user types. Moreover, many methods depend on arbitrary thresholds or parameters that require extensive training data. We propose a model-based approach that takes advantage of our analytical knowledge of the underlying classification model. By using a risk minimisation approach, our model allows precise control over the types of errors and the balance between precision and speed. This adaptability makes it ideal for customizing BCI systems to meet the diverse needs of various applications. We validate our proposed method on a publicly available dataset, comparing it with established static and dynamic stopping methods. Our results demonstrate that our approach offers a broad range of accuracy-speed trade-offs and achieves higher precision than baseline stopping methods.
[ { "created": "Sun, 16 Jun 2024 21:41:48 GMT", "version": "v1" } ]
2024-06-18
[ [ "Ahmadi", "Sara", "" ], [ "Desain", "Peter", "" ], [ "Thielen", "Jordy", "" ] ]
As brain-computer interfacing (BCI) systems transition from assistive technology to more diverse applications, their speed, reliability, and user experience become increasingly important. Dynamic stopping methods enhance BCI system speed by deciding at any moment whether to output a result or wait for more information. Such an approach leverages trial variance, allowing good trials to be detected earlier, thereby speeding up the process without significantly compromising accuracy. Existing dynamic stopping algorithms typically optimize measures such as symbols per minute (SPM) and information transfer rate (ITR). However, these metrics may not accurately reflect system performance for specific applications or user types. Moreover, many methods depend on arbitrary thresholds or parameters that require extensive training data. We propose a model-based approach that takes advantage of our analytical knowledge of the underlying classification model. By using a risk minimisation approach, our model allows precise control over the types of errors and the balance between precision and speed. This adaptability makes it ideal for customizing BCI systems to meet the diverse needs of various applications. We validate our proposed method on a publicly available dataset, comparing it with established static and dynamic stopping methods. Our results demonstrate that our approach offers a broad range of accuracy-speed trade-offs and achieves higher precision than baseline stopping methods.
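A minimal Python sketch of a risk-minimisation stopping rule of the kind described: after each new segment of evidence, emit the most probable class only when the expected cost of deciding now falls below an assumed cost of waiting. The posteriors and cost values are illustrative assumptions, not the paper's model.

def dynamic_stop(posteriors, cost_error=1.0, cost_wait=0.05):
    # posteriors: list of class-probability dicts, one per accumulated time step.
    # Returns (step, label) at the first step where deciding beats waiting.
    for step, p in enumerate(posteriors):
        label, p_max = max(p.items(), key=lambda kv: kv[1])
        expected_error_cost = (1.0 - p_max) * cost_error
        if expected_error_cost <= cost_wait:  # deciding now is the lower-risk action
            return step, label
    return len(posteriors) - 1, label  # out of data: emit the best guess

steps = [{"A": 0.55, "B": 0.45}, {"A": 0.80, "B": 0.20}, {"A": 0.97, "B": 0.03}]
print(dynamic_stop(steps))  # (2, 'A'): stops once 1 - 0.97 <= 0.05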
1506.09075
Byeongkeun Kang
Yuanyuan Wu, Xiaohai He, Byeongkeun Kang, Haiying Song, and Truong Q. Nguyen
Long-Range Motion Trajectories Extraction of Articulated Human Using Mesh Evolution
IEEE Signal Processing Letters
null
10.1109/LSP.2016.2536647
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This letter presents a novel approach to extracting reliable dense and long-range motion trajectories of an articulated human in a video sequence. Compared with existing approaches that emphasize temporal consistency of each tracked point, we also consider the spatial structure of tracked points on the articulated human. We treat points as a set of vertices, and build a triangle mesh to join them in image space. The problem of extracting long-range motion trajectories is thereby recast as maintaining consistency of mesh evolution over time. First, self-occlusion is detected by a novel mesh-based method, and an adaptive motion estimation method is proposed to initialize the mesh between successive frames. Furthermore, we propose an iterative algorithm to efficiently adjust the vertices of the mesh for a physically plausible deformation, which satisfies the local rigidity of the mesh and silhouette constraints. Finally, we compare the proposed method with the state-of-the-art methods on a set of challenging sequences. Evaluations demonstrate that our method achieves favorable performance in terms of both accuracy and integrity of the extracted trajectories.
[ { "created": "Tue, 30 Jun 2015 13:18:18 GMT", "version": "v1" }, { "created": "Mon, 29 Feb 2016 17:10:11 GMT", "version": "v2" }, { "created": "Tue, 29 Mar 2016 00:21:40 GMT", "version": "v3" } ]
2016-03-30
[ [ "Wu", "Yuanyuan", "" ], [ "He", "Xiaohai", "" ], [ "Kang", "Byeongkeun", "" ], [ "Song", "Haiying", "" ], [ "Nguyen", "Truong Q.", "" ] ]
This letter presents a novel approach to extracting reliable dense and long-range motion trajectories of an articulated human in a video sequence. Compared with existing approaches that emphasize temporal consistency of each tracked point, we also consider the spatial structure of tracked points on the articulated human. We treat points as a set of vertices, and build a triangle mesh to join them in image space. The problem of extracting long-range motion trajectories is thereby recast as maintaining consistency of mesh evolution over time. First, self-occlusion is detected by a novel mesh-based method, and an adaptive motion estimation method is proposed to initialize the mesh between successive frames. Furthermore, we propose an iterative algorithm to efficiently adjust the vertices of the mesh for a physically plausible deformation, which satisfies the local rigidity of the mesh and silhouette constraints. Finally, we compare the proposed method with the state-of-the-art methods on a set of challenging sequences. Evaluations demonstrate that our method achieves favorable performance in terms of both accuracy and integrity of the extracted trajectories.
2305.19860
Likang Wu
Likang Wu, Zhi Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, Hui Xiong, Enhong Chen
A Survey on Large Language Models for Recommendation
34 pages, 7 figures, 2 tables
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP) and have recently gained significant attention in the domain of Recommendation Systems (RS). These models, trained on massive amounts of data using self-supervised learning, have demonstrated remarkable success in learning universal representations and have the potential to enhance various aspects of recommendation systems through effective transfer techniques such as fine-tuning and prompt tuning. The crucial aspect of harnessing the power of language models to enhance recommendation quality is the utilization of their high-quality representations of textual features and their extensive coverage of external knowledge to establish correlations between items and users. To provide a comprehensive understanding of existing LLM-based recommendation systems, this survey presents a taxonomy that categorizes these models into two major paradigms, Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec), with the latter being systematically sorted out for the first time. Furthermore, we systematically review and analyze existing LLM-based recommendation systems within each paradigm, providing insights into their methodologies, techniques, and performance. Additionally, we identify key challenges and several valuable findings to provide researchers and practitioners with inspiration. We have also created a GitHub repository to index relevant papers on LLMs for recommendation, https://github.com/WLiK/LLM4Rec.
[ { "created": "Wed, 31 May 2023 13:51:26 GMT", "version": "v1" }, { "created": "Thu, 1 Jun 2023 03:22:17 GMT", "version": "v2" }, { "created": "Fri, 4 Aug 2023 02:58:15 GMT", "version": "v3" }, { "created": "Fri, 18 Aug 2023 05:56:05 GMT", "version": "v4" }, { "created": "Tue, 18 Jun 2024 08:07:01 GMT", "version": "v5" } ]
2024-06-19
[ [ "Wu", "Likang", "" ], [ "Zheng", "Zhi", "" ], [ "Qiu", "Zhaopeng", "" ], [ "Wang", "Hao", "" ], [ "Gu", "Hongchao", "" ], [ "Shen", "Tingjia", "" ], [ "Qin", "Chuan", "" ], [ "Zhu", "Chen", "" ], [ "Zhu", "Hengshu", "" ], [ "Liu", "Qi", "" ], [ "Xiong", "Hui", "" ], [ "Chen", "Enhong", "" ] ]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP) and have recently gained significant attention in the domain of Recommendation Systems (RS). These models, trained on massive amounts of data using self-supervised learning, have demonstrated remarkable success in learning universal representations and have the potential to enhance various aspects of recommendation systems through effective transfer techniques such as fine-tuning and prompt tuning. The crucial aspect of harnessing the power of language models to enhance recommendation quality is the utilization of their high-quality representations of textual features and their extensive coverage of external knowledge to establish correlations between items and users. To provide a comprehensive understanding of existing LLM-based recommendation systems, this survey presents a taxonomy that categorizes these models into two major paradigms, Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec), with the latter being systematically sorted out for the first time. Furthermore, we systematically review and analyze existing LLM-based recommendation systems within each paradigm, providing insights into their methodologies, techniques, and performance. Additionally, we identify key challenges and several valuable findings to provide researchers and practitioners with inspiration. We have also created a GitHub repository to index relevant papers on LLMs for recommendation, https://github.com/WLiK/LLM4Rec.
2408.02814
Shaopeng Fu
Shaopeng Fu, Xuexue Sun, Ke Qing, Tianhang Zheng, Di Wang
Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services
null
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Though pre-trained encoders can be easily accessed online to build downstream machine learning (ML) services quickly, various attacks have been designed to compromise the security and privacy of these encoders. While most attacks target encoders on the upstream side, it remains unknown how an encoder could be threatened when deployed in a downstream ML service. This paper unveils a new vulnerability: the Pre-trained Encoder Inference (PEI) attack, which poses privacy threats to encoders hidden behind downstream ML services. Given only API access to a targeted downstream service and a set of candidate encoders, the PEI attack can infer which candidate encoder is secretly used by the targeted service. We evaluate the attack performance of PEI against real-world encoders on three downstream tasks: image classification, text classification, and text-to-image generation. Experiments show that the PEI attack succeeds in revealing the hidden encoder in most cases and seldom makes mistakes even when the hidden encoder is not in the candidate set. We also conducted a case study on one of the most recent vision-language models, LLaVA, to illustrate that the PEI attack is useful in assisting other ML attacks such as adversarial attacks. The code is available at https://github.com/fshp971/encoder-inference.
[ { "created": "Mon, 5 Aug 2024 20:27:54 GMT", "version": "v1" } ]
2024-08-07
[ [ "Fu", "Shaopeng", "" ], [ "Sun", "Xuexue", "" ], [ "Qing", "Ke", "" ], [ "Zheng", "Tianhang", "" ], [ "Wang", "Di", "" ] ]
Though pre-trained encoders can be easily accessed online to build downstream machine learning (ML) services quickly, various attacks have been designed to compromise the security and privacy of these encoders. While most attacks target encoders on the upstream side, it remains unknown how an encoder could be threatened when deployed in a downstream ML service. This paper unveils a new vulnerability: the Pre-trained Encoder Inference (PEI) attack, which poses privacy threats to encoders hidden behind downstream ML services. Given only API access to a targeted downstream service and a set of candidate encoders, the PEI attack can infer which candidate encoder is secretly used by the targeted service. We evaluate the attack performance of PEI against real-world encoders on three downstream tasks: image classification, text classification, and text-to-image generation. Experiments show that the PEI attack succeeds in revealing the hidden encoder in most cases and seldom makes mistakes even when the hidden encoder is not in the candidate set. We also conducted a case study on one of the most recent vision-language models, LLaVA, to illustrate that the PEI attack is useful in assisting other ML attacks such as adversarial attacks. The code is available at https://github.com/fshp971/encoder-inference.
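A hedged Python sketch of the attack's core comparison step: probe the black-box service and each candidate encoder with the same inputs, then pick the candidate whose outputs correlate best with the service's responses. The encoders here are stand-in linear maps; the real attack works through task-specific API outputs rather than raw embeddings.

import numpy as np

rng = np.random.default_rng(1)
probes = rng.normal(size=(32, 64))                    # shared probe inputs (assumed)
W_a, W_b = rng.normal(size=(64, 16)), rng.normal(size=(64, 16))
candidates = {"enc_a": lambda x: np.tanh(x @ W_a),
              "enc_b": lambda x: np.tanh(x @ W_b)}
# Hidden service secretly uses enc_a, plus a small head perturbation.
service = lambda x: np.tanh(x @ W_a) + 0.01 * rng.normal(size=(len(x), 16))

def infer_hidden_encoder(service, candidates, probes):
    # Score each candidate by cosine similarity between its probe embeddings
    # and the service's responses; return the best-matching candidate name.
    y = service(probes).ravel()
    def score(f):
        z = f(probes).ravel()
        return np.dot(y, z) / (np.linalg.norm(y) * np.linalg.norm(z))
    return max(candidates, key=lambda name: score(candidates[name]))

print(infer_hidden_encoder(service, candidates, probes))  # enc_a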
2305.00077
Binnur Gorer
Binnur G\"orer and Fatma Ba\c{s}ak Aydemir
Exploring Emerging Technologies for Requirements Elicitation Interview Training: Empirical Assessment of Robotic and Virtual Tutors
Author submitted manuscript
null
null
null
cs.SE cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Requirements elicitation interviews are a widely adopted technique, where the interview success heavily depends on the interviewer's preparedness and communication skills. Students can enhance these skills through practice interviews. However, organizing practice interviews for many students presents scalability challenges, given the time and effort required to involve stakeholders in each session. To address this, we propose REIT, an extensible architecture for Requirements Elicitation Interview Training system based on emerging educational technologies. REIT has components to support both the interview phase, wherein students act as interviewers while the system assumes the role of an interviewee, and the feedback phase, during which the system assesses students' performance and offers contextual and behavioral feedback to enhance their interviewing skills. We demonstrate the applicability of REIT through two implementations: RoREIT with a physical robotic agent and VoREIT with a virtual voice-only agent. We empirically evaluated both instances with a group of graduate students. The participants appreciated both systems. They demonstrated higher learning gain when trained with RoREIT, but they found VoREIT more engaging and easier to use. These findings indicate that each system has distinct benefits and drawbacks, suggesting that REIT can be realized for various educational settings based on preferences and available resources.
[ { "created": "Fri, 28 Apr 2023 20:03:48 GMT", "version": "v1" }, { "created": "Thu, 24 Aug 2023 23:21:20 GMT", "version": "v2" }, { "created": "Wed, 30 Aug 2023 14:39:22 GMT", "version": "v3" } ]
2023-08-31
[ [ "Görer", "Binnur", "" ], [ "Aydemir", "Fatma Başak", "" ] ]
Requirements elicitation interviews are a widely adopted technique, where the interview success heavily depends on the interviewer's preparedness and communication skills. Students can enhance these skills through practice interviews. However, organizing practice interviews for many students presents scalability challenges, given the time and effort required to involve stakeholders in each session. To address this, we propose REIT, an extensible architecture for Requirements Elicitation Interview Training system based on emerging educational technologies. REIT has components to support both the interview phase, wherein students act as interviewers while the system assumes the role of an interviewee, and the feedback phase, during which the system assesses students' performance and offers contextual and behavioral feedback to enhance their interviewing skills. We demonstrate the applicability of REIT through two implementations: RoREIT with a physical robotic agent and VoREIT with a virtual voice-only agent. We empirically evaluated both instances with a group of graduate students. The participants appreciated both systems. They demonstrated higher learning gain when trained with RoREIT, but they found VoREIT more engaging and easier to use. These findings indicate that each system has distinct benefits and drawbacks, suggesting that REIT can be realized for various educational settings based on preferences and available resources.
2103.02270
Dian Fan
Dian Fan, Xiaojun Yuan, Ying-Jun Angela Zhang
Temporal-Structure-Assisted Gradient Aggregation for Over-the-Air Federated Edge Learning
null
null
null
null
cs.IT cs.LG math.IT
http://creativecommons.org/licenses/by/4.0/
In this paper, we investigate over-the-air model aggregation in a federated edge learning (FEEL) system. We introduce a Markovian probability model to characterize the intrinsic temporal structure of the model aggregation series. With this temporal probability model, we formulate model aggregation as inferring the desired aggregated update given all past observations, from a Bayesian perspective. We develop a message passing based algorithm, termed temporal-structure-assisted gradient aggregation (TSA-GA), to fulfil this estimation task with low complexity and near-optimal performance. We further establish the state evolution (SE) analysis to characterize the behaviour of the proposed TSA-GA algorithm, and derive an explicit bound on the expected loss reduction of the FEEL system under certain standard regularity conditions. In addition, we develop an expectation maximization (EM) strategy to learn the unknown parameters in the Markovian model. We show that the proposed TSA-GA algorithm significantly outperforms the state-of-the-art, and is able to achieve learning performance comparable to the error-free benchmark in terms of both convergence rate and final test accuracy.
[ { "created": "Wed, 3 Mar 2021 09:13:27 GMT", "version": "v1" } ]
2021-03-04
[ [ "Fan", "Dian", "" ], [ "Yuan", "Xiaojun", "" ], [ "Zhang", "Ying-Jun Angela", "" ] ]
In this paper, we investigate over-the-air model aggregation in a federated edge learning (FEEL) system. We introduce a Markovian probability model to characterize the intrinsic temporal structure of the model aggregation series. With this temporal probability model, we formulate model aggregation as inferring the desired aggregated update given all past observations, from a Bayesian perspective. We develop a message passing based algorithm, termed temporal-structure-assisted gradient aggregation (TSA-GA), to fulfil this estimation task with low complexity and near-optimal performance. We further establish the state evolution (SE) analysis to characterize the behaviour of the proposed TSA-GA algorithm, and derive an explicit bound on the expected loss reduction of the FEEL system under certain standard regularity conditions. In addition, we develop an expectation maximization (EM) strategy to learn the unknown parameters in the Markovian model. We show that the proposed TSA-GA algorithm significantly outperforms the state-of-the-art, and is able to achieve learning performance comparable to the error-free benchmark in terms of both convergence rate and final test accuracy.
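A toy Python sketch of the temporal idea: treat the aggregated update as a Markov process and combine each noisy over-the-air observation with a prediction from the previous estimate, here as a scalar Kalman-style filter. The AR(1) dynamics and noise levels are assumptions for illustration; the paper's TSA-GA uses message passing over a richer model.

import numpy as np

def temporal_aggregate(observations, rho=0.9, q=0.1, r=1.0):
    # Scalar Kalman filter: x_t = rho * x_{t-1} + process noise (var q),
    # y_t = x_t + channel noise (var r). Returns the filtered estimates.
    x_hat, p = 0.0, 1.0
    estimates = []
    for y in observations:
        x_pred, p_pred = rho * x_hat, rho**2 * p + q   # predict from the Markov model
        k = p_pred / (p_pred + r)                      # Kalman gain
        x_hat = x_pred + k * (y - x_pred)              # correct with the noisy observation
        p = (1 - k) * p_pred
        estimates.append(x_hat)
    return np.array(estimates)

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(scale=0.3, size=50))      # assumed latent update series
print(temporal_aggregate(truth + rng.normal(size=50))[:3])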
2406.08164
Muhammad Jehanzeb Mirza
Irene Huang, Wei Lin, M. Jehanzeb Mirza, Jacob A. Hansen, Sivan Doveh, Victor Ion Butoi, Roei Herzig, Assaf Arbelle, Hilde Kuhene, Trevor Darrel, Chuang Gan, Aude Oliva, Rogerio Feris, Leonid Karlinsky
ConMe: Rethinking Evaluation of Compositional Reasoning for Modern VLMs
The first three authors contributed equally
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Compositional Reasoning (CR) entails grasping the significance of attributes, relations, and word order. Recent Vision-Language Models (VLMs), comprising a visual encoder and a Large Language Model (LLM) decoder, have demonstrated remarkable proficiency in such reasoning tasks. This prompts a crucial question: have VLMs effectively tackled the CR challenge? We conjecture that existing CR benchmarks may not adequately push the boundaries of modern VLMs due to their reliance on an LLM-only negative text generation pipeline. Consequently, the negatives produced either appear as outliers from the natural language distribution learned by VLMs' LLM decoders or as improbable within the corresponding image context. To address these limitations, we introduce ConMe -- a compositional reasoning benchmark and a novel data generation pipeline leveraging VLMs to produce `hard CR Q&A'. Through a new concept of VLMs conversing with each other to collaboratively expose their weaknesses, our pipeline autonomously generates, evaluates, and selects challenging compositional reasoning questions, establishing a robust CR benchmark, subsequently also validated manually. Our benchmark provokes a noteworthy decrease in CR performance of up to 33% relative to preceding benchmarks, reinstating the CR challenge even for state-of-the-art VLMs.
[ { "created": "Wed, 12 Jun 2024 12:54:27 GMT", "version": "v1" } ]
2024-06-13
[ [ "Huang", "Irene", "" ], [ "Lin", "Wei", "" ], [ "Mirza", "M. Jehanzeb", "" ], [ "Hansen", "Jacob A.", "" ], [ "Doveh", "Sivan", "" ], [ "Butoi", "Victor Ion", "" ], [ "Herzig", "Roei", "" ], [ "Arbelle", "Assaf", "" ], [ "Kuhene", "Hilde", "" ], [ "Darrel", "Trevor", "" ], [ "Gan", "Chuang", "" ], [ "Oliva", "Aude", "" ], [ "Feris", "Rogerio", "" ], [ "Karlinsky", "Leonid", "" ] ]
Compositional Reasoning (CR) entails grasping the significance of attributes, relations, and word order. Recent Vision-Language Models (VLMs), comprising a visual encoder and a Large Language Model (LLM) decoder, have demonstrated remarkable proficiency in such reasoning tasks. This prompts a crucial question: have VLMs effectively tackled the CR challenge? We conjecture that existing CR benchmarks may not adequately push the boundaries of modern VLMs due to their reliance on an LLM-only negative text generation pipeline. Consequently, the negatives produced either appear as outliers from the natural language distribution learned by VLMs' LLM decoders or as improbable within the corresponding image context. To address these limitations, we introduce ConMe -- a compositional reasoning benchmark and a novel data generation pipeline leveraging VLMs to produce `hard CR Q&A'. Through a new concept of VLMs conversing with each other to collaboratively expose their weaknesses, our pipeline autonomously generates, evaluates, and selects challenging compositional reasoning questions, establishing a robust CR benchmark, subsequently also validated manually. Our benchmark provokes a noteworthy decrease in CR performance of up to 33% relative to preceding benchmarks, reinstating the CR challenge even for state-of-the-art VLMs.
2111.00610
Anurag Katakkar
Anurag Katakkar, Alan W Black
Towards Language Modelling in the Speech Domain Using Sub-word Linguistic Units
null
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Language models (LMs) for text data have been studied extensively for their usefulness in language generation and other downstream tasks. However, language modelling purely in the speech domain is still a relatively unexplored topic, with traditional speech LMs often depending on auxiliary text LMs for learning distributional aspects of the language. For the English language, these LMs treat words as atomic units, which presents inherent challenges to language modelling in the speech domain. In this paper, we propose a novel LSTM-based generative speech LM that is inspired by the CBOW model and built on linguistic units including syllables and phonemes. This offers better acoustic consistency across utterances in the dataset, as opposed to single mel-spectrogram frames or whole words. With a limited dataset, orders of magnitude smaller than that required by contemporary generative models, our model closely approximates babbling speech. We show the effect of training with auxiliary text LMs, multitask learning objectives, and auxiliary articulatory features. Through our experiments, we also highlight some well-known but poorly documented challenges in training generative speech LMs, including the mismatch between the supervised learning objective with which these models are trained, such as Mean Squared Error (MSE), and the true objective, which is speech quality. Our experiments provide an early indication that while validation loss and Mel Cepstral Distortion (MCD) are not strongly correlated with generated speech quality, traditional text language modelling metrics like perplexity and next-token-prediction accuracy might be.
[ { "created": "Sun, 31 Oct 2021 22:48:30 GMT", "version": "v1" } ]
2021-11-02
[ [ "Katakkar", "Anurag", "" ], [ "Black", "Alan W", "" ] ]
Language models (LMs) for text data have been studied extensively for their usefulness in language generation and other downstream tasks. However, language modelling purely in the speech domain is still a relatively unexplored topic, with traditional speech LMs often depending on auxiliary text LMs for learning distributional aspects of the language. For the English language, these LMs treat words as atomic units, which presents inherent challenges to language modelling in the speech domain. In this paper, we propose a novel LSTM-based generative speech LM that is inspired by the CBOW model and built on linguistic units including syllables and phonemes. This offers better acoustic consistency across utterances in the dataset, as opposed to single mel-spectrogram frames or whole words. With a limited dataset, orders of magnitude smaller than that required by contemporary generative models, our model closely approximates babbling speech. We show the effect of training with auxiliary text LMs, multitask learning objectives, and auxiliary articulatory features. Through our experiments, we also highlight some well-known but poorly documented challenges in training generative speech LMs, including the mismatch between the supervised learning objective with which these models are trained, such as Mean Squared Error (MSE), and the true objective, which is speech quality. Our experiments provide an early indication that while validation loss and Mel Cepstral Distortion (MCD) are not strongly correlated with generated speech quality, traditional text language modelling metrics like perplexity and next-token-prediction accuracy might be.
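For reference, the text-LM metrics mentioned at the end, perplexity and next-token-prediction accuracy, reduce to a few lines given per-token predicted distributions. A Python sketch with made-up probabilities:

import numpy as np

def perplexity_and_accuracy(probs, targets):
    # probs: (T, V) predicted distributions; targets: (T,) true token ids.
    # Perplexity = exp(mean negative log-likelihood of the true tokens).
    p_true = probs[np.arange(len(targets)), targets]
    ppl = np.exp(-np.mean(np.log(p_true)))
    acc = np.mean(probs.argmax(axis=1) == targets)
    return ppl, acc

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
print(perplexity_and_accuracy(probs, np.array([0, 1, 2])))  # (~1.65, 1.0)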
2009.01229
Marialejandra Garcia-Corretjer
Marialejandra Garcia Corretjer, David Miralles, and Raquel Ros
A Theoretical Approach for a Novel Model to Realizing Empathy
47 pages, 11 figures
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The first objective of this paper is to introduce a strong theoretical concept, a proposed model that visualizes the process of realizing empathy, based on ample analysis of the work collected in the survey. Second, the intended purpose of this proposed model is to create an initial blueprint that may be applicable to a range of disciplines, with clear must-have concepts to consider for the realization of empathy between people and their technology. For this reason, after the model is explained, this paper exemplifies tools for its application and a couple of encouraging case study projects that begin to integrate this model into their interactive experiments.
[ { "created": "Thu, 3 Sep 2020 17:21:49 GMT", "version": "v1" } ]
2020-09-04
[ [ "Corretjer", "Marialejandra Garcia", "" ], [ "Miralles", "David", "" ], [ "Ros", "Raquel", "" ] ]
The first objective of this paper is to introduce a strong theoretical concept, a proposed model that visualizes the process of realizing empathy, based on ample analysis of the work collected in the survey. Second, the intended purpose of this proposed model is to create an initial blueprint that may be applicable to a range of disciplines, with clear must-have concepts to consider for the realization of empathy between people and their technology. For this reason, after the model is explained, this paper exemplifies tools for its application and a couple of encouraging case study projects that begin to integrate this model into their interactive experiments.
1712.00811
Hamoon Mousavi
Hamoon Mousavi
Lower Bounds on Regular Expression Size
29 pages
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce linear programs encoding regular expressions of finite languages. We show that, given a language, the optimum value of the associated linear program is a lower bound on the size of any regular expression for the language. Moreover, we show that any regular expression can be turned into a dual feasible solution with an objective value equal to the size of the regular expression. For binomial languages, we can relax the associated linear program using the duality theorem. We use this relaxation to prove lower bounds on the size of regular expressions for binomial and threshold languages.
[ { "created": "Sun, 3 Dec 2017 18:35:48 GMT", "version": "v1" }, { "created": "Wed, 6 Dec 2017 22:05:17 GMT", "version": "v2" } ]
2017-12-08
[ [ "Mousavi", "Hamoon", "" ] ]
We introduce linear programs encoding regular expressions of finite languages. We show that, given a language, the optimum value of the associated linear program is a lower bound on the size of any regular expression for the language. Moreover, we show that any regular expression can be turned into a dual feasible solution with an objective value equal to the size of the regular expression. For binomial languages, we can relax the associated linear program using the duality theorem. We use this relaxation to prove lower bounds on the size of regular expressions for binomial and threshold languages.
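A hedged Python sketch of the general recipe, not the paper's specific encoding: write constraints any regular expression for the language must satisfy, solve the LP, and read the optimum as a size lower bound. The constraint matrix below is a trivial stand-in, purely to show the mechanics with scipy.

from scipy.optimize import linprog

# Toy LP: minimize total 'size mass' c @ x subject to coverage constraints.
c = [1.0, 1.0, 1.0]               # each variable contributes 1 to expression size
A_ub = [[-1, -1, 0],              # subexpressions 1+2 must jointly cover word group A
        [0, -1, -1]]              # subexpressions 2+3 must jointly cover word group B
b_ub = [-1.0, -1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.fun)  # optimum = 1.0: any expression needs size >= 1 in this toy instance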
1808.07712
Gurkirt Singh
Gurkirt Singh and Suman Saha and Fabio Cuzzolin
Predicting Action Tubes
ECCV workshop; Anticipating Human Behaviour 2018; 16 page 7 figures
null
null
null
cs.CV cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present a method to predict an entire `action tube' (a set of temporally linked bounding boxes) in a trimmed video just by observing a smaller subset of it. Predicting where an action is going to take place in the near future is essential to many computer vision based applications such as autonomous driving or surgical robotics. Importantly, it has to be done in real-time and in an online fashion. We propose a Tube Prediction network (TPnet) which jointly predicts the past, present and future bounding boxes along with their action classification scores. At test time TPnet is used in a (temporal) sliding window setting, and its predictions are put into a tube estimation framework to construct/predict video-long action tubes not only for the observed part of the video but also for the unobserved part. Additionally, the proposed action tube predictor helps in completing action tubes for unobserved segments of the video. We quantitatively demonstrate the latter ability, and the fact that TPnet improves state-of-the-art detection performance, on one of the standard action detection benchmarks - the J-HMDB-21 dataset.
[ { "created": "Thu, 23 Aug 2018 12:11:06 GMT", "version": "v1" } ]
2018-08-24
[ [ "Singh", "Gurkirt", "" ], [ "Saha", "Suman", "" ], [ "Cuzzolin", "Fabio", "" ] ]
In this work, we present a method to predict an entire `action tube' (a set of temporally linked bounding boxes) in a trimmed video just by observing a smaller subset of it. Predicting where an action is going to take place in the near future is essential to many computer vision based applications such as autonomous driving or surgical robotics. Importantly, it has to be done in real-time and in an online fashion. We propose a Tube Prediction network (TPnet) which jointly predicts the past, present and future bounding boxes along with their action classification scores. At test time TPnet is used in a (temporal) sliding window setting, and its predictions are put into a tube estimation framework to construct/predict video-long action tubes not only for the observed part of the video but also for the unobserved part. Additionally, the proposed action tube predictor helps in completing action tubes for unobserved segments of the video. We quantitatively demonstrate the latter ability, and the fact that TPnet improves state-of-the-art detection performance, on one of the standard action detection benchmarks - the J-HMDB-21 dataset.
2303.12696
Zhiyuan Hu
Zhiyuan Hu, Yunsheng Li, Jiancheng Lyu, Dashan Gao, Nuno Vasconcelos
Dense Network Expansion for Class Incremental Learning
Accepted by CVPR2023
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The problem of class incremental learning (CIL) is considered. State-of-the-art approaches use a dynamic architecture based on network expansion (NE), in which a task expert is added per task. While effective from a computational standpoint, these methods lead to models that grow quickly with the number of tasks. A new NE method, dense network expansion (DNE), is proposed to achieve a better trade-off between accuracy and model complexity. This is accomplished by the introduction of dense connections between the intermediate layers of the task expert networks, which enable the transfer of knowledge from old to new tasks via feature sharing and reuse. This sharing is implemented with a cross-task attention mechanism, based on a new task attention block (TAB), that fuses information across tasks. Unlike traditional attention mechanisms, TAB operates at the level of feature mixing and is decoupled from spatial attention. This is shown to be more effective than joint spatial-and-task attention for CIL. The proposed DNE approach can strictly maintain the feature space of old classes while growing the network and feature scale at a much slower rate than previous methods. As a result, it outperforms the previous SOTA methods by a margin of 4\% in terms of accuracy, with a similar or even smaller model scale.
[ { "created": "Wed, 22 Mar 2023 16:42:26 GMT", "version": "v1" } ]
2023-03-23
[ [ "Hu", "Zhiyuan", "" ], [ "Li", "Yunsheng", "" ], [ "Lyu", "Jiancheng", "" ], [ "Gao", "Dashan", "" ], [ "Vasconcelos", "Nuno", "" ] ]
The problem of class incremental learning (CIL) is considered. State-of-the-art approaches use a dynamic architecture based on network expansion (NE), in which a task expert is added per task. While effective from a computational standpoint, these methods lead to models that grow quickly with the number of tasks. A new NE method, dense network expansion (DNE), is proposed to achieve a better trade-off between accuracy and model complexity. This is accomplished by the introduction of dense connections between the intermediate layers of the task expert networks, which enable the transfer of knowledge from old to new tasks via feature sharing and reuse. This sharing is implemented with a cross-task attention mechanism, based on a new task attention block (TAB), that fuses information across tasks. Unlike traditional attention mechanisms, TAB operates at the level of feature mixing and is decoupled from spatial attention. This is shown to be more effective than joint spatial-and-task attention for CIL. The proposed DNE approach can strictly maintain the feature space of old classes while growing the network and feature scale at a much slower rate than previous methods. As a result, it outperforms the previous SOTA methods by a margin of 4\% in terms of accuracy, with a similar or even smaller model scale.
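A hedged numpy sketch of attention applied across the task axis rather than the spatial axis, the flavor of cross-task mixing the abstract describes. The dimensions and random projections are illustrative assumptions, not the paper's TAB.

import numpy as np

def task_attention(features):
    # features: (T, D) -- one feature vector per task expert at a given location.
    # Standard scaled dot-product attention, but the tokens are tasks, not positions.
    T, D = features.shape
    rng = np.random.default_rng(3)
    Wq, Wk, Wv = (rng.normal(scale=D**-0.5, size=(D, D)) for _ in range(3))
    Q, K, V = features @ Wq, features @ Wk, features @ Wv
    scores = Q @ K.T / np.sqrt(D)                        # (T, T) task-to-task affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # softmax over tasks
    return weights @ V                                   # each task mixes others' features

out = task_attention(np.random.rand(4, 8))  # 4 task experts, 8-dim features
print(out.shape)  # (4, 8)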
2401.10338
Jingchao Ni
Jingchao Ni, Gauthier Guinet, Peihong Jiang, Laurent Callot, Andrey Kan
MELODY: Robust Semi-Supervised Hybrid Model for Entity-Level Online Anomaly Detection with Multivariate Time Series
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In large IT systems, software deployment is a crucial process in online services as their code is regularly updated. However, a faulty code change may degrade the target service's performance and cause cascading outages in downstream services. Thus, software deployments should be comprehensively monitored, and their anomalies should be detected in a timely manner. In this paper, we study the problem of anomaly detection for deployments. We begin by identifying the challenges unique to this anomaly detection problem, which is at the entity level (e.g., deployments), relative to the more typical problem of anomaly detection in multivariate time series (MTS). The unique challenges include the heterogeneity of deployments, the low latency tolerance, the ambiguous anomaly definition, and the limited supervision. To address them, we propose a novel framework, semi-supervised hybrid Model for Entity-Level Online Detection of anomalY (MELODY). MELODY first transforms the MTS of different entities to the same feature space by an online feature extractor, then uses a newly proposed semi-supervised deep one-class model for detecting anomalous entities. We evaluated MELODY on real data of cloud services with 1.2M+ time series. The relative F1 score improvement of MELODY over the state-of-the-art methods ranges from 7.6% to 56.5%. The user evaluation suggests MELODY is suitable for monitoring deployments in large online systems.
[ { "created": "Thu, 18 Jan 2024 19:02:41 GMT", "version": "v1" }, { "created": "Thu, 6 Jun 2024 04:35:00 GMT", "version": "v2" } ]
2024-06-07
[ [ "Ni", "Jingchao", "" ], [ "Guinet", "Gauthier", "" ], [ "Jiang", "Peihong", "" ], [ "Callot", "Laurent", "" ], [ "Kan", "Andrey", "" ] ]
In large IT systems, software deployment is a crucial process in online services as their code is regularly updated. However, a faulty code change may degrade the target service's performance and cause cascading outages in downstream services. Thus, software deployments should be comprehensively monitored, and their anomalies should be detected in a timely manner. In this paper, we study the problem of anomaly detection for deployments. We begin by identifying the challenges unique to this anomaly detection problem, which is at the entity level (e.g., deployments), relative to the more typical problem of anomaly detection in multivariate time series (MTS). The unique challenges include the heterogeneity of deployments, the low latency tolerance, the ambiguous anomaly definition, and the limited supervision. To address them, we propose a novel framework, semi-supervised hybrid Model for Entity-Level Online Detection of anomalY (MELODY). MELODY first transforms the MTS of different entities to the same feature space by an online feature extractor, then uses a newly proposed semi-supervised deep one-class model for detecting anomalous entities. We evaluated MELODY on real data of cloud services with 1.2M+ time series. The relative F1 score improvement of MELODY over the state-of-the-art methods ranges from 7.6% to 56.5%. The user evaluation suggests MELODY is suitable for monitoring deployments in large online systems.
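The "semi-supervised deep one-class" component resembles Deep SVDD-style scoring: embed an entity and score it by distance to a center fitted on mostly-normal data. A minimal Python sketch with fixed embeddings standing in for the learned network:

import numpy as np

def one_class_scores(embeddings, center):
    # Anomaly score = squared distance to the normal-data center;
    # larger means more anomalous (Deep SVDD-style scoring at test time).
    return np.sum((embeddings - center) ** 2, axis=1)

rng = np.random.default_rng(4)
normal = rng.normal(0.0, 1.0, size=(500, 16))      # embeddings of normal entities (assumed)
center = normal.mean(axis=0)                        # center fitted from normal data
test = np.vstack([rng.normal(0, 1, (5, 16)),        # normal-like entities
                  rng.normal(4, 1, (5, 16))])       # shifted entities: anomalous
scores = one_class_scores(test, center)
print(scores[:5].mean() < scores[5:].mean())        # True: anomalies score higher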
1705.01143
Shih-Chieh Su
Shih-Chieh Su
Summarized Network Behavior Prediction
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work studies entity-wise topical behavior from massive network logs. Both the temporal and the spatial relationships of the behavior are explored with learning architectures combining the recurrent neural network (RNN) and the convolutional neural network (CNN). To make the behavioral data appropriate for the spatial learning in the CNN, several reduction steps are taken to form the topical metrics and place them homogeneously like pixels in an image. The experimental results show both temporal and spatial gains when compared to a multilayer perceptron (MLP) network. A new learning framework called spatially connected convolutional networks (SCCN) is introduced to predict the behavior more efficiently.
[ { "created": "Tue, 2 May 2017 19:12:23 GMT", "version": "v1" } ]
2017-05-04
[ [ "Su", "Shih-Chieh", "" ] ]
This work studies entity-wise topical behavior from massive network logs. Both the temporal and the spatial relationships of the behavior are explored with learning architectures combining the recurrent neural network (RNN) and the convolutional neural network (CNN). To make the behavioral data appropriate for the spatial learning in the CNN, several reduction steps are taken to form the topical metrics and place them homogeneously like pixels in an image. The experimental results show both temporal and spatial gains when compared to a multilayer perceptron (MLP) network. A new learning framework called spatially connected convolutional networks (SCCN) is introduced to predict the behavior more efficiently.
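A Python sketch of the reduction step the abstract describes, arranging per-entity topical metrics "homogeneously like pixels" so a CNN can consume them. The metric count and grid layout are assumptions for illustration.

import numpy as np

def metrics_to_image(metric_history):
    # metric_history: (T, M) -- M topical metrics over T time steps.
    # Normalize each metric to [0, 1] and stack rows into a 2-D 'image'
    # so convolutions can exploit locality across time and metrics.
    lo = metric_history.min(axis=0, keepdims=True)
    hi = metric_history.max(axis=0, keepdims=True)
    img = (metric_history - lo) / np.maximum(hi - lo, 1e-9)
    return img.T  # shape (M, T): one row per metric, like image rows

history = np.random.rand(64, 12)        # 64 time steps, 12 topical metrics (assumed)
print(metrics_to_image(history).shape)  # (12, 64)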
2010.03110
Sumedh Sontakke
Sumedh A. Sontakke, Arash Mehrjou, Laurent Itti, Bernhard Sch\"olkopf
Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning
International Conference on Machine Learning, PMLR 139, 2021
null
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Animals exhibit an innate ability to learn regularities of the world through interaction. By performing experiments in their environment, they are able to discern the causal factors of variation and infer how they affect the world's dynamics. Inspired by this, we attempt to equip reinforcement learning agents with the ability to perform experiments that facilitate a categorization of the rolled-out trajectories, and to subsequently infer the causal factors of the environment in a hierarchical manner. We introduce {\em causal curiosity}, a novel intrinsic reward, and show that it allows our agents to learn optimal sequences of actions and discover causal factors in the dynamics of the environment. The learned behavior allows the agents to infer a binary quantized representation for the ground-truth causal factors in every environment. Additionally, we find that these experimental behaviors are semantically meaningful (e.g., our agents learn to lift blocks to categorize them by weight), and are learnt in a self-supervised manner with approximately 2.5 times less data than conventional supervised planners. We show that these behaviors can be re-purposed and fine-tuned (e.g., from lifting to pushing or other downstream tasks). Finally, we show that the knowledge of causal factor representations aids zero-shot learning for more complex tasks. Visit https://sites.google.com/usc.edu/causal-curiosity/home for the project website.
[ { "created": "Wed, 7 Oct 2020 02:07:51 GMT", "version": "v1" }, { "created": "Wed, 14 Apr 2021 23:59:04 GMT", "version": "v2" }, { "created": "Wed, 9 Jun 2021 01:19:39 GMT", "version": "v3" }, { "created": "Fri, 6 Aug 2021 21:53:05 GMT", "version": "v4" } ]
2021-08-10
[ [ "Sontakke", "Sumedh A.", "" ], [ "Mehrjou", "Arash", "" ], [ "Itti", "Laurent", "" ], [ "Schölkopf", "Bernhard", "" ] ]
Animals exhibit an innate ability to learn regularities of the world through interaction. By performing experiments in their environment, they are able to discern the causal factors of variation and infer how they affect the world's dynamics. Inspired by this, we attempt to equip reinforcement learning agents with the ability to perform experiments that facilitate a categorization of the rolled-out trajectories, and to subsequently infer the causal factors of the environment in a hierarchical manner. We introduce {\em causal curiosity}, a novel intrinsic reward, and show that it allows our agents to learn optimal sequences of actions and discover causal factors in the dynamics of the environment. The learned behavior allows the agents to infer a binary quantized representation for the ground-truth causal factors in every environment. Additionally, we find that these experimental behaviors are semantically meaningful (e.g., our agents learn to lift blocks to categorize them by weight), and are learnt in a self-supervised manner with approximately 2.5 times less data than conventional supervised planners. We show that these behaviors can be re-purposed and fine-tuned (e.g., from lifting to pushing or other downstream tasks). Finally, we show that the knowledge of causal factor representations aids zero-shot learning for more complex tasks. Visit https://sites.google.com/usc.edu/causal-curiosity/home for website.
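The intrinsic reward can be made concrete with a toy sketch; the trajectory featurization and the use of k-means with a silhouette score below are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def causal_curiosity_reward(trajectories: np.ndarray) -> float:
    """trajectories: (n_envs, feat_dim) rollout summaries of one action
    sequence executed in environments with different hidden factors.
    A clean two-way split suggests the behavior isolates a binary factor."""
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(trajectories)
    return float(silhouette_score(trajectories, labels))

rng = np.random.default_rng(0)
# Hypothetical features: rollouts separate (e.g., block lifted vs. not).
trajs = np.vstack([rng.normal(0, 0.1, (8, 4)), rng.normal(1, 0.1, (8, 4))])
print(causal_curiosity_reward(trajs))   # near 1: a discriminative experiment
```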
1602.00251
Kaveh Bakhtiyari
Kaveh Bakhtiyari
Do we have privacy in the digital world?
null
null
10.13140/RG.2.1.2492.5203/2
null
cs.CR cs.HC cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Not really.
[ { "created": "Sun, 31 Jan 2016 14:22:47 GMT", "version": "v1" }, { "created": "Thu, 26 Jan 2017 15:53:55 GMT", "version": "v2" } ]
2017-01-27
[ [ "Bakhtiyari", "Kaveh", "" ] ]
Not really.
2312.00220
Linzi Xing
Linzi Xing, Quan Tran, Fabian Caba, Franck Dernoncourt, Seunghyun Yoon, Zhaowen Wang, Trung Bui, Giuseppe Carenini
Multi-Modal Video Topic Segmentation with Dual-Contrastive Domain Adaptation
Accepted at the 30th International Conference on Multimedia Modeling (MMM 2024)
null
null
null
cs.MM cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Video topic segmentation unveils the coarse-grained semantic structure underlying videos and is essential for other video understanding tasks. Given the recent surge in multi-modal content, relying solely on a single modality is arguably insufficient. On the other hand, prior solutions for similar tasks like video scene/shot segmentation cater to short videos with clear visual shifts but falter for long videos with subtle changes, such as livestreams. In this paper, we introduce a multi-modal video topic segmenter that utilizes both video transcripts and frames, bolstered by a cross-modal attention mechanism. Furthermore, we propose a dual-contrastive learning framework adhering to the unsupervised domain adaptation paradigm, enhancing our model's adaptability to longer, more semantically complex videos. Experiments on short and long video corpora demonstrate that our proposed solution significantly surpasses baseline methods in terms of both accuracy and transferability, in both intra- and cross-domain settings.
[ { "created": "Thu, 30 Nov 2023 21:59:05 GMT", "version": "v1" } ]
2023-12-04
[ [ "Xing", "Linzi", "" ], [ "Tran", "Quan", "" ], [ "Caba", "Fabian", "" ], [ "Dernoncourt", "Franck", "" ], [ "Yoon", "Seunghyun", "" ], [ "Wang", "Zhaowen", "" ], [ "Bui", "Trung", "" ], [ "Carenini", "Giuseppe", "" ] ]
Video topic segmentation unveils the coarse-grained semantic structure underlying videos and is essential for other video understanding tasks. Given the recent surge in multi-modal content, relying solely on a single modality is arguably insufficient. On the other hand, prior solutions for similar tasks like video scene/shot segmentation cater to short videos with clear visual shifts but falter for long videos with subtle changes, such as livestreams. In this paper, we introduce a multi-modal video topic segmenter that utilizes both video transcripts and frames, bolstered by a cross-modal attention mechanism. Furthermore, we propose a dual-contrastive learning framework adhering to the unsupervised domain adaptation paradigm, enhancing our model's adaptability to longer, more semantically complex videos. Experiments on short and long video corpora demonstrate that our proposed solution significantly surpasses baseline methods in terms of both accuracy and transferability, in both intra- and cross-domain settings.
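A minimal sketch of the cross-modal attention step is given below; the embedding sizes, residual fusion, and boundary head are assumptions for illustration rather than the paper's exact design.

```python
import torch
import torch.nn as nn

sent = torch.randn(1, 20, 256)    # 20 transcript-sentence embeddings
frames = torch.randn(1, 50, 256)  # 50 sampled frame embeddings

attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
# Each sentence attends to the visual stream; the fused representation
# then feeds a per-sentence "topic boundary" classifier.
fused, _ = attn(query=sent, key=frames, value=frames)
boundary_logits = nn.Linear(256, 1)(fused + sent)   # residual fusion
print(boundary_logits.shape)                        # torch.Size([1, 20, 1])
```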
2004.14793
Gal Mendelson
Gal Mendelson
A Lower Bound on the stability region of Redundancy-d with FIFO service discipline
null
null
null
null
cs.PF cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Redundancy-d (R(d)) is a load balancing method used to route incoming jobs to K servers, each with its own queue. Every arriving job is replicated into 2<=d<=K tasks, which are then routed to d servers chosen uniformly at random. When the first task finishes service, the remaining d-1 tasks are cancelled and the job departs the system. Despite the fact that R(d) is known, under certain conditions, to substantially improve job completion times compared to not using redundancy at all, little is known on a more fundamental performance criterion: what is the set of arrival rates under which the R(d) queueing system with FIFO service discipline is stable? In this context, due to the complex dynamics of systems with redundancy and cancellations, existing results are scarce and are limited to very special cases with respect to the joint service time distribution of tasks. In this paper we provide a non-trivial, closed form lower bound on the stability region of R(d) for a general joint service time distribution of tasks with finite first and second moments. We consider a discrete time system with Bernoulli arrivals and assume that jobs are processed by their order of arrival. We use the workload processes and a quadratic Lyapunov function to characterize the set of arrival rates for which the system is stable. While simulation results indicate our bound is not tight, it provides an easy-to-check performance guarantee.
[ { "created": "Thu, 30 Apr 2020 14:07:25 GMT", "version": "v1" }, { "created": "Thu, 21 May 2020 18:15:06 GMT", "version": "v2" } ]
2020-05-25
[ [ "Mendelson", "Gal", "" ] ]
Redundancy-d (R(d)) is a load balancing method used to route incoming jobs to K servers, each with its own queue. Every arriving job is replicated into 2<=d<=K tasks, which are then routed to d servers chosen uniformly at random. When the first task finishes service, the remaining d-1 tasks are cancelled and the job departs the system. Despite the fact that R(d) is known, under certain conditions, to substantially improve job completion times compared to not using redundancy at all, little is known on a more fundamental performance criterion: what is the set of arrival rates under which the R(d) queueing system with FIFO service discipline is stable? In this context, due to the complex dynamics of systems with redundancy and cancellations, existing results are scarce and are limited to very special cases with respect to the joint service time distribution of tasks. In this paper we provide a non-trivial, closed form lower bound on the stability region of R(d) for a general joint service time distribution of tasks with finite first and second moments. We consider a discrete time system with Bernoulli arrivals and assume that jobs are processed by their order of arrival. We use the workload processes and a quadratic Lyapunov function to characterize the set of arrival rates for which the system is stable. While simulation results indicate our bound is not tight, it provides an easy-to-check performance guarantee.
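For intuition, the R(d) dynamics can be simulated directly; the sketch below assumes Poisson arrivals and i.i.d. exponential replica service times, which is a simplification of the general joint service time distribution treated in the paper.

```python
import random

def simulate_rd(K=4, d=2, lam=0.6, n_jobs=10_000, seed=1):
    random.seed(seed)
    free_at = [0.0] * K                  # time at which each FIFO queue drains
    t, total_sojourn = 0.0, 0.0
    for _ in range(n_jobs):
        t += random.expovariate(lam)     # Poisson arrivals (assumption)
        chosen = random.sample(range(K), d)
        starts = {s: max(free_at[s], t) for s in chosen}
        ends = {s: starts[s] + random.expovariate(1.0) for s in chosen}
        done = min(ends.values())        # first replica to finish wins
        for s in chosen:                 # cancel the sibling replicas
            if starts[s] < done:         # replica ran: server busy until done
                free_at[s] = done
            # replicas that never started are simply removed from the queue
        total_sojourn += done - t
    return total_sojourn / n_jobs

print(f"mean sojourn time, R(2) over 4 servers: {simulate_rd():.2f}")
```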
2404.00492
Lijie Hu
Keyuan Cheng, Gang Lin, Haoyang Fei, Yuxuan zhai, Lu Yu, Muhammad Asif Ali, Lijie Hu, and Di Wang
Multi-hop Question Answering under Temporal Knowledge Editing
23 pages
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models. However, existing models for MQA under KE exhibit poor performance when dealing with questions containing explicit temporal contexts. To address this limitation, we propose a novel framework, namely TEMPoral knowLEdge augmented Multi-hop Question Answering (TEMPLE-MQA). Unlike previous methods, TEMPLE-MQA first constructs a time-aware graph (TAG) to store edit knowledge in a structured manner. Then, through our proposed inference path, structural retrieval, and joint reasoning stages, TEMPLE-MQA effectively discerns temporal contexts within the question query. Experiments on benchmark datasets demonstrate that TEMPLE-MQA significantly outperforms baseline models. Additionally, we contribute a new dataset, namely TKEMQA, which serves as the inaugural benchmark tailored specifically for MQA with temporal scopes.
[ { "created": "Sat, 30 Mar 2024 23:22:51 GMT", "version": "v1" } ]
2024-04-02
[ [ "Cheng", "Keyuan", "" ], [ "Lin", "Gang", "" ], [ "Fei", "Haoyang", "" ], [ "zhai", "Yuxuan", "" ], [ "Yu", "Lu", "" ], [ "Ali", "Muhammad Asif", "" ], [ "Hu", "Lijie", "" ], [ "Wang", "Di", "" ] ]
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models. However, existing models for MQA under KE exhibit poor performance when dealing with questions containing explicit temporal contexts. To address this limitation, we propose a novel framework, namely TEMPoral knowLEdge augmented Multi-hop Question Answering (TEMPLE-MQA). Unlike previous methods, TEMPLE-MQA first constructs a time-aware graph (TAG) to store edit knowledge in a structured manner. Then, through our proposed inference path, structural retrieval, and joint reasoning stages, TEMPLE-MQA effectively discerns temporal contexts within the question query. Experiments on benchmark datasets demonstrate that TEMPLE-MQA significantly outperforms baseline models. Additionally, we contribute a new dataset, namely TKEMQA, which serves as the inaugural benchmark tailored specifically for MQA with temporal scopes.
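A toy sketch of a time-aware edit store in the spirit of the TAG is shown below; the schema, API, and example facts are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

class TimeAwareGraph:
    """Stores edited facts as (subject, relation) -> [(start, end, object)]
    so lookups can respect the temporal scope mentioned in a question."""
    def __init__(self):
        self.edges = defaultdict(list)

    def add_edit(self, subj, rel, obj, start, end):
        self.edges[(subj, rel)].append((start, end, obj))

    def query(self, subj, rel, year):
        for start, end, obj in self.edges[(subj, rel)]:
            if start <= year <= end:
                return obj
        return None  # fall back to the base LM's parametric knowledge

tag = TimeAwareGraph()
tag.add_edit("UK", "head_of_government", "Boris Johnson", 2019, 2022)
tag.add_edit("UK", "head_of_government", "Rishi Sunak", 2022, 2024)
print(tag.query("UK", "head_of_government", 2021))  # Boris Johnson
```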
1706.02499
Burak Benligiray
Burak Benligiray, Cihan Topal, Cuneyt Akinlar
SliceType: Fast Gaze Typing with a Merging Keyboard
null
Journal on Multimodal User Interfaces, 2018
10.1007/s12193-018-0285-z
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Jitter is an inevitable by-product of gaze detection. Because of this, gaze typing tends to be a slow and frustrating process. In this paper, we propose SliceType, a soft keyboard that is optimized for gaze input. Our main design objective is to use the screen area more efficiently by allocating a larger area to the target keys. We achieve this by determining the keys that will not be used for the next input, and allocating their space to the adjacent keys with a merging animation. Larger keys are faster to navigate towards, and easy to dwell on in the presence of eye tracking jitter. As a result, the user types faster and more comfortably. In addition, we employ a word completion scheme that complements gaze typing mechanics. A character and a related prediction are displayed at each key. Dwelling at a key enters the character, and double-dwelling enters the prediction. While dwelling on a key to enter a character, the user reads the related prediction effortlessly. The improvements provided by these features are quantified using Fitts' law. The performance of the proposed keyboard is compared with two other soft keyboards designed for gaze typing, Dasher and GazeTalk. 37 novice users gaze-typed a piece of text using all three keyboards. The results of the experiment show that the proposed keyboard enables faster typing and is preferred by the users.
[ { "created": "Thu, 8 Jun 2017 10:06:52 GMT", "version": "v1" }, { "created": "Thu, 8 Mar 2018 13:39:05 GMT", "version": "v2" }, { "created": "Sun, 18 Mar 2018 19:14:36 GMT", "version": "v3" }, { "created": "Thu, 27 Dec 2018 13:59:19 GMT", "version": "v4" } ]
2018-12-31
[ [ "Benligiray", "Burak", "" ], [ "Topal", "Cihan", "" ], [ "Akinlar", "Cuneyt", "" ] ]
Jitter is an inevitable by-product of gaze detection. Because of this, gaze typing tends to be a slow and frustrating process. In this paper, we propose SliceType, a soft keyboard that is optimized for gaze input. Our main design objective is to use the screen area more efficiently by allocating a larger area to the target keys. We achieve this by determining the keys that will not be used for the next input, and allocating their space to the adjacent keys with a merging animation. Larger keys are faster to navigate towards, and easy to dwell on in the presence of eye tracking jitter. As a result, the user types faster and more comfortably. In addition, we employ a word completion scheme that complements gaze typing mechanics. A character and a related prediction are displayed at each key. Dwelling at a key enters the character, and double-dwelling enters the prediction. While dwelling on a key to enter a character, the user reads the related prediction effortlessly. The improvements provided by these features are quantified using Fitts' law. The performance of the proposed keyboard is compared with two other soft keyboards designed for gaze typing, Dasher and GazeTalk. 37 novice users gaze-typed a piece of text using all three keyboards. The results of the experiment show that the proposed keyboard enables faster typing and is preferred by the users.
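The Fitts'-law argument behind key merging can be made concrete with a small worked example; the constants a and b and the pixel geometry are hypothetical.

```python
import math

def fitts_mt(distance_px: float, width_px: float, a=0.1, b=0.15) -> float:
    # Fitts' law (Shannon form): MT = a + b * log2(D / W + 1).
    return a + b * math.log2(distance_px / width_px + 1)

# Same travel distance; merging doubles the target key's width,
# lowering the index of difficulty and the predicted movement time.
print(f"small key:  {fitts_mt(300, 40):.3f} s")   # ~0.563 s
print(f"merged key: {fitts_mt(300, 80):.3f} s")   # ~0.437 s
```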
2405.07456
ABDELLAH Zakaria Sellam
Zakaria Abdellah Sellam, Cosimo Distante, Abdelmalik Taleb-Ahmed, Pier Luigi Mazzeo
Boosting House Price Estimations with Multi-Head Gated Attention
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluating house prices is crucial for various stakeholders, including homeowners, investors, and policymakers. However, traditional spatial interpolation methods have limitations in capturing the complex spatial relationships that affect property values. To address these challenges, we have developed a new method called Multi-Head Gated Attention for spatial interpolation. Our approach builds upon attention-based interpolation models and incorporates multiple attention heads and gating mechanisms to capture spatial dependencies and contextual information better. Importantly, our model produces embeddings that reduce the dimensionality of the data, enabling simpler models like linear regression to outperform complex ensembling models. We conducted extensive experiments to compare our model with baseline methods and the original attention-based interpolation model. The results show a significant improvement in the accuracy of house price predictions, validating the effectiveness of our approach. This research advances the field of spatial interpolation and provides a robust tool for more precise house price evaluation. Our GitHub repository contains the data and code for all datasets, which are available for researchers and practitioners interested in replicating or building upon our work.
[ { "created": "Mon, 13 May 2024 04:12:03 GMT", "version": "v1" } ]
2024-05-14
[ [ "Sellam", "Zakaria Abdellah", "" ], [ "Distante", "Cosimo", "" ], [ "Taleb-Ahmed", "Abdelmalik", "" ], [ "Mazzeo", "Pier Luigi", "" ] ]
Evaluating house prices is crucial for various stakeholders, including homeowners, investors, and policymakers. However, traditional spatial interpolation methods have limitations in capturing the complex spatial relationships that affect property values. To address these challenges, we have developed a new method called Multi-Head Gated Attention for spatial interpolation. Our approach builds upon attention-based interpolation models and incorporates multiple attention heads and gating mechanisms to capture spatial dependencies and contextual information better. Importantly, our model produces embeddings that reduce the dimensionality of the data, enabling simpler models like linear regression to outperform complex ensembling models. We conducted extensive experiments to compare our model with baseline methods and the original attention-based interpolation model. The results show a significant improvement in the accuracy of house price predictions, validating the effectiveness of our approach. This research advances the field of spatial interpolation and provides a robust tool for more precise house price evaluation. Our GitHub repository contains the data and code for all datasets, which are available for researchers and practitioners interested in replicating or building upon our work.
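A minimal sketch of gated multi-head attention for spatial interpolation follows; it is one interpretation of the idea, with dimensions, gating form, and inputs assumed for illustration rather than taken from the authors' code.

```python
import torch
import torch.nn as nn

class GatedAttentionInterpolator(nn.Module):
    def __init__(self, dim=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, target, neighbors):
        # The target property attends to nearby transactions; a learned
        # gate modulates how much attended context enters the embedding.
        out, _ = self.attn(target, neighbors, neighbors)
        return self.gate(target) * out

target = torch.randn(1, 1, 32)      # feature vector of the house to price
neighbors = torch.randn(1, 20, 32)  # 20 nearby transactions
emb = GatedAttentionInterpolator()(target, neighbors)
print(emb.shape)   # torch.Size([1, 1, 32]); feed to e.g. linear regression
```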
2204.07333
Prabhat Kumar
Prabhat Kumar, Eduardo Fern\'andez
Topology optimization for additive manufacturing with length scale, overhang, and building orientation constraints
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a density-based topology optimization approach considering additive manufacturing limitations. The presented method considers the minimum size of parts, the minimum size of cavities, the inability to print overhanging parts without sacrificial supporting structures, and the printing directions. These constraints are geometrically addressed and implemented. The minimum size on solid and void zones is imposed through a well-known filtering technique. The sacrificial support material is reduced using a constraint that limits the maximum overhang angle of parts by comparing the structural gradient with a critical reference slope. Due to the local nature of the gradient, the chosen restriction is prone to introducing parts that meet the structural slope but that may not be self-supporting. The restriction limits the maximum overhang angle for a user-defined printing direction, which could reduce structural performance if the orientation is not properly selected. To ease these challenges, a new approach to reduce the introduction of such non-self-supporting parts and a novel method that includes different printing directions in the maximum overhang angle constraint are presented. The proposed strategy for considering the minimum size of solid and void phases, maximum overhang angle, and printing direction, is illustrated by solving a set of 2D benchmark design problems including stiff structures and compliant mechanisms. We also provide MATLAB codes in the appendix for educational purposes and for replication of the results.
[ { "created": "Fri, 15 Apr 2022 05:16:58 GMT", "version": "v1" } ]
2022-04-18
[ [ "Kumar", "Prabhat", "" ], [ "Fernández", "Eduardo", "" ] ]
This paper presents a density-based topology optimization approach considering additive manufacturing limitations. The presented method considers the minimum size of parts, the minimum size of cavities, the inability to print overhanging parts without sacrificial supporting structures, and the printing directions. These constraints are geometrically addressed and implemented. The minimum size on solid and void zones is imposed through a well-known filtering technique. The sacrificial support material is reduced using a constraint that limits the maximum overhang angle of parts by comparing the structural gradient with a critical reference slope. Due to the local nature of the gradient, the chosen restriction is prone to introducing parts that meet the structural slope but that may not be self-supporting. The restriction limits the maximum overhang angle for a user-defined printing direction, which could reduce structural performance if the orientation is not properly selected. To ease these challenges, a new approach to reduce the introduction of such non-self-supporting parts and a novel method that includes different printing directions in the maximum overhang angle constraint are presented. The proposed strategy for considering the minimum size of solid and void phases, maximum overhang angle, and printing direction, is illustrated by solving a set of 2D benchmark design problems including stiff structures and compliant mechanisms. We also provide MATLAB codes in the appendix for educational purposes and for replication of the results.
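The minimum-size filtering mentioned above can be sketched directly; the linear hat weights below follow the standard density-filter recipe, with the mesh and radius chosen purely for illustration.

```python
import numpy as np

def density_filter(x: np.ndarray, rmin: float) -> np.ndarray:
    """Each element density becomes a weighted average of its neighbors
    within the filter radius, which enforces a minimum length scale."""
    ny, nx = x.shape
    out = np.zeros_like(x)
    r = int(np.ceil(rmin))
    for i in range(ny):
        for j in range(nx):
            wsum, val = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        w = max(0.0, rmin - np.hypot(di, dj))  # linear hat
                        wsum += w
                        val += w * x[ii, jj]
            out[i, j] = val / wsum
    return out

x = np.random.rand(20, 40)             # element densities on a 2-D mesh
print(density_filter(x, rmin=2.5).shape)
```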
2304.10512
Orchid Chetia Phukan
Usha Lokala, Orchid Chetia Phukan, Triyasha Ghosh Dastidar, Francois Lamy, Raminta Daniulaityte, Amit Sheth
"Can We Detect Substance Use Disorder?": Knowledge and Time Aware Classification on Social Media from Darkweb
null
null
null
null
cs.LG cs.CL cs.SI
http://creativecommons.org/licenses/by/4.0/
Opioid and substance misuse is rampant in the United States today, with the phenomenon known as the "opioid crisis". The relationship between substance use and mental health has been extensively studied, with one possible relationship being: substance misuse causes poor mental health. However, the lack of evidence on the relationship has resulted in opioids being largely inaccessible through legal means. This study analyzes substance use posts on social media with opioids being sold through crypto market listings. We use the Drug Abuse Ontology, state-of-the-art deep learning, and knowledge-aware BERT-based models to generate sentiment and emotion for the social media posts in order to understand users' perceptions, investigating questions such as: Which synthetic opioids are people optimistic, neutral, or negative about? What kinds of drugs induce fear and sorrow? What kinds of drugs do people love or feel thankful about? Which drugs do people think negatively about? Which opioids cause little to no sentimental reaction? We discuss how we crawled crypto market data and its use in extracting posts for fentanyl, fentanyl analogs, and other novel synthetic opioids. We also perform topic analysis associated with the generated sentiments and emotions to understand which topics correlate with people's responses to various drugs. Additionally, we analyze time-aware neural models built on these features while considering the historical sentiment and emotional activity of posts related to a drug. The most effective model performs well (statistically significantly) at identifying substance use disorder (macro F1 = 82.12, recall = 83.58).
[ { "created": "Thu, 20 Apr 2023 17:47:13 GMT", "version": "v1" } ]
2023-04-21
[ [ "Lokala", "Usha", "" ], [ "Phukan", "Orchid Chetia", "" ], [ "Dastidar", "Triyasha Ghosh", "" ], [ "Lamy", "Francois", "" ], [ "Daniulaityte", "Raminta", "" ], [ "Sheth", "Amit", "" ] ]
Opioid and substance misuse is rampant in the United States today, with the phenomenon known as the "opioid crisis". The relationship between substance use and mental health has been extensively studied, with one possible relationship being: substance misuse causes poor mental health. However, the lack of evidence on the relationship has resulted in opioids being largely inaccessible through legal means. This study analyzes substance use posts on social media with opioids being sold through crypto market listings. We use the Drug Abuse Ontology, state-of-the-art deep learning, and knowledge-aware BERT-based models to generate sentiment and emotion for the social media posts in order to understand users' perceptions, investigating questions such as: Which synthetic opioids are people optimistic, neutral, or negative about? What kinds of drugs induce fear and sorrow? What kinds of drugs do people love or feel thankful about? Which drugs do people think negatively about? Which opioids cause little to no sentimental reaction? We discuss how we crawled crypto market data and its use in extracting posts for fentanyl, fentanyl analogs, and other novel synthetic opioids. We also perform topic analysis associated with the generated sentiments and emotions to understand which topics correlate with people's responses to various drugs. Additionally, we analyze time-aware neural models built on these features while considering the historical sentiment and emotional activity of posts related to a drug. The most effective model performs well (statistically significantly) at identifying substance use disorder (macro F1 = 82.12, recall = 83.58).
1604.03178
Luca de Alfaro
Luca de Alfaro, Michael Shavlovsky, Vassilis Polychronopoulos
Incentives for Truthful Peer Grading
26 pages
null
null
UCSC-SOE-15-19
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Peer grading systems work well only if users have incentives to grade truthfully. An example of non-truthful grading, that we observed in classrooms, consists in students assigning the maximum grade to all submissions. With a naive grading scheme, such as averaging the assigned grades, all students would receive the maximum grade. In this paper, we develop three grading schemes that provide incentives for truthful peer grading. In the first scheme, the instructor grades a fraction p of the submissions, and penalizes students whose grade deviates from the instructor grade. We provide lower bounds on p to ensure truthfulness, and conclude that these schemes work only for moderate class sizes, up to a few hundred students. To overcome this limitation, we propose a hierarchical extension of this supervised scheme, and we show that it can handle classes of any size with bounded (and little) instructor work, and is therefore applicable to Massive Open Online Courses (MOOCs). Finally, we propose unsupervised incentive schemes, in which the student incentive is based on statistical properties of the grade distribution, without any grading required by the instructor. We show that the proposed unsupervised schemes provide incentives to truthful grading, at the price of being possibly unfair to individual students.
[ { "created": "Mon, 11 Apr 2016 23:56:21 GMT", "version": "v1" } ]
2016-04-13
[ [ "de Alfaro", "Luca", "" ], [ "Shavlovsky", "Michael", "" ], [ "Polychronopoulos", "Vassilis", "" ] ]
Peer grading systems work well only if users have incentives to grade truthfully. An example of non-truthful grading, that we observed in classrooms, consists in students assigning the maximum grade to all submissions. With a naive grading scheme, such as averaging the assigned grades, all students would receive the maximum grade. In this paper, we develop three grading schemes that provide incentives for truthful peer grading. In the first scheme, the instructor grades a fraction p of the submissions, and penalizes students whose grade deviates from the instructor grade. We provide lower bounds on p to ensure truthfulness, and conclude that these schemes work only for moderate class sizes, up to a few hundred students. To overcome this limitation, we propose a hierarchical extension of this supervised scheme, and we show that it can handle classes of any size with bounded (and little) instructor work, and is therefore applicable to Massive Open Online Courses (MOOCs). Finally, we propose unsupervised incentive schemes, in which the student incentive is based on statistical properties of the grade distribution, without any grading required by the instructor. We show that the proposed unsupervised schemes provide incentives to truthful grading, at the price of being possibly unfair to individual students.
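The first scheme lends itself to a toy sketch; the reward form, constants, and grading scale below are assumptions for illustration, not the paper's exact mechanism.

```python
import random

def review_reward(student_grade, instructor_grade, base=1.0, penalty=0.5):
    if instructor_grade is None:        # submission was not audited
        return base
    # Penalize deviation from the instructor's grade on audited work.
    return base - penalty * abs(student_grade - instructor_grade)

p = 0.2                                 # fraction graded by the instructor
true_quality = {i: random.uniform(0, 10) for i in range(100)}
audited = {i: g for i, g in true_quality.items() if random.random() < p}

# A truthful reviewer of submission 3 versus one who always gives 10:
truthful = review_reward(true_quality[3], audited.get(3))
inflated = review_reward(10.0, audited.get(3))
print(f"truthful: {truthful:.2f}  always-max: {inflated:.2f}")
```

In expectation, the always-max strategy loses p times the expected penalty, which is the kind of quantity the paper's lower bounds on p are built from.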
2303.00871
Lorenzo Mur-Labadia
Lorenzo Mur-Labadia, Ruben Martinez-Cantin and Jose J. Guerrero
Bayesian Deep Learning for Affordance Segmentation in images
2023 IEEE International Conference on Robotics and Automation (ICRA)
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Affordances are a fundamental concept in robotics since they relate the actions available to an agent to its sensory-motor capabilities and the environment. We present a novel Bayesian deep network to detect affordances in images while quantifying the distribution of the aleatoric and epistemic variance at the spatial level. We adapt the Mask-RCNN architecture to learn a probabilistic representation using Monte Carlo dropout. Our results outperform the state of the art of deterministic networks. We attribute this improvement to a better probabilistic feature space representation in the encoder and the Bayesian variability induced at mask generation, which adapts better to the object contours. We also introduce the new Probability-based Mask Quality measure, which reveals the semantic and spatial differences of a probabilistic instance segmentation model. We modify the existing Probabilistic Detection Quality metric by comparing the binary masks rather than the predicted bounding boxes, achieving a finer-grained evaluation of the probabilistic segmentation. We find aleatoric variance in the contours of the objects due to camera noise, while epistemic variance appears in visually challenging pixels.
[ { "created": "Thu, 2 Mar 2023 00:01:13 GMT", "version": "v1" } ]
2023-03-03
[ [ "Mur-Labadia", "Lorenzo", "" ], [ "Martinez-Cantin", "Ruben", "" ], [ "Guerrero", "Jose J.", "" ] ]
Affordances are a fundamental concept in robotics since they relate the actions available to an agent to its sensory-motor capabilities and the environment. We present a novel Bayesian deep network to detect affordances in images while quantifying the distribution of the aleatoric and epistemic variance at the spatial level. We adapt the Mask-RCNN architecture to learn a probabilistic representation using Monte Carlo dropout. Our results outperform the state of the art of deterministic networks. We attribute this improvement to a better probabilistic feature space representation in the encoder and the Bayesian variability induced at mask generation, which adapts better to the object contours. We also introduce the new Probability-based Mask Quality measure, which reveals the semantic and spatial differences of a probabilistic instance segmentation model. We modify the existing Probabilistic Detection Quality metric by comparing the binary masks rather than the predicted bounding boxes, achieving a finer-grained evaluation of the probabilistic segmentation. We find aleatoric variance in the contours of the objects due to camera noise, while epistemic variance appears in visually challenging pixels.
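A minimal sketch of the Monte Carlo dropout sampling and a common variance decomposition is given below; the tiny convolutional head and the Bernoulli-based split are simplifications of the paper's Mask-RCNN setting.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Dropout2d(0.5),            # kept active at test time
                    nn.Conv2d(16, 1, 3, padding=1))

x = torch.randn(1, 3, 64, 64)
net.train()                                       # keeps dropout stochastic
with torch.no_grad():
    # T stochastic forward passes through the same network.
    probs = torch.stack([torch.sigmoid(net(x)) for _ in range(20)])

mean = probs.mean(0)                              # predictive affordance map
epistemic = probs.var(0)                          # disagreement across samples
aleatoric = (probs * (1 - probs)).mean(0)         # Bernoulli data noise
print(mean.shape, epistemic.shape, aleatoric.shape)
```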
2211.13720
Sachit Rao
Wayne Paul Martis and Sachit Rao
Cooperative Collision Avoidance in Mobile Robots using Dynamic Vortex Potential Fields
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In this paper, the collision avoidance problem for non-holonomic robots moving at constant linear speeds in the 2-D plane is considered. The maneuvers to avoid collisions are designed using dynamic vortex potential fields (PFs) and their negative gradients; this formulation leads to a reciprocal behaviour between the robots, denoted as being cooperative. The repulsive field is selected as a function of the velocity and position of a robot relative to another and introducing vorticity in its definition guarantees the absence of local minima. Such a repulsive field is activated by a robot only when it is on a collision path with other mobile robots or stationary obstacles. By analysing the kinematics-based engagement dynamics in polar coordinates, it is shown that a cooperative robot is able to avoid collisions with non-cooperating robots, such as stationary and constant velocity robots, as well as those actively seeking to collide with it. Conditions on the PF parameters are identified that ensure collision avoidance for all cases. Experimental results acquired using a mobile robot platform support the theoretical contributions.
[ { "created": "Thu, 24 Nov 2022 17:16:01 GMT", "version": "v1" } ]
2022-11-28
[ [ "Martis", "Wayne Paul", "" ], [ "Rao", "Sachit", "" ] ]
In this paper, the collision avoidance problem for non-holonomic robots moving at constant linear speeds in the 2-D plane is considered. The maneuvers to avoid collisions are designed using dynamic vortex potential fields (PFs) and their negative gradients; this formulation leads to a reciprocal behaviour between the robots, denoted as being cooperative. The repulsive field is selected as a function of the velocity and position of a robot relative to another and introducing vorticity in its definition guarantees the absence of local minima. Such a repulsive field is activated by a robot only when it is on a collision path with other mobile robots or stationary obstacles. By analysing the kinematics-based engagement dynamics in polar coordinates, it is shown that a cooperative robot is able to avoid collisions with non-cooperating robots, such as stationary and constant velocity robots, as well as those actively seeking to collide with it. Conditions on the PF parameters are identified that ensure collision avoidance for all cases. Experimental results acquired using a mobile robot platform support the theoretical contributions.
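A toy sketch of a vortex field makes the construction concrete; the potential form, gains, and activation radius below are illustrative assumptions, not the paper's exact field.

```python
import numpy as np

def vortex_velocity(p_robot, p_obst, k=1.0, r_act=2.0):
    d = p_robot - p_obst
    r = float(np.linalg.norm(d))
    if r >= r_act:
        return np.zeros(2)                       # active only near conflict
    rep = k * (1.0 / r - 1.0 / r_act) * d / r**3  # classic radial repulsion
    # Rotating the repulsion by 90 degrees yields a tangential (vortex)
    # field, steering around the obstacle instead of pushing head-on,
    # which removes the face-to-face local minimum.
    return np.array([-rep[1], rep[0]])

print(vortex_velocity(np.array([1.0, 0.0]), np.array([0.0, 0.0])))
```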
2101.10488
EPTCS
Paul Wilson (University of Southampton), Fabio Zanasi (University College London)
Reverse Derivative Ascent: A Categorical Approach to Learning Boolean Circuits
In Proceedings ACT 2020, arXiv:2101.07888
EPTCS 333, 2021, pp. 247-260
10.4204/EPTCS.333.17
null
cs.LO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Reverse Derivative Ascent: a categorical analogue of gradient-based methods for machine learning. Our algorithm is defined at the level of so-called reverse differential categories. It can be used to learn the parameters of models which are expressed as morphisms of such categories. Our motivating example is boolean circuits: we show how our algorithm can be applied to such circuits by using the theory of reverse differential categories. Note that our methodology allows us to learn the parameters of boolean circuits directly, in contrast to existing binarised neural network approaches. Moreover, we demonstrate its empirical value by giving experimental results on benchmark machine learning datasets.
[ { "created": "Tue, 26 Jan 2021 00:07:20 GMT", "version": "v1" } ]
2021-01-27
[ [ "Wilson", "Paul", "", "University of Southampton" ], [ "Zanasi", "Fabio", "", "University\n College London" ] ]
We introduce Reverse Derivative Ascent: a categorical analogue of gradient-based methods for machine learning. Our algorithm is defined at the level of so-called reverse differential categories. It can be used to learn the parameters of models which are expressed as morphisms of such categories. Our motivating example is boolean circuits: we show how our algorithm can be applied to such circuits by using the theory of reverse differential categories. Note that our methodology allows us to learn the parameters of boolean circuits directly, in contrast to existing binarised neural network approaches. Moreover, we demonstrate its empirical value by giving experimental results on benchmark machine learning datasets.
1511.03532
Ali Keles
Ali Keles, Ayturk Keles
IBMMS Decision Support Tool For Management of Bank Telemarketing Campaigns
15 pages, 4 figures, 4 tables, journal in International Journal of Database Management Systems, Vol.7, No.5, October 2015
null
10.5121/ijdms.2015.7501
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although direct marketing is a good method for banks to utilize in the face of global competition and the financial crisis, it has been shown to exhibit poor performance. Moreover, direct campaigns have drawbacks of their own, such as those related to the negative attributes that customers ascribe to banks. To overcome these problems, attractive long-term deposit campaigns should be organized and managed more effectively. The aim of this study is to develop an Intelligent Bank Market Management System (IBMMS) for bank managers who want to manage efficient marketing campaigns. IBMMS is the first system developed by combining the power of data mining with the capabilities of expert systems in this area. Moreover, IBMMS includes important features that enable it to be intelligent: a knowledge base, an inference engine and an advisor. Using this system, a manager can successfully direct marketing campaigns and follow the decision schemas of customers both as individuals and as a group; moreover, a manager can make decisions that lead to the desired response by customers.
[ { "created": "Wed, 11 Nov 2015 15:26:08 GMT", "version": "v1" }, { "created": "Thu, 12 Nov 2015 14:14:01 GMT", "version": "v2" } ]
2015-11-13
[ [ "Keles", "Ali", "" ], [ "Keles", "Ayturk", "" ] ]
Although direct marketing is a good method for banks to utilize in the face of global competition and the financial crisis, it has been shown to exhibit poor performance. Moreover, direct campaigns have drawbacks of their own, such as those related to the negative attributes that customers ascribe to banks. To overcome these problems, attractive long-term deposit campaigns should be organized and managed more effectively. The aim of this study is to develop an Intelligent Bank Market Management System (IBMMS) for bank managers who want to manage efficient marketing campaigns. IBMMS is the first system developed by combining the power of data mining with the capabilities of expert systems in this area. Moreover, IBMMS includes important features that enable it to be intelligent: a knowledge base, an inference engine and an advisor. Using this system, a manager can successfully direct marketing campaigns and follow the decision schemas of customers both as individuals and as a group; moreover, a manager can make decisions that lead to the desired response by customers.
1808.08106
Joseph Schuchart
Joseph Schuchart, Daniel Hackenberg, Robert Sch\"one, Thomas Ilsche, Ramkumar Nagappan, Michael K. Patterson
The Shift from Processor Power Consumption to Performance Variations: Fundamental Implications at Scale
null
Computer Science - Research and Development, Vol. 31, pp. 197--205, Nov 2016
10.1007/s00450-016-0327-2
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Intel Haswell-EP processor generation introduces several major advancements of power control and energy-efficiency features. For computationally intense applications using advanced vector extension (AVX) instructions, the processor cannot continuously operate at full speed but instead reduces its frequency below the nominal frequency to maintain operations within thermal design power (TDP) limitations. Moreover, the running average power limitation (RAPL) mechanism to enforce the TDP limitation has changed from a modeling to a measurement approach. The combination of these two novelties has significant implications. Through measurements on an Intel Sandy Bridge-EP cluster, we show that previous generations have sustained homogeneous performance across multiple CPUs and compensated for hardware manufacturing variability through varying power consumption. In contrast, our measurements on a Petaflop Haswell system show that this generation exhibits rather homogeneous power consumption limited by the TDP and capped by the improved RAPL while providing inhomogeneous performance under full load. Since all of these controls are transparent to the user, this behavior is likely to complicate performance analysis tasks and impact tightly coupled parallel applications.
[ { "created": "Fri, 24 Aug 2018 12:40:03 GMT", "version": "v1" } ]
2018-08-27
[ [ "Schuchart", "Joseph", "" ], [ "Hackenberg", "Daniel", "" ], [ "Schöne", "Robert", "" ], [ "Ilsche", "Thomas", "" ], [ "Nagappan", "Ramkumar", "" ], [ "Patterson", "Michael K.", "" ] ]
The Intel Haswell-EP processor generation introduces several major advancements of power control and energy-efficiency features. For computationally intense applications using advanced vector extension (AVX) instructions, the processor cannot continuously operate at full speed but instead reduces its frequency below the nominal frequency to maintain operations within thermal design power (TDP) limitations. Moreover, the running average power limitation (RAPL) mechanism to enforce the TDP limitation has changed from a modeling to a measurement approach. The combination of these two novelties has significant implications. Through measurements on an Intel Sandy Bridge-EP cluster, we show that previous generations have sustained homogeneous performance across multiple CPUs and compensated for hardware manufacturing variability through varying power consumption. In contrast, our measurements on a Petaflop Haswell system show that this generation exhibits rather homogeneous power consumption limited by the TDP and capped by the improved RAPL while providing inhomogeneous performance under full load. Since all of these controls are transparent to the user, this behavior is likely to complicate performance analysis tasks and impact tightly coupled parallel applications.
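Measurement-based RAPL counters of the kind discussed above can be read on Linux through the powercap interface; the sketch below uses the common sysfs path, which may differ or require elevated permissions on a given system.

```python
import time

# Package-0 energy counter in microjoules (common powercap layout).
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_uj() -> int:
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
time.sleep(1.0)
e1, t1 = read_uj(), time.time()
# Note: the counter wraps around; a robust tool must handle overflow.
print(f"avg package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")
```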
2012.08483
Valerio Perrone
Piali Das, Valerio Perrone, Nikita Ivkin, Tanya Bansal, Zohar Karnin, Huibin Shen, Iaroslav Shcherbatyi, Yotam Elor, Wilton Wu, Aida Zolic, Thibaut Lienart, Alex Tang, Amr Ahmed, Jean Baptiste Faddoul, Rodolphe Jenatton, Fela Winkelmolen, Philip Gautier, Leo Dirac, Andre Perunicic, Miroslav Miladinovic, Giovanni Zappella, C\'edric Archambeau, Matthias Seeger, Bhaskar Dutt, Laurence Rouesnel
Amazon SageMaker Autopilot: a white box AutoML solution at scale
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
AutoML systems provide a black-box solution to machine learning problems by selecting the right way of processing features, choosing an algorithm and tuning the hyperparameters of the entire pipeline. Although these systems perform well on many datasets, there is still a non-negligible number of datasets for which the one-shot solution produced by each particular system would provide sub-par performance. In this paper, we present Amazon SageMaker Autopilot: a fully managed system providing an automated ML solution that can be modified when needed. Given a tabular dataset and the target column name, Autopilot identifies the problem type, analyzes the data and produces a diverse set of complete ML pipelines including feature preprocessing and ML algorithms, which are tuned to generate a leaderboard of candidate models. In the scenario where the performance is not satisfactory, a data scientist is able to view and edit the proposed ML pipelines in order to infuse their expertise and business knowledge without having to revert to a fully manual solution. This paper describes the different components of Autopilot, emphasizing the infrastructure choices that allow scalability, high quality models, editable ML pipelines, consumption of artifacts of offline meta-learning, and a convenient integration with the entire SageMaker suite allowing these trained models to be used in a production setting.
[ { "created": "Tue, 15 Dec 2020 18:29:04 GMT", "version": "v1" }, { "created": "Wed, 16 Dec 2020 18:51:27 GMT", "version": "v2" } ]
2020-12-17
[ [ "Das", "Piali", "" ], [ "Perrone", "Valerio", "" ], [ "Ivkin", "Nikita", "" ], [ "Bansal", "Tanya", "" ], [ "Karnin", "Zohar", "" ], [ "Shen", "Huibin", "" ], [ "Shcherbatyi", "Iaroslav", "" ], [ "Elor", "Yotam", "" ], [ "Wu", "Wilton", "" ], [ "Zolic", "Aida", "" ], [ "Lienart", "Thibaut", "" ], [ "Tang", "Alex", "" ], [ "Ahmed", "Amr", "" ], [ "Faddoul", "Jean Baptiste", "" ], [ "Jenatton", "Rodolphe", "" ], [ "Winkelmolen", "Fela", "" ], [ "Gautier", "Philip", "" ], [ "Dirac", "Leo", "" ], [ "Perunicic", "Andre", "" ], [ "Miladinovic", "Miroslav", "" ], [ "Zappella", "Giovanni", "" ], [ "Archambeau", "Cédric", "" ], [ "Seeger", "Matthias", "" ], [ "Dutt", "Bhaskar", "" ], [ "Rouesnel", "Laurence", "" ] ]
AutoML systems provide a black-box solution to machine learning problems by selecting the right way of processing features, choosing an algorithm and tuning the hyperparameters of the entire pipeline. Although these systems perform well on many datasets, there is still a non-negligible number of datasets for which the one-shot solution produced by each particular system would provide sub-par performance. In this paper, we present Amazon SageMaker Autopilot: a fully managed system providing an automated ML solution that can be modified when needed. Given a tabular dataset and the target column name, Autopilot identifies the problem type, analyzes the data and produces a diverse set of complete ML pipelines including feature preprocessing and ML algorithms, which are tuned to generate a leaderboard of candidate models. In the scenario where the performance is not satisfactory, a data scientist is able to view and edit the proposed ML pipelines in order to infuse their expertise and business knowledge without having to revert to a fully manual solution. This paper describes the different components of Autopilot, emphasizing the infrastructure choices that allow scalability, high quality models, editable ML pipelines, consumption of artifacts of offline meta-learning, and a convenient integration with the entire SageMaker suite allowing these trained models to be used in a production setting.
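For orientation, a hedged sketch of launching an Autopilot job through the boto3 SageMaker client follows; the bucket, role, and column names are placeholders, and field spellings should be checked against the current SDK documentation.

```python
import boto3

sm = boto3.client("sagemaker")
sm.create_auto_ml_job(
    AutoMLJobName="demo-autopilot-job",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/train/"}},   # hypothetical bucket
        "TargetAttributeName": "label",           # hypothetical target column
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/output/"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
)
# The candidate leaderboard and generated pipelines can then be inspected
# with describe_auto_ml_job and list_candidates_for_auto_ml_job, which is
# where the "white box" editing described above happens.
```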
1905.06641
Lumin Liu
Lumin Liu, Jun Zhang, S. H. Song, Khaled B. Letaief
Client-Edge-Cloud Hierarchical Federated Learning
6 pages, 4 figures
null
null
null
cs.NI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning is a collaborative machine learning framework to train a deep learning model without accessing clients' private data. Previous works assume one central parameter server either at the cloud or at the edge. The cloud server can access more data but with excessive communication overhead and long latency, while the edge server enjoys more efficient communications with the clients. To combine their advantages, we propose a client-edge-cloud hierarchical Federated Learning system, supported with a HierFAVG algorithm that allows multiple edge servers to perform partial model aggregation. In this way, the model can be trained faster and better communication-computation trade-offs can be achieved. Convergence analysis is provided for HierFAVG and the effects of key parameters are also investigated, which lead to qualitative design guidelines. Empirical experiments verify the analysis and demonstrate the benefits of this hierarchical architecture in different data distribution scenarios. Particularly, it is shown that by introducing the intermediate edge servers, the model training time and the energy consumption of the end devices can be simultaneously reduced compared to cloud-based Federated Learning.
[ { "created": "Thu, 16 May 2019 10:23:36 GMT", "version": "v1" }, { "created": "Thu, 31 Oct 2019 14:45:01 GMT", "version": "v2" } ]
2019-11-01
[ [ "Liu", "Lumin", "" ], [ "Zhang", "Jun", "" ], [ "Song", "S. H.", "" ], [ "Letaief", "Khaled B.", "" ] ]
Federated Learning is a collaborative machine learning framework to train a deep learning model without accessing clients' private data. Previous works assume one central parameter server either at the cloud or at the edge. The cloud server can access more data but with excessive communication overhead and long latency, while the edge server enjoys more efficient communications with the clients. To combine their advantages, we propose a client-edge-cloud hierarchical Federated Learning system, supported with a HierFAVG algorithm that allows multiple edge servers to perform partial model aggregation. In this way, the model can be trained faster and better communication-computation trade-offs can be achieved. Convergence analysis is provided for HierFAVG and the effects of key parameters are also investigated, which lead to qualitative design guidelines. Empirical experiments verify the analysis and demonstrate the benefits of this hierarchical architecture in different data distribution scenarios. Particularly, it is shown that by introducing the intermediate edge servers, the model training time and the energy consumption of the end devices can be simultaneously reduced compared to cloud-based Federated Learning.
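A toy sketch of HierFAVG-style aggregation is shown below; the quadratic toy objective, aggregation frequencies, and uniform weighting are simplifications for illustration only.

```python
import numpy as np

def hierfavg(clients_per_edge, rounds=4, k1=5, k2=2, dim=10, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)                              # cloud (global) model
    for _ in range(rounds):                        # one cloud round
        edge_models = []
        for n_clients in clients_per_edge:         # each edge server
            w_edge = w.copy()
            for _ in range(k2):                    # edge aggregation period
                locals_ = []
                for _client in range(n_clients):
                    w_c = w_edge.copy()
                    for _ in range(k1):            # local SGD steps
                        grad = w_c + rng.normal(0, 0.1, dim)  # toy quadratic loss
                        w_c -= lr * grad
                    locals_.append(w_c)
                w_edge = np.mean(locals_, axis=0)  # partial (edge) averaging
            edge_models.append(w_edge)
        w = np.mean(edge_models, axis=0)           # cloud averaging
    return w

print(np.linalg.norm(hierfavg(clients_per_edge=[3, 3])))
```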
1810.08313
Jianyu Wang
Jianyu Wang, Gauri Joshi
Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD
Accepted to SysML 2019
null
null
null
cs.LG cs.DC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale machine learning training, in particular distributed stochastic gradient descent, needs to be robust to inherent system variability such as node straggling and random communication delays. This work considers a distributed training framework where each worker node is allowed to perform local model updates and the resulting models are averaged periodically. We analyze the true speed of error convergence with respect to wall-clock time (instead of the number of iterations), and analyze how it is affected by the frequency of averaging. The main contribution is the design of AdaComm, an adaptive communication strategy that starts with infrequent averaging to save communication delay and improve convergence speed, and then increases the communication frequency in order to achieve a low error floor. Rigorous experiments on training deep neural networks show that AdaComm can take $3 \times$ less time than fully synchronous SGD, and still reach the same final training loss.
[ { "created": "Fri, 19 Oct 2018 00:04:05 GMT", "version": "v1" }, { "created": "Thu, 7 Mar 2019 16:45:02 GMT", "version": "v2" } ]
2019-03-08
[ [ "Wang", "Jianyu", "" ], [ "Joshi", "Gauri", "" ] ]
Large-scale machine learning training, in particular distributed stochastic gradient descent, needs to be robust to inherent system variability such as node straggling and random communication delays. This work considers a distributed training framework where each worker node is allowed to perform local model updates and the resulting models are averaged periodically. We analyze the true speed of error convergence with respect to wall-clock time (instead of the number of iterations), and analyze how it is affected by the frequency of averaging. The main contribution is the design of AdaComm, an adaptive communication strategy that starts with infrequent averaging to save communication delay and improve convergence speed, and then increases the communication frequency in order to achieve a low error floor. Rigorous experiments on training deep neural networks show that AdaComm can take $3 \times$ less time than fully synchronous SGD, and still reach the same final training loss.
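A toy version of the adaptive schedule illustrates the idea; the halving rule and plateau threshold below are simplifications of AdaComm's actual update.

```python
def adacomm_schedule(losses, tau0=16, tau_min=1):
    """Given per-round training losses, return the averaging period tau
    to use each round: halve tau when the loss stops improving, trading
    communication savings for a lower error floor."""
    tau = tau0
    schedule = []
    for prev, cur in zip(losses, losses[1:]):
        if cur > 0.99 * prev:            # plateau: communicate more often
            tau = max(tau_min, tau // 2)
        schedule.append(tau)
    return schedule

print(adacomm_schedule([1.0, 0.8, 0.65, 0.64, 0.635, 0.634]))
# -> [16, 16, 16, 8, 4]
```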
1311.5058
EPTCS
Sebastian Maneth (University of Edinburgh)
Proceedings Second International Workshop on Trends in Tree Automata and Tree Transducers
null
EPTCS 134, 2013
10.4204/EPTCS.134
null
cs.FL cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This volume contains the papers that were presented at the second international workshop on Trends in Tree Automata and Transducers (TTATT 2013), which took place on October 19th, 2013 in Hanoi/Vietnam. The workshop was colocated with the verification conference ATVA. The first edition of the workshop was colocated with RTA and took place in Nagoya/Japan. The interest of the workshop lies at the intersection of programming languages, verification, and database theory, which are areas to which tree automata and transducers have recently been applied.
[ { "created": "Wed, 20 Nov 2013 14:11:27 GMT", "version": "v1" } ]
2013-11-21
[ [ "Maneth", "Sebastian", "", "University of Edinburgh" ] ]
This volume contains the papers that were presented at the second international workshop on Trends in Tree Automata and Transducers (TTATT 2013), which took place on October 19th, 2013 in Hanoi/Vietnam. The workshop was colocated with the verification conference ATVA. The first edition of the workshop was colocated with RTA and took place in Nagoya/Japan. The interest of the workshop lies at the intersection of programming languages, verification, and database theory, which are areas to which tree automata and transducers have recently been applied.
1711.03588
Carroll Morgan
Annabelle McIver, Carroll Morgan, Benjamin Lucien Kaminski, Joost-Pieter Katoen
A New Proof Rule for Almost-Sure Termination
V1 to appear in PoPL18. This version collects some existing text into new example subsection 5.5 and adds a new example 5.6 and makes further remarks about uncountable branching. The new example 5.6 relates to work on lexicographic termination methods, also to appear in PoPL18 [Agrawal et al, 2018]
null
null
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An important question for a probabilistic program is whether the probability mass of all its diverging runs is zero, that is, whether it terminates "almost surely". Proving that can be hard, and this paper presents a new method for doing so; it is expressed in a program logic, and so applies directly to source code. The programs may contain both probabilistic and demonic choice, and the probabilistic choices may depend on the current state. As do other researchers, we use variant functions (a.k.a. "super-martingales") that are real-valued and probabilistically might decrease on each loop iteration; but our key innovation is that the amount as well as the probability of the decrease are parametric. We prove the soundness of the new rule, indicate where its applicability goes beyond existing rules, and explain its connection to classical results on denumerable (non-demonic) Markov chains.
[ { "created": "Thu, 9 Nov 2017 20:29:00 GMT", "version": "v1" }, { "created": "Tue, 26 Dec 2017 01:09:43 GMT", "version": "v2" } ]
2017-12-27
[ [ "McIver", "Annabelle", "" ], [ "Morgan", "Carroll", "" ], [ "Kaminski", "Benjamin Lucien", "" ], [ "Katoen", "Joost-Pieter", "" ] ]
An important question for a probabilistic program is whether the probability mass of all its diverging runs is zero, that is, whether it terminates "almost surely". Proving that can be hard, and this paper presents a new method for doing so; it is expressed in a program logic, and so applies directly to source code. The programs may contain both probabilistic and demonic choice, and the probabilistic choices may depend on the current state. As do other researchers, we use variant functions (a.k.a. "super-martingales") that are real-valued and probabilistically might decrease on each loop iteration; but our key innovation is that the amount as well as the probability of the decrease are parametric. We prove the soundness of the new rule, indicate where its applicability goes beyond existing rules, and explain its connection to classical results on denumerable (non-demonic) Markov chains.
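The shape of the rule can be sketched schematically; the statement below is a paraphrase with side conditions omitted (in particular, the precise requirements on the progress functions p and d), so the paper should be consulted for the exact formulation.

```latex
% V : states -> [0, oo) is a variant with V = 0 exactly on termination;
% p and d assign to each level v > 0 a probability and an amount of decrease.
\[
\big(\,\forall v > 0 :\;
  \Pr\,[\, V_{n+1} \le v - d(v) \mid V_n = v \,] \;\ge\; p(v)\,\big)
\;\Longrightarrow\;
\Pr\,[\,\exists n :\, V_n = 0\,] \;=\; 1 .
\]
```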
2209.15149
Alexandros Hollender
Argyrios Deligkas, John Fearnley, Alexandros Hollender, Themistoklis Melissourgos
Pure-Circuit: Strong Inapproximability for PPAD
Improved inapproximability result for approximate NE in polymatrix games
null
null
null
cs.CC cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current state-of-the-art methods for showing inapproximability in PPAD arise from the $\varepsilon$-Generalized-Circuit ($\varepsilon$-GCircuit) problem. Rubinstein (2018) showed that there exists a small unknown constant $\varepsilon$ for which $\varepsilon$-GCircuit is PPAD-hard, and subsequent work has shown hardness results for other problems in PPAD by using $\varepsilon$-GCircuit as an intermediate problem. We introduce Pure-Circuit, a new intermediate problem for PPAD, which can be thought of as $\varepsilon$-GCircuit pushed to the limit as $\varepsilon \rightarrow 1$, and we show that the problem is PPAD-complete. We then prove that $\varepsilon$-GCircuit is PPAD-hard for all $\varepsilon < 0.1$ by a reduction from Pure-Circuit, and thus strengthen all prior work that has used GCircuit as an intermediate problem from the existential-constant regime to the large-constant regime. We show that stronger inapproximability results can be derived by reducing directly from Pure-Circuit. In particular, we prove tight inapproximability results for computing $\varepsilon$-well-supported Nash equilibria in two-action polymatrix games, as well as for finding approximate equilibria in threshold games.
[ { "created": "Fri, 30 Sep 2022 00:25:04 GMT", "version": "v1" }, { "created": "Fri, 3 Mar 2023 15:41:21 GMT", "version": "v2" } ]
2023-03-06
[ [ "Deligkas", "Argyrios", "" ], [ "Fearnley", "John", "" ], [ "Hollender", "Alexandros", "" ], [ "Melissourgos", "Themistoklis", "" ] ]
The current state-of-the-art methods for showing inapproximability in PPAD arise from the $\varepsilon$-Generalized-Circuit ($\varepsilon$-GCircuit) problem. Rubinstein (2018) showed that there exists a small unknown constant $\varepsilon$ for which $\varepsilon$-GCircuit is PPAD-hard, and subsequent work has shown hardness results for other problems in PPAD by using $\varepsilon$-GCircuit as an intermediate problem. We introduce Pure-Circuit, a new intermediate problem for PPAD, which can be thought of as $\varepsilon$-GCircuit pushed to the limit as $\varepsilon \rightarrow 1$, and we show that the problem is PPAD-complete. We then prove that $\varepsilon$-GCircuit is PPAD-hard for all $\varepsilon < 0.1$ by a reduction from Pure-Circuit, and thus strengthen all prior work that has used GCircuit as an intermediate problem from the existential-constant regime to the large-constant regime. We show that stronger inapproximability results can be derived by reducing directly from Pure-Circuit. In particular, we prove tight inapproximability results for computing $\varepsilon$-well-supported Nash equilibria in two-action polymatrix games, as well as for finding approximate equilibria in threshold games.
1804.07899
Markus Freitag
Markus Freitag, Scott Roy
Unsupervised Natural Language Generation with Denoising Autoencoders
Accepted at EMNLP 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generating text from structured data is important for various tasks such as question answering and dialog systems. We show that in at least one domain, without any supervision and only based on unlabeled text, we are able to build a Natural Language Generation (NLG) system with higher performance than supervised approaches. In our approach, we interpret the structured data as a corrupt representation of the desired output and use a denoising auto-encoder to reconstruct the sentence. We show how to introduce noise into training examples that do not contain structured data, and that the resulting denoising auto-encoder generalizes to generate correct sentences when given structured data.
[ { "created": "Sat, 21 Apr 2018 06:16:57 GMT", "version": "v1" }, { "created": "Fri, 24 Aug 2018 19:53:33 GMT", "version": "v2" } ]
2018-08-28
[ [ "Freitag", "Markus", "" ], [ "Roy", "Scott", "" ] ]
Generating text from structured data is important for various tasks such as question answering and dialog systems. We show that in at least one domain, without any supervision and only based on unlabeled text, we are able to build a Natural Language Generation (NLG) system with higher performance than supervised approaches. In our approach, we interpret the structured data as a corrupt representation of the desired output and use a denoising auto-encoder to reconstruct the sentence. We show how to introduce noise into training examples that do not contain structured data, and that the resulting denoising auto-encoder generalizes to generate correct sentences when given structured data.
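The corruption step the abstract describes is easy to picture in code. Below is one plausible noise function for building (corrupted, clean) training pairs; it is our illustration, not the paper's exact noise model, and the name `corrupt` is ours.

```python
import random
from typing import List

def corrupt(tokens: List[str], keep_prob: float = 0.6, window: int = 3) -> List[str]:
    """Turn a clean sentence into a pseudo 'structured data' input by
    dropping tokens and locally shuffling the survivors; a denoising
    auto-encoder is then trained to map the output back to `tokens`."""
    kept = [t for t in tokens if random.random() < keep_prob] or tokens[:1]
    out = list(kept)
    for i in range(len(out)):
        # Swap within a small window so global word order decays gradually.
        j = min(len(out) - 1, i + random.randint(0, window))
        out[i], out[j] = out[j], out[i]
    return out

# Training pairs for any sequence-to-sequence model:
# pairs = [(corrupt(s.split()), s.split()) for s in unlabeled_sentences]
```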
2404.10789
Dipkamal Bhusal
Dipkamal Bhusal, Md Tanvirul Alam, Monish K. Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
PASA: Attack Agnostic Unsupervised Adversarial Detection using Prediction & Attribution Sensitivity Analysis
9th IEEE European Symposium on Security and Privacy
null
null
null
cs.CR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep neural networks for classification are vulnerable to adversarial attacks, where small perturbations to input samples lead to incorrect predictions. This susceptibility, combined with the black-box nature of such networks, limits their adoption in critical applications like autonomous driving. Feature-attribution-based explanation methods provide relevance of input features for model predictions on input samples, thus explaining model decisions. However, we observe that both model predictions and feature attributions for input samples are sensitive to noise. We develop a practical method that exploits this sensitivity of model predictions and feature attributions to detect adversarial samples. Our method, PASA, requires the computation of two test statistics using model prediction and feature attribution and can reliably detect adversarial samples using thresholds learned from benign samples. We validate our lightweight approach by evaluating the performance of PASA on varying strengths of FGSM, PGD, BIM, and CW attacks on multiple image and non-image datasets. On average, we outperform state-of-the-art statistical unsupervised adversarial detectors on CIFAR-10 and ImageNet by 14\% and 35\% ROC-AUC scores, respectively. Moreover, our approach demonstrates competitive performance even when an adversary is aware of the defense mechanism.
[ { "created": "Fri, 12 Apr 2024 21:22:21 GMT", "version": "v1" } ]
2024-04-18
[ [ "Bhusal", "Dipkamal", "" ], [ "Alam", "Md Tanvirul", "" ], [ "Veerabhadran", "Monish K.", "" ], [ "Clifford", "Michael", "" ], [ "Rampazzi", "Sara", "" ], [ "Rastogi", "Nidhi", "" ] ]
Deep neural networks for classification are vulnerable to adversarial attacks, where small perturbations to input samples lead to incorrect predictions. This susceptibility, combined with the black-box nature of such networks, limits their adoption in critical applications like autonomous driving. Feature-attribution-based explanation methods provide relevance of input features for model predictions on input samples, thus explaining model decisions. However, we observe that both model predictions and feature attributions for input samples are sensitive to noise. We develop a practical method that exploits this sensitivity of model predictions and feature attributions to detect adversarial samples. Our method, PASA, requires the computation of two test statistics using model prediction and feature attribution and can reliably detect adversarial samples using thresholds learned from benign samples. We validate our lightweight approach by evaluating the performance of PASA on varying strengths of FGSM, PGD, BIM, and CW attacks on multiple image and non-image datasets. On average, we outperform state-of-the-art statistical unsupervised adversarial detectors on CIFAR-10 and ImageNet by 14\% and 35\% ROC-AUC scores, respectively. Moreover, our approach demonstrates competitive performance even when an adversary is aware of the defense mechanism.
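As a rough illustration of the idea (not PASA's exact statistics, which are defined in the paper), one can probe how far a model's prediction and a simple gradient-times-input attribution drift under small input noise; the PyTorch sketch below assumes a classifier `model` and uses names of our own choosing.

```python
import torch

def sensitivity_stats(model, x, noise_std=0.05, n_samples=16):
    """For one input batch x, measure how far the softmax prediction and a
    gradient-x-input attribution move under Gaussian input noise; unusually
    large shifts flag a likely adversarial sample (thresholds would be
    learned from benign data, as in the abstract)."""
    def attribution(inp):
        inp = inp.clone().detach().requires_grad_(True)
        model(inp).max(dim=1).values.sum().backward()
        return (inp.grad * inp).detach()

    base_pred = torch.softmax(model(x), dim=1).detach()
    base_attr = attribution(x)
    pred_shift = attr_shift = 0.0
    for _ in range(n_samples):
        noisy = x + noise_std * torch.randn_like(x)
        pred_shift += (torch.softmax(model(noisy), dim=1) - base_pred).norm().item()
        attr_shift += (attribution(noisy) - base_attr).norm().item()
    return pred_shift / n_samples, attr_shift / n_samples
```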
2308.08302
Ribhu Chopra
Ashish Pratap Singh, Ribhu Chopra
PSA Based Power Control for Cell-Free Massive MIMO under LoS/NLoS Channels
10 pages, 10 figures
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
A primary design goal of the cell-free~(CF) massive MIMO architecture is to provide uniformly good coverage to all the user equipments~(UEs) connected to the network. However, it has been found that this requirement may not be satisfied when the channels between the access points~(APs) and the UEs are mixed LoS/NLoS. In this paper, we address this issue via appropriate power control in both the uplink and downlink of a CF massive MIMO system under mixed LoS/NLoS channels. We find that simplistic power control techniques, such as channel-inversion-based power control, perform sub-optimally compared to max-min power control. As a consequence, we propose a particle swarm algorithm~(PSA)-based power control scheme to optimize the performance of the system under study. We then use numerical simulations to evaluate the performance of the proposed PSA-based solution and show that it results in a significant improvement in the fairness of the underlying system while incurring a lower computational complexity.
[ { "created": "Wed, 16 Aug 2023 12:05:16 GMT", "version": "v1" } ]
2023-08-17
[ [ "Singh", "Ashish Pratap", "" ], [ "Chopra", "Ribhu", "" ] ]
A primary design goal of the cell-free~(CF) massive MIMO architecture is to provide uniformly good coverage to all the user equipments~(UEs) connected to the network. However, it has been found that this requirement may not be satisfied when the channels between the access points~(APs) and the UEs are mixed LoS/NLoS. In this paper, we address this issue via appropriate power control in both the uplink and downlink of a CF massive MIMO system under mixed LoS/NLoS channels. We find that simplistic power control techniques, such as channel-inversion-based power control, perform sub-optimally compared to max-min power control. As a consequence, we propose a particle swarm algorithm~(PSA)-based power control scheme to optimize the performance of the system under study. We then use numerical simulations to evaluate the performance of the proposed PSA-based solution and show that it results in a significant improvement in the fairness of the underlying system while incurring a lower computational complexity.
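For intuition, a toy sketch of PSA-style max-min power control follows; it uses a deliberately simplified K-user SINR model rather than the paper's CF massive MIMO uplink/downlink expressions, and all names and constants are illustrative.

```python
import numpy as np

def pso_maxmin_power(gain: np.ndarray, p_max: float,
                     n_particles: int = 30, iters: int = 200,
                     noise: float = 1.0):
    """Particle swarm search over per-user transmit powers maximizing the
    minimum SINR, for a toy K-user model with channel-gain matrix `gain`
    (gain[k, j] = gain from user j's transmission at receiver k)."""
    K = gain.shape[0]
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, p_max, (n_particles, K))   # candidate power vectors
    vel = np.zeros_like(pos)

    def min_sinr(p):
        signal = np.diag(gain) * p                    # desired-link powers
        interference = gain @ p - signal              # everything else
        return np.min(signal / (interference + noise))

    pbest, pbest_f = pos.copy(), np.array([min_sinr(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, K))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, p_max)          # respect the power budget
        f = np.array([min_sinr(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()
```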
2106.00934
Nada Almarwani
Nada Almarwani and Mona Diab
Discrete Cosine Transform as Universal Sentence Encoder
to be published in ACL-IJCNLP 2021
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern sentence encoders are used to generate dense vector representations that capture the underlying linguistic characteristics for a sequence of words, including phrases, sentences, or paragraphs. These kinds of representations are ideal for training a classifier for an end task such as sentiment analysis, question answering and text classification. Different models have been proposed to efficiently generate general-purpose sentence representations to be used in pretraining protocols. While averaging is the most commonly used efficient sentence encoder, the Discrete Cosine Transform (DCT) was recently proposed as an alternative that captures the underlying syntactic characteristics of a given text without compromising practical efficiency compared to averaging. However, as with most other sentence encoders, the DCT sentence encoder was only evaluated in English. To this end, we utilize the DCT encoder to generate universal sentence representations for other languages such as German, French, Spanish and Russian. The experimental results clearly show the superior effectiveness of DCT encoding, with consistent performance improvements over strong baselines on multiple standardized datasets.
[ { "created": "Wed, 2 Jun 2021 04:43:54 GMT", "version": "v1" } ]
2021-06-03
[ [ "Almarwani", "Nada", "" ], [ "Diab", "Mona", "" ] ]
Modern sentence encoders are used to generate dense vector representations that capture the underlying linguistic characteristics for a sequence of words, including phrases, sentences, or paragraphs. These kinds of representations are ideal for training a classifier for an end task such as sentiment analysis, question answering and text classification. Different models have been proposed to efficiently generate general-purpose sentence representations to be used in pretraining protocols. While averaging is the most commonly used efficient sentence encoder, the Discrete Cosine Transform (DCT) was recently proposed as an alternative that captures the underlying syntactic characteristics of a given text without compromising practical efficiency compared to averaging. However, as with most other sentence encoders, the DCT sentence encoder was only evaluated in English. To this end, we utilize the DCT encoder to generate universal sentence representations for other languages such as German, French, Spanish and Russian. The experimental results clearly show the superior effectiveness of DCT encoding, with consistent performance improvements over strong baselines on multiple standardized datasets.
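The encoder itself is compact enough to sketch. Assuming pretrained word vectors stacked into a (seq_len, dim) matrix, the DCT pooling the abstract builds on keeps the first k coefficients along the sequence axis; the function below is a minimal version (names ours).

```python
import numpy as np
from scipy.fft import dct

def dct_sentence_embedding(word_vectors: np.ndarray, k: int = 4) -> np.ndarray:
    """Compress a (seq_len, dim) matrix of word embeddings into a fixed-size
    sentence vector: type-II DCT along the sequence axis, keep the first k
    coefficient rows, and concatenate them into a k * dim vector."""
    coeffs = dct(word_vectors, type=2, axis=0, norm="ortho")
    if coeffs.shape[0] < k:                    # short sentences: zero-pad
        coeffs = np.pad(coeffs, ((0, k - coeffs.shape[0]), (0, 0)))
    return coeffs[:k].reshape(-1)

# With k = 1, the zeroth DCT coefficient is proportional to the mean of the
# word vectors, which is why this is a drop-in upgrade over average pooling.
```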
2011.08463
R\'emy Portelas
R\'emy Portelas, Cl\'ement Romac, Katja Hofmann, Pierre-Yves Oudeyer
Meta Automatic Curriculum Learning
This paper extends and generalizes work in arXiv:2004.03168
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major challenge in the Deep RL (DRL) community is to train agents able to generalize their control policy over situations never seen in training. Training on diverse tasks has been identified as a key ingredient for good generalization, which pushed researchers towards using rich procedural task generation systems controlled through complex continuous parameter spaces. In such complex task spaces, it is essential to rely on some form of Automatic Curriculum Learning (ACL) to adapt the task sampling distribution to a given learning agent, instead of randomly sampling tasks, as many could end up being either trivial or unfeasible. Since it is hard to get prior knowledge on such task spaces, many ACL algorithms explore the task space to detect progress niches over time, a costly tabula-rasa process that needs to be performed for each new learning agent, although different agents might have similar capability profiles. To address this limitation, we introduce the concept of Meta-ACL, and formalize it in the context of black-box RL learners, i.e. algorithms seeking to generalize curriculum generation to an (unknown) distribution of learners. In this work, we present AGAIN, a first instantiation of Meta-ACL, and showcase its benefits for curriculum generation over classical ACL in multiple simulated environments including procedurally generated parkour environments with learners of varying morphologies. Videos and code are available at https://sites.google.com/view/meta-acl .
[ { "created": "Mon, 16 Nov 2020 14:56:42 GMT", "version": "v1" }, { "created": "Thu, 4 Mar 2021 16:19:46 GMT", "version": "v2" }, { "created": "Wed, 1 Sep 2021 15:41:34 GMT", "version": "v3" } ]
2021-09-02
[ [ "Portelas", "Rémy", "" ], [ "Romac", "Clément", "" ], [ "Hofmann", "Katja", "" ], [ "Oudeyer", "Pierre-Yves", "" ] ]
A major challenge in the Deep RL (DRL) community is to train agents able to generalize their control policy over situations never seen in training. Training on diverse tasks has been identified as a key ingredient for good generalization, which pushed researchers towards using rich procedural task generation systems controlled through complex continuous parameter spaces. In such complex task spaces, it is essential to rely on some form of Automatic Curriculum Learning (ACL) to adapt the task sampling distribution to a given learning agent, instead of randomly sampling tasks, as many could end up being either trivial or unfeasible. Since it is hard to get prior knowledge on such task spaces, many ACL algorithms explore the task space to detect progress niches over time, a costly tabula-rasa process that needs to be performed for each new learning agent, although different agents might have similar capability profiles. To address this limitation, we introduce the concept of Meta-ACL, and formalize it in the context of black-box RL learners, i.e. algorithms seeking to generalize curriculum generation to an (unknown) distribution of learners. In this work, we present AGAIN, a first instantiation of Meta-ACL, and showcase its benefits for curriculum generation over classical ACL in multiple simulated environments including procedurally generated parkour environments with learners of varying morphologies. Videos and code are available at https://sites.google.com/view/meta-acl .
2302.03640
Junwen Huang
Junwen Huang, Alexey Artemov, Yujin Chen, Shuaifeng Zhi, Kai Xu, Matthias Nie{\ss}ner
SSR-2D: Semantic 3D Scene Reconstruction from 2D Images
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Most deep learning approaches to comprehensive semantic modeling of 3D indoor spaces require costly dense annotations in the 3D domain. In this work, we explore a central 3D scene modeling task, namely, semantic scene reconstruction without using any 3D annotations. The key idea of our approach is to design a trainable model that employs both incomplete 3D reconstructions and their corresponding source RGB-D images, fusing cross-domain features into volumetric embeddings to predict complete 3D geometry, color, and semantics with only 2D labeling, which can be either manual or machine-generated. Our key technical innovation is to leverage differentiable rendering of color and semantics to bridge 2D observations and unknown 3D space, using the observed RGB images and 2D semantics as supervision, respectively. We additionally develop a learning pipeline and corresponding method to enable learning from imperfect predicted 2D labels, which can additionally be acquired by synthesizing an augmented set of virtual training views that complement the original real captures, enabling a more efficient self-supervision loop for semantics. As a result, our end-to-end trainable solution jointly addresses geometry completion, colorization, and semantic mapping from limited RGB-D images, without relying on any 3D ground-truth information. Our method achieves state-of-the-art semantic scene completion performance on two large-scale benchmark datasets, MatterPort3D and ScanNet, and surpasses even baselines that use costly 3D annotations in predicting both geometry and semantics. To our knowledge, our method is also the first 2D-driven method addressing completion and semantic segmentation of real-world 3D scans simultaneously.
[ { "created": "Tue, 7 Feb 2023 17:47:52 GMT", "version": "v1" }, { "created": "Tue, 21 Feb 2023 20:50:33 GMT", "version": "v2" }, { "created": "Thu, 20 Apr 2023 19:20:30 GMT", "version": "v3" }, { "created": "Wed, 5 Jun 2024 12:02:12 GMT", "version": "v4" } ]
2024-06-06
[ [ "Huang", "Junwen", "" ], [ "Artemov", "Alexey", "" ], [ "Chen", "Yujin", "" ], [ "Zhi", "Shuaifeng", "" ], [ "Xu", "Kai", "" ], [ "Nießner", "Matthias", "" ] ]
Most deep learning approaches to comprehensive semantic modeling of 3D indoor spaces require costly dense annotations in the 3D domain. In this work, we explore a central 3D scene modeling task, namely, semantic scene reconstruction without using any 3D annotations. The key idea of our approach is to design a trainable model that employs both incomplete 3D reconstructions and their corresponding source RGB-D images, fusing cross-domain features into volumetric embeddings to predict complete 3D geometry, color, and semantics with only 2D labeling, which can be either manual or machine-generated. Our key technical innovation is to leverage differentiable rendering of color and semantics to bridge 2D observations and unknown 3D space, using the observed RGB images and 2D semantics as supervision, respectively. We additionally develop a learning pipeline and corresponding method to enable learning from imperfect predicted 2D labels, which can additionally be acquired by synthesizing an augmented set of virtual training views that complement the original real captures, enabling a more efficient self-supervision loop for semantics. As a result, our end-to-end trainable solution jointly addresses geometry completion, colorization, and semantic mapping from limited RGB-D images, without relying on any 3D ground-truth information. Our method achieves state-of-the-art semantic scene completion performance on two large-scale benchmark datasets, MatterPort3D and ScanNet, and surpasses even baselines that use costly 3D annotations in predicting both geometry and semantics. To our knowledge, our method is also the first 2D-driven method addressing completion and semantic segmentation of real-world 3D scans simultaneously.
1212.2450
Salem Benferhat
Salem Benferhat, Sylvain Lagrue, Odile Papini
A possibilistic handling of partially ordered information
Appears in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI2003)
null
null
UAI-P-2003-PG-29-36
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In standard possibilistic logic, prioritized information is encoded by means of a weighted knowledge base. This paper proposes an extension of possibilistic logic for dealing with partially ordered information. We show that all basic notions of standard possibilistic logic (subsumption, syntactic and semantic inference, etc.) have natural counterparts when dealing with partially ordered information. We also propose an algorithm which computes the possibilistic conclusions of a partially ordered knowledge base.
[ { "created": "Fri, 19 Oct 2012 15:03:38 GMT", "version": "v1" } ]
2012-12-12
[ [ "Benferhat", "Salem", "" ], [ "Lagrue", "Sylvain", "" ], [ "Papini", "Odile", "" ] ]
In standard possibilistic logic, prioritized information is encoded by means of a weighted knowledge base. This paper proposes an extension of possibilistic logic for dealing with partially ordered information. We show that all basic notions of standard possibilistic logic (subsumption, syntactic and semantic inference, etc.) have natural counterparts when dealing with partially ordered information. We also propose an algorithm which computes the possibilistic conclusions of a partially ordered knowledge base.
1801.01615
Hanbyul Joo
Hanbyul Joo, Tomas Simon, Yaser Sheikh
Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a unified deformation model for the markerless capture of multiple scales of human movement, including facial expressions, body motion, and hand gestures. An initial model is generated by locally stitching together models of the individual parts of the human body, which we refer to as the "Frankenstein" model. This model enables the full expression of part movements, including face and hands by a single seamless model. Using a large-scale capture of people wearing everyday clothes, we optimize the Frankenstein model to create "Adam". Adam is a calibrated model that shares the same skeleton hierarchy as the initial model but can express hair and clothing geometry, making it directly usable for fitting people as they normally appear in everyday life. Finally, we demonstrate the use of these models for total motion tracking, simultaneously capturing the large-scale body movements and the subtle face and hand motion of a social group of people.
[ { "created": "Fri, 5 Jan 2018 02:41:54 GMT", "version": "v1" } ]
2018-01-08
[ [ "Joo", "Hanbyul", "" ], [ "Simon", "Tomas", "" ], [ "Sheikh", "Yaser", "" ] ]
We present a unified deformation model for the markerless capture of multiple scales of human movement, including facial expressions, body motion, and hand gestures. An initial model is generated by locally stitching together models of the individual parts of the human body, which we refer to as the "Frankenstein" model. This model enables the full expression of part movements, including face and hands by a single seamless model. Using a large-scale capture of people wearing everyday clothes, we optimize the Frankenstein model to create "Adam". Adam is a calibrated model that shares the same skeleton hierarchy as the initial model but can express hair and clothing geometry, making it directly usable for fitting people as they normally appear in everyday life. Finally, we demonstrate the use of these models for total motion tracking, simultaneously capturing the large-scale body movements and the subtle face and hand motion of a social group of people.
2404.15980
Ali Ebnenasir
Ali Ebnenasir and Kieran Young
Minimizing the Number of Teleportations in Distributed Quantum Computing Using Alloy
null
null
null
null
cs.ET cs.DC quant-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a novel approach for minimizing the number of teleportations in Distributed Quantum Computing (DQC) using formal methods. Quantum teleportation plays a major role in communicating quantum information. As such, it is desirable to perform as few teleportations as possible when distributing a quantum algorithm on a network of quantum machines. Contrary to most existing methods which rely on graph-theoretic or heuristic search techniques, we propose a drastically different approach for minimizing the number of teleportations through utilizing formal methods. Specifically, the contributions of this paper include: the formal specification of the teleportation minimization problem in Alloy, the generalizability of the proposed Alloy specifications to quantum circuits with $n$-ary gates, the reusability of the Alloy specifications for different quantum circuits and networks, the simplicity of specifying and solving other problems such as load balancing and heterogeneity, and the compositionality of the proposed approach. We also develop a software tool, called qcAlloy, that takes as input the textual description of a quantum circuit, generates the corresponding Alloy model, and finally solves the minimization problem using the Alloy analyzer. We have experimentally evaluated qcAlloy for some of the circuits in the RevLib benchmark with more than 100 qubits and 1200 layers, and have demonstrated that qcAlloy outperforms one of the most efficient existing methods for most benchmark circuits in terms of minimizing the number of teleportations.
[ { "created": "Wed, 24 Apr 2024 16:55:29 GMT", "version": "v1" } ]
2024-04-25
[ [ "Ebnenasir", "Ali", "" ], [ "Young", "Kieran", "" ] ]
This paper presents a novel approach for minimizing the number of teleportations in Distributed Quantum Computing (DQC) using formal methods. Quantum teleportation plays a major role in communicating quantum information. As such, it is desirable to perform as few teleportations as possible when distributing a quantum algorithm on a network of quantum machines. Contrary to most existing methods which rely on graph-theoretic or heuristic search techniques, we propose a drastically different approach for minimizing the number of teleportations through utilizing formal methods. Specifically, the contributions of this paper include: the formal specification of the teleportation minimization problem in Alloy, the generalizability of the proposed Alloy specifications to quantum circuits with $n$-ary gates, the reusability of the Alloy specifications for different quantum circuits and networks, the simplicity of specifying and solving other problems such as load balancing and heterogeneity, and the compositionality of the proposed approach. We also develop a software tool, called qcAlloy, that takes as input the textual description of a quantum circuit, generates the corresponding Alloy model, and finally solves the minimization problem using the Alloy analyzer. We have experimentally evaluated qcAlloy for some of the circuits in the RevLib benchmark with more than 100 qubits and 1200 layers, and have demonstrated that qcAlloy outperforms one of the most efficient existing methods for most benchmark circuits in terms of minimizing the number of teleportations.
2105.00572
Alexis Conneau
Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau
Larger-Scale Transformers for Multilingual Masked Language Modeling
4 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests that pretrained models with larger capacity can achieve strong performance on high-resource languages while greatly improving performance on low-resource languages. We make our code and models publicly available.
[ { "created": "Sun, 2 May 2021 23:15:02 GMT", "version": "v1" } ]
2021-05-04
[ [ "Goyal", "Naman", "" ], [ "Du", "Jingfei", "" ], [ "Ott", "Myle", "" ], [ "Anantharaman", "Giri", "" ], [ "Conneau", "Alexis", "" ] ]
Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests that pretrained models with larger capacity can achieve strong performance on high-resource languages while greatly improving performance on low-resource languages. We make our code and models publicly available.
1803.10815
Piotr Mardziel
Shayak Sen and Piotr Mardziel and Anupam Datta and Matthew Fredrikson
Supervising Feature Influence
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Causal influence measures for machine learnt classifiers shed light on the reasons behind classification, and aid in identifying influential input features and revealing their biases. However, such analyses involve evaluating the classifier using datapoints that may be atypical of its training distribution. Standard methods for training classifiers that minimize empirical risk do not constrain the behavior of the classifier on such datapoints. As a result, training to minimize empirical risk does not distinguish among classifiers that agree on predictions in the training distribution but have wildly different causal influences. We term this problem covariate shift in causal testing and formally characterize conditions under which it arises. As a solution to this problem, we propose a novel active learning algorithm that constrains the influence measures of the trained model. We prove that any two predictors whose errors are close on both the original training distribution and the distribution of atypical points are guaranteed to have causal influences that are also close. Further, we empirically demonstrate with synthetic labelers that our algorithm trains models that (i) have similar causal influences as the labeler's model, and (ii) generalize better to out-of-distribution points while (iii) retaining their accuracy on in-distribution points.
[ { "created": "Wed, 28 Mar 2018 19:16:39 GMT", "version": "v1" }, { "created": "Sat, 7 Apr 2018 23:46:15 GMT", "version": "v2" } ]
2018-04-10
[ [ "Sen", "Shayak", "" ], [ "Mardziel", "Piotr", "" ], [ "Datta", "Anupam", "" ], [ "Fredrikson", "Matthew", "" ] ]
Causal influence measures for machine learnt classifiers shed light on the reasons behind classification, and aid in identifying influential input features and revealing their biases. However, such analyses involve evaluating the classifier using datapoints that may be atypical of its training distribution. Standard methods for training classifiers that minimize empirical risk do not constrain the behavior of the classifier on such datapoints. As a result, training to minimize empirical risk does not distinguish among classifiers that agree on predictions in the training distribution but have wildly different causal influences. We term this problem covariate shift in causal testing and formally characterize conditions under which it arises. As a solution to this problem, we propose a novel active learning algorithm that constrains the influence measures of the trained model. We prove that any two predictors whose errors are close on both the original training distribution and the distribution of atypical points are guaranteed to have causal influences that are also close. Further, we empirically demonstrate with synthetic labelers that our algorithm trains models that (i) have similar causal influences as the labeler's model, and (ii) generalize better to out-of-distribution points while (iii) retaining their accuracy on in-distribution points.
2312.14030
Erik Frisk
Fatemeh Hashemniya, Beno\"it Caillaud, Erik Frisk, Mattias Krysander, Mathias Malandain
Fault Diagnosability Analysis of Multi-Mode Systems
null
null
null
null
cs.LO cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Multi-mode systems can operate in different modes, leading to large numbers of different dynamics. Consequently, applying traditional structural diagnostics to such systems is often intractable. To address this challenge, we present a multi-mode diagnostics algorithm that relies on a multi-mode extension of the Dulmage-Mendelsohn decomposition. We introduce two methodologies for modeling faults, either as signals or as Boolean variables, and apply them to a modular switched battery system in order to demonstrate their effectiveness and discuss their respective advantages.
[ { "created": "Thu, 21 Dec 2023 17:00:37 GMT", "version": "v1" } ]
2023-12-22
[ [ "Hashemniya", "Fatemeh", "" ], [ "Caillaud", "Benoït", "" ], [ "Frisk", "Erik", "" ], [ "Krysander", "Mattias", "" ], [ "Malandain", "Mathias", "" ] ]
Multi-mode systems can operate in different modes, leading to large numbers of different dynamics. Consequently, applying traditional structural diagnostics to such systems is often intractable. To address this challenge, we present a multi-mode diagnostics algorithm that relies on a multi-mode extension of the Dulmage-Mendelsohn decomposition. We introduce two methodologies for modeling faults, either as signals or as Boolean variables, and apply them to a modular switched battery system in order to demonstrate their effectiveness and discuss their respective advantages.
2302.00089
Hussein Hazimeh
Hussein Hazimeh, Natalia Ponomareva
Mind the (optimality) Gap: A Gap-Aware Learning Rate Scheduler for Adversarial Nets
Accepted to AISTATS 2023
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Adversarial nets have proved to be powerful in various domains including generative modeling (GANs), transfer learning, and fairness. However, successfully training adversarial nets using first-order methods remains a major challenge. Typically, careful choices of the learning rates are needed to maintain the delicate balance between the competing networks. In this paper, we design a novel learning rate scheduler that dynamically adapts the learning rate of the adversary to maintain the right balance. The scheduler is driven by the fact that the loss of an ideal adversarial net is a constant known a priori. The scheduler is thus designed to keep the loss of the optimized adversarial net close to that of an ideal network. We run large-scale experiments to study the effectiveness of the scheduler on two popular applications: GANs for image generation and adversarial nets for domain adaptation. Our experiments indicate that adversarial nets trained with the scheduler are less likely to diverge and require significantly less tuning. For example, on CelebA, a GAN with the scheduler requires only one-tenth of the tuning budget needed without a scheduler. Moreover, the scheduler leads to statistically significant improvements in model quality, reaching up to $27\%$ in Frechet Inception Distance for image generation and $3\%$ in test accuracy for domain adaptation.
[ { "created": "Tue, 31 Jan 2023 20:36:40 GMT", "version": "v1" } ]
2023-02-02
[ [ "Hazimeh", "Hussein", "" ], [ "Ponomareva", "Natalia", "" ] ]
Adversarial nets have proved to be powerful in various domains including generative modeling (GANs), transfer learning, and fairness. However, successfully training adversarial nets using first-order methods remains a major challenge. Typically, careful choices of the learning rates are needed to maintain the delicate balance between the competing networks. In this paper, we design a novel learning rate scheduler that dynamically adapts the learning rate of the adversary to maintain the right balance. The scheduler is driven by the fact that the loss of an ideal adversarial net is a constant known a priori. The scheduler is thus designed to keep the loss of the optimized adversarial net close to that of an ideal network. We run large-scale experiments to study the effectiveness of the scheduler on two popular applications: GANs for image generation and adversarial nets for domain adaptation. Our experiments indicate that adversarial nets trained with the scheduler are less likely to diverge and require significantly less tuning. For example, on CelebA, a GAN with the scheduler requires only one-tenth of the tuning budget needed without a scheduler. Moreover, the scheduler leads to statistically significant improvements in model quality, reaching up to $27\%$ in Frechet Inception Distance for image generation and $3\%$ in test accuracy for domain adaptation.
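The constant the abstract alludes to is known in closed form for the standard GAN objective: an ideal discriminator that outputs 1/2 everywhere incurs a cross-entropy loss of log 4. A minimal sketch of a gap-driven schedule follows; the exponential damping is our choice, not necessarily the paper's exact rule.

```python
import math

def gap_aware_lr(base_lr: float, d_loss: float,
                 ideal_loss: float = math.log(4.0), k: float = 2.0) -> float:
    """Scale the adversary's learning rate by its distance from the ideal
    loss: near balance (gap ~ 0) the base rate is kept, and the rate decays
    as the discriminator drifts too far from the ideal network."""
    gap = abs(d_loss - ideal_loss)
    return base_lr * math.exp(-k * gap)

# Per-step usage with a PyTorch-style optimizer:
# for g in optimizer_d.param_groups:
#     g["lr"] = gap_aware_lr(base_lr, last_d_loss)
```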
2209.01211
Yaping Zhao
Yaping Zhao, Haitian Zheng, Mengqi Ji, Ruqi Huang
Cross-Camera Deep Colorization
12 pages, 6 figures
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
In this paper, we consider the color-plus-mono dual-camera system and propose an end-to-end convolutional neural network to align and fuse images from it in an efficient and cost-effective way. Our method takes cross-domain and cross-scale images as input, and consequently synthesizes high-resolution (HR) colorization results to facilitate the trade-off between spatial-temporal resolution and color depth in the single-camera imaging system. In contrast to previous colorization methods, ours can adapt to color and monochrome cameras with distinctive spatial-temporal resolutions, providing flexibility and robustness in practical applications. The key ingredient of our method is a cross-camera alignment module that generates multi-scale correspondences for cross-domain image alignment. Through extensive experiments on various datasets and multiple settings, we validate the flexibility and effectiveness of our approach. Remarkably, our method consistently achieves substantial improvements, i.e., around 10 dB PSNR gain, upon the state-of-the-art methods. Code is at: https://github.com/IndigoPurple/CCDC
[ { "created": "Fri, 26 Aug 2022 11:02:14 GMT", "version": "v1" }, { "created": "Wed, 7 Sep 2022 04:00:27 GMT", "version": "v2" } ]
2022-09-08
[ [ "Zhao", "Yaping", "" ], [ "Zheng", "Haitian", "" ], [ "Ji", "Mengqi", "" ], [ "Huang", "Ruqi", "" ] ]
In this paper, we consider the color-plus-mono dual-camera system and propose an end-to-end convolutional neural network to align and fuse images from it in an efficient and cost-effective way. Our method takes cross-domain and cross-scale images as input, and consequently synthesizes high-resolution (HR) colorization results to facilitate the trade-off between spatial-temporal resolution and color depth in the single-camera imaging system. In contrast to previous colorization methods, ours can adapt to color and monochrome cameras with distinctive spatial-temporal resolutions, providing flexibility and robustness in practical applications. The key ingredient of our method is a cross-camera alignment module that generates multi-scale correspondences for cross-domain image alignment. Through extensive experiments on various datasets and multiple settings, we validate the flexibility and effectiveness of our approach. Remarkably, our method consistently achieves substantial improvements, i.e., around 10 dB PSNR gain, upon the state-of-the-art methods. Code is at: https://github.com/IndigoPurple/CCDC
2404.06721
Norrathep Rattanavipanon
Norrathep Rattanavipanon and Ivan De Oliveira Nunes
Poisoning Prevention in Federated Learning and Differential Privacy via Stateful Proofs of Execution
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The rise in IoT-driven distributed data analytics, coupled with increasing privacy concerns, has led to a demand for effective privacy-preserving and federated data collection/model training mechanisms. In response, approaches such as Federated Learning (FL) and Local Differential Privacy (LDP) have been proposed and attracted much attention over the past few years. However, they still share the common limitation of being vulnerable to poisoning attacks, wherein adversaries compromising edge devices feed forged (a.k.a. poisoned) data to aggregation back-ends, undermining the integrity of FL/LDP results. In this work, we propose a system-level approach to remedy this issue based on a novel security notion of Proofs of Stateful Execution (PoSX) for IoT/embedded devices' software. To realize the PoSX concept, we design SLAPP: a System-Level Approach for Poisoning Prevention. SLAPP leverages commodity security features of embedded devices - in particular ARM TrustZone-M security extensions - to verifiably bind raw sensed data to their correct usage as part of FL/LDP edge device routines. As a consequence, it offers robust security guarantees against poisoning. Our evaluation, based on real-world prototypes featuring multiple cryptographic primitives and data collection schemes, showcases SLAPP's security and low overhead.
[ { "created": "Wed, 10 Apr 2024 04:18:26 GMT", "version": "v1" }, { "created": "Thu, 11 Apr 2024 12:05:52 GMT", "version": "v2" }, { "created": "Wed, 19 Jun 2024 03:01:31 GMT", "version": "v3" } ]
2024-06-21
[ [ "Rattanavipanon", "Norrathep", "" ], [ "Nunes", "Ivan De Oliveira", "" ] ]
The rise in IoT-driven distributed data analytics, coupled with increasing privacy concerns, has led to a demand for effective privacy-preserving and federated data collection/model training mechanisms. In response, approaches such as Federated Learning (FL) and Local Differential Privacy (LDP) have been proposed and attracted much attention over the past few years. However, they still share the common limitation of being vulnerable to poisoning attacks, wherein adversaries compromising edge devices feed forged (a.k.a. poisoned) data to aggregation back-ends, undermining the integrity of FL/LDP results. In this work, we propose a system-level approach to remedy this issue based on a novel security notion of Proofs of Stateful Execution (PoSX) for IoT/embedded devices' software. To realize the PoSX concept, we design SLAPP: a System-Level Approach for Poisoning Prevention. SLAPP leverages commodity security features of embedded devices - in particular ARM TrustZone-M security extensions - to verifiably bind raw sensed data to their correct usage as part of FL/LDP edge device routines. As a consequence, it offers robust security guarantees against poisoning. Our evaluation, based on real-world prototypes featuring multiple cryptographic primitives and data collection schemes, showcases SLAPP's security and low overhead.
1910.03126
Jiunn-Kai Huang
Jiunn-Kai Huang and Jessy W. Grizzle
Improvements to Target-Based 3D LiDAR to Camera Calibration
null
IEEE Access, vol. 8, 2020, pp. 134101-134110
10.1109/ACCESS.2020.3010734
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The homogeneous transformation between a LiDAR and monocular camera is required for sensor fusion tasks, such as SLAM. While determining such a transformation is not considered glamorous in any sense of the word, it is nonetheless crucial for many modern autonomous systems. Indeed, an error of a few degrees in rotation or a few percent in translation can lead to 20 cm translation errors at a distance of 5 m when overlaying a LiDAR image on a camera image. The biggest impediments to determining the transformation accurately are the relative sparsity of LiDAR point clouds and systematic errors in their distance measurements. This paper proposes (1) the use of targets of known dimension and geometry to ameliorate target pose estimation in the face of the quantization and systematic errors inherent in a LiDAR image of a target, and (2) a fitting method for the LiDAR to monocular camera transformation that fundamentally assumes the camera image data is the most accurate information in one's possession.
[ { "created": "Mon, 7 Oct 2019 23:03:16 GMT", "version": "v1" }, { "created": "Wed, 11 Mar 2020 20:05:04 GMT", "version": "v2" }, { "created": "Sat, 18 Jul 2020 15:07:13 GMT", "version": "v3" } ]
2020-07-30
[ [ "Huang", "Jiunn-Kai", "" ], [ "Grizzle", "Jessy W.", "" ] ]
The homogeneous transformation between a LiDAR and monocular camera is required for sensor fusion tasks, such as SLAM. While determining such a transformation is not considered glamorous in any sense of the word, it is nonetheless crucial for many modern autonomous systems. Indeed, an error of a few degrees in rotation or a few percent in translation can lead to 20 cm translation errors at a distance of 5 m when overlaying a LiDAR image on a camera image. The biggest impediments to determining the transformation accurately are the relative sparsity of LiDAR point clouds and systematic errors in their distance measurements. This paper proposes (1) the use of targets of known dimension and geometry to ameliorate target pose estimation in the face of the quantization and systematic errors inherent in a LiDAR image of a target, and (2) a fitting method for the LiDAR to monocular camera transformation that fundamentally assumes the camera image data is the most accurate information in one's possession.
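For context, the quantity being calibrated is the rigid transform that carries LiDAR points into the camera frame before projection; the sketch below shows that projection under assumed extrinsics (R, t) and intrinsics K, with names of our own choosing.

```python
import numpy as np

def project_lidar_to_image(points: np.ndarray, R: np.ndarray,
                           t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points to pixel coordinates. Calibration amounts to
    choosing (R, t) so that projected target corners land on their detected
    image locations; errors in (R, t) show up as the overlay drift the
    abstract quantifies."""
    cam = points @ R.T + t           # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]         # keep only points in front of the camera
    pix = cam @ K.T                  # apply the intrinsic matrix
    return pix[:, :2] / pix[:, 2:3]  # perspective divide -> (u, v) pixels
```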
1710.07096
Ribana Roscher
Anika Bettge, Ribana Roscher, Susanne Wenzel
Deep Self-taught Learning for Remote Sensing Image Classification
This is a corrected version of the final paper published in the proceedings
Proceedings of the 2017 conference on Big Data from Space
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the land cover classification task for remote sensing images by deep self-taught learning. Our self-taught learning approach learns suitable feature representations of the input data using sparse representation and undercomplete dictionary learning. We propose a deep learning framework which extracts representations in multiple layers and use the output of the deepest layer as input to a classification algorithm. We evaluate our approach using a multispectral Landsat 5 TM image of a study area in the North of Novo Progresso (South America) and the Zurich Summer Data Set provided by the University of Zurich. Experiments indicate that features learned by a deep self-taught learning framework can be used for classification and improve the results compared to classification results using the original feature representation.
[ { "created": "Thu, 19 Oct 2017 11:32:53 GMT", "version": "v1" }, { "created": "Tue, 19 Dec 2017 20:55:12 GMT", "version": "v2" } ]
2017-12-21
[ [ "Bettge", "Anika", "" ], [ "Roscher", "Ribana", "" ], [ "Wenzel", "Susanne", "" ] ]
This paper addresses the land cover classification task for remote sensing images by deep self-taught learning. Our self-taught learning approach learns suitable feature representations of the input data using sparse representation and undercomplete dictionary learning. We propose a deep learning framework which extracts representations in multiple layers and use the output of the deepest layer as input to a classification algorithm. We evaluate our approach using a multispectral Landsat 5 TM image of a study area in the North of Novo Progresso (South America) and the Zurich Summer Data Set provided by the University of Zurich. Experiments indicate that features learned by a deep self-taught learning framework can be used for classification and improve the results compared to classification results using the original feature representation.
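One layer of the pipeline the abstract describes can be approximated with off-the-shelf sparse coding; the sketch below uses scikit-learn with illustrative names and sizes, and the paper's exact dictionary-learning formulation and layer stacking differ in detail.

```python
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

def self_taught_layer(X_unlabeled, X_labeled, n_atoms=32, alpha=1.0):
    """Learn a sparse dictionary on unlabeled spectra (undercomplete when
    n_atoms is below the number of input bands, as in the abstract) and
    re-encode labeled pixels as sparse codes -- one layer of the pipeline."""
    dico = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                              transform_algorithm="lasso_lars", max_iter=50)
    dico.fit(X_unlabeled)
    return dico.transform(X_labeled)

# Deepest-layer codes feed an ordinary classifier, e.g.:
# codes = self_taught_layer(X_unlabeled, X_train)
# clf = LogisticRegression(max_iter=1000).fit(codes, y_train)
```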
cs/0509002
Zsolt I. L\'az\'ar
Zsolt I. L\'az\'ar, Jouke R. Heringa, Bazil P\^arv, Simon W. de Leeuw
Component Based Programming in Scientific Computing: The Viable Approach
null
null
null
null
cs.CE
null
Computational scientists are facing a new era where the old ways of developing and reusing code have to be left behind and a few daring steps are to be made towards new horizons. The present work analyzes the needs that drive this change, the factors that contribute to the inertia of the community and slow the transition, the status and perspective of present attempts, and the principal, practical and technical problems that are to be addressed in the short and long run.
[ { "created": "Wed, 31 Aug 2005 21:57:04 GMT", "version": "v1" } ]
2021-08-23
[ [ "Lázár", "Zsolt I.", "" ], [ "Heringa", "Jouke R.", "" ], [ "Pârv", "Bazil", "" ], [ "de Leeuw", "Simon W.", "" ] ]
Computational scientists are facing a new era where the old ways of developing and reusing code have to be left behind and a few daring steps are to be made towards new horizons. The present work analyzes the needs that drive this change, the factors that contribute to the inertia of the community and slow the transition, the status and perspective of present attempts, and the principal, practical and technical problems that are to be addressed in the short and long run.
1103.4875
Isabelle Stanton
Isabelle Stanton and Ali Pinar
Constructing and Sampling Graphs with a Prescribed Joint Degree Distribution
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e. the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. Important tools for such studies are algorithms that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this paper, and we study the problem from both a theoretical and practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov Chain method for sampling them. We also show that the state space of simple graphs with a fixed degree distribution is connected via endpoint switches. We empirically evaluate the mixing time of this Markov Chain by using experiments based on the autocorrelation of each edge. These experiments show that our Markov Chain mixes quickly on real graphs, allowing for utilization of our techniques in practice.
[ { "created": "Thu, 24 Mar 2011 21:05:17 GMT", "version": "v1" }, { "created": "Wed, 31 Aug 2011 18:41:54 GMT", "version": "v2" } ]
2011-09-01
[ [ "Stanton", "Isabelle", "" ], [ "Pinar", "Ali", "" ] ]
One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e. the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. Important tools for such studies are algorithms that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this paper, and we study the problem from both a theoretical and practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov Chain method for sampling them. We also show that the state space of simple graphs with a fixed degree distribution is connected via endpoint switches. We empirically evaluate the mixing time of this Markov Chain by using experiments based on the autocorrelation of each edge. These experiments show that our Markov Chain mixes quickly on real graphs, allowing for utilization of our techniques in practice.
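The endpoint switch that keeps the joint degree distribution fixed is simple to state in code: rewiring (u, v), (x, y) into (u, y), (x, v) is allowed only when deg(v) = deg(y), so every edge still joins the same pair of degrees. A minimal networkx sketch follows (ours, without the paper's mixing-time instrumentation).

```python
import random
import networkx as nx

def jdd_preserving_walk(G: nx.Graph, steps: int = 10000,
                        seed: int = 0) -> nx.Graph:
    """Random walk over simple graphs sharing one joint degree distribution,
    via degree-matched endpoint switches."""
    rng = random.Random(seed)
    for _ in range(steps):
        (u, v), (x, y) = rng.sample(list(G.edges()), 2)
        if G.degree(v) != G.degree(y):
            continue                       # swap would change the JDD
        if u == y or x == v or G.has_edge(u, y) or G.has_edge(x, v):
            continue                       # avoid self-loops / multi-edges
        G.remove_edges_from([(u, v), (x, y)])
        G.add_edges_from([(u, y), (x, v)])
    return G
```

Because each node keeps all of its incident edges (only the partners change), degrees are untouched, and the degree-match condition ensures each rewired edge joins the same degree pair as before.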
1401.7583
Daniel Kulesz
Daniel Kulesz, Jan-Peter Ostberg
Practical Challenges with Spreadsheet Auditing Tools
13 Pages. 3 Detailed Colour Figures, Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2013, ISBN: 978-1-9054045-1-3
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Just like other software, spreadsheets can contain significant faults. Static analysis is an accepted and well-established technique in software engineering known for its capability to discover faults. In recent years, a growing number of tool vendors started offering tools that allow casual end-users to run various static analyses on spreadsheets as well. We supervised a study where three undergraduate software engineering students examined a selection of 14 spreadsheet auditing tools, trying to give a concrete recommendation for an industry partner. Reflecting on the study's results, we found that most of these tools do provide useful aids in finding problems in spreadsheets, but we have also spotted several areas where tools had significant issues. Some of these issues could be remedied if spreadsheet auditing tool vendors would pick up some ideas of static analysis tools for traditional software development and adopt some of their solution approaches.
[ { "created": "Tue, 28 Jan 2014 20:51:32 GMT", "version": "v1" } ]
2014-01-30
[ [ "Kulesz", "Daniel", "" ], [ "Ostberg", "Jan-Peter", "" ] ]
Just like other software, spreadsheets can contain significant faults. Static analysis is an accepted and well-established technique in software engineering known for its capability to discover faults. In recent years, a growing number of tool vendors started offering tools that allow casual end-users to run various static analyses on spreadsheets as well. We supervised a study where three undergraduate software engineering students examined a selection of 14 spreadsheet auditing tools, trying to give a concrete recommendation for an industry partner. Reflecting on the study's results, we found that most of these tools do provide useful aids in finding problems in spreadsheets, but we have also spotted several areas where tools had significant issues. Some of these issues could be remedied if spreadsheet auditing tool vendors would pick up some ideas of static analysis tools for traditional software development and adopt some of their solution approaches.