column          type    range
id              string  length 9–10
submitter       string  length 1–64
authors         string  length 4–20.7k
title           string  length 4–246
comments        string  length 1–523
journal-ref     string  length 4–404
doi             string  length 11–153
report-no       string  length 2–254
categories      string  length 5–98
license         string  9 classes
orig_abstract   string  length 14–3.35k
versions        list    length 1–60
update_date     string  length 10–10
authors_parsed  list    length 1–1.35k
abstract        string  length 11–3.34k
1207.5742
Andrei Romashchenko
Tarik Kaced and Andrei Romashchenko
Conditional Information Inequalities for Entropic and Almost Entropic Points
Submitted to the IEEE Transactions on Information Theory
IEEE Transactions on Information Theory 59(11), 2013, pp. 7149-7167
10.1109/TIT.2013.2274614
null
cs.IT cs.DM math.IT math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study conditional linear information inequalities, i.e., linear inequalities for Shannon entropy that hold for distributions whose entropies meet some linear constraints. We prove that some conditional information inequalities cannot be extended to any unconditional linear inequalities. Some of these conditional inequalities hold for almost entropic points, while others do not. We also discuss some counterparts of conditional information inequalities for Kolmogorov complexity.
[ { "created": "Tue, 24 Jul 2012 16:31:05 GMT", "version": "v1" }, { "created": "Sun, 29 Jul 2012 19:20:14 GMT", "version": "v2" }, { "created": "Mon, 22 Jul 2013 16:46:19 GMT", "version": "v3" }, { "created": "Fri, 16 Aug 2013 09:17:51 GMT", "version": "v4" } ]
2013-10-30
[ [ "Kaced", "Tarik", "" ], [ "Romashchenko", "Andrei", "" ] ]
We study conditional linear information inequalities, i.e., linear inequalities for Shannon entropy that hold for distributions whose entropies meet some linear constraints. We prove that some conditional information inequalities cannot be extended to any unconditional linear inequalities. Some of these conditional inequalities hold for almost entropic points, while others do not. We also discuss some counterparts of conditional information inequalities for Kolmogorov complexity.
2008.11714
Jia-Bin Huang
Chen Gao, Jiarui Xu, Yuliang Zou, Jia-Bin Huang
DRG: Dual Relation Graph for Human-Object Interaction Detection
ECCV 2020. Project: http://chengao.vision/DRG/ Code: https://github.com/vt-vl-lab/DRG
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We tackle the challenging problem of human-object interaction (HOI) detection. Existing methods either recognize the interaction of each human-object pair in isolation or perform joint inference based on complex appearance-based features. In this paper, we leverage an abstract spatial-semantic representation to describe each human-object pair and aggregate the contextual information of the scene via a dual relation graph (one human-centric and one object-centric). Our proposed dual relation graph effectively captures discriminative cues from the scene to resolve ambiguity from local predictions. Our model is conceptually simple and leads to favorable results compared to the state-of-the-art HOI detection algorithms on two large-scale benchmark datasets.
[ { "created": "Wed, 26 Aug 2020 17:59:40 GMT", "version": "v1" } ]
2020-08-27
[ [ "Gao", "Chen", "" ], [ "Xu", "Jiarui", "" ], [ "Zou", "Yuliang", "" ], [ "Huang", "Jia-Bin", "" ] ]
We tackle the challenging problem of human-object interaction (HOI) detection. Existing methods either recognize the interaction of each human-object pair in isolation or perform joint inference based on complex appearance-based features. In this paper, we leverage an abstract spatial-semantic representation to describe each human-object pair and aggregate the contextual information of the scene via a dual relation graph (one human-centric and one object-centric). Our proposed dual relation graph effectively captures discriminative cues from the scene to resolve ambiguity from local predictions. Our model is conceptually simple and leads to favorable results compared to the state-of-the-art HOI detection algorithms on two large-scale benchmark datasets.
2407.09033
Byeonghyun Pak
Byeonghyun Pak, Byeongju Woo, Sunghwan Kim, Dae-hwan Kim, Hoseong Kim
Textual Query-Driven Mask Transformer for Domain Generalized Segmentation
ECCV 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce a method to tackle Domain Generalized Semantic Segmentation (DGSS) by utilizing domain-invariant semantic knowledge from text embeddings of vision-language models. We employ the text embeddings as object queries within a transformer-based segmentation framework (textual object queries). These queries are regarded as a domain-invariant basis for pixel grouping in DGSS. To leverage the power of textual object queries, we introduce a novel framework named the textual query-driven mask transformer (tqdm). Our tqdm aims to (1) generate textual object queries that maximally encode domain-invariant semantics and (2) enhance the semantic clarity of dense visual features. Additionally, we suggest three regularization losses to improve the efficacy of tqdm by aligning between visual and textual features. By utilizing our method, the model can comprehend inherent semantic information for classes of interest, enabling it to generalize to extreme domains (e.g., sketch style). Our tqdm achieves 68.9 mIoU on GTA5$\rightarrow$Cityscapes, outperforming the prior state-of-the-art method by 2.5 mIoU. The project page is available at https://byeonghyunpak.github.io/tqdm.
[ { "created": "Fri, 12 Jul 2024 06:49:16 GMT", "version": "v1" }, { "created": "Wed, 31 Jul 2024 14:27:06 GMT", "version": "v2" } ]
2024-08-01
[ [ "Pak", "Byeonghyun", "" ], [ "Woo", "Byeongju", "" ], [ "Kim", "Sunghwan", "" ], [ "Kim", "Dae-hwan", "" ], [ "Kim", "Hoseong", "" ] ]
In this paper, we introduce a method to tackle Domain Generalized Semantic Segmentation (DGSS) by utilizing domain-invariant semantic knowledge from text embeddings of vision-language models. We employ the text embeddings as object queries within a transformer-based segmentation framework (textual object queries). These queries are regarded as a domain-invariant basis for pixel grouping in DGSS. To leverage the power of textual object queries, we introduce a novel framework named the textual query-driven mask transformer (tqdm). Our tqdm aims to (1) generate textual object queries that maximally encode domain-invariant semantics and (2) enhance the semantic clarity of dense visual features. Additionally, we suggest three regularization losses to improve the efficacy of tqdm by aligning between visual and textual features. By utilizing our method, the model can comprehend inherent semantic information for classes of interest, enabling it to generalize to extreme domains (e.g., sketch style). Our tqdm achieves 68.9 mIoU on GTA5$\rightarrow$Cityscapes, outperforming the prior state-of-the-art method by 2.5 mIoU. The project page is available at https://byeonghyunpak.github.io/tqdm.
1812.10998
Preeti Gopal Ms.
Preeti Gopal and Sharat Chandran and Imants Svalbe and Ajit Rajwade
Learning from past scans: Tomographic reconstruction to detect new structures
5 pages, 8 figures, 1 table
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The need for tomographic reconstruction from sparse measurements arises when the measurement process is potentially harmful, needs to be rapid, or is uneconomical. In such cases, prior information from previous longitudinal scans of the same or similar objects helps to reconstruct the current object whilst requiring significantly fewer `updating' measurements. However, a significant limitation of all prior-based methods is the possible dominance of the prior over the reconstruction of new localised information that has evolved within the test object. In this paper, we improve the state of the art by (1) detecting potential regions where new changes may have occurred, and (2) effectively reconstructing both the old and new structures by computing regional weights that moderate the local influence of the priors. We have tested the efficacy of our method on synthetic as well as real volume data. The results demonstrate that using weighted priors significantly improves the overall quality of the reconstructed data whilst minimising their impact on regions that contain new information.
[ { "created": "Sun, 23 Dec 2018 09:45:15 GMT", "version": "v1" } ]
2018-12-31
[ [ "Gopal", "Preeti", "" ], [ "Chandran", "Sharat", "" ], [ "Svalbe", "Imants", "" ], [ "Rajwade", "Ajit", "" ] ]
The need for tomographic reconstruction from sparse measurements arises when the measurement process is potentially harmful, needs to be rapid, or is uneconomical. In such cases, prior information from previous longitudinal scans of the same or similar objects helps to reconstruct the current object whilst requiring significantly fewer `updating' measurements. However, a significant limitation of all prior-based methods is the possible dominance of the prior over the reconstruction of new localised information that has evolved within the test object. In this paper, we improve the state of the art by (1) detecting potential regions where new changes may have occurred, and (2) effectively reconstructing both the old and new structures by computing regional weights that moderate the local influence of the priors. We have tested the efficacy of our method on synthetic as well as real volume data. The results demonstrate that using weighted priors significantly improves the overall quality of the reconstructed data whilst minimising their impact on regions that contain new information.
2310.19848
Lenart Treven
Lenart Treven, Jonas H\"ubotter, Bhavya Sukhija, Florian D\"orfler, Andreas Krause
Efficient Exploration in Continuous-time Model-based Reinforcement Learning
null
null
null
null
cs.LG cs.RO math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning algorithms typically consider discrete-time dynamics, even though the underlying systems are often continuous in time. In this paper, we introduce a model-based reinforcement learning algorithm that represents continuous-time dynamics using nonlinear ordinary differential equations (ODEs). We capture epistemic uncertainty using well-calibrated probabilistic models, and use the optimistic principle for exploration. Our regret bounds surface the importance of the measurement selection strategy (MSS), since in continuous time we not only must decide how to explore, but also when to observe the underlying system. Our analysis demonstrates that the regret is sublinear when modeling ODEs with Gaussian Processes (GP) for common choices of MSS, such as equidistant sampling. Additionally, we propose an adaptive, data-dependent, practical MSS that, when combined with GP dynamics, also achieves sublinear regret with significantly fewer samples. We showcase the benefits of continuous-time modeling over its discrete-time counterpart, as well as our proposed adaptive MSS over standard baselines, on several applications.
[ { "created": "Mon, 30 Oct 2023 15:04:40 GMT", "version": "v1" } ]
2023-11-01
[ [ "Treven", "Lenart", "" ], [ "Hübotter", "Jonas", "" ], [ "Sukhija", "Bhavya", "" ], [ "Dörfler", "Florian", "" ], [ "Krause", "Andreas", "" ] ]
Reinforcement learning algorithms typically consider discrete-time dynamics, even though the underlying systems are often continuous in time. In this paper, we introduce a model-based reinforcement learning algorithm that represents continuous-time dynamics using nonlinear ordinary differential equations (ODEs). We capture epistemic uncertainty using well-calibrated probabilistic models, and use the optimistic principle for exploration. Our regret bounds surface the importance of the measurement selection strategy (MSS), since in continuous time we not only must decide how to explore, but also when to observe the underlying system. Our analysis demonstrates that the regret is sublinear when modeling ODEs with Gaussian Processes (GP) for common choices of MSS, such as equidistant sampling. Additionally, we propose an adaptive, data-dependent, practical MSS that, when combined with GP dynamics, also achieves sublinear regret with significantly fewer samples. We showcase the benefits of continuous-time modeling over its discrete-time counterpart, as well as our proposed adaptive MSS over standard baselines, on several applications.
2306.09109
Kevis-Kokitsi Maninis
Varun Jampani, Kevis-Kokitsi Maninis, Andreas Engelhardt, Arjun Karpur, Karen Truong, Kyle Sargent, Stefan Popov, Andr\'e Araujo, Ricardo Martin-Brualla, Kaushal Patel, Daniel Vlasic, Vittorio Ferrari, Ameesh Makadia, Ce Liu, Yuanzhen Li, Howard Zhou
NAVI: Category-Agnostic Image Collections with High-Quality 3D Shape and Pose Annotations
NeurIPS 2023 camera ready. Project page: https://navidataset.github.io
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in neural reconstruction enable high-quality 3D object reconstruction from casually captured image collections. Current techniques mostly analyze their progress on relatively simple image collections where Structure-from-Motion (SfM) techniques can provide ground-truth (GT) camera poses. We note that SfM techniques tend to fail on in-the-wild image collections such as image search results with varying backgrounds and illuminations. To enable systematic research progress on 3D reconstruction from casual image captures, we propose NAVI: a new dataset of category-agnostic image collections of objects with high-quality 3D scans along with per-image 2D-3D alignments providing near-perfect GT camera parameters. These 2D-3D alignments allow us to extract accurate derivative annotations such as dense pixel correspondences, depth and segmentation maps. We demonstrate the use of NAVI image collections on different problem settings and show that NAVI enables more thorough evaluations that were not possible with existing datasets. We believe NAVI is beneficial for systematic research progress on 3D reconstruction and correspondence estimation. Project page: https://navidataset.github.io
[ { "created": "Thu, 15 Jun 2023 13:11:30 GMT", "version": "v1" }, { "created": "Fri, 13 Oct 2023 16:12:32 GMT", "version": "v2" } ]
2023-10-16
[ [ "Jampani", "Varun", "" ], [ "Maninis", "Kevis-Kokitsi", "" ], [ "Engelhardt", "Andreas", "" ], [ "Karpur", "Arjun", "" ], [ "Truong", "Karen", "" ], [ "Sargent", "Kyle", "" ], [ "Popov", "Stefan", "" ], [ "Araujo", "André", "" ], [ "Martin-Brualla", "Ricardo", "" ], [ "Patel", "Kaushal", "" ], [ "Vlasic", "Daniel", "" ], [ "Ferrari", "Vittorio", "" ], [ "Makadia", "Ameesh", "" ], [ "Liu", "Ce", "" ], [ "Li", "Yuanzhen", "" ], [ "Zhou", "Howard", "" ] ]
Recent advances in neural reconstruction enable high-quality 3D object reconstruction from casually captured image collections. Current techniques mostly analyze their progress on relatively simple image collections where Structure-from-Motion (SfM) techniques can provide ground-truth (GT) camera poses. We note that SfM techniques tend to fail on in-the-wild image collections such as image search results with varying backgrounds and illuminations. To enable systematic research progress on 3D reconstruction from casual image captures, we propose NAVI: a new dataset of category-agnostic image collections of objects with high-quality 3D scans along with per-image 2D-3D alignments providing near-perfect GT camera parameters. These 2D-3D alignments allow us to extract accurate derivative annotations such as dense pixel correspondences, depth and segmentation maps. We demonstrate the use of NAVI image collections on different problem settings and show that NAVI enables more thorough evaluations that were not possible with existing datasets. We believe NAVI is beneficial for systematic research progress on 3D reconstruction and correspondence estimation. Project page: https://navidataset.github.io
1610.05726
Amol Patwardhan
Amol Patwardhan
Structured Unit Testable Templated Code for Efficient Code Review Process
13 pages, 12 figures
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern software development teams are distributed across onsite and off-shore locations. Each team has developers with varying experience levels and English communication skills. In such a diverse development environment it is important to maintain software quality, coding standards, and timely delivery of features and bug fixes. It is also important to reduce testing effort and minimize side effects such as changes in functionality, user experience, or application performance. Code reviews are intended to control code quality. Unfortunately, many projects lack enforcement of processes and standards because of approaching deadlines, live production issues, and lack of resource availability. This study examines a novel structured, unit-testable templated code method to enforce code review standards with the intent to reduce coding effort, minimize revisions, and eliminate functional and performance side effects on the system. The proposed method would also result in unit-testable code that can be easily rolled back, and would increase team productivity. The baseline for traditional code review processes was measured using metrics such as code review duration, bug regression rate, and revision count. These metrics were then compared with results from the proposed code review process that used structured, unit-testable templated code. The performance on 2 large enterprise-level applications spanning 2 years and 9 feature and maintenance release cycles was evaluated. The structured, unit-testable templated code method resulted in a decrease in total code review time, revision count, and coding effort. It also decreased the number of live production issues caused by code churn or side effects of bug fixes when compared to the traditional code review process.
[ { "created": "Tue, 9 Aug 2016 05:26:21 GMT", "version": "v1" } ]
2016-10-19
[ [ "Patwardhan", "Amol", "" ] ]
Modern software development teams are distributed across onsite and off-shore locations. Each team has developers with varying experience levels and English communication skills. In such a diverse development environment it is important to maintain software quality, coding standards, and timely delivery of features and bug fixes. It is also important to reduce testing effort and minimize side effects such as changes in functionality, user experience, or application performance. Code reviews are intended to control code quality. Unfortunately, many projects lack enforcement of processes and standards because of approaching deadlines, live production issues, and lack of resource availability. This study examines a novel structured, unit-testable templated code method to enforce code review standards with the intent to reduce coding effort, minimize revisions, and eliminate functional and performance side effects on the system. The proposed method would also result in unit-testable code that can be easily rolled back, and would increase team productivity. The baseline for traditional code review processes was measured using metrics such as code review duration, bug regression rate, and revision count. These metrics were then compared with results from the proposed code review process that used structured, unit-testable templated code. The performance on 2 large enterprise-level applications spanning 2 years and 9 feature and maintenance release cycles was evaluated. The structured, unit-testable templated code method resulted in a decrease in total code review time, revision count, and coding effort. It also decreased the number of live production issues caused by code churn or side effects of bug fixes when compared to the traditional code review process.
2212.07720
Benny Kimelfeld
Majd Khalil, Benny Kimelfeld
The Complexity of the Shapley Value for Regular Path Queries
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
A path query extracts vertex tuples from a labeled graph, based on the words that are formed by the paths connecting the vertices. We study the computational complexity of measuring the contribution of edges and vertices to an answer to a path query, focusing on the class of conjunctive regular path queries. To measure this contribution, we adopt the traditional Shapley value from cooperative game theory. This value has been recently proposed and studied in the context of relational database queries and has uses in a plethora of other domains. We first study the contribution of edges and show that the exact Shapley value is almost always hard to compute. Specifically, it is #P-hard to calculate the contribution of an edge whenever at least one (non-redundant) conjunct allows for a word of length three or more. In the case of regular path queries (i.e., no conjunction), the problem is tractable if the query has only words of length at most two; hence, this property fully characterizes the tractability of the problem. On the other hand, if we allow for an approximation error, then it is straightforward to obtain an efficient scheme (FPRAS) for an additive approximation. Yet, a multiplicative approximation is harder to obtain. We establish that in the case of conjunctive regular path queries, a multiplicative approximation of the Shapley value of an edge can be computed in polynomial time if and only if all query atoms are finite languages (assuming non-redundancy and conventional complexity limitations). We also study the analogous situation where we wish to determine the contribution of a vertex, rather than an edge, and establish complexity results of similar nature.
[ { "created": "Thu, 15 Dec 2022 10:55:04 GMT", "version": "v1" } ]
2022-12-16
[ [ "Khalil", "Majd", "" ], [ "Kimelfeld", "Benny", "" ] ]
A path query extracts vertex tuples from a labeled graph, based on the words that are formed by the paths connecting the vertices. We study the computational complexity of measuring the contribution of edges and vertices to an answer to a path query, focusing on the class of conjunctive regular path queries. To measure this contribution, we adopt the traditional Shapley value from cooperative game theory. This value has been recently proposed and studied in the context of relational database queries and has uses in a plethora of other domains. We first study the contribution of edges and show that the exact Shapley value is almost always hard to compute. Specifically, it is #P-hard to calculate the contribution of an edge whenever at least one (non-redundant) conjunct allows for a word of length three or more. In the case of regular path queries (i.e., no conjunction), the problem is tractable if the query has only words of length at most two; hence, this property fully characterizes the tractability of the problem. On the other hand, if we allow for an approximation error, then it is straightforward to obtain an efficient scheme (FPRAS) for an additive approximation. Yet, a multiplicative approximation is harder to obtain. We establish that in the case of conjunctive regular path queries, a multiplicative approximation of the Shapley value of an edge can be computed in polynomial time if and only if all query atoms are finite languages (assuming non-redundancy and conventional complexity limitations). We also study the analogous situation where we wish to determine the contribution of a vertex, rather than an edge, and establish complexity results of similar nature.
1206.0430
Richard Southwell
Richard Southwell, Jianwei Huang, Biying Shou
Congestion Games on Weighted Directed Graphs, with Applications to Spectrum Sharing
null
null
null
null
cs.NI cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advance of complex large-scale networks, it is becoming increasingly important to understand how selfish and spatially distributed individuals will share network resources without centralized coordination. In this paper, we introduce the graphical congestion game with weighted edges (GCGWE) as a general theoretical model to study this problem. In GCGWE, we view the players as vertices in a weighted graph. The amount of negative impact (e.g. congestion) caused by two close-by players to each other is determined by the weight of the edge linking them. The GCGWE unifies and significantly generalizes several simpler models considered in the previous literature, and is well suited for modeling a wide range of networking scenarios. One good example is to use the GCGWE to model spectrum sharing in wireless networks, where we can properly define the edge weights and payoff functions to capture the rather complicated interference relationship between wireless nodes. By identifying which GCGWEs possess pure Nash equilibria and the very desirable finite improvement property, we gain insight into when spatially distributed wireless nodes will be able to self-organize into a mutually acceptable resource allocation. We also consider the efficiency of the pure Nash equilibria, and the computational complexity of finding them.
[ { "created": "Sun, 3 Jun 2012 10:57:16 GMT", "version": "v1" } ]
2012-06-05
[ [ "Southwell", "Richard", "" ], [ "Huang", "Jianwei", "" ], [ "Shou", "Biying", "" ] ]
With the advance of complex large-scale networks, it is becoming increasingly important to understand how selfish and spatially distributed individuals will share network resources without centralized coordination. In this paper, we introduce the graphical congestion game with weighted edges (GCGWE) as a general theoretical model to study this problem. In GCGWE, we view the players as vertices in a weighted graph. The amount of negative impact (e.g. congestion) caused by two close-by players to each other is determined by the weight of the edge linking them. The GCGWE unifies and significantly generalizes several simpler models considered in the previous literature, and is well suited for modeling a wide range of networking scenarios. One good example is to use the GCGWE to model spectrum sharing in wireless networks, where we can properly define the edge weights and payoff functions to capture the rather complicated interference relationship between wireless nodes. By identifying which GCGWEs possess pure Nash equilibria and the very desirable finite improvement property, we gain insight into when spatially distributed wireless nodes will be able to self-organize into a mutually acceptable resource allocation. We also consider the efficiency of the pure Nash equilibria, and the computational complexity of finding them.
1906.07887
Kedar Tatwawadi
Kedar Tatwawadi, Shubham Chandak
Tutorial on algebraic deletion correction codes
null
null
null
null
cs.DS cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
The deletion channel is known to be a notoriously difficult channel to design error-correction codes for. In spite of this difficulty, there are some beautiful code constructions which give some intuition about the channel and about what good deletion codes look like. In this tutorial we will take a look at some of them. This document is a transcript of my talk at the coding theory reading group on some interesting works on the deletion channel. It is not intended to be an exhaustive survey of works on the deletion channel, but rather a tutorial on some of the important and elegant ideas in this area. For a comprehensive survey, we refer the reader to the cited sources and surveys. We also provide an implementation of VT codes that correct single insertion/deletion errors for general alphabets at https://github.com/shubhamchandak94/VT_codes/.
[ { "created": "Wed, 19 Jun 2019 02:56:11 GMT", "version": "v1" } ]
2019-06-20
[ [ "Tatwawadi", "Kedar", "" ], [ "Chandak", "Shubham", "" ] ]
The deletion channel is known to be a notoriously difficult channel to design error-correction codes for. In spite of this difficulty, there are some beautiful code constructions which give some intuition about the channel and about what good deletion codes look like. In this tutorial we will take a look at some of them. This document is a transcript of my talk at the coding theory reading group on some interesting works on the deletion channel. It is not intended to be an exhaustive survey of works on the deletion channel, but rather a tutorial on some of the important and elegant ideas in this area. For a comprehensive survey, we refer the reader to the cited sources and surveys. We also provide an implementation of VT codes that correct single insertion/deletion errors for general alphabets at https://github.com/shubhamchandak94/VT_codes/.
2403.10484
Rosana Montes
Rosana Montes and Liliana Herrera and Emilio Crisol
Moodle Usability Assessment Methodology using the Universal Design for Learning perspective
preprint second version
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
The application of the Universal Design for Learning framework favors the creation of virtual educational environments for all. It requires developing accessible content, having a usable platform, and the use of flexible didactics and evaluations that promote constant student motivation. The present study aims to design a methodology to evaluate the usability of the Moodle platform based on the principles of Universal Design for Learning, recognizing the importance of accessibility, usability and the availability of Assistive Technologies. We developed and applied a methodology to assess the usability level of Moodle platforms, taking into consideration that they integrate Assistive Technologies or are used for MOOC contexts. We provide the results of a use case that assesses two instances for the respective Moodle v.2.x and v.3.x family versions. We employed the framework of mixed design research in order to assess a MOOC-type educational program devised under the principles of Universal Design for Learning. As a result of the assessment of Moodle v.2.x and v.3.x, we conclude that the platforms must improve some key elements (e.g. contrasting colors, incorporation of alternative text and links) in order to comply with international accessibility standards. With respect to usability, we can confirm that the principles and guidelines of Universal Design for Learning are applicable to MOOC-type Virtual Learning Environments, are positively valued by students, and have a positive impact on certification rates.
[ { "created": "Fri, 15 Mar 2024 17:19:04 GMT", "version": "v1" }, { "created": "Tue, 2 Apr 2024 15:25:51 GMT", "version": "v2" } ]
2024-04-03
[ [ "Montes", "Rosana", "" ], [ "Herrera", "Liliana", "" ], [ "Crisol", "Emilio", "" ] ]
The application of the Universal Design for Learning framework favors the creation of virtual educational environments for all. It requires developing accessible content, having a usable platform, and the use of flexible didactics and evaluations that promote constant student motivation. The present study aims to design a methodology to evaluate the usability of the Moodle platform based on the principles of Universal Design for Learning, recognizing the importance of accessibility, usability and the availability of Assistive Technologies. We developed and applied a methodology to assess the usability level of Moodle platforms, taking into consideration that they integrate Assistive Technologies or are used for MOOC contexts. We provide the results of a use case that assesses two instances for the respective Moodle v.2.x and v.3.x family versions. We employed the framework of mixed design research in order to assess a MOOC-type educational program devised under the principles of Universal Design for Learning. As a result of the assessment of Moodle v.2.x and v.3.x, we conclude that the platforms must improve some key elements (e.g. contrasting colors, incorporation of alternative text and links) in order to comply with international accessibility standards. With respect to usability, we can confirm that the principles and guidelines of Universal Design for Learning are applicable to MOOC-type Virtual Learning Environments, are positively valued by students, and have a positive impact on certification rates.
2112.13310
Wenchi Ma
Wenchi Ma, Tianxiao Zhang, Guanghui Wang
Miti-DETR: Object Detection based on Transformers with Mitigatory Self-Attention Convergence
null
AAAI 2022 workshop
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Object Detection with Transformers (DETR) and related works reach or even surpass the highly-optimized Faster-RCNN baseline with self-attention network architectures. Inspired by the evidence that pure self-attention possesses a strong inductive bias that causes the transformer to lose expressive power with respect to network depth, we propose a transformer architecture with a mitigatory self-attention mechanism that applies direct mapping connections in the transformer architecture to mitigate rank collapse, thereby counteracting feature expression loss and enhancing model performance. We apply this proposal to object detection tasks and develop a model named Miti-DETR. Miti-DETR carries the inputs of each single attention layer over to the outputs of that layer so that the "non-attention" information participates in every attention propagation. The resulting residual self-attention network addresses two critical issues: (1) it keeps the self-attention networks from degenerating to rank-1 to the greatest possible degree; and (2) it further diversifies the path distribution of parameter updates, making attention learning easier. Miti-DETR significantly enhances the average detection precision and convergence speed relative to existing DETR-based models on the challenging COCO object detection dataset. Moreover, the proposed transformer with the residual self-attention network can be easily generalized or plugged into other related task models without specific customization.
[ { "created": "Sun, 26 Dec 2021 03:23:59 GMT", "version": "v1" } ]
2021-12-28
[ [ "Ma", "Wenchi", "" ], [ "Zhang", "Tianxiao", "" ], [ "Wang", "Guanghui", "" ] ]
Object Detection with Transformers (DETR) and related works reach or even surpass the highly-optimized Faster-RCNN baseline with self-attention network architectures. Inspired by the evidence that pure self-attention possesses a strong inductive bias that causes the transformer to lose expressive power with respect to network depth, we propose a transformer architecture with a mitigatory self-attention mechanism that applies direct mapping connections in the transformer architecture to mitigate rank collapse, thereby counteracting feature expression loss and enhancing model performance. We apply this proposal to object detection tasks and develop a model named Miti-DETR. Miti-DETR carries the inputs of each single attention layer over to the outputs of that layer so that the "non-attention" information participates in every attention propagation. The resulting residual self-attention network addresses two critical issues: (1) it keeps the self-attention networks from degenerating to rank-1 to the greatest possible degree; and (2) it further diversifies the path distribution of parameter updates, making attention learning easier. Miti-DETR significantly enhances the average detection precision and convergence speed relative to existing DETR-based models on the challenging COCO object detection dataset. Moreover, the proposed transformer with the residual self-attention network can be easily generalized or plugged into other related task models without specific customization.
1311.6264
J\"urgen M\"unch
Frank Elberzhager, Alla Rosbach, J\"urgen M\"unch, Robert Eschbach
Inspection and Test Process Integration Based on Explicit Test Prioritization Strategies
12 pages. The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-642-27213-4_12
Proceedings of the Software Quality Days (SWQD), pages 181-192, Vienna, Austria, January 17-19 2012
10.1007/978-3-642-27213-4_12
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today's software quality assurance techniques are often applied in isolation. Consequently, synergies resulting from systematically integrating different quality assurance activities are often not exploited. Such combinations promise benefits, such as a reduction in quality assurance effort or higher defect detection rates. The integration of inspection and testing, for instance, can be used to guide testing activities. For example, testing activities can be focused on defect-prone parts based upon inspection results. Existing approaches for predicting defect-prone parts do not make systematic use of the results from inspections. This article gives an overview of an integrated inspection and testing approach, and presents a preliminary case study aiming at verifying a study design for evaluating the approach. First results from this preliminary case study indicate that synergies resulting from the integration of inspection and testing might exist, and show a trend that testing activities could be guided based on inspection results.
[ { "created": "Mon, 25 Nov 2013 11:14:07 GMT", "version": "v1" } ]
2013-11-26
[ [ "Elberzhager", "Frank", "" ], [ "Rosbach", "Alla", "" ], [ "Münch", "Jürgen", "" ], [ "Eschbach", "Robert", "" ] ]
Today's software quality assurance techniques are often applied in isolation. Consequently, synergies resulting from systematically integrating different quality assurance activities are often not exploited. Such combinations promise benefits, such as a reduction in quality assurance effort or higher defect detection rates. The integration of inspection and testing, for instance, can be used to guide testing activities. For example, testing activities can be focused on defect-prone parts based upon inspection results. Existing approaches for predicting defect-prone parts do not make systematic use of the results from inspections. This article gives an overview of an integrated inspection and testing approach, and presents a preliminary case study aiming at verifying a study design for evaluating the approach. First results from this preliminary case study indicate that synergies resulting from the integration of inspection and testing might exist, and show a trend that testing activities could be guided based on inspection results.
2306.10153
Komal Teru
Komal K. Teru
Semi-supervised Relation Extraction via Data Augmentation and Consistency-training
Previously published at INTERPOLATE @ NeurIPS 2022 workshop
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023, 1112--1124
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
Due to the semantic complexity of the relation extraction (RE) task, obtaining high-quality human-labelled data is an expensive and noisy process. To improve the sample efficiency of the models, semi-supervised learning (SSL) methods aim to leverage unlabelled data in addition to learning from limited labelled data points. Recently, strong data augmentation combined with consistency-based semi-supervised learning methods has advanced the state of the art in several SSL tasks. However, adapting these methods to the RE task has been challenging due to the difficulty of data augmentation for RE. In this work, we leverage recent advances in controlled text generation to perform high-quality data augmentation for the RE task. We further introduce small but significant changes to the model architecture that allow for the generation of more training data by interpolating different data points in their latent space. These data augmentations, along with consistency training, result in very competitive results for semi-supervised relation extraction on four benchmark datasets.
[ { "created": "Fri, 16 Jun 2023 19:45:42 GMT", "version": "v1" } ]
2023-06-21
[ [ "Teru", "Komal K.", "" ] ]
Due to the semantic complexity of the relation extraction (RE) task, obtaining high-quality human-labelled data is an expensive and noisy process. To improve the sample efficiency of the models, semi-supervised learning (SSL) methods aim to leverage unlabelled data in addition to learning from limited labelled data points. Recently, strong data augmentation combined with consistency-based semi-supervised learning methods has advanced the state of the art in several SSL tasks. However, adapting these methods to the RE task has been challenging due to the difficulty of data augmentation for RE. In this work, we leverage recent advances in controlled text generation to perform high-quality data augmentation for the RE task. We further introduce small but significant changes to the model architecture that allow for the generation of more training data by interpolating different data points in their latent space. These data augmentations, along with consistency training, result in very competitive results for semi-supervised relation extraction on four benchmark datasets.
2403.10988
Li-Yuan Tsao
Li-Yuan Tsao, Yi-Chen Lo, Chia-Che Chang, Hao-Wei Chen, Roy Tseng, Chien Feng, Chun-Yi Lee
Boosting Flow-based Generative Super-Resolution Models via Learned Prior
Accepted to CVPR2024
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Flow-based super-resolution (SR) models have demonstrated astonishing capabilities in generating high-quality images. However, these methods encounter several challenges during image generation, such as grid artifacts, exploding inverses, and suboptimal results due to a fixed sampling temperature. To overcome these issues, this work introduces a conditional learned prior to the inference phase of a flow-based SR model. This prior is a latent code predicted by our proposed latent module conditioned on the low-resolution image, which is then transformed by the flow model into an SR image. Our framework is designed to seamlessly integrate with any contemporary flow-based SR model without modifying its architecture or pre-trained weights. We evaluate the effectiveness of our proposed framework through extensive experiments and ablation analyses. The proposed framework successfully addresses all the inherent issues in flow-based SR models and enhances their performance in various SR scenarios. Our code is available at: https://github.com/liyuantsao/BFSR
[ { "created": "Sat, 16 Mar 2024 18:04:12 GMT", "version": "v1" }, { "created": "Sat, 30 Mar 2024 04:56:05 GMT", "version": "v2" }, { "created": "Wed, 29 May 2024 03:12:58 GMT", "version": "v3" } ]
2024-05-30
[ [ "Tsao", "Li-Yuan", "" ], [ "Lo", "Yi-Chen", "" ], [ "Chang", "Chia-Che", "" ], [ "Chen", "Hao-Wei", "" ], [ "Tseng", "Roy", "" ], [ "Feng", "Chien", "" ], [ "Lee", "Chun-Yi", "" ] ]
Flow-based super-resolution (SR) models have demonstrated astonishing capabilities in generating high-quality images. However, these methods encounter several challenges during image generation, such as grid artifacts, exploding inverses, and suboptimal results due to a fixed sampling temperature. To overcome these issues, this work introduces a conditional learned prior to the inference phase of a flow-based SR model. This prior is a latent code predicted by our proposed latent module conditioned on the low-resolution image, which is then transformed by the flow model into an SR image. Our framework is designed to seamlessly integrate with any contemporary flow-based SR model without modifying its architecture or pre-trained weights. We evaluate the effectiveness of our proposed framework through extensive experiments and ablation analyses. The proposed framework successfully addresses all the inherent issues in flow-based SR models and enhances their performance in various SR scenarios. Our code is available at: https://github.com/liyuantsao/BFSR
2008.09607
Magnus Lie Hetland
Magnus Lie Hetland
Optimal Metric Search Is Equivalent to the Minimum Dominating Set Problem
null
null
10.1007/978-3-030-60936-8_9
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In metric search, worst-case analysis is of little value, as the search invariably degenerates to a linear scan for ill-behaved data. Consequently, much effort has been expended on more nuanced descriptions of what performance might in fact be attainable, including heuristic baselines like the AESA family, as well as statistical proxies such as intrinsic dimensionality. This paper gets to the heart of the matter with an exact characterization of the best performance actually achievable for any given data set and query. Specifically, linear-time objective-preserving reductions are established in both directions between optimal metric search and the minimum dominating set problem, whose greedy approximation becomes the equivalent of an oracle-based AESA, repeatedly selecting the pivot that eliminates the most of the remaining points. As an illustration, the AESA heuristic is adapted to downplay the role of previously eliminated points, yielding some modest performance improvements over the original, as well as its younger relative iAESA2.
[ { "created": "Fri, 21 Aug 2020 17:59:41 GMT", "version": "v1" } ]
2020-11-03
[ [ "Hetland", "Magnus Lie", "" ] ]
In metric search, worst-case analysis is of little value, as the search invariably degenerates to a linear scan for ill-behaved data. Consequently, much effort has been expended on more nuanced descriptions of what performance might in fact be attainable, including heuristic baselines like the AESA family, as well as statistical proxies such as intrinsic dimensionality. This paper gets to the heart of the matter with an exact characterization of the best performance actually achievable for any given data set and query. Specifically, linear-time objective-preserving reductions are established in both directions between optimal metric search and the minimum dominating set problem, whose greedy approximation becomes the equivalent of an oracle-based AESA, repeatedly selecting the pivot that eliminates the most of the remaining points. As an illustration, the AESA heuristic is adapted to downplay the role of previously eliminated points, yielding some modest performance improvements over the original, as well as its younger relative iAESA2.
2406.10602
Daniil Gurgurov
Daniil Gurgurov, Tanja B\"aumel, Tatiana Anikina
Multilingual Large Language Models and Curse of Multilinguality
null
null
10.48550/arXiv.2406.10602
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Multilingual Large Language Models (LLMs) have gained great popularity among Natural Language Processing (NLP) researchers and practitioners. These models, trained on huge datasets, show proficiency across various languages and demonstrate effectiveness in numerous downstream tasks. This paper navigates the landscape of multilingual LLMs, providing an introductory overview of their technical aspects. It explains underlying architectures, objective functions, pre-training data sources, and tokenization methods. This work explores the unique features of different model types: encoder-only (mBERT, XLM-R), decoder-only (XGLM, PALM, BLOOM, GPT-3), and encoder-decoder models (mT5, mBART). Additionally, it addresses one of the significant limitations of multilingual LLMs - the curse of multilinguality - and discusses current attempts to overcome it.
[ { "created": "Sat, 15 Jun 2024 11:31:39 GMT", "version": "v1" } ]
2024-07-02
[ [ "Gurgurov", "Daniil", "" ], [ "Bäumel", "Tanja", "" ], [ "Anikina", "Tatiana", "" ] ]
Multilingual Large Language Models (LLMs) have gained great popularity among Natural Language Processing (NLP) researchers and practitioners. These models, trained on huge datasets, show proficiency across various languages and demonstrate effectiveness in numerous downstream tasks. This paper navigates the landscape of multilingual LLMs, providing an introductory overview of their technical aspects. It explains underlying architectures, objective functions, pre-training data sources, and tokenization methods. This work explores the unique features of different model types: encoder-only (mBERT, XLM-R), decoder-only (XGLM, PALM, BLOOM, GPT-3), and encoder-decoder models (mT5, mBART). Additionally, it addresses one of the significant limitations of multilingual LLMs - the curse of multilinguality - and discusses current attempts to overcome it.
2407.15570
Zehra Yigit
Zehra Yigit, Ertugrul Basar
Hybrid STAR-RIS Enabled Integrated Sensing and Communication
10 pages, 7 figures
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Integrated sensing and communication (ISAC) is recognized as one of the key enabling technologies for sixth-generation (6G) wireless communication networks, facilitating diverse emerging applications and services in an energy and cost-efficient manner. This paper proposes a multi-user multi-target ISAC system to enable full-space coverage for communication and sensing tasks. The proposed system employs a hybrid simultaneous transmission and reflection reconfigurable intelligent surface (STAR-RIS) comprising active transmissive and passive reflective elements. In the proposed scheme, the passive reflective elements support communication and sensing links for nearby communication users and sensing targets, while low-power active transmissive elements are deployed to improve sensing performance and overcome the high path attenuation due to multi-hop transmission for remote targets. Moreover, to optimize the transmissive/reflective coefficients of the hybrid STAR-RIS, a semi-definite relaxation (SDR)-based algorithm is proposed. Furthermore, to evaluate sensing performance, signal-to-interference-noise ratio (SINR) and Cramer-Rao bound (CRB) metrics have been derived and investigated through extensive computer simulations.
[ { "created": "Mon, 22 Jul 2024 11:55:39 GMT", "version": "v1" } ]
2024-07-23
[ [ "Yigit", "Zehra", "" ], [ "Basar", "Ertugrul", "" ] ]
Integrated sensing and communication (ISAC) is recognized as one of the key enabling technologies for sixth-generation (6G) wireless communication networks, facilitating diverse emerging applications and services in an energy and cost-efficient manner. This paper proposes a multi-user multi-target ISAC system to enable full-space coverage for communication and sensing tasks. The proposed system employs a hybrid simultaneous transmission and reflection reconfigurable intelligent surface (STAR-RIS) comprising active transmissive and passive reflective elements. In the proposed scheme, the passive reflective elements support communication and sensing links for nearby communication users and sensing targets, while low-power active transmissive elements are deployed to improve sensing performance and overcome the high path attenuation due to multi-hop transmission for remote targets. Moreover, to optimize the transmissive/reflective coefficients of the hybrid STAR-RIS, a semi-definite relaxation (SDR)-based algorithm is proposed. Furthermore, to evaluate sensing performance, signal-to-interference-noise ratio (SINR) and Cramer-Rao bound (CRB) metrics have been derived and investigated through extensive computer simulations.
2007.15538
Filippo Gabriele Prattic\`o
F. Gabriele Prattic\`o, Fabrizio Lamberti (Politecnico di Torino)
Mixed-Reality Robotic Games: Design Guidelines for Effective Entertainment with Consumer Robots
This paper is accepted for inclusion in future issue of IEEE Consumer Electronic Magazine. Copyright IEEE 2020
IEEE Consumer Electronics Magazine, vol. 10, no. 1, pp. 6-16, Jan. 2021
10.1109/MCE.2020.2988578
null
cs.HC cs.GR cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, there has been an increasing interest in the use of robotic technology at home. A number of service robots have appeared on the market, supporting customers in the execution of everyday tasks. Roughly at the same time, consumer-level robots started to be used also as toys or gaming companions. However, the gaming possibilities provided by current off-the-shelf robotic products are generally quite limited, and this fact makes them quickly lose their attractiveness. A way that has been proven capable of boosting robotic gaming and related devices consists in creating playful experiences in which physical and digital elements are combined together using Mixed Reality technologies. However, these games differ significantly from digital-only or physical-only experiences, and new design principles are required to support developers in their creative work. This paper addresses such a need by drafting a set of guidelines which summarize developments carried out by the research community and their findings.
[ { "created": "Thu, 30 Jul 2020 15:47:17 GMT", "version": "v1" } ]
2020-12-08
[ [ "Pratticò", "F. Gabriele", "", "Politecnico di Torino" ], [ "Lamberti", "Fabrizio", "", "Politecnico di Torino" ] ]
In recent years, there has been an increasing interest in the use of robotic technology at home. A number of service robots have appeared on the market, supporting customers in the execution of everyday tasks. Roughly at the same time, consumer-level robots started to be used also as toys or gaming companions. However, the gaming possibilities provided by current off-the-shelf robotic products are generally quite limited, and this fact makes them quickly lose their attractiveness. A way that has been proven capable of boosting robotic gaming and related devices consists in creating playful experiences in which physical and digital elements are combined together using Mixed Reality technologies. However, these games differ significantly from digital-only or physical-only experiences, and new design principles are required to support developers in their creative work. This paper addresses such a need by drafting a set of guidelines which summarize developments carried out by the research community and their findings.
1603.00536
EPTCS
C\'esar A. Mu\~noz (NASA Langley Research Center), Jorge A. P\'erez (University of Groningen)
Proceedings of the Eleventh International Workshop on Developments in Computational Models
null
EPTCS 204, 2016
10.4204/EPTCS.204
null
cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This volume contains the proceedings of DCM 2015, the 11th International Workshop on Developments in Computational Models held on October 28, 2015 in Cali, Colombia. DCM 2015 was organized as a one-day satellite event of the 12th International Colloquium on Theoretical Aspects of Computing (ICTAC 2015). Several new models of computation have emerged in the last few years, and many developments of traditional computational models have been proposed with the aim of taking into account the new demands of computer systems users and the new capabilities of computation engines. A new computational model, or a new feature in a traditional one, usually is reflected in a new family of programming languages, and new paradigms of software development. The aim of the DCM workshop series is to bring together researchers who are currently developing new computational models or new features for traditional computational models, in order to foster their interaction, to provide a forum for presenting new ideas and work in progress, and to enable newcomers to learn about current activities in this area. Topics of interest include all abstract models of computation and their applications to the development of programming languages and systems.
[ { "created": "Wed, 2 Mar 2016 00:49:28 GMT", "version": "v1" } ]
2016-03-03
[ [ "Muñoz", "César A.", "", "NASA Langley Research Center" ], [ "Pérez", "Jorge A.", "", "University of Groningen" ] ]
This volume contains the proceedings of DCM 2015, the 11th International Workshop on Developments in Computational Models held on October 28, 2015 in Cali, Colombia. DCM 2015 was organized as a one-day satellite event of the 12th International Colloquium on Theoretical Aspects of Computing (ICTAC 2015). Several new models of computation have emerged in the last few years, and many developments of traditional computational models have been proposed with the aim of taking into account the new demands of computer systems users and the new capabilities of computation engines. A new computational model, or a new feature in a traditional one, usually is reflected in a new family of programming languages, and new paradigms of software development. The aim of the DCM workshop series is to bring together researchers who are currently developing new computational models or new features for traditional computational models, in order to foster their interaction, to provide a forum for presenting new ideas and work in progress, and to enable newcomers to learn about current activities in this area. Topics of interest include all abstract models of computation and their applications to the development of programming languages and systems.
2405.00846
Duy Nguyen
Duy P. Nguyen and Kai-Chieh Hsu and Wenhao Yu and Jie Tan and Jaime F. Fisac
Gameplay Filters: Safe Robot Walking through Adversarial Imagination
null
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Ensuring the safe operation of legged robots in uncertain, novel environments is crucial to their widespread adoption. Despite recent advances in safety filters that can keep arbitrary task-driven policies from incurring safety failures, existing solutions for legged robot locomotion still rely on simplified dynamics and may fail when the robot is perturbed away from predefined stable gaits. This paper presents a general approach that leverages offline game-theoretic reinforcement learning to synthesize a highly robust safety filter for high-order nonlinear dynamics. This gameplay filter then maintains runtime safety by continually simulating adversarial futures and precluding task-driven actions that would cause it to lose future games (and thereby violate safety). Validated on a 36-dimensional quadruped robot locomotion task, the gameplay safety filter exhibits inherent robustness to the sim-to-real gap without manual tuning or heuristic designs. Physical experiments demonstrate the effectiveness of the gameplay safety filter under perturbations, such as tugging and unmodeled irregular terrains, while simulation studies shed light on how to trade off computation and conservativeness without compromising safety.
[ { "created": "Wed, 1 May 2024 20:21:44 GMT", "version": "v1" }, { "created": "Fri, 31 May 2024 14:26:47 GMT", "version": "v2" } ]
2024-06-03
[ [ "Nguyen", "Duy P.", "" ], [ "Hsu", "Kai-Chieh", "" ], [ "Yu", "Wenhao", "" ], [ "Tan", "Jie", "" ], [ "Fisac", "Jaime F.", "" ] ]
Ensuring the safe operation of legged robots in uncertain, novel environments is crucial to their widespread adoption. Despite recent advances in safety filters that can keep arbitrary task-driven policies from incurring safety failures, existing solutions for legged robot locomotion still rely on simplified dynamics and may fail when the robot is perturbed away from predefined stable gaits. This paper presents a general approach that leverages offline game-theoretic reinforcement learning to synthesize a highly robust safety filter for high-order nonlinear dynamics. This gameplay filter then maintains runtime safety by continually simulating adversarial futures and precluding task-driven actions that would cause it to lose future games (and thereby violate safety). Validated on a 36-dimensional quadruped robot locomotion task, the gameplay safety filter exhibits inherent robustness to the sim-to-real gap without manual tuning or heuristic designs. Physical experiments demonstrate the effectiveness of the gameplay safety filter under perturbations, such as tugging and unmodeled irregular terrains, while simulation studies shed light on how to trade off computation and conservativeness without compromising safety.
2206.11728
Gunnar Kudrjavets
Gunnar Kudrjavets (University of Groningen), Jeff Thomas (Meta Platforms, Inc.), Aditya Kumar (Snap, Inc.), Nachiappan Nagappan (Meta Platforms, Inc.), and Ayushi Rastogi (University of Groningen)
There Ain't No Such Thing as a Free Custom Memory Allocator
4 pages. To be published in 38th IEEE International Conference on Software Maintenance and Evolution (ICSME 2022), Oct 3-7, 2022, Limassol, Cyprus
null
10.1109/ICSME55016.2022.00079
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using custom memory allocators is an efficient performance optimization technique. However, dependency on a custom allocator can introduce several maintenance-related issues. We present lessons learned from the industry and provide critical guidance for using custom memory allocators and enumerate various challenges associated with integrating them. These recommendations are based on years of experience incorporating custom allocators into different industrial software projects.
[ { "created": "Thu, 23 Jun 2022 14:26:50 GMT", "version": "v1" } ]
2022-12-23
[ [ "Kudrjavets", "Gunnar", "", "University of Groningen" ], [ "Thomas", "Jeff", "", "Meta\n Platforms, Inc." ], [ "Kumar", "Aditya", "", "Snap, Inc." ], [ "Nagappan", "Nachiappan", "", "Meta\n Platforms, Inc." ], [ "Rastogi", "Ayushi", "", "University of Groningen" ] ]
Using custom memory allocators is an efficient performance optimization technique. However, dependency on a custom allocator can introduce several maintenance-related issues. We present lessons learned from the industry and provide critical guidance for using custom memory allocators and enumerate various challenges associated with integrating them. These recommendations are based on years of experience incorporating custom allocators into different industrial software projects.
2407.10943
Hanqing Wang
Hanqing Wang, Jiahe Chen, Wensi Huang, Qingwei Ben, Tai Wang, Boyu Mi, Tao Huang, Siheng Zhao, Yilun Chen, Sizhe Yang, Peizhou Cao, Wenye Yu, Zichao Ye, Jialun Li, Junfeng Long, Zirui Wang, Huiling Wang, Ying Zhao, Zhongying Tu, Yu Qiao, Dahua Lin, Jiangmiao Pang
GRUtopia: Dream General Robots in a City at Scale
null
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent works have been exploring the scaling laws in the field of Embodied AI. Given the prohibitive costs of collecting real-world data, we believe the Simulation-to-Real (Sim2Real) paradigm is a crucial step for scaling the learning of embodied models. This paper introduces project GRUtopia, the first simulated interactive 3D society designed for various robots. It features several advancements: (a) The scene dataset, GRScenes, includes 100k interactive, finely annotated scenes, which can be freely combined into city-scale environments. In contrast to previous works that mainly focus on homes, GRScenes covers 89 diverse scene categories, bridging the gap of service-oriented environments where general robots would be initially deployed. (b) GRResidents, a Large Language Model (LLM) driven Non-Player Character (NPC) system that is responsible for social interaction, task generation, and task assignment, thus simulating social scenarios for embodied AI applications. (c) The benchmark, GRBench, supports various robots but focuses on legged robots as primary agents and poses moderately challenging tasks involving Object Loco-Navigation, Social Loco-Navigation, and Loco-Manipulation. We hope that this work can alleviate the scarcity of high-quality data in this field and provide a more comprehensive assessment of Embodied AI research. The project is available at https://github.com/OpenRobotLab/GRUtopia.
[ { "created": "Mon, 15 Jul 2024 17:40:46 GMT", "version": "v1" } ]
2024-07-16
[ [ "Wang", "Hanqing", "" ], [ "Chen", "Jiahe", "" ], [ "Huang", "Wensi", "" ], [ "Ben", "Qingwei", "" ], [ "Wang", "Tai", "" ], [ "Mi", "Boyu", "" ], [ "Huang", "Tao", "" ], [ "Zhao", "Siheng", "" ], [ "Chen", "Yilun", "" ], [ "Yang", "Sizhe", "" ], [ "Cao", "Peizhou", "" ], [ "Yu", "Wenye", "" ], [ "Ye", "Zichao", "" ], [ "Li", "Jialun", "" ], [ "Long", "Junfeng", "" ], [ "Wang", "Zirui", "" ], [ "Wang", "Huiling", "" ], [ "Zhao", "Ying", "" ], [ "Tu", "Zhongying", "" ], [ "Qiao", "Yu", "" ], [ "Lin", "Dahua", "" ], [ "Pang", "Jiangmiao", "" ] ]
Recent works have been exploring the scaling laws in the field of Embodied AI. Given the prohibitive costs of collecting real-world data, we believe the Simulation-to-Real (Sim2Real) paradigm is a crucial step for scaling the learning of embodied models. This paper introduces project GRUtopia, the first simulated interactive 3D society designed for various robots. It features several advancements: (a) The scene dataset, GRScenes, includes 100k interactive, finely annotated scenes, which can be freely combined into city-scale environments. In contrast to previous works that mainly focus on home scenarios, GRScenes covers 89 diverse scene categories, bridging the gap to the service-oriented environments where general robots would initially be deployed. (b) GRResidents, a Large Language Model (LLM) driven Non-Player Character (NPC) system that is responsible for social interaction, task generation, and task assignment, thus simulating social scenarios for embodied AI applications. (c) The benchmark, GRBench, supports various robots but focuses on legged robots as primary agents and poses moderately challenging tasks involving Object Loco-Navigation, Social Loco-Navigation, and Loco-Manipulation. We hope that this work can alleviate the scarcity of high-quality data in this field and provide a more comprehensive assessment of Embodied AI research. The project is available at https://github.com/OpenRobotLab/GRUtopia.
1903.04554
Ali Rahmati
Ali Rahmati, Seyyedali Hosseinalipour, Huaiyu Dai
Optimal Time Allocation in VANETs Advertising: A Price-based Approach using Stackelberg Game
6 pages, 4 figures
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicular ad-hoc networks (VANETs) have recently attracted a lot of attention due to their immense potential and applications. Their wide coverage and accessibility to end users make VANETs a good target for commercial companies. In this paper, we consider a scenario in which advertising companies aim to disseminate their advertisements in different areas of a city by utilizing the VANETs infrastructure. These companies compete for renting the VANETs infrastructure to spread their advertisements. We partition the city map into different blocks, and consider a manager for all the blocks who is in charge of splitting the time between interested advertising companies. Each advertising company (AdC) is charged in proportion to the allocated time. In order to find the best time splitting between AdCs, we propose a Stackelberg game scheme in which the block manager assigns the companies to the blocks and imposes the renting prices on different companies in order to maximize its own profit. Based on this, AdCs request the amount of time they desire to rent the infrastructure in order to maximize their utilities. To obtain the Stackelberg equilibrium of the game, a mixed integer nonlinear optimization problem is solved using the proposed optimal and sub-optimal algorithms. The simulation results demonstrate that the sub-optimal algorithm approaches the optimal one in performance with lower complexity.
[ { "created": "Mon, 11 Mar 2019 19:21:54 GMT", "version": "v1" } ]
2019-03-13
[ [ "Rahmati", "Ali", "" ], [ "Hosseinalipour", "Seyyedali", "" ], [ "Dai", "Huaiyu", "" ] ]
Vehicular ad-hoc networks (VANETs) have recently attracted a lot of attention due to their immense potential and applications. Their wide coverage and accessibility to end users make VANETs a good target for commercial companies. In this paper, we consider a scenario in which advertising companies aim to disseminate their advertisements in different areas of a city by utilizing the VANETs infrastructure. These companies compete for renting the VANETs infrastructure to spread their advertisements. We partition the city map into different blocks, and consider a manager for all the blocks who is in charge of splitting the time between interested advertising companies. Each advertising company (AdC) is charged in proportion to the allocated time. In order to find the best time splitting between AdCs, we propose a Stackelberg game scheme in which the block manager assigns the companies to the blocks and imposes the renting prices on different companies in order to maximize its own profit. Based on this, AdCs request the amount of time they desire to rent the infrastructure in order to maximize their utilities. To obtain the Stackelberg equilibrium of the game, a mixed integer nonlinear optimization problem is solved using the proposed optimal and sub-optimal algorithms. The simulation results demonstrate that the sub-optimal algorithm approaches the optimal one in performance with lower complexity.
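The leader-follower pricing loop described in this abstract can be sketched numerically. In the toy below, the logarithmic advertiser utility v*ln(1+t) - p*t, the price grid, and the capacity value are illustrative assumptions, not the paper's mixed-integer formulation:

```python
def best_response(v, p):
    # Follower (AdC): maximize v*ln(1+t) - p*t over rental time t,
    # which gives t* = v/p - 1, clipped at zero.
    return max(v / p - 1.0, 0.0)

def leader_revenue(p, values, capacity):
    # Leader (block manager): revenue at price p, given that the total
    # rentable time in a block is bounded by `capacity`.
    demand = sum(best_response(v, p) for v in values)
    return p * min(demand, capacity)

def stackelberg_price(values, capacity, grid=None):
    # Leader picks the grid price maximizing its revenue while
    # anticipating the followers' best responses (backward induction).
    grid = grid or [0.05 * k for k in range(1, 200)]
    return max(grid, key=lambda p: leader_revenue(p, values, capacity))
```

With two advertisers valuing exposure at 4 and 6 and a block of time capacity 2, the grid search recovers the market-clearing price 2.5, where total demand exactly fills the capacity.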
1703.08440
Kojo Sarfo Gyamfi
Kojo Sarfo Gyamfi, James Brusey and Andrew Hunt
K-Means Clustering using Tabu Search with Quantized Means
World Conference on Engineering and Computer Science
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Tabu Search (TS) metaheuristic has been proposed for K-Means clustering as an alternative to Lloyd's algorithm, which, for all its ease of implementation and fast runtime, has the major drawback of being trapped at local optima. While the TS approach can yield superior performance, it involves a high computational complexity. Moreover, the difficulty of parameter selection in the existing TS approach does not make it any more attractive. This paper presents an alternative, low-complexity formulation of the TS optimization procedure for K-Means clustering. This approach does not require many parameter settings. We initially constrain the centers to points in the dataset. We then aim at evolving these centers using a unique neighborhood structure that makes use of gradient information of the objective function. This results in an efficient exploration of the search space, after which the means are refined. The proposed scheme is implemented in MATLAB and tested on four real-world datasets, and it achieves a significant improvement over the existing TS approach in terms of the intra-cluster sum of squares and computational time.
[ { "created": "Fri, 24 Mar 2017 14:59:06 GMT", "version": "v1" } ]
2017-03-27
[ [ "Gyamfi", "Kojo Sarfo", "" ], [ "Brusey", "James", "" ], [ "Hunt", "Andrew", "" ] ]
The Tabu Search (TS) metaheuristic has been proposed for K-Means clustering as an alternative to Lloyd's algorithm, which, for all its ease of implementation and fast runtime, has the major drawback of being trapped at local optima. While the TS approach can yield superior performance, it involves a high computational complexity. Moreover, the difficulty of parameter selection in the existing TS approach does not make it any more attractive. This paper presents an alternative, low-complexity formulation of the TS optimization procedure for K-Means clustering. This approach does not require many parameter settings. We initially constrain the centers to points in the dataset. We then aim at evolving these centers using a unique neighborhood structure that makes use of gradient information of the objective function. This results in an efficient exploration of the search space, after which the means are refined. The proposed scheme is implemented in MATLAB and tested on four real-world datasets, and it achieves a significant improvement over the existing TS approach in terms of the intra-cluster sum of squares and computational time.
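The abstract's two key ingredients, centers quantized to data points and a tabu list over recently visited center sets, can be illustrated in a few lines. This is a hedged 1-D sketch with a random swap neighborhood; the paper's gradient-informed neighborhood and final refinement step are not reproduced here:

```python
import random

def sse(data, centers):
    # intra-cluster sum of squares for 1-D data
    return sum(min((x - c) ** 2 for c in centers) for x in data)

def tabu_kmeans(data, k, iters=200, tabu_size=10, seed=0):
    """Toy Tabu-Search K-Means: centers are quantized to dataset points,
    and a short tabu list forbids revisiting recent center sets."""
    rng = random.Random(seed)
    current = tuple(sorted(rng.sample(data, k)))
    best, best_cost = current, sse(data, current)
    tabu = [current]
    for _ in range(iters):
        # neighborhood move: swap one center for a random data point
        cand = list(current)
        cand[rng.randrange(k)] = rng.choice(data)
        cand = tuple(sorted(set(cand)))
        if len(cand) < k or cand in tabu:
            continue
        current = cand                 # accept even worsening moves
        tabu.append(cand)
        if len(tabu) > tabu_size:
            tabu.pop(0)                # expire the oldest tabu entry
        cost = sse(data, cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

On two well-separated 1-D clusters the search quickly settles on one center per cluster, which is the behavior the tabu list is meant to protect against losing.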
2205.05192
Cynthia Liem
Han-Yin Huang and Cynthia C. S. Liem
Social Inclusion in Curated Contexts: Insights from Museum Practices
in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22), June 21-24, 2022, Seoul, Republic of Korea
null
10.1145/3531146.3533095
null
cs.LG cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Artificial intelligence literature suggests that minority and fragile communities in society can be negatively impacted by machine learning algorithms due to inherent biases in the design process, which lead to socially exclusive decisions and policies. Faced with similar challenges in dealing with an increasingly diversified audience, the museum sector has seen changes in theory and practice, particularly in the areas of representation and meaning-making. While rarity and grandeur used to be at the centre stage of early museum practices, folk life and museums' relationships with the diverse communities they serve have become a widely integrated part of contemporary practices. These changes address issues of diversity and accessibility in order to offer more socially inclusive services. Drawing on these changes and reflecting back on the AI world, we argue that the museum experience provides useful lessons for building AI with socially inclusive approaches, especially in situations in which both a collection and access to it need to be curated or filtered, as frequently happens in search engines, recommender systems and digital libraries. We highlight three principles: (1) Instead of upholding the value of neutrality, practitioners should be aware of the influences of their own backgrounds and those of others on their work. By not claiming to be neutral but practising cultural humility, the chances of addressing potential biases can be increased. (2) There should be room for situational interpretation beyond the stages of data collection and machine learning. Before applying models and predictions, the contexts in which relevant parties exist should be taken into account. (3) Community participation serves the needs of communities and has the added benefit of bringing practitioners and communities together.
[ { "created": "Tue, 10 May 2022 22:22:12 GMT", "version": "v1" } ]
2022-05-12
[ [ "Huang", "Han-Yin", "" ], [ "Liem", "Cynthia C. S.", "" ] ]
Artificial intelligence literature suggests that minority and fragile communities in society can be negatively impacted by machine learning algorithms due to inherent biases in the design process, which lead to socially exclusive decisions and policies. Faced with similar challenges in dealing with an increasingly diversified audience, the museum sector has seen changes in theory and practice, particularly in the areas of representation and meaning-making. While rarity and grandeur used to be at the centre stage of early museum practices, folk life and museums' relationships with the diverse communities they serve have become a widely integrated part of contemporary practices. These changes address issues of diversity and accessibility in order to offer more socially inclusive services. Drawing on these changes and reflecting back on the AI world, we argue that the museum experience provides useful lessons for building AI with socially inclusive approaches, especially in situations in which both a collection and access to it need to be curated or filtered, as frequently happens in search engines, recommender systems and digital libraries. We highlight three principles: (1) Instead of upholding the value of neutrality, practitioners should be aware of the influences of their own backgrounds and those of others on their work. By not claiming to be neutral but practising cultural humility, the chances of addressing potential biases can be increased. (2) There should be room for situational interpretation beyond the stages of data collection and machine learning. Before applying models and predictions, the contexts in which relevant parties exist should be taken into account. (3) Community participation serves the needs of communities and has the added benefit of bringing practitioners and communities together.
2107.04020
Xuejing Lei
Xuejing Lei, Ganning Zhao, Kaitai Zhang, C.-C. Jay Kuo
TGHop: An Explainable, Efficient and Lightweight Method for Texture Generation
arXiv admin note: substantial text overlap with arXiv:2009.01376
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An explainable, efficient and lightweight method for texture generation, called TGHop (an acronym for Texture Generation PixelHop), is proposed in this work. Although synthesis of visually pleasant texture can be achieved by deep neural networks, the associated models are large in size, difficult to explain in theory, and computationally expensive to train. In contrast, TGHop is small in model size, mathematically transparent, efficient in training and inference, and able to generate high-quality texture. Given an exemplar texture, TGHop first crops many sample patches out of it to form a collection of sample patches called the source. Then, it analyzes pixel statistics of samples from the source and obtains a sequence of fine-to-coarse subspaces for these patches by using the PixelHop++ framework. To generate texture patches with TGHop, we begin with the coarsest subspace, which is called the core, and attempt to generate samples in each subspace by following the distribution of real samples. Finally, texture patches are stitched to form texture images of a large size. It is demonstrated by experimental results that TGHop can generate texture images of superior quality with a small model size and at a fast speed.
[ { "created": "Thu, 8 Jul 2021 17:56:58 GMT", "version": "v1" } ]
2021-07-09
[ [ "Lei", "Xuejing", "" ], [ "Zhao", "Ganning", "" ], [ "Zhang", "Kaitai", "" ], [ "Kuo", "C. -C. Jay", "" ] ]
An explainable, efficient and lightweight method for texture generation, called TGHop (an acronym for Texture Generation PixelHop), is proposed in this work. Although synthesis of visually pleasant texture can be achieved by deep neural networks, the associated models are large in size, difficult to explain in theory, and computationally expensive to train. In contrast, TGHop is small in model size, mathematically transparent, efficient in training and inference, and able to generate high-quality texture. Given an exemplar texture, TGHop first crops many sample patches out of it to form a collection of sample patches called the source. Then, it analyzes pixel statistics of samples from the source and obtains a sequence of fine-to-coarse subspaces for these patches by using the PixelHop++ framework. To generate texture patches with TGHop, we begin with the coarsest subspace, which is called the core, and attempt to generate samples in each subspace by following the distribution of real samples. Finally, texture patches are stitched to form texture images of a large size. It is demonstrated by experimental results that TGHop can generate texture images of superior quality with a small model size and at a fast speed.
2011.13056
Huseyin Birkan Yilmaz
Mehmet Sukru Kuran and H. Birkan Yilmaz and Ilker Demirkol and Nariman Farsad and Andrea Goldsmith
A Survey on Modulation Techniques in Molecular Communication via Diffusion
Preprint of the accepted manuscript for publication in IEEE Surveys and Tutorials
null
null
null
cs.ET cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This survey paper focuses on modulation aspects of molecular communication, an emerging field focused on building biologically-inspired systems that embed data within chemical signals. The primary challenges in designing these systems are how to encode and modulate information onto chemical signals, and how to design a receiver that can detect and decode the information from the corrupted chemical signal observed at the destination. In this paper, we focus on modulation design for molecular communication via diffusion systems. In these systems, chemical signals are transported using diffusion, possibly assisted by flow, from the transmitter to the receiver. This tutorial presents recent advancements in modulation and demodulation schemes for molecular communication via diffusion. We compare five different modulation types: concentration-based, type-based, timing-based, spatial, and higher-order modulation techniques. The end-to-end system designs for each modulation scheme are presented. In addition, the key metrics used in the literature to evaluate the performance of these techniques are also presented. Finally, we provide a numerical bit error rate comparison of prominent modulation techniques using analytical models. We close the tutorial with a discussion of key open issues and future research directions for design of molecular communication via diffusion systems.
[ { "created": "Wed, 25 Nov 2020 23:00:50 GMT", "version": "v1" }, { "created": "Mon, 28 Dec 2020 09:32:37 GMT", "version": "v2" } ]
2020-12-29
[ [ "Kuran", "Mehmet Sukru", "" ], [ "Yilmaz", "H. Birkan", "" ], [ "Demirkol", "Ilker", "" ], [ "Farsad", "Nariman", "" ], [ "Goldsmith", "Andrea", "" ] ]
This survey paper focuses on modulation aspects of molecular communication, an emerging field focused on building biologically-inspired systems that embed data within chemical signals. The primary challenges in designing these systems are how to encode and modulate information onto chemical signals, and how to design a receiver that can detect and decode the information from the corrupted chemical signal observed at the destination. In this paper, we focus on modulation design for molecular communication via diffusion systems. In these systems, chemical signals are transported using diffusion, possibly assisted by flow, from the transmitter to the receiver. This tutorial presents recent advancements in modulation and demodulation schemes for molecular communication via diffusion. We compare five different modulation types: concentration-based, type-based, timing-based, spatial, and higher-order modulation techniques. The end-to-end system designs for each modulation scheme are presented. In addition, the key metrics used in the literature to evaluate the performance of these techniques are also presented. Finally, we provide a numerical bit error rate comparison of prominent modulation techniques using analytical models. We close the tutorial with a discussion of key open issues and future research directions for design of molecular communication via diffusion systems.
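As a concrete instance of the concentration-based family surveyed here, the toy below simulates noiseless on-off keying over a diffusion channel described by a few arrival-fraction taps. The tap values, molecule budget, and detection threshold are illustrative assumptions; real MCvD channels also add counting noise:

```python
def received(bits, taps, molecules=100):
    """Expected molecule counts per slot for on-off concentration
    modulation: emissions convolved with the diffusion channel taps."""
    out = []
    for k in range(len(bits)):
        out.append(sum(molecules * bits[k - j] * taps[j]
                       for j in range(len(taps)) if k - j >= 0))
    return out

def detect(counts, threshold):
    # receiver: decide bit 1 when the slot count exceeds the threshold
    return [int(c >= threshold) for c in counts]
```

The residual taps model inter-symbol interference: molecules released for one bit keep arriving in later slots, which is why threshold choice (and the demodulation schemes the survey compares) matters.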
1105.4540
Matt Malloy
Matthew Malloy and Robert Nowak
On the Limits of Sequential Testing in High Dimensions
Asilomar 2011
null
null
null
cs.IT math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents results pertaining to sequential methods for support recovery of sparse signals in noise. Specifically, we show that any sequential measurement procedure fails provided the average number of measurements per dimension grows more slowly than log s / D(f0||f1), where s is the level of sparsity and D(f0||f1) is the Kullback-Leibler divergence between the underlying distributions. For comparison, we show that any non-sequential procedure fails provided the number of measurements grows at a rate less than log n / D(f1||f0), where n is the total dimension of the problem. Lastly, we show that a simple procedure termed sequential thresholding guarantees exact support recovery provided the average number of measurements per dimension grows faster than (log s + log log n) / D(f0||f1), a mere additive factor more than the lower bound.
[ { "created": "Mon, 23 May 2011 15:58:03 GMT", "version": "v1" }, { "created": "Mon, 25 Jul 2011 20:13:41 GMT", "version": "v2" }, { "created": "Tue, 18 Oct 2011 16:12:53 GMT", "version": "v3" } ]
2011-10-19
[ [ "Malloy", "Matthew", "" ], [ "Nowak", "Robert", "" ] ]
This paper presents results pertaining to sequential methods for support recovery of sparse signals in noise. Specifically, we show that any sequential measurement procedure fails provided the average number of measurements per dimension grows more slowly than log s / D(f0||f1), where s is the level of sparsity and D(f0||f1) is the Kullback-Leibler divergence between the underlying distributions. For comparison, we show that any non-sequential procedure fails provided the number of measurements grows at a rate less than log n / D(f1||f0), where n is the total dimension of the problem. Lastly, we show that a simple procedure termed sequential thresholding guarantees exact support recovery provided the average number of measurements per dimension grows faster than (log s + log log n) / D(f0||f1), a mere additive factor more than the lower bound.
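The three rates in the abstract are easy to compare numerically. A sketch assuming unit-variance Gaussian hypotheses, for which D(f0||f1) = D(f1||f0) = (mu0 - mu1)^2 / 2:

```python
import math

def gaussian_kl(mu0, mu1, sigma=1.0):
    # D( N(mu0, sigma^2) || N(mu1, sigma^2) ) = (mu0 - mu1)^2 / (2 sigma^2)
    return (mu0 - mu1) ** 2 / (2 * sigma ** 2)

def sequential_lower_bound(s, d):
    # any sequential procedure fails below log(s) / D(f0||f1) per dimension
    return math.log(s) / d

def thresholding_bound(s, n, d):
    # sequential thresholding succeeds above (log s + log log n) / D(f0||f1)
    return (math.log(s) + math.log(math.log(n))) / d

def nonsequential_bound(n, d):
    # any non-sequential procedure fails below log(n) / D(f1||f0)
    return math.log(n) / d

n, s = 10_000, 10                  # problem dimension and sparsity level
d = gaussian_kl(0.0, 1.0)          # = 0.5 for this symmetric Gaussian pair
lo = sequential_lower_bound(s, d)  # ~ 4.6 measurements per dimension
hi = thresholding_bound(s, n, d)   # ~ 9.0
ns = nonsequential_bound(n, d)     # ~ 18.4
```

The gap between `lo`/`hi` and `ns` is the log n versus log s + log log n advantage of adaptivity that the abstract describes.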
1905.00919
Mohamed Baza
Ahmed Shafee, Mohamed Baza, Douglas A. Talbert, Mostafa M. Fouda, Mahmoud Nabil, Mohamed Mahmoud
Mimic Learning to Generate a Shareable Network Intrusion Detection Model
null
null
null
null
cs.CR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purveyors of malicious network attacks continue to increase the complexity and the sophistication of their techniques, and their ability to evade detection continues to improve as well. Hence, intrusion detection systems must also evolve to meet these increasingly challenging threats. Machine learning is often used to support this needed improvement. However, training a good prediction model can require a large set of labelled training data. Such datasets are difficult to obtain because privacy concerns prevent the majority of intrusion detection agencies from sharing their sensitive data. In this paper, we propose the use of mimic learning to enable the transfer of intrusion detection knowledge through a teacher model trained on private data to a student model. This student model provides a means of publicly sharing knowledge extracted from private data without sharing the data itself. Our results confirm that the proposed scheme can produce a student intrusion detection model that mimics the teacher model without requiring access to the original dataset.
[ { "created": "Thu, 2 May 2019 18:14:24 GMT", "version": "v1" }, { "created": "Sat, 5 Oct 2019 17:39:51 GMT", "version": "v2" }, { "created": "Tue, 18 Feb 2020 20:14:47 GMT", "version": "v3" } ]
2020-02-20
[ [ "Shafee", "Ahmed", "" ], [ "Baza", "Mohamed", "" ], [ "Talbert", "Douglas A.", "" ], [ "Fouda", "Mostafa M.", "" ], [ "Nabil", "Mahmoud", "" ], [ "Mahmoud", "Mohamed", "" ] ]
Purveyors of malicious network attacks continue to increase the complexity and the sophistication of their techniques, and their ability to evade detection continues to improve as well. Hence, intrusion detection systems must also evolve to meet these increasingly challenging threats. Machine learning is often used to support this needed improvement. However, training a good prediction model can require a large set of labelled training data. Such datasets are difficult to obtain because privacy concerns prevent the majority of intrusion detection agencies from sharing their sensitive data. In this paper, we propose the use of mimic learning to enable the transfer of intrusion detection knowledge through a teacher model trained on private data to a student model. This student model provides a means of publicly sharing knowledge extracted from private data without sharing the data itself. Our results confirm that the proposed scheme can produce a student intrusion detection model that mimics the teacher model without requiring access to the original dataset.
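The teacher-student transfer in the abstract can be miniaturized to a one-feature detector. All data points and the threshold learner below are hypothetical stand-ins for the private traffic features and the real models:

```python
def train_threshold(xs, ys):
    # pick the decision threshold that best separates the labeled points
    best_t, best_acc = None, -1.0
    for t in sorted(xs):
        acc = sum((x >= t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# "private" labeled traffic scores (hypothetical 1-D features)
private_x = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
private_y = [0, 0, 0, 1, 1, 1]
teacher_t = train_threshold(private_x, private_y)

# public unlabeled data is labeled by the teacher's predictions
public_x = [0.15, 0.4, 0.6, 0.85]
public_y = [int(x >= teacher_t) for x in public_x]

# the shareable student is trained only on teacher-labeled public data
student_t = train_threshold(public_x, public_y)
```

Only `student_t` (and the public data) would ever be shared; the private dataset never leaves the teacher's owner, which is the point of the mimic-learning scheme.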
1610.06721
Cosimo Anglano
Cosimo Anglano, Massimo Canonico, Marco Guazzone
Forensic Analysis of the ChatSecure Instant Messaging Application on Android Smartphones
null
Digital Investigation, Volume 19, December 2016, Pages 44-59
10.1016/j.diin.2016.10.001
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the forensic analysis of the artifacts generated on Android smartphones by ChatSecure, a secure Instant Messaging application that provides strong encryption for transmitted and locally-stored data to ensure the privacy of its users. We show that ChatSecure stores local copies of both exchanged messages and files into two distinct, AES-256 encrypted databases, and we devise a technique able to decrypt them when the secret passphrase, chosen by the user as the initial step of the encryption process, is known. Furthermore, we show how this passphrase can be identified and extracted from the volatile memory of the device, where it persists for the entire execution of ChatSecure after having been entered by the user, thus allowing one to carry out decryption even if the passphrase is not revealed by the user. Finally, we discuss how to analyze and correlate the data stored in the databases used by ChatSecure to identify the IM accounts used by the user and his/her buddies to communicate, as well as to reconstruct the chronology and contents of the messages and files that have been exchanged among them. For our study we devise and use an experimental methodology, based on the use of emulated devices, that provides a very high degree of reproducibility of the results, and we validate the results it yields against those obtained from real smartphones.
[ { "created": "Fri, 21 Oct 2016 09:34:33 GMT", "version": "v1" } ]
2016-10-24
[ [ "Anglano", "Cosimo", "" ], [ "Canonico", "Massimo", "" ], [ "Guazzone", "Marco", "" ] ]
We present the forensic analysis of the artifacts generated on Android smartphones by ChatSecure, a secure Instant Messaging application that provides strong encryption for transmitted and locally-stored data to ensure the privacy of its users. We show that ChatSecure stores local copies of both exchanged messages and files into two distinct, AES-256 encrypted databases, and we devise a technique able to decrypt them when the secret passphrase, chosen by the user as the initial step of the encryption process, is known. Furthermore, we show how this passphrase can be identified and extracted from the volatile memory of the device, where it persists for the entire execution of ChatSecure after having been entered by the user, thus allowing one to carry out decryption even if the passphrase is not revealed by the user. Finally, we discuss how to analyze and correlate the data stored in the databases used by ChatSecure to identify the IM accounts used by the user and his/her buddies to communicate, as well as to reconstruct the chronology and contents of the messages and files that have been exchanged among them. For our study we devise and use an experimental methodology, based on the use of emulated devices, that provides a very high degree of reproducibility of the results, and we validate the results it yields against those obtained from real smartphones.
2401.11107
Zhen Chen
Zhen Chen, Jingping Liu, Deqing Yang, Yanghua Xiao, Huimin Xu, Zongyu Wang, Rui Xie and Yunsen Xian
Exploiting Duality in Open Information Extraction with Predicate Prompt
null
null
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Open information extraction (OpenIE) aims to extract the schema-free triplets in the form of (\emph{subject}, \emph{predicate}, \emph{object}) from a given sentence. Compared with general information extraction (IE), OpenIE poses more challenges for the IE models, especially when multiple complicated triplets exist in a sentence. To extract these complicated triplets more effectively, in this paper we propose a novel generative OpenIE model, namely \emph{DualOIE}, which achieves a dual task at the same time as extracting triplets from the sentence, i.e., converting the triplets back into the sentence. Such a dual task encourages the model to correctly recognize the structure of the given sentence and thus is helpful for extracting all potential triplets from the sentence. Specifically, DualOIE extracts the triplets in two steps: 1) first extracting a sequence of all potential predicates, 2) then using the predicate sequence as a prompt to induce the generation of triplets. Our experiments on two benchmarks and our dataset constructed from Meituan demonstrate that DualOIE achieves the best performance among the state-of-the-art baselines. Furthermore, the online A/B test on the Meituan platform shows that improvements of 0.93\% in QV-CTR and 0.56\% in UV-CTR have been obtained when the triplets extracted by DualOIE were leveraged in Meituan's search system.
[ { "created": "Sat, 20 Jan 2024 03:55:17 GMT", "version": "v1" } ]
2024-01-23
[ [ "Chen", "Zhen", "" ], [ "Liu", "Jingping", "" ], [ "Yang", "Deqing", "" ], [ "Xiao", "Yanghua", "" ], [ "Xu", "Huimin", "" ], [ "Wang", "Zongyu", "" ], [ "Xie", "Rui", "" ], [ "Xian", "Yunsen", "" ] ]
Open information extraction (OpenIE) aims to extract the schema-free triplets in the form of (\emph{subject}, \emph{predicate}, \emph{object}) from a given sentence. Compared with general information extraction (IE), OpenIE poses more challenges for the IE models, especially when multiple complicated triplets exist in a sentence. To extract these complicated triplets more effectively, in this paper we propose a novel generative OpenIE model, namely \emph{DualOIE}, which achieves a dual task at the same time as extracting triplets from the sentence, i.e., converting the triplets back into the sentence. Such a dual task encourages the model to correctly recognize the structure of the given sentence and thus is helpful for extracting all potential triplets from the sentence. Specifically, DualOIE extracts the triplets in two steps: 1) first extracting a sequence of all potential predicates, 2) then using the predicate sequence as a prompt to induce the generation of triplets. Our experiments on two benchmarks and our dataset constructed from Meituan demonstrate that DualOIE achieves the best performance among the state-of-the-art baselines. Furthermore, the online A/B test on the Meituan platform shows that improvements of 0.93\% in QV-CTR and 0.56\% in UV-CTR have been obtained when the triplets extracted by DualOIE were leveraged in Meituan's search system.
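The two-step recipe, predicates first and then triplets conditioned on them, can be mimicked with a rule-based stand-in. The predicate lexicon and the split-around-the-predicate decoder below are toy assumptions, not DualOIE's generative model:

```python
PREDICATES = {"founded", "acquired", "located"}  # hypothetical predicate lexicon

def extract_predicates(sentence):
    # step 1: extract the sequence of all potential predicates
    return [w for w in sentence.lower().split() if w in PREDICATES]

def extract_triplets(sentence, predicates):
    # step 2: use the predicate sequence as a "prompt" to induce triplets,
    # here by splitting the sentence around each predicate occurrence
    words = sentence.lower().split()
    triplets = []
    for p in predicates:
        i = words.index(p)
        triplets.append((" ".join(words[:i]), p, " ".join(words[i + 1:])))
    return triplets
```

Even this crude decoder shows why the predicate sequence is a useful prompt: once the predicates are fixed, one triplet per predicate falls out of the sentence structure.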
2404.03555
Botond Barta
Botond Barta, Dorina Lakatos, Attila Nagy, Mil\'an Konor Nyist, Judit \'Acs
From News to Summaries: Building a Hungarian Corpus for Extractive and Abstractive Summarization
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Training summarization models requires substantial amounts of training data. However, for less-resourced languages such as Hungarian, openly available models and datasets are notably scarce. To address this gap, our paper introduces HunSum-2, an open-source Hungarian corpus suitable for training abstractive and extractive summarization models. The dataset is assembled from segments of the Common Crawl corpus, which undergo thorough cleaning, preprocessing and deduplication. In addition to abstractive summarization, we generate sentence-level labels for extractive summarization using sentence similarity. We train baseline models for both extractive and abstractive summarization using the collected dataset. To demonstrate the effectiveness of the trained models, we perform both quantitative and qualitative evaluation. Our dataset, models and code are publicly available, encouraging replication, further research, and real-world applications across various domains.
[ { "created": "Thu, 4 Apr 2024 16:07:06 GMT", "version": "v1" }, { "created": "Fri, 12 Apr 2024 08:05:13 GMT", "version": "v2" } ]
2024-04-15
[ [ "Barta", "Botond", "" ], [ "Lakatos", "Dorina", "" ], [ "Nagy", "Attila", "" ], [ "Nyist", "Milán Konor", "" ], [ "Ács", "Judit", "" ] ]
Training summarization models requires substantial amounts of training data. However, for less-resourced languages such as Hungarian, openly available models and datasets are notably scarce. To address this gap, our paper introduces HunSum-2, an open-source Hungarian corpus suitable for training abstractive and extractive summarization models. The dataset is assembled from segments of the Common Crawl corpus, which undergo thorough cleaning, preprocessing and deduplication. In addition to abstractive summarization, we generate sentence-level labels for extractive summarization using sentence similarity. We train baseline models for both extractive and abstractive summarization using the collected dataset. To demonstrate the effectiveness of the trained models, we perform both quantitative and qualitative evaluation. Our dataset, models and code are publicly available, encouraging replication, further research, and real-world applications across various domains.
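The sentence-level extractive labels mentioned above can be produced with any sentence-similarity function. A sketch using word-overlap (Jaccard) similarity; the 0.3 threshold is an assumed hyperparameter, not the paper's setting:

```python
def jaccard(a, b):
    # word-overlap similarity between two sentences
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def extractive_labels(sentences, summary_sents, threshold=0.3):
    """Label each article sentence 1 if it is similar enough to some
    summary sentence (a simple stand-in for the paper's similarity)."""
    return [
        int(max(jaccard(s, t) for t in summary_sents) >= threshold)
        for s in sentences
    ]
```

Sentences flagged 1 form the oracle extractive summary, which is how an abstractive dataset can also supervise extractive models.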
2406.13012
Chi-Hua Wang
Joshua Ward, Chi-Hua Wang, Guang Cheng
Data Plagiarism Index: Characterizing the Privacy Risk of Data-Copying in Tabular Generative Models
null
null
null
null
cs.LG cs.CR stat.ML
http://creativecommons.org/licenses/by/4.0/
The promise of tabular generative models is to produce realistic synthetic data that can be shared and safely used without dangerous leakage of information from the training set. In evaluating these models, a variety of methods have been proposed to measure the tendency to copy data from the training dataset when generating a sample. However, these methods suffer from either not considering data-copying from a privacy-threat perspective, not being motivated by recent results in the data-copying literature, or being difficult to make compatible with the high-dimensional, mixed-type nature of tabular data. This paper proposes a new similarity metric and Membership Inference Attack called Data Plagiarism Index (DPI) for tabular data. We show that DPI evaluates a new intuitive definition of data-copying and characterizes the corresponding privacy risk. We show that the data-copying identified by DPI poses both privacy and fairness threats to common, high performing architectures; underscoring the necessity for more sophisticated generative modeling techniques to mitigate this issue.
[ { "created": "Tue, 18 Jun 2024 19:05:24 GMT", "version": "v1" } ]
2024-06-21
[ [ "Ward", "Joshua", "" ], [ "Wang", "Chi-Hua", "" ], [ "Cheng", "Guang", "" ] ]
The promise of tabular generative models is to produce realistic synthetic data that can be shared and safely used without dangerous leakage of information from the training set. In evaluating these models, a variety of methods have been proposed to measure the tendency to copy data from the training dataset when generating a sample. However, these methods suffer from either not considering data-copying from a privacy-threat perspective, not being motivated by recent results in the data-copying literature, or being difficult to make compatible with the high-dimensional, mixed-type nature of tabular data. This paper proposes a new similarity metric and Membership Inference Attack called Data Plagiarism Index (DPI) for tabular data. We show that DPI evaluates a new intuitive definition of data-copying and characterizes the corresponding privacy risk. We show that the data-copying identified by DPI poses both privacy and fairness threats to common, high performing architectures; underscoring the necessity for more sophisticated generative modeling techniques to mitigate this issue.
2102.01723
Amir Yazdanbakhsh
Amir Yazdanbakhsh, Christof Angermueller, Berkin Akin, Yanqi Zhou, Albin Jones, Milad Hashemi, Kevin Swersky, Satrajit Chatterjee, Ravi Narayanaswami, James Laudon
Apollo: Transferable Architecture Exploration
10 pages, 5 figures, Accepted to Workshop on ML for Systems at the 34th Conference on Neural Information Processing Systems (NeurIPS 2020)
null
null
null
cs.LG cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The looming end of Moore's Law and ascending use of deep learning drives the design of custom accelerators that are optimized for specific neural architectures. Architecture exploration for such accelerators forms a challenging constrained optimization problem over a complex, high-dimensional, and structured input space with a costly-to-evaluate objective function. Existing approaches for accelerator design are sample-inefficient and do not transfer knowledge between related optimization tasks with different design constraints, such as area and/or latency budget, or neural architecture configurations. In this work, we propose a transferable architecture exploration framework, dubbed Apollo, that leverages recent advances in black-box function optimization for sample-efficient accelerator design. We use this framework to optimize accelerator configurations of a diverse set of neural architectures with alternative design constraints. We show that our framework finds high reward design configurations (up to 24.6% speedup) more sample-efficiently than a baseline black-box optimization approach. We further show that by transferring knowledge between target architectures with different design constraints, Apollo is able to find optimal configurations faster and often with better objective value (up to 25% improvements). This encouraging outcome portrays a promising path forward to facilitate generating higher quality accelerators.
[ { "created": "Tue, 2 Feb 2021 19:36:02 GMT", "version": "v1" } ]
2021-02-04
[ [ "Yazdanbakhsh", "Amir", "" ], [ "Angermueller", "Christof", "" ], [ "Akin", "Berkin", "" ], [ "Zhou", "Yanqi", "" ], [ "Jones", "Albin", "" ], [ "Hashemi", "Milad", "" ], [ "Swersky", "Kevin", "" ], [ "Chatterjee", "Satrajit", "" ], [ "Narayanaswami", "Ravi", "" ], [ "Laudon", "James", "" ] ]
The looming end of Moore's Law and ascending use of deep learning drives the design of custom accelerators that are optimized for specific neural architectures. Architecture exploration for such accelerators forms a challenging constrained optimization problem over a complex, high-dimensional, and structured input space with a costly-to-evaluate objective function. Existing approaches for accelerator design are sample-inefficient and do not transfer knowledge between related optimization tasks with different design constraints, such as area and/or latency budget, or neural architecture configurations. In this work, we propose a transferable architecture exploration framework, dubbed Apollo, that leverages recent advances in black-box function optimization for sample-efficient accelerator design. We use this framework to optimize accelerator configurations of a diverse set of neural architectures with alternative design constraints. We show that our framework finds high reward design configurations (up to 24.6% speedup) more sample-efficiently than a baseline black-box optimization approach. We further show that by transferring knowledge between target architectures with different design constraints, Apollo is able to find optimal configurations faster and often with better objective value (up to 25% improvements). This encouraging outcome portrays a promising path forward to facilitate generating higher quality accelerators.
1406.1557
EPTCS
Harsh Raju Chamarthi (Northeastern University), Peter C. Dillinger (Northeastern University), Panagiotis Manolios (Northeastern University)
Data Definitions in the ACL2 Sedan
In Proceedings ACL2 2014, arXiv:1406.1238
EPTCS 152, 2014, pp. 27-48
10.4204/EPTCS.152.3
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a data definition framework that enables the convenient specification of data types in ACL2s, the ACL2 Sedan. Our primary motivation for developing the data definition framework was pedagogical. We were teaching undergraduate students how to reason about programs using ACL2s and wanted to provide them with an effective method for defining, testing, and reasoning about data types in the context of an untyped theorem prover. Our framework is now routinely used not only for pedagogical purposes, but also by advanced users. Our framework concisely supports common data definition patterns, e.g. list types, map types, and record types. It also provides support for polymorphic functions. A distinguishing feature of our approach is that we maintain both a predicative and an enumerative characterization of data definitions. In this paper we present our data definition framework via a sequence of examples. We give a complete characterization in terms of tau rules of the inclusion/exclusion relations a data definition induces, under suitable restrictions. The data definition framework is a key component of counterexample generation support in ACL2s, but can be independently used in ACL2, and is available as a community book.
[ { "created": "Fri, 6 Jun 2014 01:47:21 GMT", "version": "v1" } ]
2014-06-09
[ [ "Chamarthi", "Harsh Raju", "", "Northeastern University" ], [ "Dillinger", "Peter C.", "", "Northeastern University" ], [ "Manolios", "Panagiotis", "", "Northeastern University" ] ]
We present a data definition framework that enables the convenient specification of data types in ACL2s, the ACL2 Sedan. Our primary motivation for developing the data definition framework was pedagogical. We were teaching undergraduate students how to reason about programs using ACL2s and wanted to provide them with an effective method for defining, testing, and reasoning about data types in the context of an untyped theorem prover. Our framework is now routinely used not only for pedagogical purposes, but also by advanced users. Our framework concisely supports common data definition patterns, e.g. list types, map types, and record types. It also provides support for polymorphic functions. A distinguishing feature of our approach is that we maintain both a predicative and an enumerative characterization of data definitions. In this paper we present our data definition framework via a sequence of examples. We give a complete characterization in terms of tau rules of the inclusion/exclusion relations a data definition induces, under suitable restrictions. The data definition framework is a key component of counterexample generation support in ACL2s, but can be independently used in ACL2, and is available as a community book.
2112.01402
Rahul Rahaman
Dipika Singhania, Rahul Rahaman, Angela Yao
Iterative Contrast-Classify For Semi-supervised Temporal Action Segmentation
AAAI-2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Temporal action segmentation classifies the action of each frame in (long) video sequences. Due to the high cost of frame-wise labeling, we propose the first semi-supervised method for temporal action segmentation. Our method hinges on unsupervised representation learning, which, for temporal action segmentation, poses unique challenges. Actions in untrimmed videos vary in length and have unknown labels and start/end times. Ordering of actions across videos may also vary. We propose a novel way to learn frame-wise representations from temporal convolutional networks (TCNs) by clustering input features with an added time-proximity condition and multi-resolution similarity. By merging representation learning with conventional supervised learning, we develop an "Iterative-Contrast-Classify (ICC)" semi-supervised learning scheme. With more labelled data, ICC progressively improves in performance; ICC semi-supervised learning, with 40% labelled videos, performs similarly to fully-supervised counterparts. Our ICC improves MoF by {+1.8, +5.6, +2.5}% on Breakfast, 50Salads and GTEA respectively for 100% labelled videos.
[ { "created": "Thu, 2 Dec 2021 16:47:24 GMT", "version": "v1" }, { "created": "Wed, 8 Dec 2021 14:56:40 GMT", "version": "v2" } ]
2021-12-09
[ [ "Singhania", "Dipika", "" ], [ "Rahaman", "Rahul", "" ], [ "Yao", "Angela", "" ] ]
Temporal action segmentation classifies the action of each frame in (long) video sequences. Due to the high cost of frame-wise labeling, we propose the first semi-supervised method for temporal action segmentation. Our method hinges on unsupervised representation learning, which, for temporal action segmentation, poses unique challenges. Actions in untrimmed videos vary in length and have unknown labels and start/end times. Ordering of actions across videos may also vary. We propose a novel way to learn frame-wise representations from temporal convolutional networks (TCNs) by clustering input features with an added time-proximity condition and multi-resolution similarity. By merging representation learning with conventional supervised learning, we develop an "Iterative-Contrast-Classify (ICC)" semi-supervised learning scheme. With more labelled data, ICC progressively improves in performance; ICC semi-supervised learning, with 40% labelled videos, performs similarly to fully-supervised counterparts. Our ICC improves MoF by {+1.8, +5.6, +2.5}% on Breakfast, 50Salads and GTEA respectively for 100% labelled videos.
2211.02448
Dongchao Yang
Dongchao Yang, Songxiang Liu, Jianwei Yu, Helin Wang, Chao Weng, Yuexian Zou
NoreSpeech: Knowledge Distillation based Conditional Diffusion Model for Noise-robust Expressive TTS
Submitted to ICASSP 2023
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Expressive text-to-speech (TTS) can synthesize a new speaking style by imitating prosody and timbre from a reference audio, which faces the following challenges: (1) The highly dynamic prosody information in the reference audio is difficult to extract, especially when the reference audio contains background noise. (2) The TTS systems should have good generalization for unseen speaking styles. In this paper, we present a \textbf{no}ise-\textbf{r}obust \textbf{e}xpressive TTS model (NoreSpeech), which can robustly transfer the speaking style in a noisy reference utterance to synthesized speech. Specifically, our NoreSpeech includes several components: (1) a novel DiffStyle module, which leverages powerful probabilistic denoising diffusion models to learn noise-agnostic speaking style features from a teacher model by knowledge distillation; (2) a VQ-VAE block, which maps the style features into a controllable quantized latent space for improving the generalization of style transfer; and (3) a straightforward but effective parameter-free text-style alignment module, which enables NoreSpeech to transfer style to a textual input from a length-mismatched reference utterance. Experiments demonstrate that NoreSpeech is more effective than previous expressive TTS models in noise environments. Audio samples and code are available at: \href{http://dongchaoyang.top/NoreSpeech\_demo/}{http://dongchaoyang.top/NoreSpeech\_demo/}
[ { "created": "Fri, 4 Nov 2022 13:32:58 GMT", "version": "v1" } ]
2022-11-07
[ [ "Yang", "Dongchao", "" ], [ "Liu", "Songxiang", "" ], [ "Yu", "Jianwei", "" ], [ "Wang", "Helin", "" ], [ "Weng", "Chao", "" ], [ "Zou", "Yuexian", "" ] ]
Expressive text-to-speech (TTS) can synthesize a new speaking style by imitating prosody and timbre from a reference audio, which faces the following challenges: (1) The highly dynamic prosody information in the reference audio is difficult to extract, especially when the reference audio contains background noise. (2) The TTS systems should have good generalization for unseen speaking styles. In this paper, we present a \textbf{no}ise-\textbf{r}obust \textbf{e}xpressive TTS model (NoreSpeech), which can robustly transfer the speaking style in a noisy reference utterance to synthesized speech. Specifically, our NoreSpeech includes several components: (1) a novel DiffStyle module, which leverages powerful probabilistic denoising diffusion models to learn noise-agnostic speaking style features from a teacher model by knowledge distillation; (2) a VQ-VAE block, which maps the style features into a controllable quantized latent space for improving the generalization of style transfer; and (3) a straightforward but effective parameter-free text-style alignment module, which enables NoreSpeech to transfer style to a textual input from a length-mismatched reference utterance. Experiments demonstrate that NoreSpeech is more effective than previous expressive TTS models in noise environments. Audio samples and code are available at: \href{http://dongchaoyang.top/NoreSpeech\_demo/}{http://dongchaoyang.top/NoreSpeech\_demo/}
2403.02810
Chu Wang
Chu Wang, Jinhong Wu, Yanzhi Wang, Zhijian Zha, Qi Zhou
Dynamic Gaussian Graph Operator: Learning parametric partial differential equations in arbitrary discrete mechanics problems
13 figures, 7 tables, 9854 words
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning methods can be employed to solve physical systems governed by parametric partial differential equations (PDEs), thanks to the availability of massive scientific data. This line of work has been refined into operator learning, which focuses on learning non-linear mappings between infinite-dimensional function spaces, offering an interface from observations to solutions. However, state-of-the-art neural operators are limited to constant and uniform discretization, leading to deficient generalization on arbitrary discretization schemes for the computational domain. In this work, we propose a novel operator learning algorithm, referred to as the Dynamic Gaussian Graph Operator (DGGO), which extends neural operators to learning parametric PDEs in arbitrary discrete mechanics problems. The Dynamic Gaussian Graph (DGG) kernel learns to map observation vectors defined in general Euclidean space to metric vectors defined in a high-dimensional uniform metric space. The DGG integral kernel is parameterized by a Gaussian-kernel-weighted Riemann sum approximation and uses a dynamic message-passing graph to depict the interrelations within the integral term. A Fourier Neural Operator is selected to localize the metric vectors in the spatial and frequency domains. Metric vectors are regarded as located on a latent uniform domain, wherein spatial and spectral transformations offer highly regular constraints on the solution space. The efficiency and robustness of DGGO are validated by applying it to solve arbitrary discrete mechanics problems numerically, in comparison with mainstream neural operators. Ablation experiments are implemented to demonstrate the effectiveness of the spatial transformation in the DGG kernel. The proposed method is utilized to forecast the stress field of a hyper-elastic material with a geometrically variable void as an engineering application.
[ { "created": "Tue, 5 Mar 2024 09:25:31 GMT", "version": "v1" } ]
2024-03-06
[ [ "Wang", "Chu", "" ], [ "Wu", "Jinhong", "" ], [ "Wang", "Yanzhi", "" ], [ "Zha", "Zhijian", "" ], [ "Zhou", "Qi", "" ] ]
Deep learning methods can be employed to solve physical systems governed by parametric partial differential equations (PDEs), thanks to the availability of massive scientific data. This line of work has been refined into operator learning, which focuses on learning non-linear mappings between infinite-dimensional function spaces, offering an interface from observations to solutions. However, state-of-the-art neural operators are limited to constant and uniform discretization, leading to deficient generalization on arbitrary discretization schemes for the computational domain. In this work, we propose a novel operator learning algorithm, referred to as the Dynamic Gaussian Graph Operator (DGGO), which extends neural operators to learning parametric PDEs in arbitrary discrete mechanics problems. The Dynamic Gaussian Graph (DGG) kernel learns to map observation vectors defined in general Euclidean space to metric vectors defined in a high-dimensional uniform metric space. The DGG integral kernel is parameterized by a Gaussian-kernel-weighted Riemann sum approximation and uses a dynamic message-passing graph to depict the interrelations within the integral term. A Fourier Neural Operator is selected to localize the metric vectors in the spatial and frequency domains. Metric vectors are regarded as located on a latent uniform domain, wherein spatial and spectral transformations offer highly regular constraints on the solution space. The efficiency and robustness of DGGO are validated by applying it to solve arbitrary discrete mechanics problems numerically, in comparison with mainstream neural operators. Ablation experiments are implemented to demonstrate the effectiveness of the spatial transformation in the DGG kernel. The proposed method is utilized to forecast the stress field of a hyper-elastic material with a geometrically variable void as an engineering application.
1908.07668
Sara Billey
Molly Baird, Sara C. Billey, Erik D. Demaine, Martin L. Demaine, David Eppstein, S\'andor Fekete, Graham Gordon, Sean Griffin, Joseph S. B. Mitchell, Joshua P. Swanson
Existence and hardness of conveyor belts
null
Electronic J. Combinatorics 27 (4), Paper 4.25, 2020
10.37236/9782
null
cs.CG math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An open problem of Manuel Abellanas asks whether every set of disjoint closed unit disks in the plane can be connected by a conveyor belt, which means a tight simple closed curve that touches the boundary of each disk, possibly multiple times. We prove three main results. First, for unit disks whose centers are both $x$-monotone and $y$-monotone, or whose centers have $x$-coordinates that differ by at least two units, a conveyor belt always exists and can be found efficiently. Second, it is NP-complete to determine whether disks of varying radii have a conveyor belt, and it remains NP-complete when we constrain the belt to touch disks exactly once. Third, any disjoint set of $n$ disks of arbitrary radii can be augmented by $O(n)$ "guide" disks so that the augmented system has a conveyor belt touching each disk exactly once, answering a conjecture of Demaine, Demaine, and Palop.
[ { "created": "Wed, 21 Aug 2019 01:38:33 GMT", "version": "v1" } ]
2020-11-09
[ [ "Baird", "Molly", "" ], [ "Billey", "Sara C.", "" ], [ "Demaine", "Erik D.", "" ], [ "Demaine", "Martin L.", "" ], [ "Eppstein", "David", "" ], [ "Fekete", "Sándor", "" ], [ "Gordon", "Graham", "" ], [ "Griffin", "Sean", "" ], [ "Mitchell", "Joseph S. B.", "" ], [ "Swanson", "Joshua P.", "" ] ]
An open problem of Manuel Abellanas asks whether every set of disjoint closed unit disks in the plane can be connected by a conveyor belt, which means a tight simple closed curve that touches the boundary of each disk, possibly multiple times. We prove three main results. First, for unit disks whose centers are both $x$-monotone and $y$-monotone, or whose centers have $x$-coordinates that differ by at least two units, a conveyor belt always exists and can be found efficiently. Second, it is NP-complete to determine whether disks of varying radii have a conveyor belt, and it remains NP-complete when we constrain the belt to touch disks exactly once. Third, any disjoint set of $n$ disks of arbitrary radii can be augmented by $O(n)$ "guide" disks so that the augmented system has a conveyor belt touching each disk exactly once, answering a conjecture of Demaine, Demaine, and Palop.
1806.08274
Pallavi Athe
Pallavi Athe, Yatindra Nath Singh
Impact Zone Analysis of p-Cycle
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The pre-configured cycle (p-cycle) method has been studied extensively in the literature for optical network protection. A large p-cycle has high capacity efficiency and can protect a large number of nodes against single-link failure scenarios. All the links protected by such a p-cycle lose protection when the p-cycle is consumed to restore traffic after a failure. Since the probability of multiple link failures is high in a large network, this also means that, with high probability, protection will not be available for a failed link upon a second failure. Thus, if the number of links protected by a p-cycle is large, the network becomes unprotected with high probability upon the advent of a second failure. In this paper, we study the impact zone due to a first link failure in various configurations of p-cycles. The study gives insight into how to choose the p-cycle configuration so as to reduce the impact zone while using minimum spare capacity. We propose a few methods and compare them to show how impact-zone analysis can be used to improve the fault tolerance of an optical network.
[ { "created": "Thu, 21 Jun 2018 14:50:15 GMT", "version": "v1" } ]
2018-06-22
[ [ "Athe", "Pallavi", "" ], [ "Singh", "Yatindra Nath", "" ] ]
The pre-configured cycle (p-cycle) method has been studied extensively in the literature for optical network protection. A large p-cycle has high capacity efficiency and can protect a large number of nodes against single-link failure scenarios. All the links protected by such a p-cycle lose protection when the p-cycle is consumed to restore traffic after a failure. Since the probability of multiple link failures is high in a large network, this also means that, with high probability, protection will not be available for a failed link upon a second failure. Thus, if the number of links protected by a p-cycle is large, the network becomes unprotected with high probability upon the advent of a second failure. In this paper, we study the impact zone due to a first link failure in various configurations of p-cycles. The study gives insight into how to choose the p-cycle configuration so as to reduce the impact zone while using minimum spare capacity. We propose a few methods and compare them to show how impact-zone analysis can be used to improve the fault tolerance of an optical network.
2105.10310
Arnaud Boutillon
Arnaud Boutillon, Pierre-Henri Conze, Christelle Pons, Val\'erie Burdin, Bhushan Borotikar
Multi-Task, Multi-Domain Deep Segmentation with Shared Representations and Contrastive Regularization for Sparse Pediatric Datasets
11 pages, 4 figures, 2 tables, accepted at the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2021)
null
10.1007/978-3-030-87193-2_23
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic segmentation of magnetic resonance (MR) images is crucial for morphological evaluation of the pediatric musculoskeletal system in clinical practice. However, the accuracy and generalization performance of individual segmentation models are limited due to the restricted amount of annotated pediatric data. Hence, we propose to train a segmentation model on multiple datasets, arising from different parts of the anatomy, in a multi-task and multi-domain learning framework. This approach allows to overcome the inherent scarcity of pediatric data while benefiting from a more robust shared representation. The proposed segmentation network comprises shared convolutional filters, domain-specific batch normalization parameters that compute the respective dataset statistics and a domain-specific segmentation layer. Furthermore, a supervised contrastive regularization is integrated to further improve generalization capabilities, by promoting intra-domain similarity and imposing inter-domain margins in the embedded space. We evaluate our contributions on two pediatric imaging datasets of the ankle and shoulder joints for bone segmentation. Results demonstrate that the proposed model outperforms state-of-the-art approaches.
[ { "created": "Fri, 21 May 2021 12:26:05 GMT", "version": "v1" }, { "created": "Wed, 2 Feb 2022 09:11:30 GMT", "version": "v2" } ]
2022-02-03
[ [ "Boutillon", "Arnaud", "" ], [ "Conze", "Pierre-Henri", "" ], [ "Pons", "Christelle", "" ], [ "Burdin", "Valérie", "" ], [ "Borotikar", "Bhushan", "" ] ]
Automatic segmentation of magnetic resonance (MR) images is crucial for morphological evaluation of the pediatric musculoskeletal system in clinical practice. However, the accuracy and generalization performance of individual segmentation models are limited due to the restricted amount of annotated pediatric data. Hence, we propose to train a segmentation model on multiple datasets, arising from different parts of the anatomy, in a multi-task and multi-domain learning framework. This approach allows to overcome the inherent scarcity of pediatric data while benefiting from a more robust shared representation. The proposed segmentation network comprises shared convolutional filters, domain-specific batch normalization parameters that compute the respective dataset statistics and a domain-specific segmentation layer. Furthermore, a supervised contrastive regularization is integrated to further improve generalization capabilities, by promoting intra-domain similarity and imposing inter-domain margins in the embedded space. We evaluate our contributions on two pediatric imaging datasets of the ankle and shoulder joints for bone segmentation. Results demonstrate that the proposed model outperforms state-of-the-art approaches.
2107.05278
Erwin de Gelder
Erwin de Gelder, Eric Cator, Jan-Pieter Paardekooper, Olaf Op den Camp, Bart De Schutter
Constrained Sampling from a Kernel Density Estimator to Generate Scenarios for the Assessment of Automated Vehicles
6 pages, 3 figures, to be published in the proceedings of the IEEE Intelligent Vehicle Symposium Workshops (IV workshop)
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The safety assessment of automated vehicles (AVs) is an important aspect of the development cycle of AVs. A scenario-based assessment approach is accepted by many players in the field as part of the complete safety assessment. A scenario is a representation of a situation on the road to which the AV needs to respond appropriately. One way to generate the required scenario-based test descriptions is to parameterize the scenarios and to draw these parameters from a probability density function (pdf). Because the shape of the pdf is unknown beforehand, assuming a functional form of the pdf and fitting the parameters to the data may lead to inaccurate fits. As an alternative, Kernel Density Estimation (KDE) is a promising candidate for estimating the underlying pdf, because it is flexible with the underlying distribution of the parameters. Drawing random samples from a pdf estimated with KDE is possible without the need of evaluating the actual pdf, which makes it suitable for drawing random samples for, e.g., Monte Carlo methods. Sampling from a KDE while the samples satisfy a linear equality constraint, however, has not been described in the literature, as far as the authors know. In this paper, we propose a method to sample from a pdf estimated using KDE, such that the samples satisfy a linear equality constraint. We also present an algorithm of our method in pseudo-code. The method can be used to generate scenarios that have, e.g., a predetermined starting speed, or to generate different types of scenarios. This paper also shows that the method for sampling scenarios can be used in the case where a Singular Value Decomposition (SVD) is used to reduce the dimension of the parameter vectors.
[ { "created": "Mon, 12 Jul 2021 09:28:25 GMT", "version": "v1" } ]
2021-07-13
[ [ "de Gelder", "Erwin", "" ], [ "Cator", "Eric", "" ], [ "Paardekooper", "Jan-Pieter", "" ], [ "Camp", "Olaf Op den", "" ], [ "De Schutter", "Bart", "" ] ]
The safety assessment of automated vehicles (AVs) is an important aspect of the development cycle of AVs. A scenario-based assessment approach is accepted by many players in the field as part of the complete safety assessment. A scenario is a representation of a situation on the road to which the AV needs to respond appropriately. One way to generate the required scenario-based test descriptions is to parameterize the scenarios and to draw these parameters from a probability density function (pdf). Because the shape of the pdf is unknown beforehand, assuming a functional form of the pdf and fitting the parameters to the data may lead to inaccurate fits. As an alternative, Kernel Density Estimation (KDE) is a promising candidate for estimating the underlying pdf, because it is flexible with the underlying distribution of the parameters. Drawing random samples from a pdf estimated with KDE is possible without the need of evaluating the actual pdf, which makes it suitable for drawing random samples for, e.g., Monte Carlo methods. Sampling from a KDE while the samples satisfy a linear equality constraint, however, has not been described in the literature, as far as the authors know. In this paper, we propose a method to sample from a pdf estimated using KDE, such that the samples satisfy a linear equality constraint. We also present an algorithm of our method in pseudo-code. The method can be used to generate scenarios that have, e.g., a predetermined starting speed, or to generate different types of scenarios. This paper also shows that the method for sampling scenarios can be used in the case where a Singular Value Decomposition (SVD) is used to reduce the dimension of the parameter vectors.
2307.16713
Haozhen Zhang
Haozhen Zhang, Le Yu, Xi Xiao, Qing Li, Francesco Mercaldo, Xiapu Luo, Qixu Liu
TFE-GNN: A Temporal Fusion Encoder Using Graph Neural Networks for Fine-grained Encrypted Traffic Classification
Accepted by The Web Conference 2023 (WWW'23). The code will be available with our incoming future work
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Encrypted traffic classification is receiving widespread attention from researchers and industrial companies. However, the existing methods only extract flow-level features, failing to handle short flows because of unreliable statistical properties, or treat the header and payload equally, failing to mine the potential correlation between bytes. Therefore, in this paper, we propose a byte-level traffic graph construction approach based on point-wise mutual information (PMI), and a model named Temporal Fusion Encoder using Graph Neural Networks (TFE-GNN) for feature extraction. In particular, we design a dual embedding layer, a GNN-based traffic graph encoder, as well as a cross-gated feature fusion mechanism, which can first embed the header and payload bytes separately and then fuse them together to obtain a stronger feature representation. The experimental results on two real datasets demonstrate that TFE-GNN outperforms multiple state-of-the-art methods in fine-grained encrypted traffic classification tasks.
[ { "created": "Mon, 31 Jul 2023 14:32:40 GMT", "version": "v1" } ]
2023-08-01
[ [ "Zhang", "Haozhen", "" ], [ "Yu", "Le", "" ], [ "Xiao", "Xi", "" ], [ "Li", "Qing", "" ], [ "Mercaldo", "Francesco", "" ], [ "Luo", "Xiapu", "" ], [ "Liu", "Qixu", "" ] ]
Encrypted traffic classification is receiving widespread attention from researchers and industrial companies. However, the existing methods only extract flow-level features, failing to handle short flows because of unreliable statistical properties, or treat the header and payload equally, failing to mine the potential correlation between bytes. Therefore, in this paper, we propose a byte-level traffic graph construction approach based on point-wise mutual information (PMI), and a model named Temporal Fusion Encoder using Graph Neural Networks (TFE-GNN) for feature extraction. In particular, we design a dual embedding layer, a GNN-based traffic graph encoder, as well as a cross-gated feature fusion mechanism, which can first embed the header and payload bytes separately and then fuse them together to obtain a stronger feature representation. The experimental results on two real datasets demonstrate that TFE-GNN outperforms multiple state-of-the-art methods in fine-grained encrypted traffic classification tasks.
2407.08135
Yinfeng Zhu
Yinfeng Zhu
A quadratic upper bound on the reset thresholds of synchronizing automata containing a transitive permutation group
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For any synchronizing $n$-state deterministic automaton, \v{C}ern\'{y} conjectures the existence of a synchronizing word of length at most $(n-1)^2$. We prove that there exists a synchronizing word of length at most $2n^2 - 7n + 7$ for every synchronizing $n$-state deterministic automaton that satisfies the following two properties: 1. The image of the action of each letter contains at least $n-1$ states; 2. The actions of bijective letters generate a transitive permutation group on the state set.
[ { "created": "Thu, 11 Jul 2024 02:17:49 GMT", "version": "v1" } ]
2024-07-12
[ [ "Zhu", "Yinfeng", "" ] ]
For any synchronizing $n$-state deterministic automaton, \v{C}ern\'{y} conjectures the existence of a synchronizing word of length at most $(n-1)^2$. We prove that there exists a synchronizing word of length at most $2n^2 - 7n + 7$ for every synchronizing $n$-state deterministic automaton that satisfies the following two properties: 1. The image of the action of each letter contains at least $n-1$ states; 2. The actions of bijective letters generate a transitive permutation group on the state set.
2103.05424
Mauricio Aniche
Chris Langhout and Maur\'icio Aniche
Atoms of Confusion in Java
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although writing code seems trivial at times, problems arise when humans misinterpret what the code actually does. One of the potential causes is "atoms of confusion", the smallest possible patterns of misinterpretable source code. Previous research has investigated the impact of atoms of confusion in C code. Results show that developers make significantly more mistakes in code where atoms are present. In this paper, we replicate the work of Gopstein et al. for the Java language. After deriving a set of atoms of confusion for Java, we perform a two-phase experiment with 132 computer science students (i.e., novice developers). Our results show that participants are 2.7 to 56 times more likely to make mistakes in code snippets affected by 7 out of the 14 studied atoms of confusion, and when faced with both versions of the code snippets, participants perceived the version affected by the atom of confusion to be more confusing and/or less readable in 10 out of the 14 studied atoms of confusion.
[ { "created": "Mon, 8 Mar 2021 09:04:05 GMT", "version": "v1" }, { "created": "Wed, 10 Mar 2021 07:31:27 GMT", "version": "v2" } ]
2021-03-11
[ [ "Langhout", "Chris", "" ], [ "Aniche", "Maurício", "" ] ]
Although writing code seems trivial at times, problems arise when humans misinterpret what the code actually does. One of the potential causes is "atoms of confusion", the smallest possible patterns of misinterpretable source code. Previous research has investigated the impact of atoms of confusion in C code. Results show that developers make significantly more mistakes in code where atoms are present. In this paper, we replicate the work of Gopstein et al. for the Java language. After deriving a set of atoms of confusion for Java, we perform a two-phase experiment with 132 computer science students (i.e., novice developers). Our results show that participants are 2.7 to 56 times more likely to make mistakes in code snippets affected by 7 out of the 14 studied atoms of confusion, and when faced with both versions of the code snippets, participants perceived the version affected by the atom of confusion to be more confusing and/or less readable in 10 out of the 14 studied atoms of confusion.
1003.4627
Dejan Spasov
Dejan Spasov
Unique and Minimum Distance Decoding of Linear Codes with Reduced Complexity
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that for (systematic) linear codes the time complexity of unique decoding is O(n^{2}q^{nRH(delta/2/R)}) and the time complexity of minimum distance decoding is O(n^{2}q^{nRH(delta/R)}). The proposed algorithm inspects all error patterns in the information set of the received message of weight less than d/2 or d, respectively.
[ { "created": "Wed, 24 Mar 2010 12:45:27 GMT", "version": "v1" } ]
2010-03-25
[ [ "Spasov", "Dejan", "" ] ]
We show that for (systematic) linear codes the time complexity of unique decoding is O(n^{2}q^{nRH(delta/2/R)}) and the time complexity of minimum distance decoding is O(n^{2}q^{nRH(delta/R)}). The proposed algorithm inspects all error patterns in the information set of the received message of weight less than d/2 or d, respectively.
2112.04036
Mohammad Wardat
Mohammad Wardat, Breno Dantas Cruz, Wei Le, Hridesh Rajan
DeepDiagnosis: Automatically Diagnosing Faults and Recommending Actionable Fixes in Deep Learning Programs
Accepted at ICSE 2022
null
null
null
cs.SE cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep Neural Networks (DNNs) are used in a wide variety of applications. However, as in any software application, DNN-based apps are afflicted with bugs. Previous work observed that DNN bug fix patterns are different from traditional bug fix patterns. Furthermore, those buggy models are non-trivial to diagnose and fix due to inexplicit errors with several options to fix them. To support developers in locating and fixing bugs, we propose DeepDiagnosis, a novel debugging approach that localizes the faults, reports error symptoms and suggests fixes for DNN programs. In the first phase, our technique monitors a training model, periodically checking for eight types of error conditions. Then, in case of problems, it reports messages containing sufficient information to perform actionable repairs to the model. In the evaluation, we thoroughly examine 444 models: 53 real-world models from GitHub and Stack Overflow, and 391 curated by AUTOTRAINER. DeepDiagnosis provides superior accuracy when compared to UMLUAT and DeepLocalize. Our technique is faster than AUTOTRAINER for fault localization. The results show that our approach can support additional types of models, while the state of the art was only able to handle classification ones. Our technique was able to report bugs that do not manifest as numerical errors during training. Also, it can provide actionable insights for fixes, whereas DeepLocalize can only report faults that lead to numerical errors during training. DeepDiagnosis manifests the best capabilities of fault detection, bug localization, and symptom identification when compared to other approaches.
[ { "created": "Tue, 7 Dec 2021 23:15:23 GMT", "version": "v1" } ]
2021-12-09
[ [ "Wardat", "Mohammad", "" ], [ "Cruz", "Breno Dantas", "" ], [ "Le", "Wei", "" ], [ "Rajan", "Hridesh", "" ] ]
Deep Neural Networks (DNNs) are used in a wide variety of applications. However, as in any software application, DNN-based apps are afflicted with bugs. Previous work observed that DNN bug fix patterns are different from traditional bug fix patterns. Furthermore, those buggy models are non-trivial to diagnose and fix due to inexplicit errors with several options to fix them. To support developers in locating and fixing bugs, we propose DeepDiagnosis, a novel debugging approach that localizes the faults, reports error symptoms and suggests fixes for DNN programs. In the first phase, our technique monitors a training model, periodically checking for eight types of error conditions. Then, in case of problems, it reports messages containing sufficient information to perform actionable repairs to the model. In the evaluation, we thoroughly examine 444 models: 53 real-world models from GitHub and Stack Overflow, and 391 curated by AUTOTRAINER. DeepDiagnosis provides superior accuracy when compared to UMLUAT and DeepLocalize. Our technique is faster than AUTOTRAINER for fault localization. The results show that our approach can support additional types of models, while the state of the art was only able to handle classification ones. Our technique was able to report bugs that do not manifest as numerical errors during training. Also, it can provide actionable insights for fixes, whereas DeepLocalize can only report faults that lead to numerical errors during training. DeepDiagnosis manifests the best capabilities of fault detection, bug localization, and symptom identification when compared to other approaches.
2407.11852
Marcel Parciak
Marcel Parciak, Brecht Vandevoort, Frank Neven, Liesbet M. Peeters, Stijn Vansummeren
Schema Matching with Large Language Models: an Experimental Study
Accepted at the 2nd International Workshop on Tabular Data Analysis (TaDA24), collocated with the 50th International Conference on Very Large Data Bases (VLDB 2024) Guangzhou, China - August 29, 2024
null
null
null
cs.DB cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have shown useful applications in a variety of tasks, including data wrangling. In this paper, we investigate the use of an off-the-shelf LLM for schema matching. Our objective is to identify semantic correspondences between elements of two relational schemas using only names and descriptions. Using a newly created benchmark from the health domain, we propose different so-called task scopes. These are methods for prompting the LLM to do schema matching, which vary in the amount of context information contained in the prompt. Using these task scopes we compare LLM-based schema matching against a string similarity baseline, investigating matching quality, verification effort, decisiveness, and complementarity of the approaches. We find that matching quality suffers from a lack of context information, but also from providing too much context information. In general, using newer LLM versions increases decisiveness. We identify task scopes that have acceptable verification effort and succeed in identifying a significant number of true semantic matches. Our study shows that LLMs have potential in bootstrapping the schema matching process and are able to assist data engineers in speeding up this task solely based on schema element names and descriptions without the need for data instances.
[ { "created": "Tue, 16 Jul 2024 15:33:00 GMT", "version": "v1" } ]
2024-07-17
[ [ "Parciak", "Marcel", "" ], [ "Vandevoort", "Brecht", "" ], [ "Neven", "Frank", "" ], [ "Peeters", "Liesbet M.", "" ], [ "Vansummeren", "Stijn", "" ] ]
Large Language Models (LLMs) have shown useful applications in a variety of tasks, including data wrangling. In this paper, we investigate the use of an off-the-shelf LLM for schema matching. Our objective is to identify semantic correspondences between elements of two relational schemas using only names and descriptions. Using a newly created benchmark from the health domain, we propose different so-called task scopes. These are methods for prompting the LLM to do schema matching, which vary in the amount of context information contained in the prompt. Using these task scopes we compare LLM-based schema matching against a string similarity baseline, investigating matching quality, verification effort, decisiveness, and complementarity of the approaches. We find that matching quality suffers from a lack of context information, but also from providing too much context information. In general, using newer LLM versions increases decisiveness. We identify task scopes that have acceptable verification effort and succeed in identifying a significant number of true semantic matches. Our study shows that LLMs have potential in bootstrapping the schema matching process and are able to assist data engineers in speeding up this task solely based on schema element names and descriptions without the need for data instances.
2301.03364
Rodrigo Hernang\'omez
Rodrigo Hernang\'omez, Alexandros Palaios, Cara Watermann, Daniel Sch\"aufele, Philipp Geuer, Rafail Ismayilov, Mohammad Parvini, Anton Krause, Martin Kasparick, Thomas Neugebauer, Oscar D. Ramos-Cantor, Hugues Tchouankem, Jose Leon Calvo, Bo Chen, Gerhard Fettweis, S{\l}awomir Sta\'nczak
Toward an AI-enabled Connected Industry: AGV Communication and Sensor Measurement Datasets
7 pages, 3 figures. Published at IEEE Communications Magazine. IEEE Copyright protected. Datasets available at https://ieee-dataport.org/open-access/ai4mobile-industrial-wireless-datasets-iv2v-and-iv2i
in IEEE Communications Magazine, vol. 62, no. 4, pp. 90-95, April 2024
10.1109/MCOM.001.2300494
null
cs.NI cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents two wireless measurement campaigns in industrial testbeds: industrial Vehicle-to-vehicle (iV2V) and industrial Vehicle-to-infrastructure plus Sensor (iV2I+), together with detailed information about the two captured datasets. iV2V covers sidelink communication scenarios between Automated Guided Vehicles (AGVs), while iV2I+ is conducted in an industrial setting where an autonomous cleaning robot is connected to a private cellular network. The combination of different communication technologies within a common measurement methodology provides insights that can be exploited by Machine Learning (ML) for tasks such as fingerprinting, line-of-sight detection, prediction of quality of service or link selection. Moreover, the datasets are publicly available, labelled and prefiltered for fast on-boarding and applicability.
[ { "created": "Tue, 20 Dec 2022 15:04:20 GMT", "version": "v1" }, { "created": "Tue, 10 Jan 2023 11:29:39 GMT", "version": "v2" }, { "created": "Mon, 20 Mar 2023 13:41:37 GMT", "version": "v3" }, { "created": "Tue, 29 Aug 2023 11:18:43 GMT", "version": "v4" }, { "created": "Mon, 15 Apr 2024 11:42:03 GMT", "version": "v5" } ]
2024-04-16
[ [ "Hernangómez", "Rodrigo", "" ], [ "Palaios", "Alexandros", "" ], [ "Watermann", "Cara", "" ], [ "Schäufele", "Daniel", "" ], [ "Geuer", "Philipp", "" ], [ "Ismayilov", "Rafail", "" ], [ "Parvini", "Mohammad", "" ], [ "Krause", "Anton", "" ], [ "Kasparick", "Martin", "" ], [ "Neugebauer", "Thomas", "" ], [ "Ramos-Cantor", "Oscar D.", "" ], [ "Tchouankem", "Hugues", "" ], [ "Calvo", "Jose Leon", "" ], [ "Chen", "Bo", "" ], [ "Fettweis", "Gerhard", "" ], [ "Stańczak", "Sławomir", "" ] ]
This paper presents two wireless measurement campaigns in industrial testbeds: industrial Vehicle-to-vehicle (iV2V) and industrial Vehicle-to-infrastructure plus Sensor (iV2I+), together with detailed information about the two captured datasets. iV2V covers sidelink communication scenarios between Automated Guided Vehicles (AGVs), while iV2I+ is conducted in an industrial setting where an autonomous cleaning robot is connected to a private cellular network. The combination of different communication technologies within a common measurement methodology provides insights that can be exploited by Machine Learning (ML) for tasks such as fingerprinting, line-of-sight detection, prediction of quality of service or link selection. Moreover, the datasets are publicly available, labelled and prefiltered for fast on-boarding and applicability.
2109.10254
Youngseog Chung
Youngseog Chung, Ian Char, Han Guo, Jeff Schneider, Willie Neiswanger
Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With increasing deployment of machine learning systems in various real-world tasks, there is a greater need for accurate quantification of predictive uncertainty. While the common goal in uncertainty quantification (UQ) in machine learning is to approximate the true distribution of the target data, many works in UQ tend to be disjoint in the evaluation metrics utilized, and disparate implementations for each metric lead to numerical results that are not directly comparable across different works. To address this, we introduce Uncertainty Toolbox, an open-source python library that helps to assess, visualize, and improve UQ. Uncertainty Toolbox additionally provides pedagogical resources, such as a glossary of key terms and an organized collection of key paper references. We hope that this toolbox is useful for accelerating and uniting research efforts in uncertainty in machine learning.
[ { "created": "Tue, 21 Sep 2021 15:32:06 GMT", "version": "v1" } ]
2021-09-22
[ [ "Chung", "Youngseog", "" ], [ "Char", "Ian", "" ], [ "Guo", "Han", "" ], [ "Schneider", "Jeff", "" ], [ "Neiswanger", "Willie", "" ] ]
With increasing deployment of machine learning systems in various real-world tasks, there is a greater need for accurate quantification of predictive uncertainty. While the common goal in uncertainty quantification (UQ) in machine learning is to approximate the true distribution of the target data, many works in UQ tend to be disjoint in the evaluation metrics utilized, and disparate implementations for each metric lead to numerical results that are not directly comparable across different works. To address this, we introduce Uncertainty Toolbox, an open-source python library that helps to assess, visualize, and improve UQ. Uncertainty Toolbox additionally provides pedagogical resources, such as a glossary of key terms and an organized collection of key paper references. We hope that this toolbox is useful for accelerating and uniting research efforts in uncertainty in machine learning.
2311.14007
Liangrun Da
Liangrun Da, Martin Kleppmann
Extending JSON CRDTs with Move Operations
7 pages, 4 figures
null
10.1145/3642976.3653030
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Conflict-Free Replicated Data Types (CRDTs) for JSON allow users to concurrently update a JSON document and automatically merge the updates into a consistent state. Moving a subtree in a map or reordering elements in a list within a JSON CRDT is challenging: naive merge algorithms may introduce unexpected results such as duplicates or cycles. In this paper, we introduce an algorithm for move operations in a JSON CRDT that handles the interaction with concurrent non-move operations, and uses novel optimisations to improve performance. We plan to integrate this algorithm into the Automerge CRDT library.
[ { "created": "Thu, 23 Nov 2023 13:48:52 GMT", "version": "v1" }, { "created": "Tue, 19 Mar 2024 21:05:16 GMT", "version": "v2" } ]
2024-03-21
[ [ "Da", "Liangrun", "" ], [ "Kleppmann", "Martin", "" ] ]
Conflict-Free Replicated Data Types (CRDTs) for JSON allow users to concurrently update a JSON document and automatically merge the updates into a consistent state. Moving a subtree in a map or reordering elements in a list within a JSON CRDT is challenging: naive merge algorithms may introduce unexpected results such as duplicates or cycles. In this paper, we introduce an algorithm for move operations in a JSON CRDT that handles the interaction with concurrent non-move operations, and uses novel optimisations to improve performance. We plan to integrate this algorithm into the Automerge CRDT library.
1601.04689
Santhosh Kumar
Shrinivas Kudekar, Santhosh Kumar, Marco Mondelli, Henry D. Pfister, Eren \c{S}a\c{s}o\u{g}lu, R\"udiger Urbanke
Reed-Muller Codes Achieve Capacity on Erasure Channels
This article combines our previous articles arXiv:1505.05123 and arXiv:1505.05831
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new approach to proving that a sequence of deterministic linear codes achieves capacity on an erasure channel under maximum a posteriori decoding. Rather than relying on the precise structure of the codes our method exploits code symmetry. In particular, the technique applies to any sequence of linear codes where the blocklengths are strictly increasing, the code rates converge, and the permutation group of each code is doubly transitive. In other words, we show that symmetry alone implies near-optimal performance. An important consequence of this result is that a sequence of Reed-Muller codes with increasing blocklength and converging rate achieves capacity. This possibility has been suggested previously in the literature but it has only been proven for cases where the limiting code rate is 0 or 1. Moreover, these results extend naturally to all affine-invariant codes and, thus, to extended primitive narrow-sense BCH codes. This also resolves, in the affirmative, the existence question for capacity-achieving sequences of binary cyclic codes. The primary tools used in the proof are the sharp threshold property for symmetric monotone boolean functions and the area theorem for extrinsic information transfer functions.
[ { "created": "Mon, 18 Jan 2016 20:50:08 GMT", "version": "v1" } ]
2016-01-19
[ [ "Kudekar", "Shrinivas", "" ], [ "Kumar", "Santhosh", "" ], [ "Mondelli", "Marco", "" ], [ "Pfister", "Henry D.", "" ], [ "Şaşoğlu", "Eren", "" ], [ "Urbanke", "Rüdiger", "" ] ]
We introduce a new approach to proving that a sequence of deterministic linear codes achieves capacity on an erasure channel under maximum a posteriori decoding. Rather than relying on the precise structure of the codes our method exploits code symmetry. In particular, the technique applies to any sequence of linear codes where the blocklengths are strictly increasing, the code rates converge, and the permutation group of each code is doubly transitive. In other words, we show that symmetry alone implies near-optimal performance. An important consequence of this result is that a sequence of Reed-Muller codes with increasing blocklength and converging rate achieves capacity. This possibility has been suggested previously in the literature but it has only been proven for cases where the limiting code rate is 0 or 1. Moreover, these results extend naturally to all affine-invariant codes and, thus, to extended primitive narrow-sense BCH codes. This also resolves, in the affirmative, the existence question for capacity-achieving sequences of binary cyclic codes. The primary tools used in the proof are the sharp threshold property for symmetric monotone boolean functions and the area theorem for extrinsic information transfer functions.
2306.01940
Elijah Pelofske
Kyle Henke, Elijah Pelofske, Georg Hahn, Garrett T. Kenyon
Sampling binary sparse coding QUBO models using a spiking neuromorphic processor
null
null
10.1145/3589737.3606003
null
cs.NE cs.CV cs.ET cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of computing a sparse binary representation of an image. To be precise, given an image and an overcomplete, non-orthonormal basis, we aim to find a sparse binary vector indicating the minimal set of basis vectors that when added together best reconstruct the given input. We formulate this problem with an $L_2$ loss on the reconstruction error, and an $L_0$ (or, equivalently, an $L_1$) loss on the binary vector enforcing sparsity. This yields a so-called Quadratic Unconstrained Binary Optimization (QUBO) problem, whose solution is generally NP-hard to find. The contribution of this work is twofold. First, the method of unsupervised and unnormalized dictionary feature learning for a desired sparsity level to best match the data is presented. Second, the binary sparse coding problem is then solved on the Loihi 1 neuromorphic chip by the use of stochastic networks of neurons to traverse the non-convex energy landscape. The solutions are benchmarked against the classical heuristic simulated annealing. We demonstrate neuromorphic computing is suitable for sampling low energy solutions of binary sparse coding QUBO models, and although Loihi 1 is capable of sampling very sparse solutions of the QUBO models, there needs to be improvement in the implementation in order to be competitive with simulated annealing.
[ { "created": "Fri, 2 Jun 2023 22:47:18 GMT", "version": "v1" }, { "created": "Wed, 2 Aug 2023 16:55:29 GMT", "version": "v2" } ]
2023-11-28
[ [ "Henke", "Kyle", "" ], [ "Pelofske", "Elijah", "" ], [ "Hahn", "Georg", "" ], [ "Kenyon", "Garrett T.", "" ] ]
We consider the problem of computing a sparse binary representation of an image. To be precise, given an image and an overcomplete, non-orthonormal basis, we aim to find a sparse binary vector indicating the minimal set of basis vectors that when added together best reconstruct the given input. We formulate this problem with an $L_2$ loss on the reconstruction error, and an $L_0$ (or, equivalently, an $L_1$) loss on the binary vector enforcing sparsity. This yields a so-called Quadratic Unconstrained Binary Optimization (QUBO) problem, whose solution is generally NP-hard to find. The contribution of this work is twofold. First, the method of unsupervised and unnormalized dictionary feature learning for a desired sparsity level to best match the data is presented. Second, the binary sparse coding problem is then solved on the Loihi 1 neuromorphic chip by the use of stochastic networks of neurons to traverse the non-convex energy landscape. The solutions are benchmarked against the classical heuristic simulated annealing. We demonstrate neuromorphic computing is suitable for sampling low energy solutions of binary sparse coding QUBO models, and although Loihi 1 is capable of sampling very sparse solutions of the QUBO models, there needs to be improvement in the implementation in order to be competitive with simulated annealing.
2001.01491
Waqas Ahmed
Saira Khan, Khalid Iqbal, Safi Faizullah, Muhammad Fahad, Jawad Ali, Waqas Ahmed
Clustering based Privacy Preserving of Big Data using Fuzzification and Anonymization Operation
08 Page, 07 figures
International Journal of Advanced Computer Science and Applications, Volume 10 Issue 12, 2019
10.14569/IJACSA.2019.0101239
null
cs.DB cs.CR
http://creativecommons.org/licenses/by/4.0/
Big Data is used by data miners for analysis purposes and may contain sensitive information. During these procedures it raises certain privacy challenges for researchers. The existing privacy preserving methods use different algorithms that result in limited data reconstruction while securing the sensitive data. This paper presents a clustering based privacy preservation probabilistic model of big data to secure sensitive information. The model aims to attain minimum perturbation and maximum privacy. In our model, sensitive information is secured after identifying the sensitive data from data clusters to modify or generalize it. The resulting dataset is analysed to calculate the accuracy level of our model in terms of hidden data and data lost as a result of reconstruction. Extensive experiments are carried out in order to demonstrate the results of our proposed model. Clustering based privacy preservation of individual data in big data with minimum perturbation and successful reconstruction highlights the significance of our model, in addition to the use of standard performance evaluation measures.
[ { "created": "Mon, 6 Jan 2020 11:31:12 GMT", "version": "v1" } ]
2020-01-07
[ [ "Khan", "Saira", "" ], [ "Iqbal", "Khalid", "" ], [ "Faizullah", "Safi", "" ], [ "Fahad", "Muhammad", "" ], [ "Ali", "Jawad", "" ], [ "Ahmed", "Waqas", "" ] ]
Big Data is used by data miners for analysis purposes and may contain sensitive information. During these procedures it raises certain privacy challenges for researchers. The existing privacy preserving methods use different algorithms that result in limited data reconstruction while securing the sensitive data. This paper presents a clustering based privacy preservation probabilistic model of big data to secure sensitive information. The model aims to attain minimum perturbation and maximum privacy. In our model, sensitive information is secured after identifying the sensitive data from data clusters to modify or generalize it. The resulting dataset is analysed to calculate the accuracy level of our model in terms of hidden data and data lost as a result of reconstruction. Extensive experiments are carried out in order to demonstrate the results of our proposed model. Clustering based privacy preservation of individual data in big data with minimum perturbation and successful reconstruction highlights the significance of our model, in addition to the use of standard performance evaluation measures.
2211.09783
Yulong Chen
Yulong Chen, Yang Liu, Ruochen Xu, Ziyi Yang, Chenguang Zhu, Michael Zeng, Yue Zhang
UniSumm and SummZoo: Unified Model and Diverse Benchmark for Few-Shot Summarization
ACL2023 main conference
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The high annotation costs and diverse demands of various summarization tasks motivate the development of few-shot summarization. However, despite the emergence of many summarization tasks and datasets, the current training paradigm for few-shot summarization systems ignores potentially shareable knowledge in heterogeneous datasets. To this end, we propose \textsc{UniSumm}, a unified few-shot summarization model pre-trained with multiple summarization tasks, which can be prefix-tuned to excel at any few-shot summarization task. Meanwhile, to better evaluate few-shot summarizers, under the principles of diversity and robustness, we assemble and release a new benchmark, \textsc{SummZoo}. It consists of $8$ summarization tasks with multiple sets of few-shot samples for each task, covering diverse domains. Experimental results and analysis show that \textsc{UniSumm} outperforms strong baselines by a large margin across all sub-tasks in \textsc{SummZoo} under both automatic and human evaluations, and achieves comparable results in human evaluation compared with a GPT-3.5 model.
[ { "created": "Thu, 17 Nov 2022 18:54:47 GMT", "version": "v1" }, { "created": "Mon, 21 Nov 2022 15:16:40 GMT", "version": "v2" }, { "created": "Tue, 6 Dec 2022 08:54:22 GMT", "version": "v3" }, { "created": "Tue, 13 Dec 2022 14:57:14 GMT", "version": "v4" }, { "created": "Mon, 19 Dec 2022 05:15:58 GMT", "version": "v5" }, { "created": "Sat, 27 May 2023 19:28:00 GMT", "version": "v6" } ]
2023-05-30
[ [ "Chen", "Yulong", "" ], [ "Liu", "Yang", "" ], [ "Xu", "Ruochen", "" ], [ "Yang", "Ziyi", "" ], [ "Zhu", "Chenguang", "" ], [ "Zeng", "Michael", "" ], [ "Zhang", "Yue", "" ] ]
The high annotation costs and diverse demands of various summarization tasks motivate the development of few-shot summarization. However, despite the emergence of many summarization tasks and datasets, the current training paradigm for few-shot summarization systems ignores potentially shareable knowledge in heterogeneous datasets. To this end, we propose \textsc{UniSumm}, a unified few-shot summarization model pre-trained with multiple summarization tasks, which can be prefix-tuned to excel at any few-shot summarization task. Meanwhile, to better evaluate few-shot summarizers, under the principles of diversity and robustness, we assemble and release a new benchmark \textsc{SummZoo}. It consists of $8$ summarization tasks with multiple sets of few-shot samples for each task, covering diverse domains. Experimental results and analysis show that \textsc{UniSumm} outperforms strong baselines by a large margin across all sub-tasks in \textsc{SummZoo} under both automatic and human evaluations and achieves comparable results in human evaluation compared with a GPT-3.5 model.
2406.14756
Huitong Pan
Huitong Pan and Qi Zhang and Cornelia Caragea and Eduard Dragut and Longin Jan Latecki
SciDMT: A Large-Scale Corpus for Detecting Scientific Mentions
LREC/COLING 2024
LREC-COLING. (2024) 14407-14417
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We present SciDMT, an enhanced and expanded corpus for scientific mention detection, offering a significant advancement over existing related resources. SciDMT contains annotated scientific documents for datasets (D), methods (M), and tasks (T). The corpus consists of two components: 1) the SciDMT main corpus, which includes 48 thousand scientific articles with over 1.8 million weakly annotated mentions in the form of in-text spans, and 2) an evaluation set, which comprises 100 scientific articles manually annotated for evaluation purposes. To the best of our knowledge, SciDMT is the largest corpus for scientific entity mention detection. The corpus's scale and diversity are instrumental in developing and refining models for tasks such as indexing scientific papers, enhancing information retrieval, and improving the accessibility of scientific knowledge. We demonstrate the corpus's utility through experiments with advanced deep learning architectures like SciBERT and GPT-3.5. Our findings establish performance baselines and highlight unresolved challenges in scientific mention detection. SciDMT serves as a robust benchmark for the research community, encouraging the development of innovative models to further the field of scientific information extraction.
[ { "created": "Thu, 20 Jun 2024 22:03:21 GMT", "version": "v1" } ]
2024-06-24
[ [ "Pan", "Huitong", "" ], [ "Zhang", "Qi", "" ], [ "Caragea", "Cornelia", "" ], [ "Dragut", "Eduard", "" ], [ "Latecki", "Longin Jan", "" ] ]
We present SciDMT, an enhanced and expanded corpus for scientific mention detection, offering a significant advancement over existing related resources. SciDMT contains annotated scientific documents for datasets (D), methods (M), and tasks (T). The corpus consists of two components: 1) the SciDMT main corpus, which includes 48 thousand scientific articles with over 1.8 million weakly annotated mentions in the form of in-text spans, and 2) an evaluation set, which comprises 100 scientific articles manually annotated for evaluation purposes. To the best of our knowledge, SciDMT is the largest corpus for scientific entity mention detection. The corpus's scale and diversity are instrumental in developing and refining models for tasks such as indexing scientific papers, enhancing information retrieval, and improving the accessibility of scientific knowledge. We demonstrate the corpus's utility through experiments with advanced deep learning architectures like SciBERT and GPT-3.5. Our findings establish performance baselines and highlight unresolved challenges in scientific mention detection. SciDMT serves as a robust benchmark for the research community, encouraging the development of innovative models to further the field of scientific information extraction.
2406.17952
Andrew Dennehy
Andrew Dennehy, Xiaoyu Zou, Shabnam J. Semnani, Yuri Fialko, Alexander Cloninger
LINSCAN -- A Linearity Based Clustering Algorithm
null
null
null
null
cs.LG cs.CG
http://creativecommons.org/licenses/by/4.0/
DBSCAN and OPTICS are powerful algorithms for identifying clusters of points in domains where few assumptions can be made about the structure of the data. In this paper, we leverage these strengths and introduce a new algorithm, LINSCAN, designed to seek lineated clusters that are difficult to find and isolate with existing methods. In particular, by embedding points as normal distributions approximating their local neighborhoods and leveraging a distance function derived from the Kullback Leibler Divergence, LINSCAN can detect and distinguish lineated clusters that are spatially close but have orthogonal covariances. We demonstrate how LINSCAN can be applied to seismic data to identify active faults, including intersecting faults, and determine their orientation. Finally, we discuss the properties a generalization of DBSCAN and OPTICS must have in order to retain the stability benefits of these algorithms.
[ { "created": "Tue, 25 Jun 2024 21:58:37 GMT", "version": "v1" } ]
2024-06-27
[ [ "Dennehy", "Andrew", "" ], [ "Zou", "Xiaoyu", "" ], [ "Semnani", "Shabnam J.", "" ], [ "Fialko", "Yuri", "" ], [ "Cloninger", "Alexander", "" ] ]
DBSCAN and OPTICS are powerful algorithms for identifying clusters of points in domains where few assumptions can be made about the structure of the data. In this paper, we leverage these strengths and introduce a new algorithm, LINSCAN, designed to seek lineated clusters that are difficult to find and isolate with existing methods. In particular, by embedding points as normal distributions approximating their local neighborhoods and leveraging a distance function derived from the Kullback Leibler Divergence, LINSCAN can detect and distinguish lineated clusters that are spatially close but have orthogonal covariances. We demonstrate how LINSCAN can be applied to seismic data to identify active faults, including intersecting faults, and determine their orientation. Finally, we discuss the properties a generalization of DBSCAN and OPTICS must have in order to retain the stability benefits of these algorithms.
1610.02526
Arash Shaghaghi
Arash Shaghaghi and Mohamed Ali (Dali) Kaafar and Sandra Scott-Hayward and Salil S. Kanhere and Sanjay Jha
Towards Policy Enforcement Point as a Service (PEPS)
This is a copy of the paper accepted at IEEE NFV-SDN'16. An extended work based on this paper will be submitted to a journal
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we coin the term Policy Enforcement as a Service (PEPS), which enables the provision of innovative inter-layer and inter-domain Access Control. We leverage the architecture of Software-Defined-Network (SDN) to introduce a common network-level enforcement point, which is made available to a range of access control systems. With our PEPS model, it is possible to have a `defense in depth' protection model and drop unsuccessful access requests before engaging the data provider (e.g. a database system). Moreover, the current implementation of access control within the `trusted' perimeter of an organization is no longer a restriction so that the potential for novel, distributed and cooperative security services can be realized. We conduct an analysis of the security requirements and technical challenges for implementing Policy Enforcement as a Service. To illustrate the benefits of our proposal in practice, we include a report on our prototype PEPS-enabled location-based access control.
[ { "created": "Sat, 8 Oct 2016 13:09:47 GMT", "version": "v1" } ]
2016-10-11
[ [ "Shaghaghi", "Arash", "" ], [ "Kaafar", "Mohamed Ali (Dali)", "" ], [ "Scott-Hayward", "Sandra", "" ], [ "Kanhere", "Salil S.", "" ], [ "Jha", "Sanjay", "" ] ]
In this paper, we coin the term Policy Enforcement as a Service (PEPS), which enables the provision of innovative inter-layer and inter-domain Access Control. We leverage the architecture of Software-Defined-Network (SDN) to introduce a common network-level enforcement point, which is made available to a range of access control systems. With our PEPS model, it is possible to have a `defense in depth' protection model and drop unsuccessful access requests before engaging the data provider (e.g. a database system). Moreover, the current implementation of access control within the `trusted' perimeter of an organization is no longer a restriction so that the potential for novel, distributed and cooperative security services can be realized. We conduct an analysis of the security requirements and technical challenges for implementing Policy Enforcement as a Service. To illustrate the benefits of our proposal in practice, we include a report on our prototype PEPS-enabled location-based access control.
2406.17424
Shinwoo An
Shinwoo An, Eunjin Oh and Jie Xue
Sparse Outerstring Graphs Have Logarithmic Treewidth
17pages, In ESA'24
null
null
null
cs.CG cs.DS
http://creativecommons.org/licenses/by/4.0/
An outerstring graph is the intersection graph of curves lying inside a disk with one endpoint on the boundary of the disk. We show that an outerstring graph with $n$ vertices has treewidth $O(\alpha\log n)$, where $\alpha$ denotes the arboricity of the graph, with an almost matching lower bound of $\Omega(\alpha \log (n/\alpha))$. As a corollary, we show that a $t$-biclique-free outerstring graph has treewidth $O(t(\log t)\log n)$. This leads to polynomial-time algorithms for most of the central NP-complete problems such as \textsc{Independent Set}, \textsc{Vertex Cover}, \textsc{Dominating Set}, \textsc{Feedback Vertex Set}, \textsc{Coloring} for sparse outerstring graphs. Also, we can obtain subexponential-time (exact, parameterized, and approximation) algorithms for various NP-complete problems such as \textsc{Vertex Cover}, \textsc{Feedback Vertex Set} and \textsc{Cycle Packing} for (not necessarily sparse) outerstring graphs.
[ { "created": "Tue, 25 Jun 2024 09:59:24 GMT", "version": "v1" } ]
2024-06-26
[ [ "An", "Shinwoo", "" ], [ "Oh", "Eunjin", "" ], [ "Xue", "Jie", "" ] ]
An outerstring graph is the intersection graph of curves lying inside a disk with one endpoint on the boundary of the disk. We show that an outerstring graph with $n$ vertices has treewidth $O(\alpha\log n)$, where $\alpha$ denotes the arboricity of the graph, with an almost matching lower bound of $\Omega(\alpha \log (n/\alpha))$. As a corollary, we show that a $t$-biclique-free outerstring graph has treewidth $O(t(\log t)\log n)$. This leads to polynomial-time algorithms for most of the central NP-complete problems such as \textsc{Independent Set}, \textsc{Vertex Cover}, \textsc{Dominating Set}, \textsc{Feedback Vertex Set}, \textsc{Coloring} for sparse outerstring graphs. Also, we can obtain subexponential-time (exact, parameterized, and approximation) algorithms for various NP-complete problems such as \textsc{Vertex Cover}, \textsc{Feedback Vertex Set} and \textsc{Cycle Packing} for (not necessarily sparse) outerstring graphs.
1512.09032
Shankara Narayanan Krishna
Khushraj Madnani, Shankara Narayanan Krishna and Paritosh Pandya
Metric Temporal Logic with Counting
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to count the number of occurrences of events within a specified time interval is very useful in the specification of resource-bounded real-time computation. In this paper, we study an extension of Metric Temporal Logic ($\mathsf{MTL}$) with two different counting modalities called $\mathsf{C}$ and $\mathsf{UT}$ (until with threshold), which enhance the expressive power of $\mathsf{MTL}$ in an orthogonal fashion. We confine ourselves only to the future fragment of $\mathsf{MTL}$ interpreted in a pointwise manner over finite timed words. We provide a comprehensive study of the expressive power of the logic $\mathsf{CTMTL}$ and its fragments using the technique of EF games extended with suitable counting moves. Finally, as our main result, we establish the decidability of $\mathsf{CTMTL}$ by giving an equisatisfiable reduction from $\mathsf{CTMTL}$ to $\mathsf{MTL}$. The reduction provides one more example of the use of temporal projections with oversampling introduced earlier for proving decidability. Our reduction also implies that $\mathsf{MITL}$ extended with $\mathsf{C}$ and $\mathsf{UT}$ modalities is elementarily decidable.
[ { "created": "Wed, 30 Dec 2015 17:42:14 GMT", "version": "v1" } ]
2015-12-31
[ [ "Madnani", "Khushraj", "" ], [ "Krishna", "Shankara Narayanan", "" ], [ "Pandya", "Paritosh", "" ] ]
The ability to count the number of occurrences of events within a specified time interval is very useful in the specification of resource-bounded real-time computation. In this paper, we study an extension of Metric Temporal Logic ($\mathsf{MTL}$) with two different counting modalities called $\mathsf{C}$ and $\mathsf{UT}$ (until with threshold), which enhance the expressive power of $\mathsf{MTL}$ in an orthogonal fashion. We confine ourselves only to the future fragment of $\mathsf{MTL}$ interpreted in a pointwise manner over finite timed words. We provide a comprehensive study of the expressive power of the logic $\mathsf{CTMTL}$ and its fragments using the technique of EF games extended with suitable counting moves. Finally, as our main result, we establish the decidability of $\mathsf{CTMTL}$ by giving an equisatisfiable reduction from $\mathsf{CTMTL}$ to $\mathsf{MTL}$. The reduction provides one more example of the use of temporal projections with oversampling introduced earlier for proving decidability. Our reduction also implies that $\mathsf{MITL}$ extended with $\mathsf{C}$ and $\mathsf{UT}$ modalities is elementarily decidable.
2403.11728
Kevin R\"osch
Johannes Fischer, Kevin R\"osch, Martin Lauer, Christoph Stiller
PITA: Physics-Informed Trajectory Autoencoder
null
null
null
null
cs.LG cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Validating robotic systems in safety-critical applications requires testing in many scenarios including rare edge cases that are unlikely to occur, requiring to complement real-world testing with testing in simulation. Generative models can be used to augment real-world datasets with generated data to produce edge case scenarios by sampling in a learned latent space. Autoencoders can learn said latent representation for a specific domain by learning to reconstruct the input data from a lower-dimensional intermediate representation. However, the resulting trajectories are not necessarily physically plausible, but instead typically contain noise that is not present in the input trajectory. To resolve this issue, we propose the novel Physics-Informed Trajectory Autoencoder (PITA) architecture, which incorporates a physical dynamics model into the loss function of the autoencoder. This results in smooth trajectories that not only reconstruct the input trajectory but also adhere to the physical model. We evaluate PITA on a real-world dataset of vehicle trajectories and compare its performance to a normal autoencoder and a state-of-the-art action-space autoencoder.
[ { "created": "Mon, 18 Mar 2024 12:37:41 GMT", "version": "v1" } ]
2024-03-19
[ [ "Fischer", "Johannes", "" ], [ "Rösch", "Kevin", "" ], [ "Lauer", "Martin", "" ], [ "Stiller", "Christoph", "" ] ]
Validating robotic systems in safety-critical applications requires testing in many scenarios including rare edge cases that are unlikely to occur, requiring to complement real-world testing with testing in simulation. Generative models can be used to augment real-world datasets with generated data to produce edge case scenarios by sampling in a learned latent space. Autoencoders can learn said latent representation for a specific domain by learning to reconstruct the input data from a lower-dimensional intermediate representation. However, the resulting trajectories are not necessarily physically plausible, but instead typically contain noise that is not present in the input trajectory. To resolve this issue, we propose the novel Physics-Informed Trajectory Autoencoder (PITA) architecture, which incorporates a physical dynamics model into the loss function of the autoencoder. This results in smooth trajectories that not only reconstruct the input trajectory but also adhere to the physical model. We evaluate PITA on a real-world dataset of vehicle trajectories and compare its performance to a normal autoencoder and a state-of-the-art action-space autoencoder.
2405.02022
Michael Baddeley Dr
Burhanuddin Rangwala, Ava Powelson, Michael Baddeley and Israat Haque
STX-Vote: Improving Reliability with Bit Voting in Synchronous Transmission-based IoT Networks
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Industrial Internet of Things (IIoT) networks must meet strict reliability, latency, and low energy consumption requirements. However, traditional low-power wireless protocols are ineffective in finding a sweet spot for balancing these performance metrics. Recently, network flooding protocols based on Synchronous Transmissions (STX) have been proposed for better performance in reliability-critical IIoT, where simultaneous transmissions are possible without packet collisions. STX-based protocols can offer a competitive edge over routing-based protocols, particularly in dependability. However, they notably suffer from the beating effect, a physical layer phenomenon that results in sinusoidal interference across a packet and, consequently, packet loss. Thus, we introduce STX-Vote, an error correction scheme that can handle errors caused by beating effects. Importantly, we utilize transmission redundancy already inherent within STX protocols, so we do not incur additional on-air overhead. Through simulation, we demonstrate STX-Vote can provide a 40% increase in reliability. We subsequently implement STX-Vote on nRF52840-DK devices and perform extensive experiments. The results confirm that STX-Vote improves reliability by 25-28% for BLE 5 PHYs and 8% for IEEE 802.15.4; thus, it can complement existing error correction schemes.
[ { "created": "Fri, 3 May 2024 11:54:16 GMT", "version": "v1" }, { "created": "Mon, 12 Aug 2024 06:12:22 GMT", "version": "v2" }, { "created": "Tue, 13 Aug 2024 07:39:24 GMT", "version": "v3" } ]
2024-08-14
[ [ "Rangwala", "Burhanuddin", "" ], [ "Powelson", "Ava", "" ], [ "Baddeley", "Michael", "" ], [ "Haque", "Israat", "" ] ]
Industrial Internet of Things (IIoT) networks must meet strict reliability, latency, and low energy consumption requirements. However, traditional low-power wireless protocols are ineffective in finding a sweet spot for balancing these performance metrics. Recently, network flooding protocols based on Synchronous Transmissions (STX) have been proposed for better performance in reliability-critical IIoT, where simultaneous transmissions are possible without packet collisions. STX-based protocols can offer a competitive edge over routing-based protocols, particularly in dependability. However, they notably suffer from the beating effect, a physical layer phenomenon that results in sinusoidal interference across a packet and, consequently, packet loss. Thus, we introduce STX-Vote, an error correction scheme that can handle errors caused by beating effects. Importantly, we utilize transmission redundancy already inherent within STX protocols, so we do not incur additional on-air overhead. Through simulation, we demonstrate STX-Vote can provide a 40% increase in reliability. We subsequently implement STX-Vote on nRF52840-DK devices and perform extensive experiments. The results confirm that STX-Vote improves reliability by 25-28% for BLE 5 PHYs and 8% for IEEE 802.15.4; thus, it can complement existing error correction schemes.
1203.5689
Philipp Schaer
Philipp Schaer, Thomas L\"uke, Wilko van Hoek
Building Custom Term Suggestion Web Services with OAI-Harvested Open Data
8 pages, 5 figures, presented at 2. DGI-Konferenz / 64. Jahrestagung der DGI, D\"usseldorf, Germany on 2012-03-23
Social Media und Web Science: Das Web als Lebensraum. 2. DGI-Konferenz / 64. Jahrestagung der DGI, D\"usseldorf, 22. bis 23. M\"arz 2012, Proceedings (2012), p. 389-396
null
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem that the same information need can be expressed in a variety of ways is especially true for scientific literature. Each scientific discipline has its own domain-specific language and vocabulary. This language is coded into documentary tools like thesauri or classifications that are used to document and describe scientific documents. When we think of information retrieval as "fundamentally a linguistic process" (Blair, 2003) users have to be aware of the most relevant search terms - which are the controlled thesauri terms the documents are described with. This can be achieved with so-called search-term-recommenders (STR) that map free search terms of a lay user to controlled vocabulary terms which can then be used as a term suggestion or to do an automatic query expansion (Hienert, Schaer, Schaible, & Mayr, 2011). State-of-the-art repository software systems like DSpace or EPrints already offer some kind of term suggestion features in search or input forms but these implementations only work as simple auto completion mechanisms that don't incorporate any kind of semantic mapping. Such software systems would gain a lot in terms of usability and data consistency if tools like the proposed domain-specific STRs would be freely available. We aim to implement a rich toolbox of web services (like the mentioned domain-specific STRs) to support users and providers of online Digital Library (DL) or repository systems.
[ { "created": "Mon, 26 Mar 2012 14:45:45 GMT", "version": "v1" } ]
2012-03-27
[ [ "Schaer", "Philipp", "" ], [ "Lüke", "Thomas", "" ], [ "van Hoek", "Wilko", "" ] ]
The problem that the same information need can be expressed in a variety of ways is especially true for scientific literature. Each scientific discipline has its own domain-specific language and vocabulary. This language is coded into documentary tools like thesauri or classifications that are used to document and describe scientific documents. When we think of information retrieval as "fundamentally a linguistic process" (Blair, 2003) users have to be aware of the most relevant search terms - which are the controlled thesauri terms the documents are described with. This can be achieved with so-called search-term-recommenders (STR) that map free search terms of a lay user to controlled vocabulary terms which can then be used as a term suggestion or to do an automatic query expansion (Hienert, Schaer, Schaible, & Mayr, 2011). State-of-the-art repository software systems like DSpace or EPrints already offer some kind of term suggestion features in search or input forms but these implementations only work as simple auto completion mechanisms that don't incorporate any kind of semantic mapping. Such software systems would gain a lot in terms of usability and data consistency if tools like the proposed domain-specific STRs would be freely available. We aim to implement a rich toolbox of web services (like the mentioned domain-specific STRs) to support users and providers of online Digital Library (DL) or repository systems.
2010.11786
Benjamin Ricaud
Benjamin Ricaud, Nicolas Aspert and Volodymyr Miz
Spikyball sampling: Exploring large networks via an inhomogeneous filtered diffusion
null
null
null
null
cs.SI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Studying real-world networks such as social networks or web networks is a challenge. These networks often combine a complex, highly connected structure together with a large size. We propose a new approach for large-scale networks that is able to automatically sample user-defined relevant parts of a network. Starting from a few selected places in the network and a reduced set of expansion rules, the method adopts a filtered breadth-first search approach that expands through edges and nodes matching these properties. Moreover, the expansion is performed over a random subset of neighbors at each step to further mitigate the overwhelming number of connections that may exist in large graphs. This carries the image of a "spiky" expansion. We show that this approach generalizes previous exploration sampling methods, such as Snowball or Forest Fire, and extends them. We demonstrate its ability to capture groups of nodes with high interactions while discarding weakly connected nodes that are often numerous in social networks and may hide important structures.
[ { "created": "Thu, 22 Oct 2020 15:01:13 GMT", "version": "v1" } ]
2020-10-23
[ [ "Ricaud", "Benjamin", "" ], [ "Aspert", "Nicolas", "" ], [ "Miz", "Volodymyr", "" ] ]
Studying real-world networks such as social networks or web networks is a challenge. These networks often combine a complex, highly connected structure together with a large size. We propose a new approach for large-scale networks that is able to automatically sample user-defined relevant parts of a network. Starting from a few selected places in the network and a reduced set of expansion rules, the method adopts a filtered breadth-first search approach that expands through edges and nodes matching these properties. Moreover, the expansion is performed over a random subset of neighbors at each step to further mitigate the overwhelming number of connections that may exist in large graphs. This carries the image of a "spiky" expansion. We show that this approach generalizes previous exploration sampling methods, such as Snowball or Forest Fire, and extends them. We demonstrate its ability to capture groups of nodes with high interactions while discarding weakly connected nodes that are often numerous in social networks and may hide important structures.
1809.03609
Mohamed Ibrahim
Mohamed R. Ibrahim, James Haworth, Tao Cheng
URBAN-i: From urban scenes to mapping slums, transport modes, and pedestrians in cities using deep learning and computer vision
12 pages, 9 figures
null
10.1177/2399808319846517
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Within the burgeoning expansion of deep learning and computer vision across the different fields of science, when it comes to urban development, deep learning and computer vision applications are still limited to the notions of smart cities and autonomous vehicles. Indeed, a wide gap of knowledge appears when it comes to cities and urban regions in less developed countries where the chaos of informality is the dominant scheme. How can deep learning and Artificial Intelligence (AI) untangle the complexities of informality to advance urban modelling and our understanding of cities? Various questions and debates can be raised concerning the future of cities of the North and the South in the paradigm of AI and computer vision. In this paper, we introduce a new method for multipurpose realistic-dynamic urban modelling relying on deep learning and computer vision, using deep Convolutional Neural Networks (CNN), to sense and detect informality and slums in urban scenes from aerial and street view images in addition to detection of pedestrian and transport modes. The model has been trained on images of urban scenes in cities across the globe. The model shows a good validation of understanding a wide spectrum of nuances among the planned and the unplanned regions, including informal and slum areas. We attempt to advance urban modelling for better understanding the dynamics of city developments. We also aim to exemplify the significant impacts of AI in cities beyond how smart cities are discussed and perceived in the mainstream. The algorithms of the URBAN-i model are fully coded in Python with the pre-trained deep learning models to be used as a tool for mapping and city modelling in the various corners of the globe, including informal settlements and slum regions.
[ { "created": "Mon, 10 Sep 2018 21:49:38 GMT", "version": "v1" } ]
2019-10-23
[ [ "Ibrahim", "Mohamed R.", "" ], [ "Haworth", "James", "" ], [ "Cheng", "Tao", "" ] ]
Within the burgeoning expansion of deep learning and computer vision across the different fields of science, when it comes to urban development, deep learning and computer vision applications are still limited to the notions of smart cities and autonomous vehicles. Indeed, a wide gap of knowledge appears when it comes to cities and urban regions in less developed countries where the chaos of informality is the dominant scheme. How can deep learning and Artificial Intelligence (AI) untangle the complexities of informality to advance urban modelling and our understanding of cities? Various questions and debates can be raised concerning the future of cities of the North and the South in the paradigm of AI and computer vision. In this paper, we introduce a new method for multipurpose realistic-dynamic urban modelling relying on deep learning and computer vision, using deep Convolutional Neural Networks (CNN), to sense and detect informality and slums in urban scenes from aerial and street view images in addition to detection of pedestrian and transport modes. The model has been trained on images of urban scenes in cities across the globe. The model shows a good validation of understanding a wide spectrum of nuances among the planned and the unplanned regions, including informal and slum areas. We attempt to advance urban modelling for better understanding the dynamics of city developments. We also aim to exemplify the significant impacts of AI in cities beyond how smart cities are discussed and perceived in the mainstream. The algorithms of the URBAN-i model are fully coded in Python with the pre-trained deep learning models to be used as a tool for mapping and city modelling in the various corners of the globe, including informal settlements and slum regions.
2306.11593
Luigi Celona
Simone Bianco and Luigi Celona and Marco Donzella and Paolo Napoletano
Improving Image Captioning Descriptiveness by Ranking and LLM-based Fusion
null
null
null
null
cs.CV cs.AI cs.CL cs.DB cs.LG
http://creativecommons.org/licenses/by/4.0/
State-of-The-Art (SoTA) image captioning models often rely on the Microsoft COCO (MS-COCO) dataset for training. This dataset contains annotations provided by human annotators, who typically produce captions averaging around ten tokens. However, this constraint presents a challenge in effectively capturing complex scenes and conveying detailed information. Furthermore, captioning models tend to exhibit bias towards the ``average'' caption, which captures only the more general aspects. What would happen if we were able to automatically generate longer captions, thereby making them more detailed? Would these captions, evaluated by humans, be more or less representative of the image content compared to the original MS-COCO captions? In this paper, we present a novel approach to address previous challenges by showcasing how captions generated from different SoTA models can be effectively fused, resulting in richer captions. Our proposed method leverages existing models from the literature, eliminating the need for additional training. Instead, it utilizes an image-text based metric to rank the captions generated by SoTA models for a given image. Subsequently, the top two captions are fused using a Large Language Model (LLM). Experimental results demonstrate the effectiveness of our approach, as the captions generated by our model exhibit higher consistency with human judgment when evaluated on the MS-COCO test set. By combining the strengths of various SoTA models, our method enhances the quality and appeal of image captions, bridging the gap between automated systems and the rich, informative nature of human-generated descriptions. This advance opens up new possibilities for generating captions that are more suitable for the training of both vision-language and captioning models.
[ { "created": "Tue, 20 Jun 2023 15:13:02 GMT", "version": "v1" } ]
2023-06-21
[ [ "Bianco", "Simone", "" ], [ "Celona", "Luigi", "" ], [ "Donzella", "Marco", "" ], [ "Napoletano", "Paolo", "" ] ]
State-of-The-Art (SoTA) image captioning models often rely on the Microsoft COCO (MS-COCO) dataset for training. This dataset contains annotations provided by human annotators, who typically produce captions averaging around ten tokens. However, this constraint presents a challenge in effectively capturing complex scenes and conveying detailed information. Furthermore, captioning models tend to exhibit bias towards the ``average'' caption, which captures only the more general aspects. What would happen if we were able to automatically generate longer captions, thereby making them more detailed? Would these captions, evaluated by humans, be more or less representative of the image content compared to the original MS-COCO captions? In this paper, we present a novel approach to address previous challenges by showcasing how captions generated from different SoTA models can be effectively fused, resulting in richer captions. Our proposed method leverages existing models from the literature, eliminating the need for additional training. Instead, it utilizes an image-text based metric to rank the captions generated by SoTA models for a given image. Subsequently, the top two captions are fused using a Large Language Model (LLM). Experimental results demonstrate the effectiveness of our approach, as the captions generated by our model exhibit higher consistency with human judgment when evaluated on the MS-COCO test set. By combining the strengths of various SoTA models, our method enhances the quality and appeal of image captions, bridging the gap between automated systems and the rich, informative nature of human-generated descriptions. This advance opens up new possibilities for generating captions that are more suitable for the training of both vision-language and captioning models.
2111.14994
Niki Hrovatin
Niki Hrovatin, Aleksandar To\v{s}i\'c, Michael Mrissa and Jernej Vi\v{c}i\v{c}
A General Purpose Data and Query Privacy Preserving Protocol for Wireless Sensor Networks
Submitted to IEEE IoT Journal, 18 pages, 16 figures
null
null
null
cs.CR cs.NI
http://creativecommons.org/licenses/by/4.0/
Wireless Sensor Networks (WSNs) are composed of a large number of spatially distributed devices equipped with sensing technology and interlinked via radio signaling. A WSN deployed for monitoring purposes can provide a ubiquitous view over the monitored environment. However, the management of collected data is very resource-consuming and raises security and privacy issues. In this paper, we propose a privacy preserving protocol for collecting aggregated data from WSNs. The protocol relies on the Onion Routing technique to provide uniformly distributed network traffic and confine the knowledge a foreign actor can gain from monitoring messages traveling the network. Our solution employs the computing power of nodes in the network by conveying them general-purpose computer code for in-situ processing and aggregation of data sourcing from multiple sensor nodes. We complement our work with a simulation of the proposed solution using the network simulator ns-3. Results of the simulation give an overview of the scalability of the solution and highlight potential constraints.
[ { "created": "Mon, 29 Nov 2021 22:18:19 GMT", "version": "v1" } ]
2021-12-01
[ [ "Hrovatin", "Niki", "" ], [ "Tošić", "Aleksandar", "" ], [ "Mrissa", "Michael", "" ], [ "Vičič", "Jernej", "" ] ]
Wireless Sensor Networks (WSNs) are composed of a large number of spatially distributed devices equipped with sensing technology and interlinked via radio signaling. A WSN deployed for monitoring purposes can provide a ubiquitous view over the monitored environment. However, the management of collected data is very resource-consuming and raises security and privacy issues. In this paper, we propose a privacy preserving protocol for collecting aggregated data from WSNs. The protocol relies on the Onion Routing technique to provide uniformly distributed network traffic and confine the knowledge a foreign actor can gain from monitoring messages traveling the network. Our solution employs the computing power of nodes in the network by conveying them general-purpose computer code for in-situ processing and aggregation of data sourcing from multiple sensor nodes. We complement our work with a simulation of the proposed solution using the network simulator ns-3. Results of the simulation give an overview of the scalability of the solution and highlight potential constraints.
2103.06819
Jieren Deng
Jieren Deng, Yijue Wang, Ji Li, Chao Shang, Hang Liu, Sanguthevar Rajasekaran and Caiwen Ding
TAG: Gradient Attack on Transformer-based Language Models
Accepted to Findings of EMNLP 2021
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although federated learning has increasingly gained attention for effectively utilizing local devices to enhance data privacy, recent studies show that publicly shared gradients in the training process can reveal private training images (gradient leakage) to a third party in computer vision. We have, however, no systematic understanding of the gradient leakage mechanism in Transformer-based language models. In this paper, as a first attempt, we formulate the gradient attack problem on Transformer-based language models and propose a gradient attack algorithm, TAG, to reconstruct the local training data. We develop a set of metrics to quantitatively evaluate the effectiveness of the proposed attack algorithm. Experimental results on Transformer, TinyBERT$_{4}$, TinyBERT$_{6}$, BERT$_{BASE}$, and BERT$_{LARGE}$ using the GLUE benchmark show that TAG works well across more weight distributions in reconstructing training data and achieves a 1.5$\times$ recovery rate and 2.5$\times$ ROUGE-2 over prior methods without the need for ground-truth labels. TAG can obtain up to 90$\%$ of the data by attacking gradients on the CoLA dataset. In addition, TAG is a stronger adversary on large models, small dictionary sizes, and small input lengths. We hope the proposed TAG will shed some light on the privacy leakage problem in Transformer-based NLP models.
[ { "created": "Thu, 11 Mar 2021 17:41:32 GMT", "version": "v1" }, { "created": "Mon, 15 Mar 2021 03:08:57 GMT", "version": "v2" }, { "created": "Tue, 16 Mar 2021 20:51:19 GMT", "version": "v3" }, { "created": "Wed, 21 Apr 2021 04:04:18 GMT", "version": "v4" }, { "created": "Fri, 10 Sep 2021 02:23:35 GMT", "version": "v5" }, { "created": "Tue, 21 Sep 2021 17:58:26 GMT", "version": "v6" } ]
2021-09-22
[ [ "Deng", "Jieren", "" ], [ "Wang", "Yijue", "" ], [ "Li", "Ji", "" ], [ "Shang", "Chao", "" ], [ "Liu", "Hang", "" ], [ "Rajasekaran", "Sanguthevar", "" ], [ "Ding", "Caiwen", "" ] ]
Although federated learning has increasingly gained attention for effectively utilizing local devices to enhance data privacy, recent studies show that publicly shared gradients in the training process can reveal private training images (gradient leakage) to a third party in computer vision. We have, however, no systematic understanding of the gradient leakage mechanism in Transformer-based language models. In this paper, as a first attempt, we formulate the gradient attack problem on Transformer-based language models and propose a gradient attack algorithm, TAG, to reconstruct the local training data. We develop a set of metrics to quantitatively evaluate the effectiveness of the proposed attack algorithm. Experimental results on Transformer, TinyBERT$_{4}$, TinyBERT$_{6}$, BERT$_{BASE}$, and BERT$_{LARGE}$ using the GLUE benchmark show that TAG works well across more weight distributions in reconstructing training data and achieves a 1.5$\times$ recovery rate and 2.5$\times$ ROUGE-2 over prior methods without the need for ground-truth labels. TAG can obtain up to 90$\%$ of the data by attacking gradients on the CoLA dataset. In addition, TAG is a stronger adversary on large models, small dictionary sizes, and small input lengths. We hope the proposed TAG will shed some light on the privacy leakage problem in Transformer-based NLP models.
2307.05694
Dr. Mohammed Javed
Anurag Dhote and Mohammed Javed and David S Doermann
A Survey on Figure Classification Techniques in Scientific Documents
Some contents of this paper appears in the accepted paper - "A Survey and Approach to Chart Classification" at 15th IAPR GREC 2023 at 17th ICDAR 2023, August 21-26, San Jose, USA. arXiv admin note: text overlap with arXiv:2307.04147
null
null
null
cs.IR cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Figures visually represent an essential piece of information and provide an effective means to communicate scientific facts. Recently there have been many efforts toward extracting data directly from figures, specifically from tables, diagrams, and plots, using different Artificial Intelligence and Machine Learning techniques. This is because extracting information from figures could lead to deeper insights into the concepts highlighted in scientific documents. In this survey paper, we systematically categorize figures into five classes - tables, photos, diagrams, maps, and plots - and subsequently present a critical review of the existing methodologies and data sets that address the problem of figure classification. Finally, we identify the current research gaps and provide possible directions for further research on figure classification.
[ { "created": "Sun, 9 Jul 2023 10:55:11 GMT", "version": "v1" } ]
2023-07-13
[ [ "Dhote", "Anurag", "" ], [ "Javed", "Mohammed", "" ], [ "Doermann", "David S", "" ] ]
Figures visually represent an essential piece of information and provide an effective means to communicate scientific facts. Recently there have been many efforts toward extracting data directly from figures, specifically from tables, diagrams, and plots, using different Artificial Intelligence and Machine Learning techniques. This is because extracting information from figures could lead to deeper insights into the concepts highlighted in scientific documents. In this survey paper, we systematically categorize figures into five classes - tables, photos, diagrams, maps, and plots - and subsequently present a critical review of the existing methodologies and data sets that address the problem of figure classification. Finally, we identify the current research gaps and provide possible directions for further research on figure classification.
2306.00148
Wei Xiao
Wei Xiao and Tsun-Hsuan Wang and Chuang Gan and Daniela Rus
SafeDiffuser: Safe Planning with Diffusion Probabilistic Models
19 pages, website: https://safediffuser.github.io/safediffuser/
null
null
null
cs.LG cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion model-based approaches have shown promise in data-driven planning, but they offer no safety guarantees, making them hard to apply to safety-critical applications. To address these challenges, we propose a new method, called SafeDiffuser, to ensure diffusion probabilistic models satisfy specifications by using a class of control barrier functions. The key idea of our approach is to embed the proposed finite-time diffusion invariance into the denoising diffusion procedure, which enables trustworthy diffusion data generation. Moreover, we demonstrate that our finite-time diffusion invariance method through generative models not only maintains generalization performance but also creates robustness in safe data generation. We test our method on a series of safe planning tasks, including maze path generation, legged robot locomotion, and 3D space manipulation, with results showing the advantages of robustness and guarantees over vanilla diffusion models.
[ { "created": "Wed, 31 May 2023 19:38:12 GMT", "version": "v1" } ]
2023-06-02
[ [ "Xiao", "Wei", "" ], [ "Wang", "Tsun-Hsuan", "" ], [ "Gan", "Chuang", "" ], [ "Rus", "Daniela", "" ] ]
Diffusion model-based approaches have shown promise in data-driven planning, but they offer no safety guarantees, making them hard to apply to safety-critical applications. To address these challenges, we propose a new method, called SafeDiffuser, to ensure diffusion probabilistic models satisfy specifications by using a class of control barrier functions. The key idea of our approach is to embed the proposed finite-time diffusion invariance into the denoising diffusion procedure, which enables trustworthy diffusion data generation. Moreover, we demonstrate that our finite-time diffusion invariance method through generative models not only maintains generalization performance but also creates robustness in safe data generation. We test our method on a series of safe planning tasks, including maze path generation, legged robot locomotion, and 3D space manipulation, with results showing the advantages of robustness and guarantees over vanilla diffusion models.
2206.00229
Wisdom Agboh
Wisdom C. Agboh, Jeffrey Ichnowski, Ken Goldberg, Mehmet R. Dogar
Multi-Object Grasping in the Plane
Accepted to the International Symposium on Robotics Research (ISRR), 2022
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
We consider a novel problem where multiple rigid convex polygonal objects rest in randomly placed positions and orientations on a planar surface visible from an overhead camera. The objective is to efficiently grasp and transport all objects into a bin using multi-object push-grasps, where multiple objects are pushed together to facilitate multi-object grasping. We provide necessary conditions for frictionless multi-object push-grasps and apply these to filter inadmissible grasps in a novel multi-object grasp planner. We find that our planner is 19 times faster than a Mujoco simulator baseline. We also propose a picking algorithm that uses both single- and multi-object grasps to pick objects. In physical grasping experiments comparing performance with a single-object picking baseline, we find that the frictionless multi-object grasping system achieves 13.6\% higher grasp success and is 59.9\% faster, from 212 PPH to 340 PPH. See \url{https://sites.google.com/view/multi-object-grasping} for videos and code.
[ { "created": "Wed, 1 Jun 2022 04:40:45 GMT", "version": "v1" }, { "created": "Wed, 21 Sep 2022 16:51:42 GMT", "version": "v2" } ]
2022-09-22
[ [ "Agboh", "Wisdom C.", "" ], [ "Ichnowski", "Jeffrey", "" ], [ "Goldberg", "Ken", "" ], [ "Dogar", "Mehmet R.", "" ] ]
We consider a novel problem where multiple rigid convex polygonal objects rest in randomly placed positions and orientations on a planar surface visible from an overhead camera. The objective is to efficiently grasp and transport all objects into a bin using multi-object push-grasps, where multiple objects are pushed together to facilitate multi-object grasping. We provide necessary conditions for frictionless multi-object push-grasps and apply these to filter inadmissible grasps in a novel multi-object grasp planner. We find that our planner is 19 times faster than a Mujoco simulator baseline. We also propose a picking algorithm that uses both single- and multi-object grasps to pick objects. In physical grasping experiments comparing performance with a single-object picking baseline, we find that the frictionless multi-object grasping system achieves 13.6\% higher grasp success and is 59.9\% faster, from 212 PPH to 340 PPH. See \url{https://sites.google.com/view/multi-object-grasping} for videos and code.
2106.03373
Yiding Liu Dr.
Yiding Liu, Guan Huang, Jiaxiang Liu, Weixue Lu, Suqi Cheng, Yukun Li, Daiting Shi, Shuaiqiang Wang, Zhicong Cheng, Dawei Yin
Pre-trained Language Model for Web-scale Retrieval in Baidu Search
Accepted by KDD 2021
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Retrieval is a crucial stage in web search that identifies a small set of query-relevant candidates from a billion-scale corpus. Discovering more semantically-related candidates in the retrieval stage is very promising for exposing more high-quality results to end users. However, building and deploying effective retrieval models for semantic matching in a real search engine remains a non-trivial challenge. In this paper, we describe the retrieval system that we developed and deployed in Baidu Search. The system exploits the recent state-of-the-art Chinese pretrained language model, namely Enhanced Representation through kNowledge IntEgration (ERNIE), which equips the system with expressive semantic matching. In particular, we developed an ERNIE-based retrieval model, which is equipped with 1) expressive Transformer-based semantic encoders, and 2) a comprehensive multi-stage training paradigm. More importantly, we present a practical system workflow for deploying the model in web-scale retrieval. Eventually, the system was fully deployed into production, where rigorous offline and online experiments were conducted. The results show that the system can perform high-quality candidate retrieval, especially for tail queries with uncommon demands. Overall, the new retrieval system facilitated by the pretrained language model (i.e., ERNIE) can largely improve the usability and applicability of our search engine.
[ { "created": "Mon, 7 Jun 2021 06:55:45 GMT", "version": "v1" }, { "created": "Fri, 25 Jun 2021 13:32:13 GMT", "version": "v2" }, { "created": "Wed, 30 Jun 2021 05:38:58 GMT", "version": "v3" }, { "created": "Sat, 16 Oct 2021 15:12:57 GMT", "version": "v4" } ]
2021-10-19
[ [ "Liu", "Yiding", "" ], [ "Huang", "Guan", "" ], [ "Liu", "Jiaxiang", "" ], [ "Lu", "Weixue", "" ], [ "Cheng", "Suqi", "" ], [ "Li", "Yukun", "" ], [ "Shi", "Daiting", "" ], [ "Wang", "Shuaiqiang", "" ], [ "Cheng", "Zhicong", "" ], [ "Yin", "Dawei", "" ] ]
Retrieval is a crucial stage in web search that identifies a small set of query-relevant candidates from a billion-scale corpus. Discovering more semantically-related candidates in the retrieval stage is very promising for exposing more high-quality results to end users. However, building and deploying effective retrieval models for semantic matching in a real search engine remains a non-trivial challenge. In this paper, we describe the retrieval system that we developed and deployed in Baidu Search. The system exploits the recent state-of-the-art Chinese pretrained language model, namely Enhanced Representation through kNowledge IntEgration (ERNIE), which equips the system with expressive semantic matching. In particular, we developed an ERNIE-based retrieval model, which is equipped with 1) expressive Transformer-based semantic encoders, and 2) a comprehensive multi-stage training paradigm. More importantly, we present a practical system workflow for deploying the model in web-scale retrieval. Eventually, the system was fully deployed into production, where rigorous offline and online experiments were conducted. The results show that the system can perform high-quality candidate retrieval, especially for tail queries with uncommon demands. Overall, the new retrieval system facilitated by the pretrained language model (i.e., ERNIE) can largely improve the usability and applicability of our search engine.
2310.17569
Xinghui Li Mr.
Xinghui Li, Jingyi Lu, Kai Han, Victor Prisacariu
SD4Match: Learning to Prompt Stable Diffusion Model for Semantic Matching
Accepted to CVPR 2024. Project website: https://sd4match.active.vision/
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we address the challenge of matching semantically similar keypoints across image pairs. Existing research indicates that the intermediate output of the UNet within the Stable Diffusion (SD) can serve as robust image feature maps for such a matching task. We demonstrate that by employing a basic prompt tuning technique, the inherent potential of Stable Diffusion can be harnessed, resulting in a significant enhancement in accuracy over previous approaches. We further introduce a novel conditional prompting module that conditions the prompt on the local details of the input image pairs, leading to a further improvement in performance. We designate our approach as SD4Match, short for Stable Diffusion for Semantic Matching. Comprehensive evaluations of SD4Match on the PF-Pascal, PF-Willow, and SPair-71k datasets show that it sets new benchmarks in accuracy across all these datasets. Particularly, SD4Match outperforms the previous state-of-the-art by a margin of 12 percentage points on the challenging SPair-71k dataset.
[ { "created": "Thu, 26 Oct 2023 16:58:01 GMT", "version": "v1" }, { "created": "Tue, 26 Mar 2024 11:52:23 GMT", "version": "v2" } ]
2024-03-27
[ [ "Li", "Xinghui", "" ], [ "Lu", "Jingyi", "" ], [ "Han", "Kai", "" ], [ "Prisacariu", "Victor", "" ] ]
In this paper, we address the challenge of matching semantically similar keypoints across image pairs. Existing research indicates that the intermediate output of the UNet within the Stable Diffusion (SD) can serve as robust image feature maps for such a matching task. We demonstrate that by employing a basic prompt tuning technique, the inherent potential of Stable Diffusion can be harnessed, resulting in a significant enhancement in accuracy over previous approaches. We further introduce a novel conditional prompting module that conditions the prompt on the local details of the input image pairs, leading to a further improvement in performance. We designate our approach as SD4Match, short for Stable Diffusion for Semantic Matching. Comprehensive evaluations of SD4Match on the PF-Pascal, PF-Willow, and SPair-71k datasets show that it sets new benchmarks in accuracy across all these datasets. Particularly, SD4Match outperforms the previous state-of-the-art by a margin of 12 percentage points on the challenging SPair-71k dataset.
2103.12328
Kazuma Kobayashi
Kazuma Kobayashi, Ryuichiro Hataya, Yusuke Kurose, Mototaka Miyake, Masamichi Takahashi, Akiko Nakagawa, Tatsuya Harada, Ryuji Hamamoto
Decomposing Normal and Abnormal Features of Medical Images into Discrete Latent Codes for Content-Based Image Retrieval
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In medical imaging, the characteristics purely derived from a disease should reflect the extent to which abnormal findings deviate from the normal features. Indeed, physicians often need corresponding images without abnormal findings of interest or, conversely, images that contain similar abnormal findings regardless of normal anatomical context. This is called comparative diagnostic reading of medical images, which is essential for a correct diagnosis. To support comparative diagnostic reading, content-based image retrieval (CBIR), which can selectively utilize normal and abnormal features in medical images as two separable semantic components, will be useful. Therefore, we propose a neural network architecture to decompose the semantic components of medical images into two latent codes: a normal anatomy code and an abnormal anatomy code. The normal anatomy code represents normal anatomies that should have existed if the sample were healthy, whereas the abnormal anatomy code represents abnormal changes that reflect deviations from the normal baseline. These latent codes are discretized through vector quantization to enable binary hashing, which can reduce the computational burden at the time of similarity search. By calculating the similarity based on either normal or abnormal anatomy codes or the combination of the two codes, our algorithm can retrieve images according to the selected semantic component from a dataset consisting of brain magnetic resonance images of gliomas. Our CBIR system qualitatively and quantitatively achieves remarkable results.
[ { "created": "Tue, 23 Mar 2021 05:53:53 GMT", "version": "v1" } ]
2021-03-24
[ [ "Kobayashi", "Kazuma", "" ], [ "Hataya", "Ryuichiro", "" ], [ "Kurose", "Yusuke", "" ], [ "Miyake", "Mototaka", "" ], [ "Takahashi", "Masamichi", "" ], [ "Nakagawa", "Akiko", "" ], [ "Harada", "Tatsuya", "" ], [ "Hamamoto", "Ryuji", "" ] ]
In medical imaging, the characteristics purely derived from a disease should reflect the extent to which abnormal findings deviate from the normal features. Indeed, physicians often need corresponding images without abnormal findings of interest or, conversely, images that contain similar abnormal findings regardless of normal anatomical context. This is called comparative diagnostic reading of medical images, which is essential for a correct diagnosis. To support comparative diagnostic reading, content-based image retrieval (CBIR), which can selectively utilize normal and abnormal features in medical images as two separable semantic components, will be useful. Therefore, we propose a neural network architecture to decompose the semantic components of medical images into two latent codes: a normal anatomy code and an abnormal anatomy code. The normal anatomy code represents normal anatomies that should have existed if the sample were healthy, whereas the abnormal anatomy code represents abnormal changes that reflect deviations from the normal baseline. These latent codes are discretized through vector quantization to enable binary hashing, which can reduce the computational burden at the time of similarity search. By calculating the similarity based on either normal or abnormal anatomy codes or the combination of the two codes, our algorithm can retrieve images according to the selected semantic component from a dataset consisting of brain magnetic resonance images of gliomas. Our CBIR system qualitatively and quantitatively achieves remarkable results.
2102.09202
Emir Demirel
Emir Demirel, Sven Ahlb\"ack, Simon Dixon
Low Resource Audio-to-Lyrics Alignment From Polyphonic Music Recordings
null
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lyrics alignment in long music recordings can exhaust memory when performed in a single pass. In this study, we present a novel method that performs audio-to-lyrics alignment with a low memory consumption footprint regardless of the duration of the music recording. The proposed system first spots the anchoring words within the audio signal. With respect to these anchors, the recording is then segmented and a second-pass alignment is performed to obtain the word timings. We show that our audio-to-lyrics alignment system performs competitively with the state of the art, while requiring far fewer computational resources. In addition, we utilise our lyrics alignment system to segment the music recordings into sentence-level chunks. Notably, on the segmented recordings, we report lyrics transcription scores on a number of benchmark test sets. Finally, our experiments highlight the importance of the source separation step for good performance on the transcription and alignment tasks. For reproducibility, we publicly share our code with the research community.
[ { "created": "Thu, 18 Feb 2021 07:54:56 GMT", "version": "v1" } ]
2021-02-19
[ [ "Demirel", "Emir", "" ], [ "Ahlbäck", "Sven", "" ], [ "Dixon", "Simon", "" ] ]
Lyrics alignment in long music recordings can exhaust memory when performed in a single pass. In this study, we present a novel method that performs audio-to-lyrics alignment with a low memory consumption footprint regardless of the duration of the music recording. The proposed system first spots the anchoring words within the audio signal. With respect to these anchors, the recording is then segmented and a second-pass alignment is performed to obtain the word timings. We show that our audio-to-lyrics alignment system performs competitively with the state of the art, while requiring far fewer computational resources. In addition, we utilise our lyrics alignment system to segment the music recordings into sentence-level chunks. Notably, on the segmented recordings, we report lyrics transcription scores on a number of benchmark test sets. Finally, our experiments highlight the importance of the source separation step for good performance on the transcription and alignment tasks. For reproducibility, we publicly share our code with the research community.
1401.2483
Andino Maseleno
Andino Maseleno and Md. Mahmud Hasan
Dempster-Shafer Theory for Move Prediction in Start Kicking of The Bicycle Kick of Sepak Takraw Game
Middle-East Journal of Scientific Research, Vol. 16, No. 7, 2013. ISSN 1990-9233, pp. 896 - 903
null
null
null
cs.AI
http://creativecommons.org/licenses/by/3.0/
This paper presents Dempster-Shafer theory for move prediction in the start kicking of the bicycle kick of the sepak takraw game. Sepak takraw is a highly complex net-barrier kicking sport that involves dazzling displays of quick reflexes, acrobatic twists, turns, and swerves of agile human body movement. A bicycle kick or scissor kick is a physical move made by throwing the body up into the air, making a shearing movement with the legs to get one leg in front of the other without holding on to the ground. Specifically, this paper considers the bicycle kick of the sepak takraw game in start kicking of the ball under uncertainty, where players have different awareness regarding the contingencies. We have chosen Dempster-Shafer theory because of its advantages, which include the ability to model information in a flexible way without requiring a probability to be assigned to each element in a set, a convenient and simple mechanism for combining two or more pieces of evidence under certain conditions, the ability to model ignorance explicitly, and the rejection of the law of additivity for belief in disjoint propositions.
[ { "created": "Fri, 10 Jan 2014 23:48:40 GMT", "version": "v1" } ]
2014-01-14
[ [ "Maseleno", "Andino", "" ], [ "Hasan", "Md. Mahmud", "" ] ]
This paper presents Dempster-Shafer theory for move prediction in the start kicking of the bicycle kick of the sepak takraw game. Sepak takraw is a highly complex net-barrier kicking sport that involves dazzling displays of quick reflexes, acrobatic twists, turns, and swerves of agile human body movement. A bicycle kick or scissor kick is a physical move made by throwing the body up into the air, making a shearing movement with the legs to get one leg in front of the other without holding on to the ground. Specifically, this paper considers the bicycle kick of the sepak takraw game in start kicking of the ball under uncertainty, where players have different awareness regarding the contingencies. We have chosen Dempster-Shafer theory because of its advantages, which include the ability to model information in a flexible way without requiring a probability to be assigned to each element in a set, a convenient and simple mechanism for combining two or more pieces of evidence under certain conditions, the ability to model ignorance explicitly, and the rejection of the law of additivity for belief in disjoint propositions.
1906.08462
Chongyi Li
Chongyi Li, Runmin Cong, Junhui Hou, Sanyi Zhang, Yue Qian, Sam Kwong
Nested Network with Two-Stream Pyramid for Salient Object Detection in Optical Remote Sensing Images
11 pages, 8 figures, has been accepted by TGRS
null
10.1109/TGRS.2019.2925070
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Arising from the various object types and scales, diverse imaging orientations, and cluttered backgrounds in optical remote sensing images (RSIs), it is difficult to directly extend the success of salient object detection for natural scene images to optical RSIs. In this paper, we propose an end-to-end deep network called LV-Net, named after the shape of its network architecture, which detects salient objects from optical RSIs in a purely data-driven fashion. The proposed LV-Net consists of two key modules, i.e., a two-stream pyramid module (L-shaped module) and an encoder-decoder module with nested connections (V-shaped module). Specifically, the L-shaped module hierarchically extracts a set of complementary information using a two-stream pyramid structure, which is beneficial for perceiving the diverse scales and local details of salient objects. The V-shaped module gradually integrates encoder detail features with decoder semantic features through nested connections, which aims at suppressing cluttered backgrounds and highlighting salient objects. In addition, we construct the first publicly available optical RSI dataset for salient object detection, including 800 images with varying spatial resolutions, diverse saliency types, and pixel-wise ground truth. Experiments on this benchmark dataset demonstrate that the proposed method outperforms state-of-the-art salient object detection methods both qualitatively and quantitatively.
[ { "created": "Thu, 20 Jun 2019 06:57:13 GMT", "version": "v1" } ]
2020-01-08
[ [ "Li", "Chongyi", "" ], [ "Cong", "Runmin", "" ], [ "Hou", "Junhui", "" ], [ "Zhang", "Sanyi", "" ], [ "Qian", "Yue", "" ], [ "Kwong", "Sam", "" ] ]
Arising from the various object types and scales, diverse imaging orientations, and cluttered backgrounds in optical remote sensing images (RSIs), it is difficult to directly extend the success of salient object detection for natural scene images to optical RSIs. In this paper, we propose an end-to-end deep network called LV-Net, named after the shape of its network architecture, which detects salient objects from optical RSIs in a purely data-driven fashion. The proposed LV-Net consists of two key modules, i.e., a two-stream pyramid module (L-shaped module) and an encoder-decoder module with nested connections (V-shaped module). Specifically, the L-shaped module extracts a set of complementary information hierarchically by using a two-stream pyramid structure, which is beneficial to perceiving the diverse scales and local details of salient objects. The V-shaped module gradually integrates encoder detail features with decoder semantic features through nested connections, which aims at suppressing the cluttered backgrounds and highlighting the salient objects. In addition, we construct the first publicly available optical RSI dataset for salient object detection, including 800 images with varying spatial resolutions, diverse saliency types, and pixel-wise ground truth. Experiments on this benchmark dataset demonstrate that the proposed method outperforms the state-of-the-art salient object detection methods both qualitatively and quantitatively.
1902.08646
Fabio Kepler
F\'abio Kepler, Jonay Tr\'enous, Marcos Treviso, Miguel Vera, Andr\'e F. T. Martins
OpenKiwi: An Open Source Framework for Quality Estimation
Published at the Annual Meeting of the Association for Computational Linguistics (ACL) 2019: System Demonstrations (https://aclweb.org/anthology/papers/P/P19/P19-3020/)
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce OpenKiwi, a PyTorch-based open source framework for translation quality estimation. OpenKiwi supports training and testing of word-level and sentence-level quality estimation systems, implementing the winning systems of the WMT 2015-18 quality estimation campaigns. We benchmark OpenKiwi on two datasets from WMT 2018 (English-German SMT and NMT), yielding state-of-the-art performance on the word-level tasks and near state-of-the-art in the sentence-level tasks.
[ { "created": "Fri, 22 Feb 2019 19:27:45 GMT", "version": "v1" }, { "created": "Mon, 26 Aug 2019 15:07:52 GMT", "version": "v2" } ]
2019-08-27
[ [ "Kepler", "Fábio", "" ], [ "Trénous", "Jonay", "" ], [ "Treviso", "Marcos", "" ], [ "Vera", "Miguel", "" ], [ "Martins", "André F. T.", "" ] ]
We introduce OpenKiwi, a PyTorch-based open source framework for translation quality estimation. OpenKiwi supports training and testing of word-level and sentence-level quality estimation systems, implementing the winning systems of the WMT 2015-18 quality estimation campaigns. We benchmark OpenKiwi on two datasets from WMT 2018 (English-German SMT and NMT), yielding state-of-the-art performance on the word-level tasks and near state-of-the-art in the sentence-level tasks.
2104.08542
Huifeng Guo
Huifeng Guo, Wei Guo, Yong Gao, Ruiming Tang, Xiuqiang He, Wenzhi Liu
ScaleFreeCTR: MixCache-based Distributed Training System for CTR Models with Huge Embedding Table
10 pages
null
null
null
cs.IR cs.AI cs.DC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Because of the superior feature representation ability of deep learning, various deep Click-Through Rate (CTR) models are deployed in commercial systems by industrial companies. To achieve better performance, it is necessary to train deep CTR models on huge volumes of training data efficiently, which makes speeding up the training process an essential problem. Different from models with dense training data, the training data for CTR models is usually high-dimensional and sparse. To transform the high-dimensional sparse input into low-dimensional dense real-value vectors, almost all deep CTR models adopt an embedding layer, which easily reaches hundreds of GB or even TB. Since a single GPU cannot accommodate all the embedding parameters, it is not feasible to rely on data parallelism alone when performing distributed training. Therefore, existing distributed training platforms for recommendation adopt model parallelism. Specifically, they use the CPU (Host) memory of servers to maintain and update the embedding parameters and utilize GPU workers to conduct forward and backward computations. Unfortunately, these platforms suffer from two bottlenecks: (1) the latency of pull \& push operations between Host and GPU; (2) parameter update and synchronization in the CPU servers. To address these bottlenecks, in this paper, we propose ScaleFreeCTR (SFCTR): a MixCache-based distributed training system for CTR models. Specifically, in SFCTR, we still store the huge embedding table in CPU memory but utilize GPU instead of CPU to conduct embedding synchronization efficiently. To reduce the latency of data transfer between both GPU-Host and GPU-GPU, the MixCache mechanism and Virtual Sparse Id operation are proposed. Comprehensive experiments and ablation studies are conducted to demonstrate the effectiveness and efficiency of SFCTR.
[ { "created": "Sat, 17 Apr 2021 13:36:19 GMT", "version": "v1" }, { "created": "Tue, 11 May 2021 14:11:46 GMT", "version": "v2" } ]
2021-05-12
[ [ "Guo", "Huifeng", "" ], [ "Guo", "Wei", "" ], [ "Gao", "Yong", "" ], [ "Tang", "Ruiming", "" ], [ "He", "Xiuqiang", "" ], [ "Liu", "Wenzhi", "" ] ]
Because of the superior feature representation ability of deep learning, various deep Click-Through Rate (CTR) models are deployed in commercial systems by industrial companies. To achieve better performance, it is necessary to train deep CTR models on huge volumes of training data efficiently, which makes speeding up the training process an essential problem. Different from models with dense training data, the training data for CTR models is usually high-dimensional and sparse. To transform the high-dimensional sparse input into low-dimensional dense real-value vectors, almost all deep CTR models adopt an embedding layer, which easily reaches hundreds of GB or even TB. Since a single GPU cannot accommodate all the embedding parameters, it is not feasible to rely on data parallelism alone when performing distributed training. Therefore, existing distributed training platforms for recommendation adopt model parallelism. Specifically, they use the CPU (Host) memory of servers to maintain and update the embedding parameters and utilize GPU workers to conduct forward and backward computations. Unfortunately, these platforms suffer from two bottlenecks: (1) the latency of pull \& push operations between Host and GPU; (2) parameter update and synchronization in the CPU servers. To address these bottlenecks, in this paper, we propose ScaleFreeCTR (SFCTR): a MixCache-based distributed training system for CTR models. Specifically, in SFCTR, we still store the huge embedding table in CPU memory but utilize GPU instead of CPU to conduct embedding synchronization efficiently. To reduce the latency of data transfer between both GPU-Host and GPU-GPU, the MixCache mechanism and Virtual Sparse Id operation are proposed. Comprehensive experiments and ablation studies are conducted to demonstrate the effectiveness and efficiency of SFCTR.
2401.15595
Manas Mhasakar
Manas Mhasakar, Shikhar Sharma, Apurv Mehra, Utkarsh Venaik, Ujjwal Singhal, Dhruv Kumar, Kashish Mittal
Comuniqa : Exploring Large Language Models for improving speaking skills
Accepted at 7th ACM SIGCAS/SIGCHI Conference of Computing and Sustainable Societies : ACM COMPASS 2024
null
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the potential of Large Language Models (LLMs) to improve English speaking skills. This is particularly relevant in countries like India, where English is crucial for academic, professional, and personal communication but remains a non-native language for many. Traditional methods for enhancing speaking skills often rely on human experts, which can be limited in terms of scalability, accessibility, and affordability. Recent advancements in Artificial Intelligence (AI) offer promising solutions to overcome these limitations. We propose Comuniqa, a novel LLM-based system designed to enhance English speaking skills. We adopt a human-centric evaluation approach, comparing Comuniqa with the feedback and instructions provided by human experts. In our evaluation, we divide the participants into three groups: those who use the LLM-based system for improving speaking skills, those guided by human experts for the same task, and those who utilize both the LLM-based system and the human experts. Using surveys, interviews, and actual study sessions, we provide a detailed perspective on the effectiveness of different learning modalities. Our preliminary findings suggest that while LLM-based systems have commendable accuracy, they lack human-level cognitive capabilities, both in terms of accuracy and empathy. Nevertheless, Comuniqa represents a significant step towards achieving Sustainable Development Goal 4: Quality Education, by providing a valuable learning tool for individuals who may not have access to human experts for improving their speaking skills.
[ { "created": "Sun, 28 Jan 2024 07:37:33 GMT", "version": "v1" }, { "created": "Wed, 3 Apr 2024 14:33:10 GMT", "version": "v2" }, { "created": "Tue, 14 May 2024 04:34:20 GMT", "version": "v3" } ]
2024-05-15
[ [ "Mhasakar", "Manas", "" ], [ "Sharma", "Shikhar", "" ], [ "Mehra", "Apurv", "" ], [ "Venaik", "Utkarsh", "" ], [ "Singhal", "Ujjwal", "" ], [ "Kumar", "Dhruv", "" ], [ "Mittal", "Kashish", "" ] ]
In this paper, we investigate the potential of Large Language Models (LLMs) to improve English speaking skills. This is particularly relevant in countries like India, where English is crucial for academic, professional, and personal communication but remains a non-native language for many. Traditional methods for enhancing speaking skills often rely on human experts, which can be limited in terms of scalability, accessibility, and affordability. Recent advancements in Artificial Intelligence (AI) offer promising solutions to overcome these limitations. We propose Comuniqa, a novel LLM-based system designed to enhance English speaking skills. We adopt a human-centric evaluation approach, comparing Comuniqa with the feedback and instructions provided by human experts. In our evaluation, we divide the participants into three groups: those who use the LLM-based system for improving speaking skills, those guided by human experts for the same task, and those who utilize both the LLM-based system and the human experts. Using surveys, interviews, and actual study sessions, we provide a detailed perspective on the effectiveness of different learning modalities. Our preliminary findings suggest that while LLM-based systems have commendable accuracy, they lack human-level cognitive capabilities, both in terms of accuracy and empathy. Nevertheless, Comuniqa represents a significant step towards achieving Sustainable Development Goal 4: Quality Education, by providing a valuable learning tool for individuals who may not have access to human experts for improving their speaking skills.
2110.02514
Woojoo Kim
Woojoo Kim, Shuping Xiong
ViewfinderVR: Configurable Viewfinder for Selection of Distant Objects in VR
null
null
10.1007/s10055-022-00649-z
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Selection is one of the fundamental user interactions in virtual reality (VR) and 3D user interaction, and raycasting has been one of the most popular object selection techniques in VR. However, the selection of small or distant objects through raycasting has been known to be difficult. To overcome this limitation, this study proposed a new technique called ViewfinderVR for improved selection of distant objects in VR, utilizing a virtual viewfinder panel with a modern adaptation of the through-the-lens metaphor. ViewfinderVR enables faster and more accurate target selection by allowing customization of the interaction space projected onto a virtual panel within reach, and users can select objects reflected on the panel with either ray-based or touch interaction. Experimental results of Fitts' law-based tests with 20 participants showed that ViewfinderVR outperformed traditional raycasting in terms of task performance (movement time, error rate, and throughput) and perceived workload (NASA-TLX ratings), where touch interaction was superior to ray-based interaction. The associated user behavior was also recorded and analyzed to understand the underlying reasons for the improved task performance and reduced workload. The proposed technique can be used in VR applications to enhance the selection of distant objects.
[ { "created": "Wed, 6 Oct 2021 05:35:04 GMT", "version": "v1" } ]
2022-06-07
[ [ "Kim", "Woojoo", "" ], [ "Xiong", "Shuping", "" ] ]
Selection is one of the fundamental user interactions in virtual reality (VR) and 3D user interaction, and raycasting has been one of the most popular object selection techniques in VR. However, the selection of small or distant objects through raycasting has been known to be difficult. To overcome this limitation, this study proposed a new technique called ViewfinderVR for improved selection of distant objects in VR, utilizing a virtual viewfinder panel with a modern adaptation of the through-the-lens metaphor. ViewfinderVR enables faster and more accurate target selection by allowing customization of the interaction space projected onto a virtual panel within reach, and users can select objects reflected on the panel with either ray-based or touch interaction. Experimental results of Fitts' law-based tests with 20 participants showed that ViewfinderVR outperformed traditional raycasting in terms of task performance (movement time, error rate, and throughput) and perceived workload (NASA-TLX ratings), where touch interaction was superior to ray-based interaction. The associated user behavior was also recorded and analyzed to understand the underlying reasons for the improved task performance and reduced workload. The proposed technique can be used in VR applications to enhance the selection of distant objects.
2107.06511
Dingcheng Yang
Dingcheng Yang, Wenjian Yu, Yuanbo Guo, Wenjie Liang
CNN-Cap: Effective Convolutional Neural Network Based Capacitance Models for Full-Chip Parasitic Extraction
9 pages, 13 figures. Accepted at 2021 International Conference On Computer Aided Design (ICCAD)
null
null
null
cs.LG cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate capacitance extraction is becoming more important for designing integrated circuits under advanced process technology. The pattern matching based full-chip extraction methodology delivers fast computational speed, but suffers from large errors and requires tedious effort to build capacitance models for the increasing number of structure patterns. In this work, we propose an effective method for building convolutional neural network (CNN) based capacitance models (called CNN-Cap) for two-dimensional (2-D) structures in full-chip capacitance extraction. With a novel grid-based data representation, the proposed method is able to model patterns with a variable number of conductors, thereby largely reducing the number of patterns. Based on the ability of the ResNet architecture to capture spatial information and the proposed training skills, the obtained CNN-Cap exhibits much better performance than the multilayer perceptron neural network based capacitance model while being more versatile. Extensive experiments on 55nm and 15nm process technologies have demonstrated that the error of total capacitance produced with CNN-Cap is always within 1.3% and the error of produced coupling capacitance is less than 10% with over 99.5% probability. CNN-Cap runs more than 4000X faster than a 2-D field solver on a GPU server, while consuming negligible memory compared to the look-up table based capacitance model.
[ { "created": "Wed, 14 Jul 2021 07:14:35 GMT", "version": "v1" } ]
2021-07-15
[ [ "Yang", "Dingcheng", "" ], [ "Yu", "Wenjian", "" ], [ "Guo", "Yuanbo", "" ], [ "Liang", "Wenjie", "" ] ]
Accurate capacitance extraction is becoming more important for designing integrated circuits under advanced process technology. The pattern matching based full-chip extraction methodology delivers fast computational speed, but suffers from large errors and requires tedious effort to build capacitance models for the increasing number of structure patterns. In this work, we propose an effective method for building convolutional neural network (CNN) based capacitance models (called CNN-Cap) for two-dimensional (2-D) structures in full-chip capacitance extraction. With a novel grid-based data representation, the proposed method is able to model patterns with a variable number of conductors, thereby largely reducing the number of patterns. Based on the ability of the ResNet architecture to capture spatial information and the proposed training skills, the obtained CNN-Cap exhibits much better performance than the multilayer perceptron neural network based capacitance model while being more versatile. Extensive experiments on 55nm and 15nm process technologies have demonstrated that the error of total capacitance produced with CNN-Cap is always within 1.3% and the error of produced coupling capacitance is less than 10% with over 99.5% probability. CNN-Cap runs more than 4000X faster than a 2-D field solver on a GPU server, while consuming negligible memory compared to the look-up table based capacitance model.
2311.00434
Shintaro Shiba
Shintaro Shiba, Friedhelm Hamann, Yoshimitsu Aoki, Guillermo Gallego
Event-based Background-Oriented Schlieren
Accepted at IEEE T-PAMI
IEEE Transactions on Pattern Analysis and Machine Intelligence, Oct. 2023
10.1109/TPAMI.2023.3328188
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Schlieren imaging is an optical technique to observe the flow of transparent media, such as air or water, without any particle seeding. However, conventional frame-based techniques require both high spatial and temporal resolution cameras, which impose bright illumination and expensive computation limitations. Event cameras offer potential advantages (high dynamic range, high temporal resolution, and data efficiency) to overcome such limitations due to their bio-inspired sensing principle. This paper presents a novel technique for perceiving air convection using events and frames by providing the first theoretical analysis that connects event data and schlieren. We formulate the problem as a variational optimization one combining the linearized event generation model with a physically-motivated parameterization that estimates the temporal derivative of the air density. The experiments with accurately aligned frame- and event camera data reveal that the proposed method enables event cameras to obtain on par results with existing frame-based optical flow techniques. Moreover, the proposed method works under dark conditions where frame-based schlieren fails, and also enables slow-motion analysis by leveraging the event camera's advantages. Our work pioneers and opens a new stack of event camera applications, as we publish the source code as well as the first schlieren dataset with high-quality frame and event data. https://github.com/tub-rip/event_based_bos
[ { "created": "Wed, 1 Nov 2023 10:57:20 GMT", "version": "v1" } ]
2024-03-05
[ [ "Shiba", "Shintaro", "" ], [ "Hamann", "Friedhelm", "" ], [ "Aoki", "Yoshimitsu", "" ], [ "Gallego", "Guillermo", "" ] ]
Schlieren imaging is an optical technique to observe the flow of transparent media, such as air or water, without any particle seeding. However, conventional frame-based techniques require both high spatial and temporal resolution cameras, which impose bright illumination and expensive computation limitations. Event cameras offer potential advantages (high dynamic range, high temporal resolution, and data efficiency) to overcome such limitations due to their bio-inspired sensing principle. This paper presents a novel technique for perceiving air convection using events and frames by providing the first theoretical analysis that connects event data and schlieren. We formulate the problem as a variational optimization one combining the linearized event generation model with a physically-motivated parameterization that estimates the temporal derivative of the air density. The experiments with accurately aligned frame- and event camera data reveal that the proposed method enables event cameras to obtain on par results with existing frame-based optical flow techniques. Moreover, the proposed method works under dark conditions where frame-based schlieren fails, and also enables slow-motion analysis by leveraging the event camera's advantages. Our work pioneers and opens a new stack of event camera applications, as we publish the source code as well as the first schlieren dataset with high-quality frame and event data. https://github.com/tub-rip/event_based_bos
1906.01622
Mozhi Zhang
Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, Jordan Boyd-Graber
Are Girls Neko or Sh\=ojo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization
ACL 2019
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings. However, orthogonal mapping only works on language pairs whose embeddings are naturally isomorphic. For non-isomorphic pairs, our method (Iterative Normalization) transforms monolingual embeddings to make orthogonal alignment easier by simultaneously enforcing that (1) individual word vectors are unit length, and (2) each language's average vector is zero. Iterative Normalization consistently improves word translation accuracy of three CLWE methods, with the largest improvement observed on English-Japanese (from 2% to 44% test accuracy).
[ { "created": "Tue, 4 Jun 2019 17:56:22 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2019 01:34:19 GMT", "version": "v2" }, { "created": "Mon, 11 Nov 2019 07:36:47 GMT", "version": "v3" } ]
2019-11-12
[ [ "Zhang", "Mozhi", "" ], [ "Xu", "Keyulu", "" ], [ "Kawarabayashi", "Ken-ichi", "" ], [ "Jegelka", "Stefanie", "" ], [ "Boyd-Graber", "Jordan", "" ] ]
Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings. However, orthogonal mapping only works on language pairs whose embeddings are naturally isomorphic. For non-isomorphic pairs, our method (Iterative Normalization) transforms monolingual embeddings to make orthogonal alignment easier by simultaneously enforcing that (1) individual word vectors are unit length, and (2) each language's average vector is zero. Iterative Normalization consistently improves word translation accuracy of three CLWE methods, with the largest improvement observed on English-Japanese (from 2% to 44% test accuracy).
2305.04539
Masato Uchida
Kota Kawamoto and Masato Uchida
Q&A Label Learning
46 pages, 5 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assigning labels to instances is crucial for supervised machine learning. In this paper, we proposed a novel annotation method called Q&A labeling, which involves a question generator that asks questions about the labels of the instances to be assigned, and an annotator who answers the questions and assigns the corresponding labels to the instances. We derived a generative model of labels assigned according to two different Q&A labeling procedures that differ in the way questions are asked and answered. We showed that, in both procedures, the derived model is partially consistent with that assumed in previous studies. The main distinction of this study from previous studies lies in the fact that the label generative model was not assumed, but rather derived based on the definition of a specific annotation method, Q&A labeling. We also derived a loss function to evaluate the classification risk of ordinary supervised machine learning using instances assigned Q&A labels and evaluated the upper bound of the classification error. The results indicate statistical consistency in learning with Q&A labels.
[ { "created": "Mon, 8 May 2023 08:22:18 GMT", "version": "v1" } ]
2023-05-09
[ [ "Kawamoto", "Kota", "" ], [ "Uchida", "Masato", "" ] ]
Assigning labels to instances is crucial for supervised machine learning. In this paper, we proposed a novel annotation method called Q&A labeling, which involves a question generator that asks questions about the labels of the instances to be assigned, and an annotator who answers the questions and assigns the corresponding labels to the instances. We derived a generative model of labels assigned according to two different Q&A labeling procedures that differ in the way questions are asked and answered. We showed that, in both procedures, the derived model is partially consistent with that assumed in previous studies. The main distinction of this study from previous studies lies in the fact that the label generative model was not assumed, but rather derived based on the definition of a specific annotation method, Q&A labeling. We also derived a loss function to evaluate the classification risk of ordinary supervised machine learning using instances assigned Q&A labels and evaluated the upper bound of the classification error. The results indicate statistical consistency in learning with Q&A labels.
1904.09823
Yingchao Feng
Yingchao Feng, Wenhui Diao, Zhonghan Chang, Menglong Yan, Xian Sun, Xin Gao
Ship Instance Segmentation From Remote Sensing Images Using Sequence Local Context Module
4 pages, 5 figures, IEEE Geoscience and Remote Sensing Society 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The performance of object instance segmentation in remote sensing images has been greatly improved through the introduction of many landmark frameworks based on convolutional neural networks. However, the object density issue still affects the accuracy of such segmentation frameworks. Objects of the same class are easily confused, most likely due to the close docking between objects. We think context information is critical to address this issue. Therefore, we propose a novel framework called SLCMASK-Net, in which a sequence local context module (SLC) is introduced to avoid confusion between objects of the same class. The SLC module applies a sequence of dilated convolution blocks to progressively learn multi-scale context information in the mask branch. Besides, we try adding the SLC module at different locations in our framework and experiment with the effect of different parameter settings. Comparative experiments are conducted on remote sensing images acquired by QuickBird with a resolution of $0.5m-1m$, and the results show that the proposed method achieves state-of-the-art performance.
[ { "created": "Mon, 22 Apr 2019 12:33:06 GMT", "version": "v1" } ]
2019-04-23
[ [ "Feng", "Yingchao", "" ], [ "Diao", "Wenhui", "" ], [ "Chang", "Zhonghan", "" ], [ "Yan", "Menglong", "" ], [ "Sun", "Xian", "" ], [ "Gao", "Xin", "" ] ]
The performance of object instance segmentation in remote sensing images has been greatly improved through the introduction of many landmark frameworks based on convolutional neural networks. However, the object density issue still affects the accuracy of such segmentation frameworks. Objects of the same class are easily confused, most likely due to the close docking between objects. We think context information is critical to address this issue. Therefore, we propose a novel framework called SLCMASK-Net, in which a sequence local context module (SLC) is introduced to avoid confusion between objects of the same class. The SLC module applies a sequence of dilated convolution blocks to progressively learn multi-scale context information in the mask branch. Besides, we try adding the SLC module at different locations in our framework and experiment with the effect of different parameter settings. Comparative experiments are conducted on remote sensing images acquired by QuickBird with a resolution of $0.5m-1m$, and the results show that the proposed method achieves state-of-the-art performance.
2105.02905
Roberto Metere
Roberto Metere, Myriam Neaimeh, Charles Morisset, Carsten Maple, Xavier Bellekens, Ricardo M. Czekster
Securing the Electric Vehicle Charging Infrastructure
42 pages, white paper
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electric Vehicles (EVs) can help alleviate our reliance on fossil fuels for transport and electricity systems. However, charging millions of EV batteries requires management to prevent overloading the electricity grid and minimise costly upgrades that are ultimately paid for by consumers. Managed chargers, such as Vehicle-to-Grid (V2G) chargers, allow control over the time, speed and direction of charging. Such control assists in balancing electricity supply and demand across a green electricity system and could reduce costs for consumers. Smart and V2G chargers connect EVs to the power grid using a charging device which includes a data connection to exchange information and control commands between various entities in the EV ecosystem. This introduces data privacy concerns and is a potential target for cyber-security attacks. Therefore, the implementation of a secure system is crucial to permit both consumers and electricity system operators to trust smart charging and V2G. In principle, we already have the technology needed for a connected EV charging infrastructure to be securely enabled, borrowing best practices from the Internet and industrial control systems. We must properly adapt the security technology to take into account the challenges peculiar to the EV charging infrastructure. Challenges go beyond technical considerations and other issues arise such as balancing trade-offs between security and other desirable qualities such as interoperability, scalability, crypto-agility, affordability and energy efficiency. This document reviews security and privacy topics relevant to the EV charging ecosystem with a focus on smart charging and V2G.
[ { "created": "Thu, 6 May 2021 18:10:42 GMT", "version": "v1" }, { "created": "Mon, 4 Apr 2022 16:03:07 GMT", "version": "v2" }, { "created": "Wed, 6 Jul 2022 09:54:31 GMT", "version": "v3" } ]
2022-07-07
[ [ "Metere", "Roberto", "" ], [ "Neaimeh", "Myriam", "" ], [ "Morisset", "Charles", "" ], [ "Maple", "Carsten", "" ], [ "Bellekens", "Xavier", "" ], [ "Czekster", "Ricardo M.", "" ] ]
Electric Vehicles (EVs) can help alleviate our reliance on fossil fuels for transport and electricity systems. However, charging millions of EV batteries requires management to prevent overloading the electricity grid and minimise costly upgrades that are ultimately paid for by consumers. Managed chargers, such as Vehicle-to-Grid (V2G) chargers, allow control over the time, speed and direction of charging. Such control assists in balancing electricity supply and demand across a green electricity system and could reduce costs for consumers. Smart and V2G chargers connect EVs to the power grid using a charging device which includes a data connection to exchange information and control commands between various entities in the EV ecosystem. This introduces data privacy concerns and is a potential target for cyber-security attacks. Therefore, the implementation of a secure system is crucial to permit both consumers and electricity system operators to trust smart charging and V2G. In principle, we already have the technology needed for a connected EV charging infrastructure to be securely enabled, borrowing best practices from the Internet and industrial control systems. We must properly adapt the security technology to take into account the challenges peculiar to the EV charging infrastructure. Challenges go beyond technical considerations and other issues arise such as balancing trade-offs between security and other desirable qualities such as interoperability, scalability, crypto-agility, affordability and energy efficiency. This document reviews security and privacy topics relevant to the EV charging ecosystem with a focus on smart charging and V2G.
2209.13514
Hang Zhou
Zhiliang Xu, Hang Zhou, Zhibin Hong, Ziwei Liu, Jiaming Liu, Zhizhi Guo, Junyu Han, Jingtuo Liu, Errui Ding, Jingdong Wang
StyleSwap: Style-Based Generator Empowers Robust Face Swapping
Accepted to ECCV 2022. Demo videos and code can be found at https://hangz-nju-cuhk.github.io/projects/StyleSwap
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
Numerous attempts have been made at the task of person-agnostic face swapping given its wide applications. While existing methods mostly rely on tedious network and loss designs, they still struggle to balance information between the source and target faces, and tend to produce visible artifacts. In this work, we introduce a concise and effective framework named StyleSwap. Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping, so that the generator's advantage can be adopted for optimizing identity similarity. We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target. Additionally, inspired by the ToRGB layers, a Swapping-Driven Mask Branch is further devised to improve information blending. Furthermore, the advantage of StyleGAN inversion can be adopted. Particularly, a Swapping-Guided ID Inversion strategy is proposed to optimize identity similarity. Extensive experiments validate that our framework generates high-quality face swapping results that outperform state-of-the-art methods both qualitatively and quantitatively.
[ { "created": "Tue, 27 Sep 2022 16:35:16 GMT", "version": "v1" } ]
2022-09-28
[ [ "Xu", "Zhiliang", "" ], [ "Zhou", "Hang", "" ], [ "Hong", "Zhibin", "" ], [ "Liu", "Ziwei", "" ], [ "Liu", "Jiaming", "" ], [ "Guo", "Zhizhi", "" ], [ "Han", "Junyu", "" ], [ "Liu", "Jingtuo", "" ], [ "Ding", "Errui", "" ], [ "Wang", "Jingdong", "" ] ]
Numerous attempts have been made at the task of person-agnostic face swapping given its wide applications. While existing methods mostly rely on tedious network and loss designs, they still struggle to balance information between the source and target faces, and tend to produce visible artifacts. In this work, we introduce a concise and effective framework named StyleSwap. Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping, so that the generator's advantage can be adopted for optimizing identity similarity. We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target. Additionally, inspired by the ToRGB layers, a Swapping-Driven Mask Branch is further devised to improve information blending. Furthermore, the advantage of StyleGAN inversion can be adopted. Particularly, a Swapping-Guided ID Inversion strategy is proposed to optimize identity similarity. Extensive experiments validate that our framework generates high-quality face swapping results that outperform state-of-the-art methods both qualitatively and quantitatively.
2311.12312
Evgenia (Eugenia) Ternovska
Eugenia Ternovska
Promise Algebra: An Algebraic Model of Non-Deterministic Computations
34 pages, 6 figures
null
null
null
cs.LO cs.DB
http://creativecommons.org/licenses/by/4.0/
Our goal is to define an algebraic language for reasoning about non-deterministic computations. Towards this goal, we introduce an algebra of string-to-string transductions. Specifically, it is an algebra of partial functions on words over the alphabet of relational $\tau$-structures over the same domain. The algebra has a two-level syntax, and thus, two parameters to control its expressive power. The top level defines algebraic expressions, and the bottom level specifies atomic transitions. History-dependent Choice functions resolve atomic non-determinism, and make general relations functional. Equivalence classes of such functions serve as certificates for computational problems specified by algebraic terms. The algebra has an equivalent syntax in the form of a Dynamic Logic, where terms describing computational processes or programs appear inside the modalities. We define a simple secondary logic for representing atomic transitions, which is a modification of conjunctive queries. With this logic, the algebra can represent both reachability and counting examples, which is not possible in Datalog. We analyze the data complexity of this logic, measured in the size of the input structure, and show that a restricted fragment of the logic captures the complexity class NP. The logic can be viewed as a database query language, where atomic propagations are separated from control.
[ { "created": "Tue, 21 Nov 2023 03:10:43 GMT", "version": "v1" } ]
2023-11-22
[ [ "Ternovska", "Eugenia", "" ] ]
Our goal is to define an algebraic language for reasoning about non-deterministic computations. Towards this goal, we introduce an algebra of string-to-string transductions. Specifically, it is an algebra of partial functions on words over the alphabet of relational $\tau$-structures over the same domain. The algebra has a two-level syntax, and thus, two parameters to control its expressive power. The top level defines algebraic expressions, and the bottom level specifies atomic transitions. History-dependent Choice functions resolve atomic non-determinism, and make general relations functional. Equivalence classes of such functions serve as certificates for computational problems specified by algebraic terms. The algebra has an equivalent syntax in the form of a Dynamic Logic, where terms describing computational processes or programs appear inside the modalities. We define a simple secondary logic for representing atomic transitions, which is a modification of conjunctive queries. With this logic, the algebra can represent both reachability and counting examples, which is not possible in Datalog. We analyze the data complexity of this logic, measured in the size of the input structure, and show that a restricted fragment of the logic captures the complexity class NP. The logic can be viewed as a database query language, where atomic propagations are separated from control.
2103.12452
Rianne De Heide
Rianne de Heide and James Cheshire and Pierre M\'enard and Alexandra Carpentier
Bandits with many optimal arms
Substantial rewrite and added experiments. Accepted for NeurIPS 2021
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
We consider a stochastic bandit problem with a possibly infinite number of arms. We write $p^*$ for the proportion of optimal arms and $\Delta$ for the minimal mean-gap between optimal and sub-optimal arms. We characterize the optimal learning rates both in the cumulative regret setting, and in the best-arm identification setting in terms of the problem parameters $T$ (the budget), $p^*$ and $\Delta$. For the objective of minimizing the cumulative regret, we provide a lower bound of order $\Omega(\log(T)/(p^*\Delta))$ and a UCB-style algorithm with matching upper bound up to a factor of $\log(1/\Delta)$. Our algorithm needs $p^*$ to calibrate its parameters, and we prove that this knowledge is necessary, since adapting to $p^*$ in this setting is impossible. For best-arm identification we also provide a lower bound of order $\Omega(\exp(-cT\Delta^2 p^*))$ on the probability of outputting a sub-optimal arm where $c>0$ is an absolute constant. We also provide an elimination algorithm with an upper bound matching the lower bound up to a factor of order $\log(T)$ in the exponential, and that does not need $p^*$ or $\Delta$ as parameter. Our results apply directly to the three related problems of competing against the $j$-th best arm, identifying an $\epsilon$ good arm, and finding an arm with mean larger than a quantile of a known order.
[ { "created": "Tue, 23 Mar 2021 11:02:31 GMT", "version": "v1" }, { "created": "Fri, 5 Nov 2021 08:25:11 GMT", "version": "v2" } ]
2021-11-08
[ [ "de Heide", "Rianne", "" ], [ "Cheshire", "James", "" ], [ "Ménard", "Pierre", "" ], [ "Carpentier", "Alexandra", "" ] ]
We consider a stochastic bandit problem with a possibly infinite number of arms. We write $p^*$ for the proportion of optimal arms and $\Delta$ for the minimal mean-gap between optimal and sub-optimal arms. We characterize the optimal learning rates both in the cumulative regret setting, and in the best-arm identification setting in terms of the problem parameters $T$ (the budget), $p^*$ and $\Delta$. For the objective of minimizing the cumulative regret, we provide a lower bound of order $\Omega(\log(T)/(p^*\Delta))$ and a UCB-style algorithm with matching upper bound up to a factor of $\log(1/\Delta)$. Our algorithm needs $p^*$ to calibrate its parameters, and we prove that this knowledge is necessary, since adapting to $p^*$ in this setting is impossible. For best-arm identification we also provide a lower bound of order $\Omega(\exp(-cT\Delta^2 p^*))$ on the probability of outputting a sub-optimal arm where $c>0$ is an absolute constant. We also provide an elimination algorithm with an upper bound matching the lower bound up to a factor of order $\log(T)$ in the exponential, and that does not need $p^*$ or $\Delta$ as parameter. Our results apply directly to the three related problems of competing against the $j$-th best arm, identifying an $\epsilon$ good arm, and finding an arm with mean larger than a quantile of a known order.
1810.03993
Margaret Mitchell
Margaret Mitchell and Simone Wu and Andrew Zaldivar and Parker Barnes and Lucy Vasserman and Ben Hutchinson and Elena Spitzer and Inioluwa Deborah Raji and Timnit Gebru
Model Cards for Model Reporting
null
FAT* '19: Conference on Fairness, Accountability, and Transparency, January 29--31, 2019, Atlanta, GA, USA
10.1145/3287560.3287596
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
[ { "created": "Fri, 5 Oct 2018 22:33:43 GMT", "version": "v1" }, { "created": "Mon, 14 Jan 2019 20:25:27 GMT", "version": "v2" } ]
2019-01-16
[ [ "Mitchell", "Margaret", "" ], [ "Wu", "Simone", "" ], [ "Zaldivar", "Andrew", "" ], [ "Barnes", "Parker", "" ], [ "Vasserman", "Lucy", "" ], [ "Hutchinson", "Ben", "" ], [ "Spitzer", "Elena", "" ], [ "Raji", "Inioluwa Deborah", "" ], [ "Gebru", "Timnit", "" ] ]
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
1702.06943
Fengan Li
Fengan Li, Lingjiao Chen, Yijing Zeng, Arun Kumar, Jeffrey F. Naughton, Jignesh M. Patel, Xi Wu
Tuple-oriented Compression for Large-scale Mini-batch Stochastic Gradient Descent
Accepted to Sigmod 2019
null
10.1145/3299869.3300070
null
cs.LG cs.DB stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data compression is a popular technique for improving the efficiency of data processing workloads such as SQL queries and more recently, machine learning (ML) with classical batch gradient methods. But the efficacy of such ideas for mini-batch stochastic gradient descent (MGD), arguably the workhorse algorithm of modern ML, is an open question. MGD's unique data access pattern renders prior art, including those designed for batch gradient methods, less effective. We fill this crucial research gap by proposing a new lossless compression scheme we call tuple-oriented compression (TOC) that is inspired by an unlikely source, the string/text compression scheme Lempel-Ziv-Welch, but tailored to MGD in a way that preserves tuple boundaries within mini-batches. We then present a suite of novel compressed matrix operation execution techniques tailored to the TOC compression scheme that operate directly over the compressed data representation and avoid decompression overheads. An extensive empirical evaluation with real-world datasets shows that TOC consistently achieves substantial compression ratios of up to 51x and reduces runtimes for MGD workloads by up to 10.2x in popular ML systems.
[ { "created": "Wed, 22 Feb 2017 18:58:25 GMT", "version": "v1" }, { "created": "Wed, 1 Mar 2017 05:43:41 GMT", "version": "v2" }, { "created": "Sun, 20 Jan 2019 05:13:18 GMT", "version": "v3" } ]
2019-01-23
[ [ "Li", "Fengan", "" ], [ "Chen", "Lingjiao", "" ], [ "Zeng", "Yijing", "" ], [ "Kumar", "Arun", "" ], [ "Naughton", "Jeffrey F.", "" ], [ "Patel", "Jignesh M.", "" ], [ "Wu", "Xi", "" ] ]
Data compression is a popular technique for improving the efficiency of data processing workloads such as SQL queries and more recently, machine learning (ML) with classical batch gradient methods. But the efficacy of such ideas for mini-batch stochastic gradient descent (MGD), arguably the workhorse algorithm of modern ML, is an open question. MGD's unique data access pattern renders prior art, including those designed for batch gradient methods, less effective. We fill this crucial research gap by proposing a new lossless compression scheme we call tuple-oriented compression (TOC) that is inspired by an unlikely source, the string/text compression scheme Lempel-Ziv-Welch, but tailored to MGD in a way that preserves tuple boundaries within mini-batches. We then present a suite of novel compressed matrix operation execution techniques tailored to the TOC compression scheme that operate directly over the compressed data representation and avoid decompression overheads. An extensive empirical evaluation with real-world datasets shows that TOC consistently achieves substantial compression ratios of up to 51x and reduces runtimes for MGD workloads by up to 10.2x in popular ML systems.
1009.3527
Darren Strash
Michael T. Goodrich and Darren Strash
Priority Range Trees
12 pages, 3 figures. To appear at 21st International Symposium on Algorithms and Computation (ISAAC 2010)
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a data structure, called a priority range tree, which accommodates fast orthogonal range reporting queries on prioritized points. Let $S$ be a set of $n$ points in the plane, where each point $p$ in $S$ is assigned a weight $w(p)$ that is polynomial in $n$, and define the rank of $p$ to be $r(p)=\lfloor \log w(p) \rfloor$. Then the priority range tree can be used to report all points in a three- or four-sided query range $R$ with rank at least $\lfloor \log w \rfloor$ in time $O(\log W/w + k)$, and report $k$ highest-rank points in $R$ in time $O(\log\log n + \log W/w' + k)$, where $W=\sum_{p\in S}{w(p)}$, $w'$ is the smallest weight of any point reported, and $k$ is the output size. All times assume the standard RAM model of computation. If the query range of interest is three sided, then the priority range tree occupies $O(n)$ space, otherwise $O(n\log n)$ space is used to answer four-sided queries. These queries are motivated by the Weber--Fechner Law, which states that humans perceive and interpret data on a logarithmic scale.
[ { "created": "Sat, 18 Sep 2010 01:00:33 GMT", "version": "v1" } ]
2010-09-21
[ [ "Goodrich", "Michael T.", "" ], [ "Strash", "Darren", "" ] ]
We describe a data structure, called a priority range tree, which accommodates fast orthogonal range reporting queries on prioritized points. Let $S$ be a set of $n$ points in the plane, where each point $p$ in $S$ is assigned a weight $w(p)$ that is polynomial in $n$, and define the rank of $p$ to be $r(p)=\lfloor \log w(p) \rfloor$. Then the priority range tree can be used to report all points in a three- or four-sided query range $R$ with rank at least $\lfloor \log w \rfloor$ in time $O(\log W/w + k)$, and report $k$ highest-rank points in $R$ in time $O(\log\log n + \log W/w' + k)$, where $W=\sum_{p\in S}{w(p)}$, $w'$ is the smallest weight of any point reported, and $k$ is the output size. All times assume the standard RAM model of computation. If the query range of interest is three sided, then the priority range tree occupies $O(n)$ space, otherwise $O(n\log n)$ space is used to answer four-sided queries. These queries are motivated by the Weber--Fechner Law, which states that humans perceive and interpret data on a logarithmic scale.
1911.12507
Thuong Nguyen Canh
Thuong Nguyen Canh, Trinh Van Chien
Error Resilient Deep Compressive Sensing
4 pages, 2 figures
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compressive sensing (CS) is an emerging sampling technology that enables reconstructing signals from a subset of measurements and even corrupted measurements. Deep learning-based compressive sensing (DCS) has improved CS performance while maintaining fast reconstruction, but requires training a network for each measurement rate. Also, under a transmission scheme with measurement loss, DCS cannot recover the original signal; thereby, it fails to maintain the error-resilient property. In this work, we propose a robust deep reconstruction network that preserves the error-resilient property under the assumption of random measurement loss. A measurement loss layer is proposed to simulate measurement loss in an end-to-end framework.
[ { "created": "Thu, 28 Nov 2019 03:16:39 GMT", "version": "v1" } ]
2019-12-02
[ [ "Thuong", "", "" ], [ "Canh", "Nguyen", "" ], [ "Chien", "", "" ], [ "Van", "Trinh", "" ] ]
Compressive sensing (CS) is an emerging sampling technology that enables reconstructing signals from a subset of measurements and even corrupted measurements. Deep learning-based compressive sensing (DCS) has improved CS performance while maintaining fast reconstruction, but requires training a network for each measurement rate. Also, under a transmission scheme with measurement loss, DCS cannot recover the original signal; thereby, it fails to maintain the error-resilient property. In this work, we propose a robust deep reconstruction network that preserves the error-resilient property under the assumption of random measurement loss. A measurement loss layer is proposed to simulate measurement loss in an end-to-end framework.
2204.09617
Zheng Chen
Zheng Chen, Durgakant Pushp, Lantao Liu
CALI: Coarse-to-Fine ALIgnments Based Unsupervised Domain Adaptation of Traversability Prediction for Deployable Autonomous Navigation
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Traversability prediction is a fundamental perception capability for autonomous navigation. The diversity of data across different domains imposes significant gaps on the prediction performance of the perception model. In this work, we make efforts to reduce these gaps by proposing a novel coarse-to-fine unsupervised domain adaptation (UDA) model - CALI. Our aim is to transfer the perception model with high data efficiency, eliminate the prohibitively expensive data labeling, and improve the generalization capability during the adaptation from easy-to-obtain source domains to various challenging target domains. We prove that a combination of a coarse alignment and a fine alignment can be beneficial to each other and further design a first-coarse-then-fine alignment process. This proposed work bridges theoretical analyses and algorithm designs, leading to an efficient UDA model with easy and stable training. We show the advantages of our proposed model over multiple baselines in several challenging domain adaptation setups. To further validate the effectiveness of our model, we then combine our perception model with a visual planner to build a navigation system and show the high reliability of our model in complex natural environments where no labeled data is available.
[ { "created": "Wed, 20 Apr 2022 16:52:43 GMT", "version": "v1" } ]
2022-04-21
[ [ "Chen", "Zheng", "" ], [ "Pushp", "Durgakant", "" ], [ "Liu", "Lantao", "" ] ]
Traversability prediction is a fundamental perception capability for autonomous navigation. The diversity of data across different domains imposes significant gaps on the prediction performance of the perception model. In this work, we make efforts to reduce these gaps by proposing a novel coarse-to-fine unsupervised domain adaptation (UDA) model - CALI. Our aim is to transfer the perception model with high data efficiency, eliminate the prohibitively expensive data labeling, and improve the generalization capability during the adaptation from easy-to-obtain source domains to various challenging target domains. We prove that a combination of a coarse alignment and a fine alignment can be beneficial to each other and further design a first-coarse-then-fine alignment process. This proposed work bridges theoretical analyses and algorithm designs, leading to an efficient UDA model with easy and stable training. We show the advantages of our proposed model over multiple baselines in several challenging domain adaptation setups. To further validate the effectiveness of our model, we then combine our perception model with a visual planner to build a navigation system and show the high reliability of our model in complex natural environments where no labeled data is available.
1211.5221
Reza Malekian Ph.D.
Reza Malekian and Abdul Hanan Abdullah
Traffic Engineering Based on Effective Envelope Algorithm on Novel Resource Reservation Method over Mobile Internet Protocol Version 6
International Journal of Innovative Computing, Information and Control, 2012
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The first decade of the 21st century has seen tremendous improvements in the mobile internet and its technologies. The high traffic volume of services such as video conferencing and other real-time traffic applications is imposing a great challenge on networks. In the meantime, demand for the use of mobile devices in computation and communication, such as smart phones, personal digital assistants, and mobile-enabled laptops, has grown rapidly. These services have driven the demand for increasing and guaranteeing bandwidth requirements in the network. The focus of this paper is the case of the resource reservation protocol (RSVP) over mobile IPv6 networks. There are a number of proposed solutions for RSVP and quality-of-service provision over mobile IPv6 networks, but most of them use advanced resource reservation. In this paper, we propose a mathematical model to determine the maximum end-to-end delay bound through intermediate routers along the network. These bounds are sent back to the home agent for further processing. Once the home agent receives the maximum end-to-end delay bounds, it calculates a cumulative bound and compares this bound with the desired application end-to-end delay bound to make the final decision on resource reservation. This approach improves network resource utilization.
[ { "created": "Thu, 22 Nov 2012 07:38:59 GMT", "version": "v1" } ]
2012-11-26
[ [ "Malekian", "Reza", "" ], [ "Abdullah", "Abdul Hanan", "" ] ]
The first decade of the 21st century has seen tremendous improvements in the mobile internet and its technologies. The high traffic volume of services such as video conferencing and other real-time traffic applications is imposing a great challenge on networks. In the meantime, demand for the use of mobile devices in computation and communication, such as smart phones, personal digital assistants, and mobile-enabled laptops, has grown rapidly. These services have driven the demand for increasing and guaranteeing bandwidth requirements in the network. The focus of this paper is the case of the resource reservation protocol (RSVP) over mobile IPv6 networks. There are a number of proposed solutions for RSVP and quality-of-service provision over mobile IPv6 networks, but most of them use advanced resource reservation. In this paper, we propose a mathematical model to determine the maximum end-to-end delay bound through intermediate routers along the network. These bounds are sent back to the home agent for further processing. Once the home agent receives the maximum end-to-end delay bounds, it calculates a cumulative bound and compares this bound with the desired application end-to-end delay bound to make the final decision on resource reservation. This approach improves network resource utilization.
1311.1712
Shaoshi Yang Dr.
Shaoshi Yang, Li Wang, Tiejun Lv and Lajos Hanzo
Approximate Bayesian Probabilistic-Data-Association-Aided Iterative Detection for MIMO Systems Using Arbitrary M-ary Modulation
13 pages, 14 figures, 1 table, published in IEEE Transactions on Vehicular Technology, vol. 62, no. 3, pp. 1228-1240, March, 2013
IEEE Transactions on Vehicular Technology, vol. 62, no. 3, pp. 1228-1240, March, 2013
10.1109/TVT.2012.2227863
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the issue of designing an iterative-detection-and-decoding (IDD)-aided receiver, relying on the low-complexity probabilistic data association (PDA) method, is addressed for turbo-coded multiple-input-multiple-output (MIMO) systems using general M-ary modulations. We demonstrate that the classic candidate-search-aided bit-based extrinsic log-likelihood ratio (LLR) calculation method is not applicable to the family of PDA-based detectors. Additionally, we reveal that, in contrast to the interpretation in the existing literature, the output symbol probabilities of existing PDA algorithms are not the true a posteriori probabilities (APPs) but, rather, the normalized symbol likelihoods. Therefore, the classic relationship, where the extrinsic LLRs are given by subtracting the a priori LLRs from the a posteriori LLRs, does not hold for the existing PDA-based detectors. Motivated by these revelations, we conceive a new approximate Bayesian-theorem-based logarithmic-domain PDA (AB-Log-PDA) method and unveil the technique of calculating bit-based extrinsic LLRs for the AB-Log-PDA, which facilitates the employment of the AB-Log-PDA in a simplified IDD receiver structure. Additionally, we demonstrate that we may dispense with inner iterations within the AB-Log-PDA in the context of IDD receivers. Our complexity analysis and numerical results recorded for Nakagami-m fading channels demonstrate that the proposed AB-Log-PDA-based IDD scheme is capable of achieving a performance comparable with that of the optimal maximum a posteriori (MAP)-detector-based IDD receiver, while imposing significantly lower computational complexity in the scenarios considered.
[ { "created": "Thu, 7 Nov 2013 15:26:34 GMT", "version": "v1" } ]
2013-11-08
[ [ "Yang", "Shaoshi", "" ], [ "Wang", "Li", "" ], [ "Lv", "Tiejun", "" ], [ "Hanzo", "Lajos", "" ] ]
In this paper, the issue of designing an iterative-detection-and-decoding (IDD)-aided receiver, relying on the low-complexity probabilistic data association (PDA) method, is addressed for turbo-coded multiple-input-multiple-output (MIMO) systems using general M-ary modulations. We demonstrate that the classic candidate-search-aided bit-based extrinsic log-likelihood ratio (LLR) calculation method is not applicable to the family of PDA-based detectors. Additionally, we reveal that, in contrast to the interpretation in the existing literature, the output symbol probabilities of existing PDA algorithms are not the true a posteriori probabilities (APPs) but, rather, the normalized symbol likelihoods. Therefore, the classic relationship, where the extrinsic LLRs are given by subtracting the a priori LLRs from the a posteriori LLRs, does not hold for the existing PDA-based detectors. Motivated by these revelations, we conceive a new approximate Bayesian-theorem-based logarithmic-domain PDA (AB-Log-PDA) method and unveil the technique of calculating bit-based extrinsic LLRs for the AB-Log-PDA, which facilitates the employment of the AB-Log-PDA in a simplified IDD receiver structure. Additionally, we demonstrate that we may dispense with inner iterations within the AB-Log-PDA in the context of IDD receivers. Our complexity analysis and numerical results recorded for Nakagami-m fading channels demonstrate that the proposed AB-Log-PDA-based IDD scheme is capable of achieving a performance comparable with that of the optimal maximum a posteriori (MAP)-detector-based IDD receiver, while imposing significantly lower computational complexity in the scenarios considered.
2006.00954
Gianni Franchi
Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Severine Dubuisson, Isabelle Bloch
One Versus all for deep Neural Network Incertitude (OVNNI) quantification
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) are powerful learning models yet their results are not always reliable. This is due to the fact that modern DNNs are usually uncalibrated and we cannot characterize their epistemic uncertainty. In this work, we propose a new technique to quantify the epistemic uncertainty of data easily. This method consists in mixing the predictions of an ensemble of DNNs trained to classify One class vs All the other classes (OVA) with predictions from a standard DNN trained to perform All vs All (AVA) classification. On the one hand, the adjustment provided by the AVA DNN to the score of the base classifiers allows for a more fine-grained inter-class separation. On the other hand, the two types of classifiers enforce mutually their detection of out-of-distribution (OOD) samples, circumventing entirely the requirement of using such samples during training. Our method achieves state of the art performance in quantifying OOD data across multiple datasets and architectures while requiring little hyper-parameter tuning.
[ { "created": "Mon, 1 Jun 2020 14:06:12 GMT", "version": "v1" } ]
2020-06-03
[ [ "Franchi", "Gianni", "" ], [ "Bursuc", "Andrei", "" ], [ "Aldea", "Emanuel", "" ], [ "Dubuisson", "Severine", "" ], [ "Bloch", "Isabelle", "" ] ]
Deep neural networks (DNNs) are powerful learning models, yet their results are not always reliable. This is due to the fact that modern DNNs are usually uncalibrated and we cannot characterize their epistemic uncertainty. In this work, we propose a new technique to easily quantify the epistemic uncertainty of data. This method consists of mixing the predictions of an ensemble of DNNs trained to classify One class vs All the other classes (OVA) with predictions from a standard DNN trained to perform All vs All (AVA) classification. On the one hand, the adjustment provided by the AVA DNN to the score of the base classifiers allows for a more fine-grained inter-class separation. On the other hand, the two types of classifiers mutually reinforce their detection of out-of-distribution (OOD) samples, entirely circumventing the requirement of using such samples during training. Our method achieves state-of-the-art performance in quantifying OOD data across multiple datasets and architectures while requiring little hyper-parameter tuning.
1205.7044
Negin Golrezaei
Negin Golrezaei, Alexandros G. Dimakis, Andreas F. Molisch
Wireless Device-to-Device Communications with Distributed Caching
to appear in ISIT 2012
null
null
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a novel wireless device-to-device (D2D) collaboration architecture that exploits distributed storage of popular content to enable frequency reuse. We identify a fundamental conflict between collaboration distance and interference and show how to optimize the transmission power to maximize frequency reuse. Our analysis depends on the user content request statistics, which are modeled by a Zipf distribution. Our main result is a closed-form expression of the optimal collaboration distance as a function of the content reuse distribution parameters. We show that if the Zipf exponent of the content reuse distribution is greater than 1, it is possible to have a number of D2D interference-free collaboration pairs that scales linearly in the number of nodes. If the Zipf exponent is smaller than 1, we identify the best possible scaling in the number of D2D collaborating links. Surprisingly, a very simple distributed caching policy achieves the optimal scaling behavior, and therefore there is no need to centrally coordinate what each node is caching.
[ { "created": "Thu, 31 May 2012 17:02:31 GMT", "version": "v1" } ]
2012-06-01
[ [ "Golrezaei", "Negin", "" ], [ "Dimakis", "Alexandros G.", "" ], [ "Molisch", "Andreas F.", "" ] ]
We introduce a novel wireless device-to-device (D2D) collaboration architecture that exploits distributed storage of popular content to enable frequency reuse. We identify a fundamental conflict between collaboration distance and interference and show how to optimize the transmission power to maximize frequency reuse. Our analysis depends on the user content request statistics, which are modeled by a Zipf distribution. Our main result is a closed-form expression of the optimal collaboration distance as a function of the content reuse distribution parameters. We show that if the Zipf exponent of the content reuse distribution is greater than 1, it is possible to have a number of D2D interference-free collaboration pairs that scales linearly in the number of nodes. If the Zipf exponent is smaller than 1, we identify the best possible scaling in the number of D2D collaborating links. Surprisingly, a very simple distributed caching policy achieves the optimal scaling behavior, and therefore there is no need to centrally coordinate what each node is caching.