id              stringlengths  (9 to 10)
submitter       stringlengths  (1 to 64)
authors         stringlengths  (4 to 20.7k)
title           stringlengths  (4 to 246)
comments        stringlengths  (1 to 523)
journal-ref     stringlengths  (4 to 404)
doi             stringlengths  (11 to 153)
report-no       stringlengths  (2 to 254)
categories      stringlengths  (5 to 98)
license         stringclasses  (9 values)
orig_abstract   stringlengths  (14 to 3.35k)
versions        listlengths    (1 to 60)
update_date     stringlengths  (10 to 10)
authors_parsed  listlengths    (1 to 1.35k)
abstract        stringlengths  (11 to 3.34k)
2005.04163
Visahl Samson David Selvam
Visahl Samson David Selvam
Human Error in IT Security
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an analysis of human error, an IT security issue and a major threat to companies. Humans are among the weakest links in the cybersecurity chain, yet they are a fundamental part of it. The report elucidates the various kinds of human error and the measures necessary to mitigate and control this menacing IT security issue.
[ { "created": "Fri, 8 May 2020 17:08:22 GMT", "version": "v1" } ]
2020-05-11
[ [ "Selvam", "Visahl Samson David", "" ] ]
This paper presents an analysis of human error, an IT security issue and a major threat to companies. Humans are among the weakest links in the cybersecurity chain, yet they are a fundamental part of it. The report elucidates the various kinds of human error and the measures necessary to mitigate and control this menacing IT security issue.
2204.12384
Finn Voichick
Finn Voichick, Liyi Li, Robert Rand, Michael Hicks
Qunity: A Unified Language for Quantum and Classical Computing (Extended Version)
76 pages, 37 figures. To appear at POPL 2023, previous version presented at QPL 2022. Expanded with additional background information and a characterization of the classical sublanguage
null
10.1145/3571225
null
cs.PL cs.LO quant-ph
http://creativecommons.org/licenses/by/4.0/
We introduce Qunity, a new quantum programming language designed to treat quantum computing as a natural generalization of classical computing. Qunity presents a unified syntax where familiar programming constructs can have both quantum and classical effects. For example, one can use sum types to implement the direct sum of linear operators, exception-handling syntax to implement projective measurements, and aliasing to induce entanglement. Further, Qunity takes advantage of the overlooked BQP subroutine theorem, allowing one to construct reversible subroutines from irreversible quantum algorithms through the uncomputation of "garbage" outputs. Unlike existing languages that enable quantum aspects with separate add-ons (like a classical language with quantum gates bolted on), Qunity provides a unified syntax and a novel denotational semantics that guarantees that programs are quantum mechanically valid. We present Qunity's syntax, type system, and denotational semantics, showing how it can cleanly express several quantum algorithms. We also detail how Qunity can be compiled into a low-level qubit circuit language like OpenQASM, proving the realizability of our design.
[ { "created": "Tue, 26 Apr 2022 15:34:22 GMT", "version": "v1" }, { "created": "Wed, 20 Jul 2022 12:31:06 GMT", "version": "v2" }, { "created": "Tue, 15 Nov 2022 02:44:37 GMT", "version": "v3" } ]
2023-01-18
[ [ "Voichick", "Finn", "" ], [ "Li", "Liyi", "" ], [ "Rand", "Robert", "" ], [ "Hicks", "Michael", "" ] ]
We introduce Qunity, a new quantum programming language designed to treat quantum computing as a natural generalization of classical computing. Qunity presents a unified syntax where familiar programming constructs can have both quantum and classical effects. For example, one can use sum types to implement the direct sum of linear operators, exception-handling syntax to implement projective measurements, and aliasing to induce entanglement. Further, Qunity takes advantage of the overlooked BQP subroutine theorem, allowing one to construct reversible subroutines from irreversible quantum algorithms through the uncomputation of "garbage" outputs. Unlike existing languages that enable quantum aspects with separate add-ons (like a classical language with quantum gates bolted on), Qunity provides a unified syntax and a novel denotational semantics that guarantees that programs are quantum mechanically valid. We present Qunity's syntax, type system, and denotational semantics, showing how it can cleanly express several quantum algorithms. We also detail how Qunity can be compiled into a low-level qubit circuit language like OpenQASM, proving the realizability of our design.
1902.02588
Benjamin Doerr
Benjamin Doerr, Carola Doerr, Johannes Lengler
Self-Adjusting Mutation Rates with Provably Optimal Success Rules
Conference version appeared at GECCO 2019. This full version appeared in Algorithmica (2021)
Algorithmica 83(10): 3108-3147 (2021)
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The one-fifth success rule is one of the best-known and most widely accepted techniques for controlling the parameters of evolutionary algorithms. While it is often applied in the literal sense, a common interpretation sees the one-fifth success rule as a family of success-based update rules that are determined by an update strength $F$ and a success rate. In this work we analyze how the performance of the (1+1) Evolutionary Algorithm on LeadingOnes depends on these two hyper-parameters. Our main result shows that the best performance is obtained for small update strengths $F=1+o(1)$ and success rate $1/e$. We also prove that the running time obtained by this parameter setting is, apart from lower-order terms, the same as that achieved with the best fitness-dependent mutation rate. We show similar results for the resampling variant of the (1+1) Evolutionary Algorithm, which enforces flipping at least one bit per iteration.
[ { "created": "Thu, 7 Feb 2019 12:38:15 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2019 00:24:38 GMT", "version": "v2" }, { "created": "Wed, 30 Jun 2021 17:50:16 GMT", "version": "v3" }, { "created": "Tue, 28 Dec 2021 22:06:14 GMT", "version": "v4" } ]
2021-12-30
[ [ "Doerr", "Benjamin", "" ], [ "Doerr", "Carola", "" ], [ "Lengler", "Johannes", "" ] ]
The one-fifth success rule is one of the best-known and most widely accepted techniques for controlling the parameters of evolutionary algorithms. While it is often applied in the literal sense, a common interpretation sees the one-fifth success rule as a family of success-based update rules that are determined by an update strength $F$ and a success rate. In this work we analyze how the performance of the (1+1) Evolutionary Algorithm on LeadingOnes depends on these two hyper-parameters. Our main result shows that the best performance is obtained for small update strengths $F=1+o(1)$ and success rate $1/e$. We also prove that the running time obtained by this parameter setting is, apart from lower-order terms, the same as that achieved with the best fitness-dependent mutation rate. We show similar results for the resampling variant of the (1+1) Evolutionary Algorithm, which enforces flipping at least one bit per iteration.
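The success-based update rule described in the abstract above can be illustrated with a short sketch. This is a hedged illustration, not the paper's exact algorithm or analysis: the function names, the choice of OneMax as test function, and the clamping bounds are all illustrative assumptions. A multiplicative rule multiplies the mutation rate by $F$ on success and divides it by $F^{1/(s-1)}$ on failure, so the rate is stable when roughly one in $s$ iterations succeeds (the classic rule uses $s=5$):

```python
import random

def one_fifth_rule_step(rate, success, F=1.5, s=5, r_min=1e-9, r_max=0.5):
    """Success-based update: multiply the rate by F on success, divide by
    F**(1/(s-1)) on failure, so the rate equilibrates at a success
    frequency of about 1/s (the classic one-fifth rule uses s=5)."""
    if success:
        rate *= F
    else:
        rate /= F ** (1.0 / (s - 1))
    return min(max(rate, r_min), r_max)

def onemax_run(n=50, steps=2000, seed=0):
    """(1+1) EA on OneMax with a self-adjusting mutation rate.
    Returns the final fitness (number of one-bits)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    rate = 1.0 / n
    for _ in range(steps):
        # Flip each bit independently with the current mutation rate.
        y = [b ^ (rng.random() < rate) for b in x]
        improved = sum(y) > sum(x)
        if sum(y) >= sum(x):  # (1+1) EA acceptance: keep ties
            x = y
        rate = one_fifth_rule_step(rate, improved)
    return sum(x)
```

Note that "success" here means strict improvement, while acceptance keeps ties; both conventions appear in the literature, and the paper analyzes the rule's hyper-parameters far more precisely than this sketch suggests.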
2401.08636
Gregor von Laszewski PhD
Varshitha Chennamsetti, Gregor von Laszewski, Ruochen Gu, Laiba Mehnaz, Juri Papay, Samuel Jackson, Jeyan Thiyagalingam, Sergey V. Samsonau and Geoffrey C. Fox
MLCommons Cloud Masking Benchmark with Early Stopping
NYU did not approve the publication of the paper
null
null
null
cs.DC cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we report on work performed for the MLCommons Science Working Group on the cloud masking benchmark. MLCommons is a consortium that develops and maintains several scientific benchmarks that aim to benefit developments in AI. The benchmarks are conducted on the High Performance Computing (HPC) clusters of New York University and the University of Virginia, as well as a commodity desktop. We provide a description of the cloud masking benchmark, as well as a summary of our submission to MLCommons on the benchmark experiment we conducted. It includes a modification to the reference implementation of the cloud masking benchmark enabling early stopping. This benchmark is executed on the NYU HPC through a custom batch script that runs the various experiments through the batch queuing system while allowing for variation in the number of epochs trained. Our submission includes the modified code, a custom batch script to modify epochs, documentation, and the benchmark results. We report the highest accuracy (scientific metric) and the average time taken (performance metric) for training and inference achieved on NYU HPC Greene. We also provide a comparison of the compute capabilities of the different systems by running the benchmark for one epoch. Our submission can be found in a Globus repository that is accessible to the MLCommons Science Working Group.
[ { "created": "Mon, 11 Dec 2023 19:06:06 GMT", "version": "v1" }, { "created": "Thu, 30 May 2024 19:07:46 GMT", "version": "v2" } ]
2024-06-05
[ [ "Chennamsetti", "Varshitha", "" ], [ "von Laszewski", "Gregor", "" ], [ "Gu", "Ruochen", "" ], [ "Mehnaz", "Laiba", "" ], [ "Papay", "Juri", "" ], [ "Jackson", "Samuel", "" ], [ "Thiyagalingam", "Jeyan", "" ], [ "Samsonau", "Sergey V.", "" ], [ "Fox", "Geoffrey C.", "" ] ]
In this paper, we report on work performed for the MLCommons Science Working Group on the cloud masking benchmark. MLCommons is a consortium that develops and maintains several scientific benchmarks that aim to benefit developments in AI. The benchmarks are conducted on the High Performance Computing (HPC) clusters of New York University and the University of Virginia, as well as a commodity desktop. We provide a description of the cloud masking benchmark, as well as a summary of our submission to MLCommons on the benchmark experiment we conducted. It includes a modification to the reference implementation of the cloud masking benchmark enabling early stopping. This benchmark is executed on the NYU HPC through a custom batch script that runs the various experiments through the batch queuing system while allowing for variation in the number of epochs trained. Our submission includes the modified code, a custom batch script to modify epochs, documentation, and the benchmark results. We report the highest accuracy (scientific metric) and the average time taken (performance metric) for training and inference achieved on NYU HPC Greene. We also provide a comparison of the compute capabilities of the different systems by running the benchmark for one epoch. Our submission can be found in a Globus repository that is accessible to the MLCommons Science Working Group.
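The early-stopping modification mentioned in the abstract above can be sketched generically. This is an illustrative sketch, not the submission's actual code; the patience-based criterion shown here is one common convention, and the function name and parameters are hypothetical:

```python
def train_with_early_stopping(val_losses, patience=3, min_delta=0.0):
    """Stop after `patience` consecutive epochs without an improvement of
    more than `min_delta` over the best validation loss seen so far.
    Returns the number of epochs actually run."""
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best - min_delta:
            best = loss
            bad_epochs = 0  # reset the patience counter on improvement
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch  # stop early: no recent improvement
    return len(val_losses)
```

For example, with losses `[1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.6]` and `patience=3`, training stops after epoch 6, before the late improvement at epoch 7 is ever seen.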
2112.13424
Hiram H. López
Eduardo Camps, Hiram H. López, Gretchen L. Matthews
Explicit non-special divisors of small degree, algebraic geometric hulls, and LCD codes from Kummer extensions
SIAM Journal on Applied Algebra and Geometry, to appear
null
null
null
cs.IT math.AG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the hull of an algebraic geometry code, meaning the intersection of the code and its dual. We demonstrate how codes whose hulls are algebraic geometry codes may be defined using only rational places of Kummer extensions (and Hermitian function fields in particular). Our primary tool is explicitly constructing non-special divisors of degrees $g$ and $g-1$ on certain families of function fields with many rational places, accomplished by appealing to Weierstrass semigroups. We provide explicit algebraic geometry codes with hulls of specified dimensions, producing along the way linearly complementary dual algebraic geometric codes from the Hermitian function field (among others) using only rational places and an answer to an open question posed by Ballet and Le Brigand for particular function fields. These results complement earlier work by Mesnager, Tang, and Qi that use lower-genus function fields as well as instances using places of a higher degree from Hermitian function fields to construct linearly complementary dual (LCD) codes and that of Carlet, Mesnager, Tang, Qi, and Pellikaan to provide explicit algebraic geometry codes with the LCD property rather than obtaining codes via monomial equivalences.
[ { "created": "Sun, 26 Dec 2021 17:57:44 GMT", "version": "v1" }, { "created": "Mon, 24 Jul 2023 08:15:32 GMT", "version": "v2" }, { "created": "Sun, 4 Feb 2024 02:23:33 GMT", "version": "v3" } ]
2024-02-06
[ [ "Camps", "Eduardo", "" ], [ "López", "Hiram H.", "" ], [ "Matthews", "Gretchen L.", "" ] ]
In this paper, we consider the hull of an algebraic geometry code, meaning the intersection of the code and its dual. We demonstrate how codes whose hulls are algebraic geometry codes may be defined using only rational places of Kummer extensions (and Hermitian function fields in particular). Our primary tool is explicitly constructing non-special divisors of degrees $g$ and $g-1$ on certain families of function fields with many rational places, accomplished by appealing to Weierstrass semigroups. We provide explicit algebraic geometry codes with hulls of specified dimensions, producing along the way linearly complementary dual algebraic geometric codes from the Hermitian function field (among others) using only rational places and an answer to an open question posed by Ballet and Le Brigand for particular function fields. These results complement earlier work by Mesnager, Tang, and Qi that use lower-genus function fields as well as instances using places of a higher degree from Hermitian function fields to construct linearly complementary dual (LCD) codes and that of Carlet, Mesnager, Tang, Qi, and Pellikaan to provide explicit algebraic geometry codes with the LCD property rather than obtaining codes via monomial equivalences.
2110.06043
Gangli Liu
Gangli Liu
Topic Model Supervised by Understanding Map
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Inspired by the notion of Center of Mass in physics, an extension called Semantic Center of Mass (SCOM) is proposed and used to discover the abstract "topic" of a document. The notion is developed within a framework called the Understanding Map Supervised Topic Model (UM-S-TM). The aim of UM-S-TM is to let both the document content and a semantic network -- specifically, an Understanding Map -- play a role in interpreting the meaning of a document. Based on different justifications, three possible methods are devised to discover the SCOM of a document. Experiments on artificial documents and Understanding Maps are conducted to test their outcomes. In addition, the model's ability to vectorize documents and capture sequential information is tested. We also compare UM-S-TM with probabilistic topic models such as Latent Dirichlet Allocation (LDA) and probabilistic Latent Semantic Analysis (pLSA).
[ { "created": "Tue, 12 Oct 2021 14:42:33 GMT", "version": "v1" }, { "created": "Mon, 9 May 2022 10:51:14 GMT", "version": "v10" }, { "created": "Tue, 10 May 2022 17:41:25 GMT", "version": "v11" }, { "created": "Fri, 27 May 2022 08:22:12 GMT", "version": "v12" }, { "created": "Thu, 21 Oct 2021 17:11:06 GMT", "version": "v2" }, { "created": "Tue, 9 Nov 2021 10:27:59 GMT", "version": "v3" }, { "created": "Fri, 10 Dec 2021 17:20:58 GMT", "version": "v4" }, { "created": "Mon, 27 Dec 2021 18:43:29 GMT", "version": "v5" }, { "created": "Tue, 4 Jan 2022 07:56:06 GMT", "version": "v6" }, { "created": "Wed, 23 Mar 2022 17:31:09 GMT", "version": "v7" }, { "created": "Sun, 27 Mar 2022 14:44:35 GMT", "version": "v8" }, { "created": "Mon, 11 Apr 2022 11:46:43 GMT", "version": "v9" } ]
2022-05-31
[ [ "Liu", "Gangli", "" ] ]
Inspired by the notion of Center of Mass in physics, an extension called Semantic Center of Mass (SCOM) is proposed and used to discover the abstract "topic" of a document. The notion is developed within a framework called the Understanding Map Supervised Topic Model (UM-S-TM). The aim of UM-S-TM is to let both the document content and a semantic network -- specifically, an Understanding Map -- play a role in interpreting the meaning of a document. Based on different justifications, three possible methods are devised to discover the SCOM of a document. Experiments on artificial documents and Understanding Maps are conducted to test their outcomes. In addition, the model's ability to vectorize documents and capture sequential information is tested. We also compare UM-S-TM with probabilistic topic models such as Latent Dirichlet Allocation (LDA) and probabilistic Latent Semantic Analysis (pLSA).
0908.0122
R Doomun
Kalpana Sharma, M.K. Ghose, Kuldeep
Complete Security Framework for Wireless Sensor Networks
7 pages, International Journal of Computer Science and Information Security, IJCSIS 2009, ISSN 1947 5500, Impact Factor 0.423
International Journal of Computer Science and Information Security, IJCSIS July 2009, Vol. 3 No. 1, USA
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The security concerns of a sensor network, and the level of security desired, may differ according to the application-specific needs of the deployment. To date, most security solutions proposed for sensor networks are layer-wise, i.e., a particular solution is applicable to a single layer only, so integrating them all is a new research challenge. In this paper we take up that challenge and propose an integrated, comprehensive security framework that provides security services for all services of a sensor network. We add one extra component, an Intelligent Security Agent (ISA), to assess the level of security and handle cross-layer interactions. The framework comprises components such as an Intrusion Detection System, a Trust Framework, a Key Management scheme, and a link-layer communication protocol. We have also tested it on three different application scenarios in the Castalia and OMNeT++ simulators.
[ { "created": "Sun, 2 Aug 2009 10:58:51 GMT", "version": "v1" } ]
2009-08-04
[ [ "Sharma", "Kalpana", "" ], [ "Ghose", "M. K.", "" ], [ "Kuldeep", "", "" ] ]
The security concerns of a sensor network, and the level of security desired, may differ according to the application-specific needs of the deployment. To date, most security solutions proposed for sensor networks are layer-wise, i.e., a particular solution is applicable to a single layer only, so integrating them all is a new research challenge. In this paper we take up that challenge and propose an integrated, comprehensive security framework that provides security services for all services of a sensor network. We add one extra component, an Intelligent Security Agent (ISA), to assess the level of security and handle cross-layer interactions. The framework comprises components such as an Intrusion Detection System, a Trust Framework, a Key Management scheme, and a link-layer communication protocol. We have also tested it on three different application scenarios in the Castalia and OMNeT++ simulators.
2311.12800
Liu Zhendong
Zhendong Liu, Jie Zhang, Qiangqiang He, Chongjun Wang
Understanding Data Augmentation from a Robustness Perspective
Not published yet. arXiv admin note: text overlap with arXiv:2212.04059
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the realm of visual recognition, data augmentation stands out as a pivotal technique to amplify model robustness. Yet, a considerable number of existing methodologies lean heavily on heuristic foundations, rendering their intrinsic mechanisms ambiguous. This manuscript takes both a theoretical and empirical approach to understanding the phenomenon. Theoretically, we frame the discourse around data augmentation within game theory's constructs. Venturing deeper, our empirical evaluations dissect the intricate mechanisms of emblematic data augmentation strategies, illuminating that these techniques primarily stimulate mid- and high-order game interactions. Beyond the foundational exploration, our experiments span multiple datasets and diverse augmentation techniques, underscoring the universal applicability of our findings. Recognizing the vast array of robustness metrics with intricate correlations, we unveil a streamlined proxy. This proxy not only simplifies robustness assessment but also offers invaluable insights, shedding light on the inherent dynamics of model game interactions and their relation to overarching system robustness. These insights provide a novel lens through which we can re-evaluate model safety and robustness in visual recognition tasks.
[ { "created": "Thu, 7 Sep 2023 10:54:56 GMT", "version": "v1" } ]
2023-11-23
[ [ "Liu", "Zhendong", "" ], [ "Zhang", "Jie", "" ], [ "He", "Qiangqiang", "" ], [ "Wang", "Chongjun", "" ] ]
In the realm of visual recognition, data augmentation stands out as a pivotal technique to amplify model robustness. Yet, a considerable number of existing methodologies lean heavily on heuristic foundations, rendering their intrinsic mechanisms ambiguous. This manuscript takes both a theoretical and empirical approach to understanding the phenomenon. Theoretically, we frame the discourse around data augmentation within game theory's constructs. Venturing deeper, our empirical evaluations dissect the intricate mechanisms of emblematic data augmentation strategies, illuminating that these techniques primarily stimulate mid- and high-order game interactions. Beyond the foundational exploration, our experiments span multiple datasets and diverse augmentation techniques, underscoring the universal applicability of our findings. Recognizing the vast array of robustness metrics with intricate correlations, we unveil a streamlined proxy. This proxy not only simplifies robustness assessment but also offers invaluable insights, shedding light on the inherent dynamics of model game interactions and their relation to overarching system robustness. These insights provide a novel lens through which we can re-evaluate model safety and robustness in visual recognition tasks.
2109.07911
Olle Häggström
Olle Häggström
AI, orthogonality and the Müller-Cannon instrumental vs general intelligence distinction
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The by now standard argument put forth by Yudkowsky, Bostrom and others for why the possibility of a carelessly handled AI breakthrough poses an existential threat to humanity is shown through careful conceptual analysis to be very much alive and kicking, despite the suggestion in a recent paper by Müller and Cannon that the argument contains a flaw.
[ { "created": "Tue, 14 Sep 2021 14:38:33 GMT", "version": "v1" } ]
2021-09-17
[ [ "Häggström", "Olle", "" ] ]
The by now standard argument put forth by Yudkowsky, Bostrom and others for why the possibility of a carelessly handled AI breakthrough poses an existential threat to humanity is shown through careful conceptual analysis to be very much alive and kicking, despite the suggestion in a recent paper by Müller and Cannon that the argument contains a flaw.
1906.06432
Ryan Rossi
Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, and Sungchul Kim
Linear-time Hierarchical Community Detection
null
null
null
null
cs.SI cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Community detection in graphs has many important and fundamental applications, including in distributed systems, compression, image segmentation, divide-and-conquer graph algorithms such as nested dissection, document and word clustering, and circuit design, among many others. Finding these densely connected regions of graphs remains an important and challenging problem. Most work has focused on scaling up existing methods to handle large graphs. These methods often partition the graph into two or more communities. In this work, we focus on the problem of hierarchical community detection (i.e., finding a hierarchy of dense community structures going from the lowest granularity to the largest) and describe an approach that runs in linear time with respect to the number of edges and is thus fast and efficient for large-scale networks. The experiments demonstrate the effectiveness of the approach quantitatively. Finally, we show an application of it to visualizing large networks with hundreds of thousands of nodes/links.
[ { "created": "Fri, 14 Jun 2019 23:29:37 GMT", "version": "v1" } ]
2019-06-18
[ [ "Rossi", "Ryan A.", "" ], [ "Ahmed", "Nesreen K.", "" ], [ "Koh", "Eunyee", "" ], [ "Kim", "Sungchul", "" ] ]
Community detection in graphs has many important and fundamental applications, including in distributed systems, compression, image segmentation, divide-and-conquer graph algorithms such as nested dissection, document and word clustering, and circuit design, among many others. Finding these densely connected regions of graphs remains an important and challenging problem. Most work has focused on scaling up existing methods to handle large graphs. These methods often partition the graph into two or more communities. In this work, we focus on the problem of hierarchical community detection (i.e., finding a hierarchy of dense community structures going from the lowest granularity to the largest) and describe an approach that runs in linear time with respect to the number of edges and is thus fast and efficient for large-scale networks. The experiments demonstrate the effectiveness of the approach quantitatively. Finally, we show an application of it to visualizing large networks with hundreds of thousands of nodes/links.
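The edge-linear flavor of hierarchical coarsening described in the abstract above can be illustrated with a small sketch. This is not the paper's actual algorithm: the predicate `keep` and the function names are hypothetical stand-ins. A single union-find pass over the edge list merges the endpoints of every accepted edge, producing one level of a community hierarchy in near-linear time in the number of edges:

```python
def find(parent, x):
    """Find the root of x with path halving (keeps trees shallow)."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def merge_communities(n, edges, keep):
    """One coarsening level: a single pass over the edge list that
    unions the endpoints of every edge accepted by `keep`.
    Returns a community label (root id) for each of the n nodes."""
    parent = list(range(n))
    for u, v in edges:
        if keep(u, v):
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv
    return [find(parent, x) for x in range(n)]
```

Repeating such passes on the contracted graph, with a progressively stricter `keep` predicate, would yield the coarse-to-fine hierarchy the abstract alludes to; union-find makes each pass run in time nearly proportional to the number of edges.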
1707.06357
Majid Laali
Majid Laali and Leila Kosseim
Improving Discourse Relation Projection to Build Discourse Annotated Corpora
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The naive approach to annotation projection is not effective to project discourse annotations from one language to another because implicit discourse relations are often changed to explicit ones and vice-versa in the translation. In this paper, we propose a novel approach based on the intersection between statistical word-alignment models to identify unsupported discourse annotations. This approach identified 65% of the unsupported annotations in the English-French parallel sentences from Europarl. By filtering out these unsupported annotations, we induced the first PDTB-style discourse annotated corpus for French from Europarl. We then used this corpus to train a classifier to identify the discourse-usage of French discourse connectives and show a 15% improvement of F1-score compared to the classifier trained on the non-filtered annotations.
[ { "created": "Thu, 20 Jul 2017 03:17:19 GMT", "version": "v1" } ]
2017-07-21
[ [ "Laali", "Majid", "" ], [ "Kosseim", "Leila", "" ] ]
The naive approach to annotation projection is not effective to project discourse annotations from one language to another because implicit discourse relations are often changed to explicit ones and vice-versa in the translation. In this paper, we propose a novel approach based on the intersection between statistical word-alignment models to identify unsupported discourse annotations. This approach identified 65% of the unsupported annotations in the English-French parallel sentences from Europarl. By filtering out these unsupported annotations, we induced the first PDTB-style discourse annotated corpus for French from Europarl. We then used this corpus to train a classifier to identify the discourse-usage of French discourse connectives and show a 15% improvement of F1-score compared to the classifier trained on the non-filtered annotations.
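The "intersection between statistical word-alignment models" mentioned in the abstract above can be illustrated with a minimal sketch (hypothetical data structures, not the authors' implementation): a link is kept only if both directional alignment models propose it, which trades recall for the higher precision needed to filter unsupported annotations.

```python
def intersect_alignments(src2tgt, tgt2src):
    """Giza-style intersection of two directional word alignments.
    src2tgt maps a source token index to the set of target indices it
    aligns to; tgt2src is the reverse-direction model's mapping.
    Returns the set of (source, target) links found by both models."""
    forward = {(s, t) for s, ts in src2tgt.items() for t in ts}
    backward = {(s, t) for t, ss in tgt2src.items() for s in ss}
    return forward & backward
```

For instance, if the forward model aligns source token 1 to targets {1, 2} but the backward model only supports the link to 1, the intersection keeps (1, 1) and discards (1, 2).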
2305.00194
Yesheng Zhang
Yesheng Zhang, Xu Zhao, Dahong Qian
Searching from Area to Point: A Hierarchical Framework for Semantic-Geometric Combined Feature Matching
v3
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature matching is a crucial technique in computer vision. A unified perspective for this task is to treat it as a searching problem, aiming at an efficient search strategy to narrow the search space to point matches between images. One of the key aspects of a search strategy is the search space, which in current approaches is not carefully defined, resulting in limited matching accuracy. This paper therefore focuses on the search space and proposes to set the initial search space for point matching as the matched image areas containing prominent semantics, termed semantic area matches. This search space favors point matching by salient features and alleviates the accuracy limitation in recent Transformer-based matching methods. To achieve this search space, we introduce a hierarchical feature matching framework, Area to Point Matching (A2PM), which first finds semantic area matches between images and later performs point matching on the area matches. We further propose the Semantic and Geometry Area Matching (SGAM) method to realize this framework, which utilizes semantic priors and geometric consistency to establish accurate area matches between images. By integrating SGAM with off-the-shelf state-of-the-art matchers, our method, adopting the A2PM framework, achieves encouraging precision improvements in massive point matching and pose estimation experiments.
[ { "created": "Sat, 29 Apr 2023 08:16:12 GMT", "version": "v1" }, { "created": "Tue, 2 May 2023 11:49:26 GMT", "version": "v2" }, { "created": "Fri, 5 May 2023 09:04:12 GMT", "version": "v3" }, { "created": "Sun, 2 Jul 2023 03:11:26 GMT", "version": "v4" }, { "created": "Thu, 2 May 2024 03:19:33 GMT", "version": "v5" } ]
2024-05-03
[ [ "Zhang", "Yesheng", "" ], [ "Zhao", "Xu", "" ], [ "Qian", "Dahong", "" ] ]
Feature matching is a crucial technique in computer vision. A unified perspective for this task is to treat it as a searching problem, aiming at an efficient search strategy to narrow the search space to point matches between images. One of the key aspects of a search strategy is the search space, which in current approaches is not carefully defined, resulting in limited matching accuracy. This paper therefore focuses on the search space and proposes to set the initial search space for point matching as the matched image areas containing prominent semantics, termed semantic area matches. This search space favors point matching by salient features and alleviates the accuracy limitation in recent Transformer-based matching methods. To achieve this search space, we introduce a hierarchical feature matching framework, Area to Point Matching (A2PM), which first finds semantic area matches between images and later performs point matching on the area matches. We further propose the Semantic and Geometry Area Matching (SGAM) method to realize this framework, which utilizes semantic priors and geometric consistency to establish accurate area matches between images. By integrating SGAM with off-the-shelf state-of-the-art matchers, our method, adopting the A2PM framework, achieves encouraging precision improvements in massive point matching and pose estimation experiments.
2206.02902
Chunlok Lo
Chunlok Lo, Kevin Roice, Parham Mohammad Panahi, Scott Jordan, Adam White, Gabor Mihucz, Farzane Aminmansour, Martha White
Goal-Space Planning with Subgoal Models
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates a new approach to model-based reinforcement learning using background planning: mixing (approximate) dynamic programming updates and model-free updates, similar to the Dyna architecture. Background planning with learned models is often worse than model-free alternatives, such as Double DQN, even though the former uses significantly more memory and computation. The fundamental problem is that learned models can be inaccurate and often generate invalid states, especially when iterated many steps. In this paper, we avoid this limitation by constraining background planning to a set of (abstract) subgoals and learning only local, subgoal-conditioned models. This goal-space planning (GSP) approach is more computationally efficient, naturally incorporates temporal abstraction for faster long-horizon planning and avoids learning the transition dynamics entirely. We show that our GSP algorithm can propagate value from an abstract space in a manner that helps a variety of base learners learn significantly faster in different domains.
[ { "created": "Mon, 6 Jun 2022 20:59:07 GMT", "version": "v1" }, { "created": "Wed, 8 Jun 2022 03:37:49 GMT", "version": "v2" }, { "created": "Tue, 1 Nov 2022 15:58:58 GMT", "version": "v3" }, { "created": "Tue, 14 Feb 2023 07:21:14 GMT", "version": "v4" }, { "created": "Tue, 27 Feb 2024 06:15:53 GMT", "version": "v5" } ]
2024-02-28
[ [ "Lo", "Chunlok", "" ], [ "Roice", "Kevin", "" ], [ "Panahi", "Parham Mohammad", "" ], [ "Jordan", "Scott", "" ], [ "White", "Adam", "" ], [ "Mihucz", "Gabor", "" ], [ "Aminmansour", "Farzane", "" ], [ "White", "Martha", "" ] ]
This paper investigates a new approach to model-based reinforcement learning using background planning: mixing (approximate) dynamic programming updates and model-free updates, similar to the Dyna architecture. Background planning with learned models is often worse than model-free alternatives, such as Double DQN, even though the former uses significantly more memory and computation. The fundamental problem is that learned models can be inaccurate and often generate invalid states, especially when iterated many steps. In this paper, we avoid this limitation by constraining background planning to a set of (abstract) subgoals and learning only local, subgoal-conditioned models. This goal-space planning (GSP) approach is more computationally efficient, naturally incorporates temporal abstraction for faster long-horizon planning and avoids learning the transition dynamics entirely. We show that our GSP algorithm can propagate value from an abstract space in a manner that helps a variety of base learners learn significantly faster in different domains.
1701.08893
Connelly Barnes
Eric Risser, Pierre Wilmot, Connelly Barnes
Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses
null
null
null
null
cs.GR cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, methods have been proposed that perform texture synthesis and style transfer by using convolutional neural networks (e.g. Gatys et al. [2015,2016]). These methods are exciting because they can in some cases create results with state-of-the-art quality. However, in this paper, we show these methods also have limitations in texture quality, stability, requisite parameter tuning, and lack of user controls. This paper presents a multiscale synthesis pipeline based on convolutional neural networks that ameliorates these issues. We first give a mathematical explanation of the source of instabilities in many previous approaches. We then mitigate these instabilities by using histogram losses to synthesize textures that better statistically match the exemplar. We also show how to integrate localized style losses in our multiscale framework. These losses can improve the quality of large features, improve the separation of content and style, and offer artistic controls such as paint by numbers. We demonstrate that our approach offers improved quality, convergence in fewer iterations, and more stability over the optimization.
[ { "created": "Tue, 31 Jan 2017 02:37:19 GMT", "version": "v1" }, { "created": "Wed, 1 Feb 2017 23:30:20 GMT", "version": "v2" } ]
2017-02-09
[ [ "Risser", "Eric", "" ], [ "Wilmot", "Pierre", "" ], [ "Barnes", "Connelly", "" ] ]
Recently, methods have been proposed that perform texture synthesis and style transfer by using convolutional neural networks (e.g. Gatys et al. [2015,2016]). These methods are exciting because they can in some cases create results with state-of-the-art quality. However, in this paper, we show these methods also have limitations in texture quality, stability, requisite parameter tuning, and lack of user controls. This paper presents a multiscale synthesis pipeline based on convolutional neural networks that ameliorates these issues. We first give a mathematical explanation of the source of instabilities in many previous approaches. We then mitigate these instabilities by using histogram losses to synthesize textures that better statistically match the exemplar. We also show how to integrate localized style losses in our multiscale framework. These losses can improve the quality of large features, improve the separation of content and style, and offer artistic controls such as paint by numbers. We demonstrate that our approach offers improved quality, convergence in fewer iterations, and more stability over the optimization.
2007.04251
Zheyuan Xu
Zheyuan Xu, Hongche Yin, Jian Yao
Deformable spatial propagation network for depth completion
5 pages, 3 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Depth completion, which aims to recover a dense depth map from sparse depth measurements, has attracted extensive attention recently due to the development of autonomous driving. The convolutional spatial propagation network (CSPN) is one of the state-of-the-art methods for this task; it adopts a linear propagation model to refine coarse depth maps with local context. However, the propagation of each pixel occurs in a fixed receptive field. This may not be optimal for refinement, since different pixels need different local contexts. To tackle this issue, in this paper we propose a deformable spatial propagation network (DSPN) that adaptively generates a different receptive field and affinity matrix for each pixel. It allows the network to obtain information from far fewer but more relevant pixels for propagation. Experimental results on the KITTI depth completion benchmark demonstrate that our proposed method achieves state-of-the-art performance.
[ { "created": "Wed, 8 Jul 2020 16:39:50 GMT", "version": "v1" }, { "created": "Sun, 19 Jul 2020 09:52:56 GMT", "version": "v2" } ]
2020-07-21
[ [ "Xu", "Zheyuan", "" ], [ "Yin", "Hongche", "" ], [ "Yao", "Jian", "" ] ]
Depth completion, which aims to recover a dense depth map from sparse depth measurements, has attracted extensive attention recently due to the development of autonomous driving. The convolutional spatial propagation network (CSPN) is one of the state-of-the-art methods for this task; it adopts a linear propagation model to refine coarse depth maps with local context. However, the propagation of each pixel occurs in a fixed receptive field. This may not be optimal for refinement, since different pixels need different local contexts. To tackle this issue, in this paper we propose a deformable spatial propagation network (DSPN) that adaptively generates a different receptive field and affinity matrix for each pixel. It allows the network to obtain information from far fewer but more relevant pixels for propagation. Experimental results on the KITTI depth completion benchmark demonstrate that our proposed method achieves state-of-the-art performance.
2406.02598
Vedant Khandelwal
Vedant Khandelwal, Amit Sheth, Forest Agostinelli
Towards Learning Foundation Models for Heuristic Functions to Solve Pathfinding Problems
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Pathfinding problems are found throughout robotics, computational science, and natural sciences. Traditional methods to solve these require training deep neural networks (DNNs) for each new problem domain, consuming substantial time and resources. This study introduces a novel foundation model, leveraging deep reinforcement learning to train heuristic functions that seamlessly adapt to new domains without further fine-tuning. Building upon DeepCubeA, we enhance the model by providing the heuristic function with the domain's state transition information, improving its adaptability. Utilizing a puzzle generator for the 15-puzzle action space variation domains, we demonstrate our model's ability to generalize and solve unseen domains. We achieve a strong correlation between learned and ground truth heuristic values across various domains, as evidenced by robust R-squared and Concordance Correlation Coefficient metrics. These results underscore the potential of foundation models to establish new standards in efficiency and adaptability for AI-driven solutions in complex pathfinding problems.
[ { "created": "Sat, 1 Jun 2024 16:18:20 GMT", "version": "v1" } ]
2024-06-06
[ [ "Khandelwal", "Vedant", "" ], [ "Sheth", "Amit", "" ], [ "Agostinelli", "Forest", "" ] ]
Pathfinding problems are found throughout robotics, computational science, and natural sciences. Traditional methods to solve these require training deep neural networks (DNNs) for each new problem domain, consuming substantial time and resources. This study introduces a novel foundation model, leveraging deep reinforcement learning to train heuristic functions that seamlessly adapt to new domains without further fine-tuning. Building upon DeepCubeA, we enhance the model by providing the heuristic function with the domain's state transition information, improving its adaptability. Utilizing a puzzle generator for the 15-puzzle action space variation domains, we demonstrate our model's ability to generalize and solve unseen domains. We achieve a strong correlation between learned and ground truth heuristic values across various domains, as evidenced by robust R-squared and Concordance Correlation Coefficient metrics. These results underscore the potential of foundation models to establish new standards in efficiency and adaptability for AI-driven solutions in complex pathfinding problems.
2003.14034
Siyuan Xiang
Wenyu Han, Siyuan Xiang, Chenhui Liu, Ruoyu Wang, Chen Feng
SPARE3D: A Dataset for SPAtial REasoning on Three-View Line Drawings
This paper has been accepted in CVPR'20. The first two authors contributed equally. Chen Feng is the corresponding author
null
null
null
cs.CV cs.CG cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial reasoning is an important component of human intelligence. We can imagine the shapes of 3D objects and reason about their spatial relations by merely looking at their three-view line drawings in 2D, with different levels of competence. Can deep networks be trained to perform spatial reasoning tasks? How can we measure their "spatial intelligence"? To answer these questions, we present the SPARE3D dataset. Based on cognitive science and psychometrics, SPARE3D contains three types of 2D-3D reasoning tasks on view consistency, camera pose, and shape generation, with increasing difficulty. We then design a method to automatically generate a large number of challenging questions with ground truth answers for each task. They are used to provide supervision for training our baseline models using state-of-the-art architectures like ResNet. Our experiments show that although convolutional networks have achieved superhuman performance in many visual learning tasks, their spatial reasoning performance on SPARE3D tasks is either lower than average human performance or even close to random guesses. We hope SPARE3D can stimulate new problem formulations and network designs for spatial reasoning to empower intelligent robots to operate effectively in the 3D world via 2D sensors. The dataset and code are available at https://ai4ce.github.io/SPARE3D.
[ { "created": "Tue, 31 Mar 2020 09:01:27 GMT", "version": "v1" }, { "created": "Wed, 2 Sep 2020 14:18:47 GMT", "version": "v2" } ]
2020-09-03
[ [ "Han", "Wenyu", "" ], [ "Xiang", "Siyuan", "" ], [ "Liu", "Chenhui", "" ], [ "Wang", "Ruoyu", "" ], [ "Feng", "Chen", "" ] ]
Spatial reasoning is an important component of human intelligence. We can imagine the shapes of 3D objects and reason about their spatial relations by merely looking at their three-view line drawings in 2D, with different levels of competence. Can deep networks be trained to perform spatial reasoning tasks? How can we measure their "spatial intelligence"? To answer these questions, we present the SPARE3D dataset. Based on cognitive science and psychometrics, SPARE3D contains three types of 2D-3D reasoning tasks on view consistency, camera pose, and shape generation, with increasing difficulty. We then design a method to automatically generate a large number of challenging questions with ground truth answers for each task. They are used to provide supervision for training our baseline models using state-of-the-art architectures like ResNet. Our experiments show that although convolutional networks have achieved superhuman performance in many visual learning tasks, their spatial reasoning performance on SPARE3D tasks is either lower than average human performance or even close to random guesses. We hope SPARE3D can stimulate new problem formulations and network designs for spatial reasoning to empower intelligent robots to operate effectively in the 3D world via 2D sensors. The dataset and code are available at https://ai4ce.github.io/SPARE3D.
1407.4650
Dipan Shaw
Dipan Lal Shaw, M. Sohel Rahman, A. S. M. Sohidull Islam and Shuvasish Karmaker
Protein Folding in the Hexagonal Prism Lattice with Diagonals
12 page, 8 figure, ISSAC
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting protein secondary structure using lattice models is one of the most studied computational problems in bioinformatics. Here the secondary structure, or three-dimensional structure, of a protein is predicted from its amino acid sequence. Secondary structure refers to local sub-structures of a protein; the most commonly found secondary structures are alpha helices and beta sheets. Since this is a problem of great potential complexity, many simplified energy models have been proposed in the literature on the basis of the interactions of amino acid residues in proteins. Here we use the well-researched Hydrophobic-Polar (HP) energy model. In this paper, we propose a hexagonal prism lattice with diagonals that can overcome the problems of other lattice structures, e.g., the parity problem. We give two approximation algorithms for protein folding on this lattice. Our first algorithm leads to a structure similar to the helix structure commonly found in proteins. This motivated us to find our next algorithm, which achieves an improved approximation ratio of 9/7.
[ { "created": "Thu, 17 Jul 2014 12:12:35 GMT", "version": "v1" } ]
2014-07-18
[ [ "Shaw", "Dipan Lal", "" ], [ "Rahman", "M. Sohel", "" ], [ "Islam", "A. S. M. Sohidull", "" ], [ "Karmaker", "Shuvasish", "" ] ]
Predicting protein secondary structure using lattice models is one of the most studied computational problems in bioinformatics. Here the secondary structure, or three-dimensional structure, of a protein is predicted from its amino acid sequence. Secondary structure refers to local sub-structures of a protein; the most commonly found secondary structures are alpha helices and beta sheets. Since this is a problem of great potential complexity, many simplified energy models have been proposed in the literature on the basis of the interactions of amino acid residues in proteins. Here we use the well-researched Hydrophobic-Polar (HP) energy model. In this paper, we propose a hexagonal prism lattice with diagonals that can overcome the problems of other lattice structures, e.g., the parity problem. We give two approximation algorithms for protein folding on this lattice. Our first algorithm leads to a structure similar to the helix structure commonly found in proteins. This motivated us to find our next algorithm, which achieves an improved approximation ratio of 9/7.
2303.05368
Quoc Huy Vu
Alex B. Grilo, Or Sattath, Quoc-Huy Vu
Encryption with Quantum Public Keys
This paper is subsumed and superseded by arXiv:2306.07698
null
null
null
cs.CR quant-ph
http://creativecommons.org/licenses/by/4.0/
It is an important question to find constructions of quantum cryptographic protocols which rely on weaker computational assumptions than classical protocols. Recently, it has been shown that oblivious transfer and multi-party computation can be constructed from one-way functions, whereas this is impossible in the classical setting in a black-box way. In this work, we study the question of building quantum public-key encryption schemes from one-way functions and even weaker assumptions. First, we revisit the definition of IND-CPA security in this setting. Then, we propose three schemes for quantum public-key encryption from one-way functions, pseudorandom function-like states with proof of deletion, and pseudorandom function-like states, respectively.
[ { "created": "Thu, 9 Mar 2023 16:17:19 GMT", "version": "v1" }, { "created": "Tue, 20 Jun 2023 10:11:12 GMT", "version": "v2" }, { "created": "Wed, 21 Jun 2023 11:28:01 GMT", "version": "v3" } ]
2023-06-22
[ [ "Grilo", "Alex B.", "" ], [ "Sattath", "Or", "" ], [ "Vu", "Quoc-Huy", "" ] ]
It is an important question to find constructions of quantum cryptographic protocols which rely on weaker computational assumptions than classical protocols. Recently, it has been shown that oblivious transfer and multi-party computation can be constructed from one-way functions, whereas this is impossible in the classical setting in a black-box way. In this work, we study the question of building quantum public-key encryption schemes from one-way functions and even weaker assumptions. First, we revisit the definition of IND-CPA security in this setting. Then, we propose three schemes for quantum public-key encryption from one-way functions, pseudorandom function-like states with proof of deletion, and pseudorandom function-like states, respectively.
2206.07139
Haozheng Luo
Hanming Wang, Haozheng Luo, Yue Wang
MBGDT:Robust Mini-Batch Gradient Descent
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In high dimensions, most machine learning methods perform fragilely even when there are only a few outliers. To address this, we introduce a new method built on a base learner, such as Bayesian regression or stochastic gradient descent, to solve the problem of this vulnerability in the model. Because mini-batch gradient descent allows for more robust convergence than batch gradient descent, we develop a method based on mini-batch gradient descent, called Mini-Batch Gradient Descent with Trimming (MBGDT). Our method shows state-of-the-art performance and greater robustness than several baselines when applied to our designed datasets.
[ { "created": "Tue, 14 Jun 2022 19:52:23 GMT", "version": "v1" } ]
2022-06-16
[ [ "Wang", "Hanming", "" ], [ "Luo", "Haozheng", "" ], [ "Wang", "Yue", "" ] ]
In high dimensions, most machine learning methods perform fragilely even when there are only a few outliers. To address this, we introduce a new method built on a base learner, such as Bayesian regression or stochastic gradient descent, to solve the problem of this vulnerability in the model. Because mini-batch gradient descent allows for more robust convergence than batch gradient descent, we develop a method based on mini-batch gradient descent, called Mini-Batch Gradient Descent with Trimming (MBGDT). Our method shows state-of-the-art performance and greater robustness than several baselines when applied to our designed datasets.
1905.09794
Ramin Norouzi
Ramin Norouzi, Amirreza Kosari, Mohammad Hossein Sabour
Evaluating the Effects of Control Surfaces Failure on the GTM
17 Pages, 32 Figures, 9 Tables
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the advances in aircraft guidance and control systems technology, Loss of Control remains the main cause of fatal accidents of large transport aircraft. Loss of Control is defined as an excursion beyond the allowable flight envelope and is often a consequence of an upset condition if an improper maneuver is implemented by the pilot. Hence, extensive research in recent years has focused on improving current fault-tolerant control systems and developing new strategies for loss-of-control prevention and recovery systems. However, the success of such systems requires the perception of the damaged aircraft's dynamic behavior and performance, and an understanding of its new flight envelope. This paper provides a comprehensive understanding of the effect of lateral control surface failures on the NASA Generic Transport Model's maneuvering flight envelope, which is a set of attainable steady-state maneuvers herein referred to as trim points. The study utilizes a massive database of the Generic Transport Model's high-fidelity maneuvering flight envelopes computed for the unimpaired case and wide ranges of aileron and rudder failure cases at different flight conditions. The flight envelope boundary is rigorously investigated and the key parameters confining the trim points at different boundary sections are identified. Trend analysis of the impaired flight envelopes and the corresponding limiting factors is performed, which demonstrates the effect of various failure degrees on the remaining feasible trim points. Results of the post-failure analysis can be employed in emergency path planning and have potential uses in the development of aircraft resilient control and upset recovery systems.
[ { "created": "Thu, 23 May 2019 17:38:17 GMT", "version": "v1" } ]
2019-05-24
[ [ "Norouzi", "Ramin", "" ], [ "Kosari", "Amirreza", "" ], [ "Sabour", "Mohammad Hossein", "" ] ]
Despite the advances in aircraft guidance and control systems technology, Loss of Control remains the main cause of fatal accidents of large transport aircraft. Loss of Control is defined as an excursion beyond the allowable flight envelope and is often a consequence of an upset condition if an improper maneuver is implemented by the pilot. Hence, extensive research in recent years has focused on improving current fault-tolerant control systems and developing new strategies for loss-of-control prevention and recovery systems. However, the success of such systems requires the perception of the damaged aircraft's dynamic behavior and performance, and an understanding of its new flight envelope. This paper provides a comprehensive understanding of the effect of lateral control surface failures on the NASA Generic Transport Model's maneuvering flight envelope, which is a set of attainable steady-state maneuvers herein referred to as trim points. The study utilizes a massive database of the Generic Transport Model's high-fidelity maneuvering flight envelopes computed for the unimpaired case and wide ranges of aileron and rudder failure cases at different flight conditions. The flight envelope boundary is rigorously investigated and the key parameters confining the trim points at different boundary sections are identified. Trend analysis of the impaired flight envelopes and the corresponding limiting factors is performed, which demonstrates the effect of various failure degrees on the remaining feasible trim points. Results of the post-failure analysis can be employed in emergency path planning and have potential uses in the development of aircraft resilient control and upset recovery systems.
2308.02299
Qiang Zhou
Qiang Zhou, Chaohui Yu, Shaofeng Zhang, Sitong Wu, Zhibing Wang, Fan Wang
RegionBLIP: A Unified Multi-modal Pre-training Framework for Holistic and Regional Comprehension
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we investigate extending the comprehension of Multi-modal Large Language Models (MLLMs) to regional objects. To this end, we propose to extract features corresponding to regional objects as soft prompts for the LLM, which provides a straightforward and scalable approach and eliminates the need for LLM fine-tuning. To effectively extract regional features from regular image features and irregular point cloud features, we present a novel and unified position-assisted feature extraction module. Furthermore, training an MLLM from scratch is highly time-consuming. Thus, we propose incrementally extending existing pre-trained MLLMs to comprehend more modalities and the regional objects of those modalities. Specifically, we freeze the Q-Former from BLIP-2, an impressive MLLM, and optimize the modality-specific LoRA parameters in the Q-Former and LLM for each newly introduced modality. Freezing the Q-Former eliminates the need for extensive pre-training on massive image-text data. The frozen Q-Former pre-trained on massive image-text data is also beneficial for pre-training on image-region-text data. We name our framework RegionBLIP. We pre-train RegionBLIP on image-region-text, point-cloud-text, and point-cloud-region-text data. Experimental results verify that RegionBLIP can preserve the image comprehension capability of BLIP-2 and further gain a comprehension of the newly introduced point cloud modality and regional objects. The data, code, and pre-trained models will be available at https://github.com/mightyzau/RegionBLIP.
[ { "created": "Thu, 3 Aug 2023 14:17:22 GMT", "version": "v1" } ]
2023-08-07
[ [ "Zhou", "Qiang", "" ], [ "Yu", "Chaohui", "" ], [ "Zhang", "Shaofeng", "" ], [ "Wu", "Sitong", "" ], [ "Wang", "Zhibing", "" ], [ "Wang", "Fan", "" ] ]
In this work, we investigate extending the comprehension of Multi-modal Large Language Models (MLLMs) to regional objects. To this end, we propose to extract features corresponding to regional objects as soft prompts for the LLM, which provides a straightforward and scalable approach and eliminates the need for LLM fine-tuning. To effectively extract regional features from regular image features and irregular point cloud features, we present a novel and unified position-assisted feature extraction module. Furthermore, training an MLLM from scratch is highly time-consuming. Thus, we propose incrementally extending existing pre-trained MLLMs to comprehend more modalities and the regional objects of those modalities. Specifically, we freeze the Q-Former from BLIP-2, an impressive MLLM, and optimize the modality-specific LoRA parameters in the Q-Former and LLM for each newly introduced modality. Freezing the Q-Former eliminates the need for extensive pre-training on massive image-text data. The frozen Q-Former pre-trained on massive image-text data is also beneficial for pre-training on image-region-text data. We name our framework RegionBLIP. We pre-train RegionBLIP on image-region-text, point-cloud-text, and point-cloud-region-text data. Experimental results verify that RegionBLIP can preserve the image comprehension capability of BLIP-2 and further gain a comprehension of the newly introduced point cloud modality and regional objects. The data, code, and pre-trained models will be available at https://github.com/mightyzau/RegionBLIP.
2406.05072
Emilia Magnani
Emilia Magnani, Marvin Pf\"ortner, Tobias Weber, Philipp Hennig
Linearization Turns Neural Operators into Function-Valued Gaussian Processes
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Modeling dynamical systems, e.g. in climate and engineering sciences, often necessitates solving partial differential equations. Neural operators are deep neural networks designed to learn nontrivial solution operators of such differential equations from data. As for all statistical models, the predictions of these models are imperfect and exhibit errors. Such errors are particularly difficult to spot in the complex nonlinear behaviour of dynamical systems. We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators using function-valued Gaussian processes. Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming and provides a practical yet theoretically sound way to apply the linearized Laplace approximation to neural operators. In a case study on Fourier neural operators, we show that, even for a discretized input, our method yields a Gaussian closure--a structured Gaussian process posterior capturing the uncertainty in the output function of the neural operator, which can be evaluated at an arbitrary set of points. The method adds minimal prediction overhead, can be applied post-hoc without retraining the neural operator, and scales to large models and datasets. We showcase the efficacy of our approach through applications to different types of partial differential equations.
[ { "created": "Fri, 7 Jun 2024 16:43:54 GMT", "version": "v1" } ]
2024-06-10
[ [ "Magnani", "Emilia", "" ], [ "Pförtner", "Marvin", "" ], [ "Weber", "Tobias", "" ], [ "Hennig", "Philipp", "" ] ]
Modeling dynamical systems, e.g. in climate and engineering sciences, often necessitates solving partial differential equations. Neural operators are deep neural networks designed to learn nontrivial solution operators of such differential equations from data. As for all statistical models, the predictions of these models are imperfect and exhibit errors. Such errors are particularly difficult to spot in the complex nonlinear behaviour of dynamical systems. We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators using function-valued Gaussian processes. Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming and provides a practical yet theoretically sound way to apply the linearized Laplace approximation to neural operators. In a case study on Fourier neural operators, we show that, even for a discretized input, our method yields a Gaussian closure--a structured Gaussian process posterior capturing the uncertainty in the output function of the neural operator, which can be evaluated at an arbitrary set of points. The method adds minimal prediction overhead, can be applied post-hoc without retraining the neural operator, and scales to large models and datasets. We showcase the efficacy of our approach through applications to different types of partial differential equations.
2104.05824
Shuoyang Ding
Shuoyang Ding, Philipp Koehn
Evaluating Saliency Methods for Neural Language Models
19 pages, 2 figures, Accepted for NAACL 2021
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Saliency methods are widely used to interpret neural network predictions, but different variants of saliency methods often disagree even on the interpretations of the same prediction made by the same model. In these cases, how do we identify when these interpretations are trustworthy enough to be used in analyses? To address this question, we conduct a comprehensive and quantitative evaluation of saliency methods on a fundamental category of NLP models: neural language models. We evaluate the quality of prediction interpretations from two perspectives, each of which represents a desirable property of these interpretations: plausibility and faithfulness. Our evaluation is conducted on four different datasets constructed from existing human annotations of syntactic and semantic agreements, at both the sentence level and the document level. Through our evaluation, we identified various ways saliency methods could yield interpretations of low quality. We recommend that future work deploying such methods to neural language models should carefully validate their interpretations before drawing insights.
[ { "created": "Mon, 12 Apr 2021 21:19:48 GMT", "version": "v1" } ]
2021-04-14
[ [ "Ding", "Shuoyang", "" ], [ "Koehn", "Philipp", "" ] ]
Saliency methods are widely used to interpret neural network predictions, but different variants of saliency methods often disagree even on the interpretations of the same prediction made by the same model. In these cases, how do we identify when are these interpretations trustworthy enough to be used in analyses? To address this question, we conduct a comprehensive and quantitative evaluation of saliency methods on a fundamental category of NLP models: neural language models. We evaluate the quality of prediction interpretations from two perspectives that each represents a desirable property of these interpretations: plausibility and faithfulness. Our evaluation is conducted on four different datasets constructed from the existing human annotation of syntactic and semantic agreements, on both sentence-level and document-level. Through our evaluation, we identified various ways saliency methods could yield interpretations of low quality. We recommend that future work deploying such methods to neural language models should carefully validate their interpretations before drawing insights.
2201.12380
Shichang Zhang
Shichang Zhang, Yozen Liu, Neil Shah, Yizhou Sun
GStarX: Explaining Graph Neural Networks with Structure-Aware Cooperative Games
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explaining machine learning models is an important and increasingly popular area of research interest. The Shapley value from game theory has been proposed as a prime approach to compute feature importance towards model predictions on images, text, tabular data, and recently graph neural networks (GNNs) on graphs. In this work, we revisit the appropriateness of the Shapley value for GNN explanation, where the task is to identify the most important subgraph and constituent nodes for GNN predictions. We claim that the Shapley value is a non-ideal choice for graph data because it is by definition not structure-aware. We propose a Graph Structure-aware eXplanation (GStarX) method to leverage the critical graph structure information to improve the explanation. Specifically, we define a scoring function based on a new structure-aware value from the cooperative game theory proposed by Hamiache and Navarro (HN). When used to score node importance, the HN value utilizes graph structures to attribute cooperation surplus between neighbor nodes, resembling message passing in GNNs, so that node importance scores reflect not only the node feature importance, but also the node structural roles. We demonstrate that GStarX produces qualitatively more intuitive explanations, and quantitatively improves explanation fidelity over strong baselines on chemical graph property prediction and text graph sentiment classification.
[ { "created": "Fri, 28 Jan 2022 19:19:39 GMT", "version": "v1" }, { "created": "Wed, 16 Feb 2022 23:06:26 GMT", "version": "v2" }, { "created": "Sat, 21 May 2022 06:23:20 GMT", "version": "v3" }, { "created": "Thu, 13 Oct 2022 05:16:08 GMT", "version": "v4" }, { "created": "Thu, 29 Dec 2022 23:44:14 GMT", "version": "v5" } ]
2023-01-02
[ [ "Zhang", "Shichang", "" ], [ "Liu", "Yozen", "" ], [ "Shah", "Neil", "" ], [ "Sun", "Yizhou", "" ] ]
Explaining machine learning models is an important and increasingly popular area of research interest. The Shapley value from game theory has been proposed as a prime approach to compute feature importance towards model predictions on images, text, tabular data, and recently graph neural networks (GNNs) on graphs. In this work, we revisit the appropriateness of the Shapley value for GNN explanation, where the task is to identify the most important subgraph and constituent nodes for GNN predictions. We claim that the Shapley value is a non-ideal choice for graph data because it is by definition not structure-aware. We propose a Graph Structure-aware eXplanation (GStarX) method to leverage the critical graph structure information to improve the explanation. Specifically, we define a scoring function based on a new structure-aware value from the cooperative game theory proposed by Hamiache and Navarro (HN). When used to score node importance, the HN value utilizes graph structures to attribute cooperation surplus between neighbor nodes, resembling message passing in GNNs, so that node importance scores reflect not only the node feature importance, but also the node structural roles. We demonstrate that GStarX produces qualitatively more intuitive explanations, and quantitatively improves explanation fidelity over strong baselines on chemical graph property prediction and text graph sentiment classification.
2311.05021
Josue Ruano
Josu\'e Ruano, Mart\'in G\'omez, Eduardo Romero, Antoine Manzanera
Leveraging a realistic synthetic database to learn Shape-from-Shading for estimating the colon depth in colonoscopy images
null
Ruano, J., Gomez, M., Romero, E., & Manzanera, A. (2024). Leveraging a realistic synthetic database to learn Shape-from-Shading for estimating the colon depth in colonoscopy images. Computerized Medical Imaging and Graphics, 102390
10.1016/j.compmedimag.2024.102390
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Colonoscopy is the procedure of choice to diagnose colon and rectal cancer, from early detection of small precancerous lesions (polyps) to confirmation of malignant masses. However, the high variability of the organ appearance and the complex shape of both the colon wall and structures of interest make this exploration difficult. Learned visuospatial and perceptual abilities mitigate technical limitations in clinical practice by proper estimation of the intestinal depth. This work introduces a novel methodology to estimate colon depth maps in single frames from monocular colonoscopy videos. The generated depth map is inferred from the shading variation of the colon wall with respect to the light source, as learned from a realistic synthetic database. Briefly, a classic convolutional neural network architecture is trained from scratch to estimate the depth map, improving sharp depth estimations in haustral folds and polyps by a custom loss function that minimizes the estimation error in edges and curvatures. The network was trained on a custom synthetic colonoscopy database constructed and released herein, composed of 248,400 frames (47 videos) with pixel-level depth annotations. This collection comprises 5 subsets of videos with progressively higher levels of visual complexity. Evaluation of the depth estimation with the synthetic database reached a threshold accuracy of 95.65% and a mean RMSE of 0.451 cm, while a qualitative assessment with a real database showed consistent depth estimations, visually evaluated by the expert gastroenterologist coauthoring this paper. Finally, the method achieved competitive performance with respect to another state-of-the-art method using a public synthetic database and comparable results with five other state-of-the-art methods on a set of images.
[ { "created": "Wed, 8 Nov 2023 21:14:56 GMT", "version": "v1" } ]
2024-05-07
[ [ "Ruano", "Josué", "" ], [ "Gómez", "Martín", "" ], [ "Romero", "Eduardo", "" ], [ "Manzanera", "Antoine", "" ] ]
Colonoscopy is the procedure of choice to diagnose colon and rectal cancer, from early detection of small precancerous lesions (polyps) to confirmation of malignant masses. However, the high variability of the organ appearance and the complex shape of both the colon wall and structures of interest make this exploration difficult. Learned visuospatial and perceptual abilities mitigate technical limitations in clinical practice by proper estimation of the intestinal depth. This work introduces a novel methodology to estimate colon depth maps in single frames from monocular colonoscopy videos. The generated depth map is inferred from the shading variation of the colon wall with respect to the light source, as learned from a realistic synthetic database. Briefly, a classic convolutional neural network architecture is trained from scratch to estimate the depth map, improving sharp depth estimations in haustral folds and polyps by a custom loss function that minimizes the estimation error in edges and curvatures. The network was trained on a custom synthetic colonoscopy database constructed and released herein, composed of 248,400 frames (47 videos) with pixel-level depth annotations. This collection comprises 5 subsets of videos with progressively higher levels of visual complexity. Evaluation of the depth estimation with the synthetic database reached a threshold accuracy of 95.65% and a mean RMSE of 0.451 cm, while a qualitative assessment with a real database showed consistent depth estimations, visually evaluated by the expert gastroenterologist coauthoring this paper. Finally, the method achieved competitive performance with respect to another state-of-the-art method using a public synthetic database and comparable results with five other state-of-the-art methods on a set of images.
1811.04756
Pedro Hermosilla Casajus
Pedro Hermosilla and Sebastian Maisch and Tobias Ritschel and Timo Ropinski
Deep-learning the Latent Space of Light Transport
Eurographics Symposium on Rendering 2019
null
null
null
cs.GR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image using a dedicated 3D-2D network in a second step. We show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.
[ { "created": "Mon, 12 Nov 2018 14:55:58 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2019 10:22:08 GMT", "version": "v2" } ]
2019-07-02
[ [ "Hermosilla", "Pedro", "" ], [ "Maisch", "Sebastian", "" ], [ "Ritschel", "Tobias", "" ], [ "Ropinski", "Timo", "" ] ]
We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image using a dedicated 3D-2D network in a second step. We show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.
1803.09909
Xinghao Ding
Liyan Sun, Zhiwen Fan, Xinghao Ding, Congbo Cai, Yue Huang, John Paisley
A Divide-and-Conquer Approach to Compressed Sensing MRI
37 pages, 20 figures, 2 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compressed sensing (CS) theory assures us that we can accurately reconstruct magnetic resonance images using fewer k-space measurements than the Nyquist sampling rate requires. In traditional CS-MRI inversion methods, the fact that the energy within the Fourier measurement domain is distributed non-uniformly is often neglected during reconstruction. As a result, more densely sampled low-frequency information tends to dominate penalization schemes for reconstructing MRI at the expense of high-frequency details. In this paper, we propose a new framework for CS-MRI inversion in which we decompose the observed k-space data into "subspaces" via sets of filters in a lossless way, and reconstruct the images in these various spaces individually using off-the-shelf algorithms. We then fuse the results to obtain the final reconstruction. In this way we are able to focus reconstruction on frequency information within the entire k-space more equally, preserving both high and low frequency details. We demonstrate that the proposed framework is competitive with state-of-the-art methods in CS-MRI in terms of quantitative performance, and often improves an algorithm's results qualitatively compared with its direct application to k-space.
[ { "created": "Tue, 27 Mar 2018 06:07:17 GMT", "version": "v1" } ]
2018-03-28
[ [ "Sun", "Liyan", "" ], [ "Fan", "Zhiwen", "" ], [ "Ding", "Xinghao", "" ], [ "Cai", "Congbo", "" ], [ "Huang", "Yue", "" ], [ "Paisley", "John", "" ] ]
Compressed sensing (CS) theory assures us that we can accurately reconstruct magnetic resonance images using fewer k-space measurements than the Nyquist sampling rate requires. In traditional CS-MRI inversion methods, the fact that the energy within the Fourier measurement domain is distributed non-uniformly is often neglected during reconstruction. As a result, more densely sampled low-frequency information tends to dominate penalization schemes for reconstructing MRI at the expense of high-frequency details. In this paper, we propose a new framework for CS-MRI inversion in which we decompose the observed k-space data into "subspaces" via sets of filters in a lossless way, and reconstruct the images in these various spaces individually using off-the-shelf algorithms. We then fuse the results to obtain the final reconstruction. In this way we are able to focus reconstruction on frequency information within the entire k-space more equally, preserving both high and low frequency details. We demonstrate that the proposed framework is competitive with state-of-the-art methods in CS-MRI in terms of quantitative performance, and often improves an algorithm's results qualitatively compared with its direct application to k-space.
2101.07215
Samarth Bhatia
Yukti Makhija (1), Samarth Bhatia (1), Shalendra Singh (2), Sneha Kumar Jayaswal (1), Prabhat Singh Malik (3), Pallavi Gupta (4), Shreyas N. Samaga (1), Shreya Johri (1), Sri Krishna Venigalla (2), Rabi Narayan Hota (2), Surinder Singh Bhatia (5), Ishaan Gupta (1) ((1) Indian Institute of Technology Delhi, (2) Armed forces Medical College Pune, (3) All India Institute of Medical Sciences Delhi, (4) Indian institute of Science Education and Research Bhopal, (5) DGAFMS office Ministry of Defence Delhi)
Challenges in the application of a mortality prediction model for COVID-19 patients on an Indian cohort
8 pages, 1 figure, 1 table. Study designed by: IG, SB, YM, SJ. Data collected and curated by: SKJ, PG, SNS, RNH, SSB, PSM, SKV and SS. Data analysis performed by: SB, YM. Manuscript was written by: IG, SS, SB, YM. All authors read and approved the final manuscript. The first two authors have contributed equally
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Many countries are now experiencing the third wave of the COVID-19 pandemic, straining healthcare resources with an acute shortage of hospital beds and ventilators for critically ill patients. This situation is especially dire in India, which has the second largest load of COVID-19 cases and a relatively resource-scarce medical infrastructure. Therefore, it becomes essential to triage patients based on the severity of their disease and devote resources towards critically ill patients. Yan et al. [1] have published a very pertinent study that uses machine learning (ML) methods to predict the outcome of COVID-19 patients based on their clinical parameters on the day of admission. They used the XGBoost algorithm, a type of ensemble model, to build the mortality prediction model. The final classifier is built through the sequential addition of multiple weak classifiers. The clinically operable decision rule was obtained from a 'single-tree XGBoost' and used lactic dehydrogenase (LDH), lymphocyte and high-sensitivity C-reactive protein (hs-CRP) values. This decision tree achieved 100% survival prediction and 81% mortality prediction. However, these models have several technical challenges and do not provide an out-of-the-box solution that can be deployed for other populations, as has been reported in the "Matters Arising" section of Yan et al. Here, we show the limitations of this model by deploying it on one of the largest datasets of COVID-19 patients, containing detailed clinical parameters collected from India.
[ { "created": "Fri, 15 Jan 2021 07:06:49 GMT", "version": "v1" } ]
2021-01-19
[ [ "Makhija", "Yukti", "" ], [ "Bhatia", "Samarth", "" ], [ "Singh", "Shalendra", "" ], [ "Jayaswal", "Sneha Kumar", "" ], [ "Malik", "Prabhat Singh", "" ], [ "Gupta", "Pallavi", "" ], [ "Samaga", "Shreyas N.", "" ], [ "Johri", "Shreya", "" ], [ "Venigalla", "Sri Krishna", "" ], [ "Hota", "Rabi Narayan", "" ], [ "Bhatia", "Surinder Singh", "" ], [ "Gupta", "Ishaan", "" ] ]
Many countries are now experiencing the third wave of the COVID-19 pandemic, straining healthcare resources with an acute shortage of hospital beds and ventilators for critically ill patients. This situation is especially dire in India, which has the second largest load of COVID-19 cases and a relatively resource-scarce medical infrastructure. Therefore, it becomes essential to triage patients based on the severity of their disease and devote resources towards critically ill patients. Yan et al. [1] have published a very pertinent study that uses machine learning (ML) methods to predict the outcome of COVID-19 patients based on their clinical parameters on the day of admission. They used the XGBoost algorithm, a type of ensemble model, to build the mortality prediction model. The final classifier is built through the sequential addition of multiple weak classifiers. The clinically operable decision rule was obtained from a 'single-tree XGBoost' and used lactic dehydrogenase (LDH), lymphocyte and high-sensitivity C-reactive protein (hs-CRP) values. This decision tree achieved 100% survival prediction and 81% mortality prediction. However, these models have several technical challenges and do not provide an out-of-the-box solution that can be deployed for other populations, as has been reported in the "Matters Arising" section of Yan et al. Here, we show the limitations of this model by deploying it on one of the largest datasets of COVID-19 patients, containing detailed clinical parameters collected from India.
2403.18370
Luigi Sigillo
Luigi Sigillo, Riccardo Fosco Gramaccioni, Alessandro Nicolosi, Danilo Comminiello
Ship in Sight: Diffusion Models for Ship-Image Super Resolution
Accepted at 2024 International Joint Conference on Neural Networks (IJCNN)
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
In recent years, remarkable advancements have been achieved in the field of image generation, primarily driven by the escalating demand for high-quality outcomes across various image generation subtasks, such as inpainting, denoising, and super resolution. A major effort is devoted to exploring the application of super-resolution techniques to enhance the quality of low-resolution images. In this context, our method explores in depth the problem of ship image super resolution, which is crucial for coastal and port surveillance. We investigate the opportunity given by the growing interest in text-to-image diffusion models, taking advantage of the prior knowledge that such foundation models have already learned. In particular, we present a diffusion-model-based architecture that leverages text conditioning during training while being class-aware, to best preserve the crucial details of the ships during the generation of the super-resolved image. Given the specificity of this task and the scarce availability of off-the-shelf data, we also introduce a large labeled ship dataset scraped from online ship images, mostly from the ShipSpotting\footnote{\url{www.shipspotting.com}} website. Our method achieves more robust results than other deep learning models previously employed for super resolution, as proven by the multiple experiments performed. Moreover, we investigate how this model can benefit downstream tasks, such as classification and object detection, thus emphasizing practical implementation in a real-world scenario. Experimental results show the flexibility, reliability, and impressive performance of the proposed framework over state-of-the-art methods for different tasks. The code is available at: https://github.com/LuigiSigillo/ShipinSight .
[ { "created": "Wed, 27 Mar 2024 09:06:36 GMT", "version": "v1" }, { "created": "Tue, 21 May 2024 16:45:05 GMT", "version": "v2" } ]
2024-05-22
[ [ "Sigillo", "Luigi", "" ], [ "Gramaccioni", "Riccardo Fosco", "" ], [ "Nicolosi", "Alessandro", "" ], [ "Comminiello", "Danilo", "" ] ]
In recent years, remarkable advancements have been achieved in the field of image generation, primarily driven by the escalating demand for high-quality outcomes across various image generation subtasks, such as inpainting, denoising, and super resolution. A major effort is devoted to exploring the application of super-resolution techniques to enhance the quality of low-resolution images. In this context, our method explores in depth the problem of ship image super resolution, which is crucial for coastal and port surveillance. We investigate the opportunity given by the growing interest in text-to-image diffusion models, taking advantage of the prior knowledge that such foundation models have already learned. In particular, we present a diffusion-model-based architecture that leverages text conditioning during training while being class-aware, to best preserve the crucial details of the ships during the generation of the super-resolved image. Given the specificity of this task and the scarce availability of off-the-shelf data, we also introduce a large labeled ship dataset scraped from online ship images, mostly from the ShipSpotting\footnote{\url{www.shipspotting.com}} website. Our method achieves more robust results than other deep learning models previously employed for super resolution, as proven by the multiple experiments performed. Moreover, we investigate how this model can benefit downstream tasks, such as classification and object detection, thus emphasizing practical implementation in a real-world scenario. Experimental results show the flexibility, reliability, and impressive performance of the proposed framework over state-of-the-art methods for different tasks. The code is available at: https://github.com/LuigiSigillo/ShipinSight .
1811.04491
Kowshik Thopalli
Kowshik Thopalli, Rushil Anirudh, Jayaraman J. Thiagarajan, Pavan Turaga
Multiple Subspace Alignment Improves Domain Adaptation
under review in ICASSP 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel unsupervised domain adaptation (DA) method for cross-domain visual recognition. Though subspace methods have found success in DA, their performance is often limited due to the assumption of approximating an entire dataset using a single low-dimensional subspace. Instead, we develop a method to effectively represent the source and target datasets via a collection of low-dimensional subspaces, and subsequently align them by exploiting the natural geometry of the space of subspaces, on the Grassmann manifold. We demonstrate the effectiveness of this approach through empirical studies on two widely used benchmarks, achieving state-of-the-art domain adaptation performance.
[ { "created": "Sun, 11 Nov 2018 22:02:16 GMT", "version": "v1" } ]
2018-11-13
[ [ "Thopalli", "Kowshik", "" ], [ "Anirudh", "Rushil", "" ], [ "Thiagarajan", "Jayaraman J.", "" ], [ "Turaga", "Pavan", "" ] ]
We present a novel unsupervised domain adaptation (DA) method for cross-domain visual recognition. Though subspace methods have found success in DA, their performance is often limited due to the assumption of approximating an entire dataset using a single low-dimensional subspace. Instead, we develop a method to effectively represent the source and target datasets via a collection of low-dimensional subspaces, and subsequently align them by exploiting the natural geometry of the space of subspaces, on the Grassmann manifold. We demonstrate the effectiveness of this approach through empirical studies on two widely used benchmarks, achieving state-of-the-art domain adaptation performance.
2302.02990
Rickard Stureborg
Rickard Stureborg, Bhuwan Dhingra, Jun Yang
Interface Design for Crowdsourcing Hierarchical Multi-Label Text Annotations
To appear in CHI-2023
null
10.1145/3544548.3581431
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Human data labeling is an important and expensive task at the heart of supervised learning systems. Hierarchies help humans understand and organize concepts. We ask whether and how concept hierarchies can inform the design of annotation interfaces to improve labeling quality and efficiency. We study this question through annotation of vaccine misinformation, where the labeling task is difficult and highly subjective. We investigate 6 user interface designs for crowdsourcing hierarchical labels by collecting over 18,000 individual annotations. Under a fixed budget, integrating hierarchies into the design improves crowdsource workers' F1 scores. We attribute this to (1) Grouping similar concepts, improving F1 scores by +0.16 over random groupings, (2) Strong relative performance on high-difficulty examples (relative F1 score difference of +0.40), and (3) Filtering out obvious negatives, increasing precision by +0.07. Ultimately, labeling schemes integrating the hierarchy outperform those that do not - achieving mean F1 of 0.70.
[ { "created": "Mon, 6 Feb 2023 18:29:16 GMT", "version": "v1" }, { "created": "Wed, 22 Feb 2023 21:03:57 GMT", "version": "v2" } ]
2023-02-24
[ [ "Stureborg", "Rickard", "" ], [ "Dhingra", "Bhuwan", "" ], [ "Yang", "Jun", "" ] ]
Human data labeling is an important and expensive task at the heart of supervised learning systems. Hierarchies help humans understand and organize concepts. We ask whether and how concept hierarchies can inform the design of annotation interfaces to improve labeling quality and efficiency. We study this question through annotation of vaccine misinformation, where the labeling task is difficult and highly subjective. We investigate 6 user interface designs for crowdsourcing hierarchical labels by collecting over 18,000 individual annotations. Under a fixed budget, integrating hierarchies into the design improves crowdsource workers' F1 scores. We attribute this to (1) Grouping similar concepts, improving F1 scores by +0.16 over random groupings, (2) Strong relative performance on high-difficulty examples (relative F1 score difference of +0.40), and (3) Filtering out obvious negatives, increasing precision by +0.07. Ultimately, labeling schemes integrating the hierarchy outperform those that do not - achieving mean F1 of 0.70.
1709.00098
Duc Nguyen
Duc T. Nguyen and Blair Kaneshiro
AudExpCreator: A GUI-based Matlab tool for designing and creating auditory experiments with the Psychophysics Toolbox
15 pages, 6 figures
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present AudExpCreator, a GUI-based Matlab tool for designing and creating auditory experiments. AudExpCreator allows users to generate auditory experiments that run on Matlab's Psychophysics Toolbox without having to write any code; rather, users simply follow instructions in GUIs to specify desired design parameters. The software comprises five auditory study types, including behavioral studies and integration with EEG and physiological response collection systems. Advanced features permit more complicated experimental designs as well as maintenance and update of previously created experiments. AudExpCreator alleviates programming barriers while providing a free, open-source alternative to commercial experimental design software.
[ { "created": "Thu, 31 Aug 2017 22:15:12 GMT", "version": "v1" } ]
2017-09-04
[ [ "Nguyen", "Duc T.", "" ], [ "Kaneshiro", "Blair", "" ] ]
We present AudExpCreator, a GUI-based Matlab tool for designing and creating auditory experiments. AudExpCreator allows users to generate auditory experiments that run on Matlab's Psychophysics Toolbox without having to write any code; rather, users simply follow instructions in GUIs to specify desired design parameters. The software comprises five auditory study types, including behavioral studies and integration with EEG and physiological response collection systems. Advanced features permit more complicated experimental designs as well as maintenance and update of previously created experiments. AudExpCreator alleviates programming barriers while providing a free, open-source alternative to commercial experimental design software.
1904.07011
Li Huang
Li Huang and Eun-Young Kang
SMT-based Probabilistic Analysis of Timing Constraints in Cyber-Physical Systems
2 pages, accepted at FMCAD2018 student forum
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling and analysis of timing constraints is crucial in cyber-physical systems (CPS). EAST-ADL is an architectural language dedicated to safety-critical embedded system design. SIMULINK/STATEFLOW (S/S) is a widely used industrial tool for modeling and analysis of embedded systems. In most cases, a bounded number of violations of timing constraints would not lead to system failures when the consequences of the violations are negligible; such constraints are called Weakly-Hard (WH). We have previously defined a probabilistic extension of the Clock Constraint Specification Language (CCSL), called PrCCSL, for formal specification of EAST-ADL timing constraints in the WH context. In this paper, we propose an SMT-based approach for probabilistic analysis of EAST-ADL timing constraints in CPS modeled in S/S: an automatic transformation from S/S models to the input language of an SMT solver is provided; timing constraints specified in PrCCSL are encoded into SMT formulas, and the probabilistic analysis of timing constraints is reduced to the validity checking of the resulting SMT encodings. Our approach is demonstrated on a cooperative automotive system case study.
[ { "created": "Mon, 15 Apr 2019 12:59:23 GMT", "version": "v1" } ]
2019-04-16
[ [ "Huang", "Li", "" ], [ "Kang", "Eun-Young", "" ] ]
Modeling and analysis of timing constraints is crucial in cyber-physical systems (CPS). EAST-ADL is an architectural language dedicated to safety-critical embedded system design. SIMULINK/STATEFLOW (S/S) is a widely used industrial tool for modeling and analysis of embedded systems. In most cases, a bounded number of violations of timing constraints would not lead to system failures when the consequences of the violations are negligible; such constraints are called Weakly-Hard (WH). We have previously defined a probabilistic extension of the Clock Constraint Specification Language (CCSL), called PrCCSL, for formal specification of EAST-ADL timing constraints in the WH context. In this paper, we propose an SMT-based approach for probabilistic analysis of EAST-ADL timing constraints in CPS modeled in S/S: an automatic transformation from S/S models to the input language of an SMT solver is provided; timing constraints specified in PrCCSL are encoded into SMT formulas, and the probabilistic analysis of timing constraints is reduced to the validity checking of the resulting SMT encodings. Our approach is demonstrated on a cooperative automotive system case study.
1408.0595
T.R. Gopalakrishnan Nair
T. R. Gopalakrishnan Nair and Meenakshi Malhotra
Correlating and Cross-linking Knowledge Threads in Informledge System for Creating New Knowledge
6 pages, 6 figures, 3 tables, International Conference on Knowledge Engineering and Ontology Development, 2012
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been considerable advance in computing to mimic the way in which the brain tries to comprehend and structure information to retrieve meaningful knowledge. It is identified that neuronal entities hold the whole of the knowledge that the species makes use of. We intend to develop a modified knowledge-based system, termed the Informledge System (ILS), with autonomous nodes and intelligent links that integrate and structure the pieces of knowledge. We conceive that every piece of knowledge is a cluster of cross-linked and correlated structure. In this paper, we put forward the theory of nodes depicting concepts, referred to as Entity Concept States, which in turn are dealt with through Concept State Diagrams (CSD). This theory is based on an abstract framework provided by the concepts. The framework represents the ILS as a weighted graph, where the weights attached to the linked nodes help in knowledge retrieval by providing the direction of connectivity of autonomous nodes during knowledge thread traversal. Here, for the first time in the process of developing Informledge, we apply tensor computation for creating intelligent combinatorial knowledge with cross mutation to create fresh knowledge, which looks to be the fundamentals of a typical thought process.
[ { "created": "Mon, 4 Aug 2014 06:10:24 GMT", "version": "v1" } ]
2014-08-05
[ [ "Nair", "T. R. Gopalakrishnan", "" ], [ "Malhotra", "Meenakshi", "" ] ]
There has been considerable advance in computing to mimic the way in which the brain tries to comprehend and structure information to retrieve meaningful knowledge. It is identified that neuronal entities hold the whole of the knowledge that the species makes use of. We intended to develop a modified knowledge-based system, termed the Informledge System (ILS), with autonomous nodes and intelligent links that integrate and structure the pieces of knowledge. We conceive that every piece of knowledge is a cluster of cross-linked and correlated structures. In this paper, we put forward the theory of nodes depicting concepts, referred to as the Entity Concept State, which in turn is dealt with through Concept State Diagrams (CSD). This theory is based on an abstract framework provided by the concepts. The framework represents the ILS as a weighted graph, where the weights attached to the linked nodes help in knowledge retrieval by providing the direction of connectivity of the autonomous nodes traversed along a knowledge thread. Here, for the first time in the process of developing Informledge, we apply tensor computation for creating intelligent combinatorial knowledge with cross mutation to create fresh knowledge, which appears to be fundamental to a typical thought process.
2111.11631
Zhaobo Qi
Zhaobo Qi, Shuhui Wang, Chi Su, Li Su, Qingming Huang, and Qi Tian
Self-Regulated Learning for Egocentric Video Activity Anticipation
null
null
10.1109/TPAMI.2021.3059923
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Future activity anticipation is a challenging problem in egocentric vision. As a standard future activity anticipation paradigm, recursive sequence prediction suffers from the accumulation of errors. To address this problem, we propose a simple and effective Self-Regulated Learning (SRL) framework, which aims to regulate the intermediate representation consecutively to produce a representation that (a) emphasizes the novel information in the frame of the current time-stamp in contrast to previously observed content, and (b) reflects its correlation with previously observed frames. The former is achieved by minimizing a contrastive loss, and the latter by a dynamic reweighting mechanism that attends to informative frames in the observed content via a similarity comparison between the feature of the current frame and those of the observed frames. The learned final video representation can be further enhanced by multi-task learning, which performs joint feature learning on the target activity labels and the automatically detected action and object class tokens. SRL sharply outperforms existing state-of-the-art methods in most cases on two egocentric video datasets and two third-person video datasets. Its effectiveness is also verified by the experimental fact that the action and object concepts that support the activity semantics can be accurately identified.
[ { "created": "Tue, 23 Nov 2021 03:29:18 GMT", "version": "v1" } ]
2021-11-24
[ [ "Qi", "Zhaobo", "" ], [ "Wang", "Shuhui", "" ], [ "Su", "Chi", "" ], [ "Su", "Li", "" ], [ "Huang", "Qingming", "" ], [ "Tian", "Qi", "" ] ]
Future activity anticipation is a challenging problem in egocentric vision. As a standard future activity anticipation paradigm, recursive sequence prediction suffers from the accumulation of errors. To address this problem, we propose a simple and effective Self-Regulated Learning (SRL) framework, which aims to regulate the intermediate representation consecutively to produce a representation that (a) emphasizes the novel information in the frame of the current time-stamp in contrast to previously observed content, and (b) reflects its correlation with previously observed frames. The former is achieved by minimizing a contrastive loss, and the latter by a dynamic reweighting mechanism that attends to informative frames in the observed content via a similarity comparison between the feature of the current frame and those of the observed frames. The learned final video representation can be further enhanced by multi-task learning, which performs joint feature learning on the target activity labels and the automatically detected action and object class tokens. SRL sharply outperforms existing state-of-the-art methods in most cases on two egocentric video datasets and two third-person video datasets. Its effectiveness is also verified by the experimental fact that the action and object concepts that support the activity semantics can be accurately identified.
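The similarity-based reweighting in (b) can be sketched in plain Python: score each observed frame feature by cosine similarity to the current frame, softmax the scores, and form a weighted summary. This is a schematic reconstruction under stated assumptions (the softmax weighting and the averaging are ours), not the authors' implementation:

```python
from math import exp, sqrt

def reweight_observed(current, observed):
    """Attend to observed frame features by cosine similarity to the
    current frame and return their softmax-weighted average. A minimal
    sketch of similarity-based reweighting; the softmax form is an
    assumption, not taken from the paper."""
    def norm(v):
        return sqrt(sum(x * x for x in v))

    cur_n = norm(current)
    # Cosine similarity between the current frame and each observed frame.
    sims = [sum(c * o for c, o in zip(current, obs)) / (cur_n * norm(obs))
            for obs in observed]
    z = sum(exp(s) for s in sims)
    weights = [exp(s) / z for s in sims]  # softmax over observed frames
    d = len(current)
    return [sum(w * obs[i] for w, obs in zip(weights, observed))
            for i in range(d)]

summary = reweight_observed([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print(summary[0] > summary[1])  # the frame similar to the current one dominates: True
```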
2309.00578
Shankar Bhamidi
Dhruv Patel and Hui Shen and Shankar Bhamidi and Yufeng Liu and Vladas Pipiras
Consistency of Lloyd's Algorithm Under Perturbations
Preprint version 1
null
null
null
cs.LG math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the context of unsupervised learning, Lloyd's algorithm is one of the most widely used clustering algorithms. It has inspired a plethora of work investigating the correctness of the algorithm under various settings with ground truth clusters. In particular, in 2016, Lu and Zhou showed that the mis-clustering rate of Lloyd's algorithm on $n$ independent samples from a sub-Gaussian mixture is exponentially bounded after $O(\log(n))$ iterations, assuming proper initialization of the algorithm. However, in many applications, the true samples are unobserved and need to be learned from the data via pre-processing pipelines such as spectral methods on appropriate data matrices. We show that the mis-clustering rate of Lloyd's algorithm on perturbed samples from a sub-Gaussian mixture is also exponentially bounded after $O(\log(n))$ iterations, under the assumptions that the initialization is proper and that the perturbation is small relative to the sub-Gaussian noise. In canonical settings with ground truth clusters, we derive bounds for algorithms such as $k$-means$++$ to find good initializations, thus establishing the correctness of clustering via the main result. We show the implications of the results for pipelines that measure the statistical significance of clusters derived from data, such as SigClust. We use these general results to provide theoretical guarantees on the mis-clustering rate of Lloyd's algorithm in a host of applications, including high-dimensional time series, multi-dimensional scaling, and community detection for sparse networks via spectral clustering.
[ { "created": "Fri, 1 Sep 2023 16:45:52 GMT", "version": "v1" } ]
2023-09-04
[ [ "Patel", "Dhruv", "" ], [ "Shen", "Hui", "" ], [ "Bhamidi", "Shankar", "" ], [ "Liu", "Yufeng", "" ], [ "Pipiras", "Vladas", "" ] ]
In the context of unsupervised learning, Lloyd's algorithm is one of the most widely used clustering algorithms. It has inspired a plethora of work investigating the correctness of the algorithm under various settings with ground truth clusters. In particular, in 2016, Lu and Zhou showed that the mis-clustering rate of Lloyd's algorithm on $n$ independent samples from a sub-Gaussian mixture is exponentially bounded after $O(\log(n))$ iterations, assuming proper initialization of the algorithm. However, in many applications, the true samples are unobserved and need to be learned from the data via pre-processing pipelines such as spectral methods on appropriate data matrices. We show that the mis-clustering rate of Lloyd's algorithm on perturbed samples from a sub-Gaussian mixture is also exponentially bounded after $O(\log(n))$ iterations, under the assumptions that the initialization is proper and that the perturbation is small relative to the sub-Gaussian noise. In canonical settings with ground truth clusters, we derive bounds for algorithms such as $k$-means$++$ to find good initializations, thus establishing the correctness of clustering via the main result. We show the implications of the results for pipelines that measure the statistical significance of clusters derived from data, such as SigClust. We use these general results to provide theoretical guarantees on the mis-clustering rate of Lloyd's algorithm in a host of applications, including high-dimensional time series, multi-dimensional scaling, and community detection for sparse networks via spectral clustering.
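As a concrete reference point for the algorithm under discussion, a one-dimensional pure-Python sketch of $k$-means$++$ seeding followed by Lloyd iterations might look as follows. It is illustrative only; the paper's contribution is the perturbation analysis, not the algorithm itself:

```python
import random

def kmeans_pp_init(points, k, rng):
    """k-means++ seeding: pick the first center uniformly at random,
    then each further center with probability proportional to its
    squared distance to the nearest center chosen so far."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

def lloyd(points, centers, iters=20):
    """Lloyd iterations: assign each point to its nearest center, then
    move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

points = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
centers = sorted(lloyd(points, kmeans_pp_init(points, 2, random.Random(0))))
print(centers)  # two centers, one near 0.0 and one near 5.0
```

The paper's setting corresponds to running these iterations on perturbed versions of the points, e.g. spectral embeddings, rather than the points themselves.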
1508.04525
Wei Zhang
Wei Zhang, Yang Yu, Osho Gupta, Judith Gelernter
Recognizing Extended Spatiotemporal Expressions by Actively Trained Average Perceptron Ensembles
10 pages
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Precise geocoding and time normalization for text require that location and time phrases be identified. Many state-of-the-art geoparsers and temporal parsers suffer from low recall. Categories commonly missed by parsers are: nouns used in a non-spatiotemporal sense, adjectival and adverbial phrases, prepositional phrases, and numerical phrases. We collected and annotated a data set by querying a commercial web search API with such spatiotemporal expressions as were missed by state-of-the-art parsers. Due to the high cost of sentence annotation, active learning was used to label the training data, and a new strategy was designed to better select training examples to reduce labeling cost. For the learning algorithm, we applied an average-perceptron-trained Featurized Hidden Markov Model (FHMM). Five FHMM instances were used to create an ensemble, with the output phrase selected by voting. Our ensemble model was tested on a range of sequential labeling tasks and has shown competitive performance. Our contributions include (1) a new dataset annotated with named entities and expanded spatiotemporal expressions; (2) a comparison of inference algorithms for ensemble models showing the superior accuracy of Belief Propagation over Viterbi decoding; (3) a new example-reweighting method for active ensemble learning that 'memorizes' the latest examples trained on; (4) a spatiotemporal parser that jointly recognizes expanded spatiotemporal expressions as well as named entities.
[ { "created": "Wed, 19 Aug 2015 04:17:47 GMT", "version": "v1" } ]
2015-08-20
[ [ "Zhang", "Wei", "" ], [ "Yu", "Yang", "" ], [ "Gupta", "Osho", "" ], [ "Gelernter", "Judith", "" ] ]
Precise geocoding and time normalization for text require that location and time phrases be identified. Many state-of-the-art geoparsers and temporal parsers suffer from low recall. Categories commonly missed by parsers are: nouns used in a non-spatiotemporal sense, adjectival and adverbial phrases, prepositional phrases, and numerical phrases. We collected and annotated a data set by querying a commercial web search API with such spatiotemporal expressions as were missed by state-of-the-art parsers. Due to the high cost of sentence annotation, active learning was used to label the training data, and a new strategy was designed to better select training examples to reduce labeling cost. For the learning algorithm, we applied an average-perceptron-trained Featurized Hidden Markov Model (FHMM). Five FHMM instances were used to create an ensemble, with the output phrase selected by voting. Our ensemble model was tested on a range of sequential labeling tasks and has shown competitive performance. Our contributions include (1) a new dataset annotated with named entities and expanded spatiotemporal expressions; (2) a comparison of inference algorithms for ensemble models showing the superior accuracy of Belief Propagation over Viterbi decoding; (3) a new example-reweighting method for active ensemble learning that 'memorizes' the latest examples trained on; (4) a spatiotemporal parser that jointly recognizes expanded spatiotemporal expressions as well as named entities.
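The voting step over the five sequence models can be sketched in a few lines as a per-token majority vote over the label sequences predicted by the ensemble members. The label set and tie-breaking rule here are illustrative, not taken from the paper:

```python
from collections import Counter

def vote(sequences):
    """Combine label sequences predicted by several ensemble members by
    per-token majority vote (ties broken by first-seen label). A
    schematic stand-in for the paper's five-model FHMM ensemble."""
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*sequences)]

# Hypothetical BIO-style predictions from three ensemble members.
preds = [
    ["O",     "B-TIME", "I-TIME", "O"],
    ["O",     "B-TIME", "O",      "O"],
    ["B-LOC", "B-TIME", "I-TIME", "O"],
]
print(vote(preds))  # ['O', 'B-TIME', 'I-TIME', 'O']
```

Note that naive per-token voting can produce inconsistent BIO sequences; the paper's comparison of Belief Propagation against Viterbi decoding concerns exactly how to combine the members' structured predictions more carefully.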
1711.10348
Philippe Jacquod
Tommaso Coletta and Philippe Jacquod
Performance Measures in Electric Power Networks under Line Contingencies
11 pages, 3 figures. Final version as published in IEEE Transactions on Control of Network Systems
null
null
null
cs.SY nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classes of performance measures expressed in terms of ${\cal H}_2$-norms have been recently introduced to quantify the response of coupled dynamical systems to external perturbations. So far, investigations of these performance measures have been restricted to nodal perturbations. Here, we go beyond these earlier works and consider the equally important, but so far neglected case of line perturbations. We consider a network-reduced power system, where a Kron reduction has eliminated passive buses. Identifying the effect that a line fault in the physical network has on the Kron-reduced network, we find that performance measures depend on whether the faulted line connects two passive buses, two active buses, or one active and one passive bus. In all cases, performance measures depend quadratically on the original load on the faulted line times a topology-dependent factor. Since our theoretical formalism is restricted to Dirac-$\delta$ perturbations, we numerically investigate the validity of our results for finite-time line faults. We find good agreement with theoretical predictions for longer fault durations in systems with more inertia.
[ { "created": "Tue, 28 Nov 2017 15:39:09 GMT", "version": "v1" }, { "created": "Tue, 24 Jul 2018 11:13:56 GMT", "version": "v2" }, { "created": "Tue, 14 May 2019 08:58:01 GMT", "version": "v3" } ]
2019-05-15
[ [ "Coletta", "Tommaso", "" ], [ "Jacquod", "Philippe", "" ] ]
Classes of performance measures expressed in terms of ${\cal H}_2$-norms have been recently introduced to quantify the response of coupled dynamical systems to external perturbations. So far, investigations of these performance measures have been restricted to nodal perturbations. Here, we go beyond these earlier works and consider the equally important, but so far neglected case of line perturbations. We consider a network-reduced power system, where a Kron reduction has eliminated passive buses. Identifying the effect that a line fault in the physical network has on the Kron-reduced network, we find that performance measures depend on whether the faulted line connects two passive buses, two active buses, or one active and one passive bus. In all cases, performance measures depend quadratically on the original load on the faulted line times a topology-dependent factor. Since our theoretical formalism is restricted to Dirac-$\delta$ perturbations, we numerically investigate the validity of our results for finite-time line faults. We find good agreement with theoretical predictions for longer fault durations in systems with more inertia.
2308.08628
Eva Portelance
Eva Portelance and Michael C. Frank and Dan Jurafsky
Learning the meanings of function words from grounded language using a visual question answering model
Published in Cognitive Science 2024
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Interpreting a seemingly simple function word like "or", "behind", or "more" can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural-network-based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learnt by both models and children. We show that recurrent models trained on visually grounded language learn gradient semantics for function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of the logical connectives "and" and "or" without any prior knowledge of logical reasoning, as well as early evidence that they are sensitive to alternative expressions when interpreting language. Finally, we show that word learning difficulty is dependent on frequency in the models' input. Our findings offer proof-of-concept evidence that it is possible to learn the nuanced interpretations of function words in visually grounded context by using non-symbolic general statistical learning algorithms, without any prior knowledge of linguistic meaning.
[ { "created": "Wed, 16 Aug 2023 18:53:39 GMT", "version": "v1" }, { "created": "Mon, 29 Jan 2024 18:00:02 GMT", "version": "v2" }, { "created": "Mon, 22 Apr 2024 19:00:51 GMT", "version": "v3" } ]
2024-04-24
[ [ "Portelance", "Eva", "" ], [ "Frank", "Michael C.", "" ], [ "Jurafsky", "Dan", "" ] ]
Interpreting a seemingly simple function word like "or", "behind", or "more" can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural-network-based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learnt by both models and children. We show that recurrent models trained on visually grounded language learn gradient semantics for function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of the logical connectives "and" and "or" without any prior knowledge of logical reasoning, as well as early evidence that they are sensitive to alternative expressions when interpreting language. Finally, we show that word learning difficulty is dependent on frequency in the models' input. Our findings offer proof-of-concept evidence that it is possible to learn the nuanced interpretations of function words in visually grounded context by using non-symbolic general statistical learning algorithms, without any prior knowledge of linguistic meaning.
2001.11394
Lichao Mou
Lichao Mou, Yuansheng Hua, Pu Jin, Xiao Xiang Zhu
ERA: A Dataset and Deep Learning Benchmark for Event Recognition in Aerial Videos
IEEE Geoscience and Remote Sensing Magazine. Project page: https://lcmou.github.io/ERA_Dataset/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Along with the increasing use of unmanned aerial vehicles (UAVs), large volumes of aerial videos have been produced. It is unrealistic for humans to screen such big data and understand their contents. Hence, methodological research on the automatic understanding of UAV videos is of paramount importance. In this paper, we introduce a novel problem of event recognition in unconstrained aerial videos to the remote sensing community and present a large-scale, human-annotated dataset, named ERA (Event Recognition in Aerial videos), consisting of 2,864 videos, each with a label from 25 different classes corresponding to an event unfolding over 5 seconds. The ERA dataset is designed to have significant intra-class variation and inter-class similarity and captures dynamic events in various circumstances and at dramatically varying scales. Moreover, to offer a benchmark for this task, we extensively validate existing deep networks. We expect that the ERA dataset will facilitate further progress in automatic aerial video comprehension. The website is https://lcmou.github.io/ERA_Dataset/
[ { "created": "Thu, 30 Jan 2020 15:25:54 GMT", "version": "v1" }, { "created": "Fri, 31 Jan 2020 12:47:47 GMT", "version": "v2" }, { "created": "Thu, 5 Mar 2020 10:00:19 GMT", "version": "v3" }, { "created": "Thu, 25 Jun 2020 10:23:08 GMT", "version": "v4" } ]
2020-06-26
[ [ "Mou", "Lichao", "" ], [ "Hua", "Yuansheng", "" ], [ "Jin", "Pu", "" ], [ "Zhu", "Xiao Xiang", "" ] ]
Along with the increasing use of unmanned aerial vehicles (UAVs), large volumes of aerial videos have been produced. It is unrealistic for humans to screen such big data and understand their contents. Hence, methodological research on the automatic understanding of UAV videos is of paramount importance. In this paper, we introduce a novel problem of event recognition in unconstrained aerial videos to the remote sensing community and present a large-scale, human-annotated dataset, named ERA (Event Recognition in Aerial videos), consisting of 2,864 videos, each with a label from 25 different classes corresponding to an event unfolding over 5 seconds. The ERA dataset is designed to have significant intra-class variation and inter-class similarity and captures dynamic events in various circumstances and at dramatically varying scales. Moreover, to offer a benchmark for this task, we extensively validate existing deep networks. We expect that the ERA dataset will facilitate further progress in automatic aerial video comprehension. The website is https://lcmou.github.io/ERA_Dataset/
1604.00162
Christian Stra{\ss}er
Jesse Heyninck and Christian Stra{\ss}er
Relations between assumption-based approaches in nonmonotonic logic and formal argumentation
Contribution to the 16th International Workshop on Non-Monotonic Reasoning (NMR'16), Cape Town
null
null
null
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we make a contribution to the unification of formal models of defeasible reasoning. We present several translations between formal argumentation frameworks and nonmonotonic logics for reasoning with plausible assumptions. More specifically, we translate adaptive logics into assumption-based argumentation and ASPIC+, ASPIC+ into assumption-based argumentation and a fragment of assumption-based argumentation into adaptive logics. Adaptive logics are closely related to Makinson's default assumptions and to a significant class of systems within the tradition of preferential semantics in the vein of KLM and Shoham. Thus, our results also provide close links between formal argumentation and the latter approaches.
[ { "created": "Fri, 1 Apr 2016 08:14:30 GMT", "version": "v1" } ]
2016-04-04
[ [ "Heyninck", "Jesse", "" ], [ "Straßer", "Christian", "" ] ]
In this paper we make a contribution to the unification of formal models of defeasible reasoning. We present several translations between formal argumentation frameworks and nonmonotonic logics for reasoning with plausible assumptions. More specifically, we translate adaptive logics into assumption-based argumentation and ASPIC+, ASPIC+ into assumption-based argumentation and a fragment of assumption-based argumentation into adaptive logics. Adaptive logics are closely related to Makinson's default assumptions and to a significant class of systems within the tradition of preferential semantics in the vein of KLM and Shoham. Thus, our results also provide close links between formal argumentation and the latter approaches.
1904.05643
Jisun An
Jisun An, Haewoon Kwak, Oliver Posegga, Andreas Jungherr
Political Discussions in Homogeneous and Cross-Cutting Communication Spaces
Proc. 13th International Conference on Web and Social Media (ICWSM'19)
null
null
null
cs.CY cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online platforms, such as Facebook, Twitter, and Reddit, provide users with a rich set of features for sharing and consuming political information, expressing political opinions, and exchanging potentially contrary political views. In such activities, two types of communication spaces naturally emerge: those dominated by exchanges between politically homogeneous users and those that allow and encourage cross-cutting exchanges in politically heterogeneous groups. While research on political talk in online environments abounds, we know surprisingly little about the potentially varying nature of discussions in politically homogeneous spaces as compared to cross-cutting communication spaces. To fill this gap, we use Reddit to explore the nature of political discussions in homogeneous and cross-cutting communication spaces. In particular, we develop an analytical template to study interaction and linguistic patterns within and between politically homogeneous and heterogeneous communication spaces. Our analyses reveal different behavioral patterns in homogeneous and cross-cutting communication spaces. We discuss theoretical and practical implications in the context of research on political talk online.
[ { "created": "Thu, 11 Apr 2019 11:46:07 GMT", "version": "v1" } ]
2019-04-12
[ [ "An", "Jisun", "" ], [ "Kwak", "Haewoon", "" ], [ "Posegga", "Oliver", "" ], [ "Jungherr", "Andreas", "" ] ]
Online platforms, such as Facebook, Twitter, and Reddit, provide users with a rich set of features for sharing and consuming political information, expressing political opinions, and exchanging potentially contrary political views. In such activities, two types of communication spaces naturally emerge: those dominated by exchanges between politically homogeneous users and those that allow and encourage cross-cutting exchanges in politically heterogeneous groups. While research on political talk in online environments abounds, we know surprisingly little about the potentially varying nature of discussions in politically homogeneous spaces as compared to cross-cutting communication spaces. To fill this gap, we use Reddit to explore the nature of political discussions in homogeneous and cross-cutting communication spaces. In particular, we develop an analytical template to study interaction and linguistic patterns within and between politically homogeneous and heterogeneous communication spaces. Our analyses reveal different behavioral patterns in homogeneous and cross-cutting communication spaces. We discuss theoretical and practical implications in the context of research on political talk online.
1107.1089
Antonio Fern\'andez Anta
Andr\'es Sevilla and Alberto Mozo and Antonio Fern\'andez Anta
Node Sampling using Random Centrifugal Walks
null
null
null
null
cs.DC cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sampling a network with a given probability distribution has been identified as a useful operation. In this paper we propose distributed algorithms for sampling networks, so that nodes are selected by a special node, called the \emph{source}, with a given probability distribution. All these algorithms are based on a new class of random walks that we call Random Centrifugal Walks (RCW). A RCW is a random walk that starts at the source and always moves away from it. Firstly, an algorithm to sample any connected network using RCW is proposed. The algorithm assumes that each node has a weight, so that the sampling process must select a node with a probability proportional to its weight. This algorithm requires a preprocessing phase before the sampling of nodes. In particular, a minimum-diameter spanning tree (MDST) is created in the network, and then node weights are efficiently aggregated using the tree. The good news is that the preprocessing is done only once, regardless of the number of sources and the number of samples taken from the network. After that, every sample is done with a RCW whose length is bounded by the network diameter. Secondly, RCW algorithms that do not require preprocessing are proposed for grids and for networks with regular concentric connectivity, for the case when the probability of selecting a node is a function of its distance to the source. The key features of the RCW algorithms (unlike previous Markovian approaches) are that (1) they do not need to warm up (stabilize), (2) the sampling always finishes in a number of hops bounded by the network diameter, and (3) a node is selected with the exact probability distribution.
[ { "created": "Wed, 6 Jul 2011 10:46:33 GMT", "version": "v1" }, { "created": "Sun, 20 May 2012 04:10:29 GMT", "version": "v2" }, { "created": "Thu, 27 Sep 2012 17:39:50 GMT", "version": "v3" } ]
2012-09-28
[ [ "Sevilla", "Andrés", "" ], [ "Mozo", "Alberto", "" ], [ "Anta", "Antonio Fernández", "" ] ]
Sampling a network with a given probability distribution has been identified as a useful operation. In this paper we propose distributed algorithms for sampling networks, so that nodes are selected by a special node, called the \emph{source}, with a given probability distribution. All these algorithms are based on a new class of random walks that we call Random Centrifugal Walks (RCW). A RCW is a random walk that starts at the source and always moves away from it. Firstly, an algorithm to sample any connected network using RCW is proposed. The algorithm assumes that each node has a weight, so that the sampling process must select a node with a probability proportional to its weight. This algorithm requires a preprocessing phase before the sampling of nodes. In particular, a minimum-diameter spanning tree (MDST) is created in the network, and then node weights are efficiently aggregated using the tree. The good news is that the preprocessing is done only once, regardless of the number of sources and the number of samples taken from the network. After that, every sample is done with a RCW whose length is bounded by the network diameter. Secondly, RCW algorithms that do not require preprocessing are proposed for grids and for networks with regular concentric connectivity, for the case when the probability of selecting a node is a function of its distance to the source. The key features of the RCW algorithms (unlike previous Markovian approaches) are that (1) they do not need to warm up (stabilize), (2) the sampling always finishes in a number of hops bounded by the network diameter, and (3) a node is selected with the exact probability distribution.
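The tree-based scheme the abstract outlines can be sketched centrally in Python: aggregate each node's weight over its subtree once, then walk from the source, stopping at the current node with probability weight/subtree-weight and otherwise descending into a child chosen proportionally to its aggregated subtree weight. This selects node v with probability exactly weights[v]/total, in at most depth-of-tree hops. It is a sketch consistent with the described scheme, not the authors' distributed pseudocode:

```python
import random

def subtree_weights(tree, weights, root):
    """Aggregate each node's weight over its subtree -- the one-off
    preprocessing on the spanning tree described above."""
    agg = {}
    def dfs(v):
        agg[v] = weights[v] + sum(dfs(c) for c in tree.get(v, []))
        return agg[v]
    dfs(root)
    return agg

def rcw_sample(tree, weights, agg, source, rng):
    """One random centrifugal walk on the (rooted) tree: stop at the
    current node with probability weight/aggregated-subtree-weight,
    otherwise move away from the source into a child chosen
    proportionally to its subtree weight."""
    v = source
    while True:
        if rng.random() < weights[v] / agg[v]:
            return v
        # Descend: pick a child with probability proportional to agg.
        r = rng.random() * (agg[v] - weights[v])
        acc = 0.0
        for c in tree.get(v, []):
            acc += agg[c]
            if acc >= r:
                v = c
                break

# Small tree rooted at the source; node 1 carries half the total weight.
tree = {0: [1, 2], 1: [], 2: []}
weights = {0: 1.0, 1: 2.0, 2: 1.0}
agg = subtree_weights(tree, weights, 0)
rng = random.Random(42)
hits = sum(rcw_sample(tree, weights, agg, 0, rng) == 1 for _ in range(8000))
print(round(hits / 8000, 2))  # empirical frequency, close to weights[1]/total = 0.5
```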
2110.05015
Viswanath Ganapathy
Viswanath Ganapathy, Sauptik Dhar, Olimpiya Saha, Pelin Kurt Garberson, Javad Heydari and Mohak Shah
A Survey on Proactive Customer Care: Enabling Science and Steps to Realize it
arXiv admin note: substantial text overlap with arXiv:1912.07383, arXiv:2007.02500 by other authors
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent times, advances in artificial intelligence (AI) and IoT have enabled seamless and viable maintenance of appliances in home and building environments. Several studies have shown that AI has the potential to provide personalized customer support which could predict and avoid errors more reliably than ever before. In this paper, we analyze the various building blocks needed to enable a successful AI-driven predictive maintenance use-case. Unlike existing surveys, which mostly provide a deep dive into the recent AI algorithms for Predictive Maintenance (PdM), our survey provides the complete view, starting from business impact to recent technology advancements in algorithms as well as systems research and model deployment. Furthermore, we provide exemplar use-cases on predictive maintenance of appliances using publicly available data sets. Our survey can serve as a template needed to design a successful predictive maintenance use-case. Finally, we touch upon existing public data sources and provide a step-wise breakdown of an AI-driven proactive customer care (PCC) use-case, starting from generic anomaly detection to fault prediction and finally root-cause analysis. We highlight how such a step-wise approach can be advantageous for accurate model building and helpful for gaining insights into the predictive maintenance of electromechanical appliances.
[ { "created": "Mon, 11 Oct 2021 05:56:03 GMT", "version": "v1" } ]
2021-10-12
[ [ "Ganapathy", "Viswanath", "" ], [ "Dhar", "Sauptik", "" ], [ "Saha", "Olimpiya", "" ], [ "Garberson", "Pelin Kurt", "" ], [ "Heydari", "Javad", "" ], [ "Shah", "Mohak", "" ] ]
In recent times, advances in artificial intelligence (AI) and IoT have enabled seamless and viable maintenance of appliances in home and building environments. Several studies have shown that AI has the potential to provide personalized customer support which could predict and avoid errors more reliably than ever before. In this paper, we analyze the various building blocks needed to enable a successful AI-driven predictive maintenance use-case. Unlike existing surveys, which mostly provide a deep dive into the recent AI algorithms for Predictive Maintenance (PdM), our survey provides the complete view, starting from business impact to recent technology advancements in algorithms as well as systems research and model deployment. Furthermore, we provide exemplar use-cases on predictive maintenance of appliances using publicly available data sets. Our survey can serve as a template needed to design a successful predictive maintenance use-case. Finally, we touch upon existing public data sources and provide a step-wise breakdown of an AI-driven proactive customer care (PCC) use-case, starting from generic anomaly detection to fault prediction and finally root-cause analysis. We highlight how such a step-wise approach can be advantageous for accurate model building and helpful for gaining insights into the predictive maintenance of electromechanical appliances.
2312.16805
Jiazhang Zheng
Jiazhang Zheng, Lei Li, Qiuping Liao, Cheng Li, Li Li, Yangxing Liu
DarkShot: Lighting Dark Images with Low-Compute and High-Quality
Accepted by IEEE ICASSP 2024
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nighttime photography encounters escalating challenges in extremely low-light conditions, primarily attributable to the ultra-low signal-to-noise ratio. For real-world deployment, a practical solution must not only produce visually appealing results but also require minimal computation. However, most existing methods are either focused on improving restoration performance or employ lightweight models at the cost of quality. This paper proposes a lightweight network that outperforms existing state-of-the-art (SOTA) methods in low-light enhancement tasks while minimizing computation. The proposed network incorporates Siamese Self-Attention Block (SSAB) and Skip-Channel Attention (SCA) modules, which enhance the model's capacity to aggregate global information and are well-suited for high-resolution images. Additionally, based on our analysis of the low-light image restoration process, we propose a Two-Stage Framework that achieves superior results. Our model can restore a UHD 4K resolution image with minimal computation while keeping SOTA restoration quality.
[ { "created": "Thu, 28 Dec 2023 03:26:50 GMT", "version": "v1" }, { "created": "Fri, 29 Dec 2023 02:23:15 GMT", "version": "v2" }, { "created": "Wed, 10 Jan 2024 02:51:27 GMT", "version": "v3" } ]
2024-01-11
[ [ "Zheng", "Jiazhang", "" ], [ "Li", "Lei", "" ], [ "Liao", "Qiuping", "" ], [ "Li", "Cheng", "" ], [ "Li", "Li", "" ], [ "Liu", "Yangxing", "" ] ]
Nighttime photography encounters escalating challenges in extremely low-light conditions, primarily attributable to the ultra-low signal-to-noise ratio. For real-world deployment, a practical solution must not only produce visually appealing results but also require minimal computation. However, most existing methods are either focused on improving restoration performance or employ lightweight models at the cost of quality. This paper proposes a lightweight network that outperforms existing state-of-the-art (SOTA) methods in low-light enhancement tasks while minimizing computation. The proposed network incorporates Siamese Self-Attention Block (SSAB) and Skip-Channel Attention (SCA) modules, which enhance the model's capacity to aggregate global information and are well-suited for high-resolution images. Additionally, based on our analysis of the low-light image restoration process, we propose a Two-Stage Framework that achieves superior results. Our model can restore a UHD 4K resolution image with minimal computation while keeping SOTA restoration quality.
2206.00377
Zhaolin Wang
Xidong Mu, Zhaolin Wang, Yuanwei Liu
NOMA for Integrating Sensing and Communications towards 6G: A Multiple Access Perspective
7 pages, 5 figures
null
10.1109/MWC.015.2200559
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article focuses on the development of integrated sensing and communications (ISAC) from a multiple access (MA) perspective, where the idea of non-orthogonal multiple access (NOMA) is exploited for harmoniously accommodating the sensing and communication functionalities. We first reveal that the developing trend of ISAC is from \emph{orthogonality} to \emph{non-orthogonality}, and introduce the fundamental models of the downlink and uplink ISAC while identifying the design challenges from the MA perspective. (1) For the downlink ISAC, we propose two novel designs, namely \emph{NOMA-empowered} downlink ISAC and \emph{NOMA-inspired} downlink ISAC, to effectively coordinate the inter-user interference and the sensing-to-communication interference, respectively. (2) For the uplink ISAC, we first propose a \emph{pure-NOMA-based} uplink ISAC design, where a fixed communication-to-sensing successive interference cancellation order is employed for distinguishing the mixed sensing-communication signal received over the fully shared radio resources. Then, we propose a general \emph{semi-NOMA-based} uplink ISAC design, which includes the conventional orthogonal multiple access-based and pure-NOMA-based uplink ISAC as special cases, thus being capable of providing flexible resource allocation strategies between sensing and communication. For each proposed NOMA-ISAC design, numerical results are provided to show its superiority over conventional ISAC designs.
[ { "created": "Wed, 1 Jun 2022 10:25:06 GMT", "version": "v1" } ]
2023-06-19
[ [ "Mu", "Xidong", "" ], [ "Wang", "Zhaolin", "" ], [ "Liu", "Yuanwei", "" ] ]
This article focuses on the development of integrated sensing and communications (ISAC) from a multiple access (MA) perspective, where the idea of non-orthogonal multiple access (NOMA) is exploited for harmoniously accommodating the sensing and communication functionalities. We first reveal that the developing trend of ISAC is from \emph{orthogonality} to \emph{non-orthogonality}, and introduce the fundamental models of the downlink and uplink ISAC while identifying the design challenges from the MA perspective. (1) For the downlink ISAC, we propose two novel designs, namely \emph{NOMA-empowered} downlink ISAC and \emph{NOMA-inspired} downlink ISAC, to effectively coordinate the inter-user interference and the sensing-to-communication interference, respectively. (2) For the uplink ISAC, we first propose a \emph{pure-NOMA-based} uplink ISAC design, where a fixed communication-to-sensing successive interference cancellation order is employed for distinguishing the mixed sensing-communication signal received over the fully shared radio resources. Then, we propose a general \emph{semi-NOMA-based} uplink ISAC design, which includes the conventional orthogonal multiple access-based and pure-NOMA-based uplink ISAC as special cases, thus being capable of providing flexible resource allocation strategies between sensing and communication. For each proposed NOMA-ISAC design, numerical results are provided to show its superiority over conventional ISAC designs.
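The uplink designs in this record hinge on successive interference cancellation (SIC): users are decoded in a fixed order, each treating the not-yet-decoded signals as noise. A minimal sketch of the resulting achievable rates (illustrative only; the article's communication-to-sensing ordering and semi-NOMA resource split are not modeled here):

```python
import math

def sic_rates(powers, gains, noise=1.0):
    """Achievable rates (bits/s/Hz) for uplink users decoded in list
    order under SIC: user i sees only the later, not-yet-cancelled
    users as interference."""
    received = [p * g for p, g in zip(powers, gains)]
    rates = []
    for i in range(len(received)):
        interference = sum(received[i + 1:])
        rates.append(math.log2(1 + received[i] / (noise + interference)))
    return rates
```

With two users at received powers 4 and 1 over unit noise, the two rates sum to log2(6), the multiple-access sum capacity, which is the classic argument for NOMA's spectral efficiency over orthogonal splitting.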
1207.6188
Mahyuddin K. M. Nasution
Mahyuddin K. M. Nasution
Kolmogorov Complexity: Clustering Objects and Similarity
13 pages; Bulletin of Mathematics, Vol. 3 (2011), No. 1: 1-16
null
null
null
cs.CC cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The clustering of objects has become a common theme in many studies, and not a few researchers use similarity measures to cluster instances automatically. However, little research considers using Kolmogorov complexity to obtain information about objects from documents such as Web pages, where exploiting such rich information has proved difficult. In this paper, we propose a similarity measure derived from Kolmogorov complexity, and we demonstrate the possibility of exploiting features from the Web, based on hit counts, for objects of Indonesian intellectuals.
[ { "created": "Thu, 26 Jul 2012 07:35:53 GMT", "version": "v1" } ]
2012-07-27
[ [ "Nasution", "Mahyuddin K. M.", "" ] ]
The clustering of objects has become a common theme in many studies, and not a few researchers use similarity measures to cluster instances automatically. However, little research considers using Kolmogorov complexity to obtain information about objects from documents such as Web pages, where exploiting such rich information has proved difficult. In this paper, we propose a similarity measure derived from Kolmogorov complexity, and we demonstrate the possibility of exploiting features from the Web, based on hit counts, for objects of Indonesian intellectuals.
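The hit-count-based similarity this record describes is in the spirit of the normalized information distance family derived from Kolmogorov complexity. As an illustrative instance (not necessarily the paper's exact measure), the Normalized Google Distance approximates that distance from search-engine hit counts:

```python
import math

def ngd(hits_x, hits_y, hits_xy, total_pages):
    """Normalized Google Distance: a hit-count approximation of the
    Kolmogorov-complexity-based normalized information distance.
    hits_x / hits_y: page counts for each term alone; hits_xy: pages
    containing both terms; total_pages: size of the index."""
    lx, ly = math.log(hits_x), math.log(hits_y)
    lxy, ln = math.log(hits_xy), math.log(total_pages)
    return (max(lx, ly) - lxy) / (ln - min(lx, ly))
```

Terms that always co-occur get distance 0, while terms that rarely co-occur score higher, so thresholding this distance yields clusters of related objects directly from Web hit counts.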
2210.01744
Alexander LaValle
Alexander J. LaValle, Basak Sakcak, and Steven M. LaValle
Bang-Bang Boosting of RRTs
null
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
This paper presents methods for dramatically improving the performance of sampling-based kinodynamic planners. The key component is the first-known complete, exact steering method that produces a time-optimal trajectory between any states for a vector of synchronized double integrators. This method is applied in three ways: 1) to generate RRT edges that quickly solve the two-point boundary-value problems, 2) to produce a (quasi)metric for more accurate Voronoi bias in RRTs, and 3) to iteratively time-optimize a given collision-free trajectory. Experiments are performed for state spaces with up to 2000 dimensions, resulting in improved computed trajectories and orders of magnitude computation time improvements over using ordinary metrics and constant controls.
[ { "created": "Tue, 4 Oct 2022 17:03:47 GMT", "version": "v1" }, { "created": "Thu, 19 Jan 2023 13:44:27 GMT", "version": "v2" }, { "created": "Thu, 2 Mar 2023 09:21:29 GMT", "version": "v3" } ]
2023-03-03
[ [ "LaValle", "Alexander J.", "" ], [ "Sakcak", "Basak", "" ], [ "LaValle", "Steven M.", "" ] ]
This paper presents methods for dramatically improving the performance of sampling-based kinodynamic planners. The key component is the first-known complete, exact steering method that produces a time-optimal trajectory between any states for a vector of synchronized double integrators. This method is applied in three ways: 1) to generate RRT edges that quickly solve the two-point boundary-value problems, 2) to produce a (quasi)metric for more accurate Voronoi bias in RRTs, and 3) to iteratively time-optimize a given collision-free trajectory. Experiments are performed for state spaces with up to 2000 dimensions, resulting in improved computed trajectories and orders of magnitude computation time improvements over using ordinary metrics and constant controls.
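For a single double integrator with bounded acceleration, the time-optimal rest-to-rest control is bang-bang: full acceleration to the midpoint, then full braking. A minimal sketch of that special case (the paper's complete steering method also handles arbitrary boundary velocities and synchronizes a whole vector of axes, which this toy does not):

```python
import math

def rest_to_rest_time(distance, u_max):
    """Minimum time for a double integrator (x'' = u, |u| <= u_max)
    to move between two rest states `distance` apart: accelerate at
    +u_max for half the distance, then decelerate at -u_max."""
    return 2.0 * math.sqrt(abs(distance) / u_max)
```

For example, covering 2 m with a 2 m/s^2 bound takes 2 s, and halving the acceleration bound multiplies the time by sqrt(2); a steering method like the paper's would compute such times per axis and then synchronize them.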
2003.11562
Abhilash Jain
Abhilash Jain, Aku Ruohe, Stig-Arne Gr\"onroos, Mikko Kurimo
Finnish Language Modeling with Deep Transformer Models
4 pages
null
null
null
cs.CL cs.LG cs.SD eess.AS stat.ML
http://creativecommons.org/publicdomain/zero/1.0/
Transformers have recently taken the center stage in language modeling after LSTMs were considered the dominant model architecture for a long time. In this project, we investigate the performance of the Transformer architectures BERT and Transformer-XL for the language modeling task. We use a sub-word model setting with the Finnish language and compare it to the previous state-of-the-art (SOTA) LSTM model. BERT achieves a pseudo-perplexity score of 14.5, which is the first such measure achieved as far as we know. Transformer-XL improves upon the perplexity score to 73.58, which is 27\% better than the LSTM model.
[ { "created": "Sat, 14 Mar 2020 15:12:03 GMT", "version": "v1" }, { "created": "Fri, 27 Mar 2020 10:02:24 GMT", "version": "v2" } ]
2020-03-30
[ [ "Jain", "Abhilash", "" ], [ "Ruohe", "Aku", "" ], [ "Grönroos", "Stig-Arne", "" ], [ "Kurimo", "Mikko", "" ] ]
Transformers have recently taken the center stage in language modeling after LSTMs were considered the dominant model architecture for a long time. In this project, we investigate the performance of the Transformer architectures BERT and Transformer-XL for the language modeling task. We use a sub-word model setting with the Finnish language and compare it to the previous state-of-the-art (SOTA) LSTM model. BERT achieves a pseudo-perplexity score of 14.5, which is the first such measure achieved as far as we know. Transformer-XL improves upon the perplexity score to 73.58, which is 27\% better than the LSTM model.
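The BERT score reported in this record, pseudo-perplexity, exponentiates the average negative pseudo-log-likelihood obtained by masking each token in turn and scoring it with the masked LM. Given the per-token log-probabilities (the model call itself is omitted here), the aggregation is:

```python
import math

def pseudo_perplexity(token_log_probs):
    """Pseudo-perplexity of a sentence from the log-probability the
    masked LM assigns each token when that token alone is masked:
    exp of the mean negative log-probability."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)
```

A model that assigns every masked token probability 0.5 scores a pseudo-perplexity of exactly 2; lower is better, and the measure is not directly comparable to the autoregressive perplexity reported for Transformer-XL and the LSTM.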
1607.05952
Luca Pappalardo
Luca Pappalardo and Filippo Simini
Data-driven generation of spatio-temporal routines in human mobility
Data Mining and Knowledge Discovery, 2018
null
10.1007/s10618-017-0548-4
null
cs.SI cs.LG physics.data-an physics.soc-ph stat.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The generation of realistic spatio-temporal trajectories of human mobility is of fundamental importance in a wide range of applications, such as the development of protocols for mobile ad-hoc networks or what-if analysis in urban ecosystems. Current generative algorithms fail in accurately reproducing the individuals' recurrent schedules and at the same time in accounting for the possibility that individuals may break the routine during periods of variable duration. In this article we present DITRAS (DIary-based TRAjectory Simulator), a framework to simulate the spatio-temporal patterns of human mobility. DITRAS operates in two steps: the generation of a mobility diary and the translation of the mobility diary into a mobility trajectory. We propose a data-driven algorithm which constructs a diary generator from real data, capturing the tendency of individuals to follow or break their routine. We also propose a trajectory generator based on the concept of preferential exploration and preferential return. We instantiate DITRAS with the proposed diary and trajectory generators and compare the resulting algorithm with real data and synthetic data produced by other generative algorithms, built by instantiating DITRAS with several combinations of diary and trajectory generators. We show that the proposed algorithm reproduces the statistical properties of real trajectories in the most accurate way, taking a step forward in the understanding of the origin of the spatio-temporal patterns of human mobility.
[ { "created": "Sat, 16 Jul 2016 11:54:27 GMT", "version": "v1" }, { "created": "Sat, 29 Apr 2017 18:51:08 GMT", "version": "v2" }, { "created": "Sat, 9 Dec 2017 10:51:19 GMT", "version": "v3" } ]
2017-12-12
[ [ "Pappalardo", "Luca", "" ], [ "Simini", "Filippo", "" ] ]
The generation of realistic spatio-temporal trajectories of human mobility is of fundamental importance in a wide range of applications, such as the development of protocols for mobile ad-hoc networks or what-if analysis in urban ecosystems. Current generative algorithms fail in accurately reproducing the individuals' recurrent schedules and at the same time in accounting for the possibility that individuals may break the routine during periods of variable duration. In this article we present DITRAS (DIary-based TRAjectory Simulator), a framework to simulate the spatio-temporal patterns of human mobility. DITRAS operates in two steps: the generation of a mobility diary and the translation of the mobility diary into a mobility trajectory. We propose a data-driven algorithm which constructs a diary generator from real data, capturing the tendency of individuals to follow or break their routine. We also propose a trajectory generator based on the concept of preferential exploration and preferential return. We instantiate DITRAS with the proposed diary and trajectory generators and compare the resulting algorithm with real data and synthetic data produced by other generative algorithms, built by instantiating DITRAS with several combinations of diary and trajectory generators. We show that the proposed algorithm reproduces the statistical properties of real trajectories in the most accurate way, taking a step forward in the understanding of the origin of the spatio-temporal patterns of human mobility.
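The preferential exploration and preferential return mechanism named in this record works as follows: having visited S distinct places, an individual explores a new one with probability rho * S**(-gamma), and otherwise returns to a known place with probability proportional to past visits. A minimal sketch of that choice rule (the parameter values are commonly cited defaults for this family of models, not necessarily DITRAS's own):

```python
import random

def next_location(visit_counts, rho=0.6, gamma=0.21, rng=random):
    """One step of an exploration/preferential-return walker.
    visit_counts: dict location -> number of past visits.
    Returns None to signal 'explore a brand-new location', otherwise
    a previously visited location chosen with probability
    proportional to its visit count."""
    S = len(visit_counts)
    if rng.random() < rho * S ** (-gamma):
        return None  # exploration: the caller picks a new place
    places = list(visit_counts)
    weights = [visit_counts[p] for p in places]
    return rng.choices(places, weights=weights, k=1)[0]
```

As S grows, the exploration probability decays, so routines emerge naturally; DITRAS layers a data-driven mobility diary on top of a mechanism of this kind.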
2107.02407
Marc Habermann
Marc Habermann, Weipeng Xu, Helge Rhodin, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt
NRST: Non-rigid Surface Tracking from Monocular Video
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We propose an efficient method for non-rigid surface tracking from monocular RGB videos. Given a video and a template mesh, our algorithm sequentially registers the template non-rigidly to each frame. We formulate the per-frame registration as an optimization problem that includes a novel texture term specifically tailored towards tracking objects with uniform texture but fine-scale structure, such as the regular micro-structural patterns of fabric. Our texture term exploits the orientation information in the micro-structures of the objects, e.g., the yarn patterns of fabrics. This enables us to accurately track uniformly colored materials that have these high frequency micro-structures, for which traditional photometric terms are usually less effective. The results demonstrate the effectiveness of our method on both general textured non-rigid objects and monochromatic fabrics.
[ { "created": "Tue, 6 Jul 2021 06:06:45 GMT", "version": "v1" }, { "created": "Mon, 12 Jul 2021 08:55:46 GMT", "version": "v2" } ]
2021-07-13
[ [ "Habermann", "Marc", "" ], [ "Xu", "Weipeng", "" ], [ "Rhodin", "Helge", "" ], [ "Zollhoefer", "Michael", "" ], [ "Pons-Moll", "Gerard", "" ], [ "Theobalt", "Christian", "" ] ]
We propose an efficient method for non-rigid surface tracking from monocular RGB videos. Given a video and a template mesh, our algorithm sequentially registers the template non-rigidly to each frame. We formulate the per-frame registration as an optimization problem that includes a novel texture term specifically tailored towards tracking objects with uniform texture but fine-scale structure, such as the regular micro-structural patterns of fabric. Our texture term exploits the orientation information in the micro-structures of the objects, e.g., the yarn patterns of fabrics. This enables us to accurately track uniformly colored materials that have these high frequency micro-structures, for which traditional photometric terms are usually less effective. The results demonstrate the effectiveness of our method on both general textured non-rigid objects and monochromatic fabrics.
2004.14620
Tomasz Limisiewicz
Tomasz Limisiewicz and Rudolf Rosa and David Mare\v{c}ek
Universal Dependencies according to BERT: both more specific and more general
null
Findings of the Association for Computational Linguistics: EMNLP 2020
10.18653/v1/2020.findings-emnlp.245
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This work focuses on analyzing the form and extent of syntactic abstraction captured by BERT by extracting labeled dependency trees from self-attentions. Previous work showed that individual BERT heads tend to encode particular dependency relation types. We extend these findings by explicitly comparing BERT relations to Universal Dependencies (UD) annotations, showing that they often do not match one-to-one. We suggest a method for relation identification and syntactic tree construction. Our approach produces significantly more consistent dependency trees than previous work, showing that it better explains the syntactic abstractions in BERT. At the same time, it can be successfully applied with only a minimal amount of supervision and generalizes well across languages.
[ { "created": "Thu, 30 Apr 2020 07:48:07 GMT", "version": "v1" }, { "created": "Fri, 1 May 2020 00:34:10 GMT", "version": "v2" }, { "created": "Tue, 6 Oct 2020 10:22:33 GMT", "version": "v3" } ]
2021-01-01
[ [ "Limisiewicz", "Tomasz", "" ], [ "Rosa", "Rudolf", "" ], [ "Mareček", "David", "" ] ]
This work focuses on analyzing the form and extent of syntactic abstraction captured by BERT by extracting labeled dependency trees from self-attentions. Previous work showed that individual BERT heads tend to encode particular dependency relation types. We extend these findings by explicitly comparing BERT relations to Universal Dependencies (UD) annotations, showing that they often do not match one-to-one. We suggest a method for relation identification and syntactic tree construction. Our approach produces significantly more consistent dependency trees than previous work, showing that it better explains the syntactic abstractions in BERT. At the same time, it can be successfully applied with only a minimal amount of supervision and generalizes well across languages.
2006.13309
Vitaly Zankin
Clement Etienam, Kody Law, Sara Wade, Vitaly Zankin
Fast Deep Mixtures of Gaussian Process Experts
22 pages, 28 figures, to be published in Machine Learning journal
Machine Learning (2024)
10.1007/s10994-023-06491-x
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mixtures of experts have become an indispensable tool for flexible modelling in a supervised learning context, allowing not only the mean function but the entire density of the output to change with the inputs. Sparse Gaussian processes (GP) have shown promise as a leading candidate for the experts in such models, and in this article, we propose to design the gating network for selecting the experts from such mixtures of sparse GPs using a deep neural network (DNN). Furthermore, a fast one-pass algorithm called Cluster-Classify-Regress (CCR) is leveraged to approximate the maximum a posteriori (MAP) estimator extremely quickly. This powerful combination of model and algorithm together delivers a novel method which is flexible, robust, and extremely efficient. In particular, the method is able to outperform competing methods in terms of accuracy and uncertainty quantification. The cost is competitive on low-dimensional and small data sets, but is significantly lower for higher-dimensional and big data sets. Iteratively maximizing the distribution of experts given allocations and allocations given experts does not provide significant improvement, which indicates that the algorithm achieves a good approximation to the local MAP estimator very fast. This insight can also be useful in the context of other mixture of experts models.
[ { "created": "Thu, 11 Jun 2020 18:52:34 GMT", "version": "v1" }, { "created": "Tue, 1 Feb 2022 15:59:12 GMT", "version": "v2" }, { "created": "Mon, 31 Oct 2022 17:29:41 GMT", "version": "v3" }, { "created": "Fri, 1 Dec 2023 01:03:08 GMT", "version": "v4" } ]
2024-01-23
[ [ "Etienam", "Clement", "" ], [ "Law", "Kody", "" ], [ "Wade", "Sara", "" ], [ "Zankin", "Vitaly", "" ] ]
Mixtures of experts have become an indispensable tool for flexible modelling in a supervised learning context, allowing not only the mean function but the entire density of the output to change with the inputs. Sparse Gaussian processes (GP) have shown promise as a leading candidate for the experts in such models, and in this article, we propose to design the gating network for selecting the experts from such mixtures of sparse GPs using a deep neural network (DNN). Furthermore, a fast one-pass algorithm called Cluster-Classify-Regress (CCR) is leveraged to approximate the maximum a posteriori (MAP) estimator extremely quickly. This powerful combination of model and algorithm together delivers a novel method which is flexible, robust, and extremely efficient. In particular, the method is able to outperform competing methods in terms of accuracy and uncertainty quantification. The cost is competitive on low-dimensional and small data sets, but is significantly lower for higher-dimensional and big data sets. Iteratively maximizing the distribution of experts given allocations and allocations given experts does not provide significant improvement, which indicates that the algorithm achieves a good approximation to the local MAP estimator very fast. This insight can also be useful in the context of other mixture of experts models.
2207.12214
Yan Sun
Yan Sun, Yi Han, Jicong Fan
Laplacian-based Cluster-Contractive t-SNE for High Dimensional Data Visualization
null
null
null
null
cs.LG cs.HC
http://creativecommons.org/publicdomain/zero/1.0/
Dimensionality reduction techniques aim at representing high-dimensional data in low-dimensional spaces to extract hidden and useful information or facilitate visual understanding and interpretation of the data. However, few of them take into consideration the potential cluster information contained implicitly in the high-dimensional data. In this paper, we propose LaptSNE, a new graph-layout nonlinear dimensionality reduction method based on t-SNE, one of the best techniques for visualizing high-dimensional data as 2D scatter plots. Specifically, LaptSNE leverages the eigenvalue information of the graph Laplacian to shrink the potential clusters in the low-dimensional embedding when learning to preserve the local and global structure from high-dimensional space to low-dimensional space. It is nontrivial to solve the proposed model because the eigenvalues of normalized symmetric Laplacian are functions of the decision variable. We provide a majorization-minimization algorithm with convergence guarantee to solve the optimization problem of LaptSNE and show how to calculate the gradient analytically, which may be of broad interest when considering optimization with Laplacian-composited objective. We evaluate our method by a formal comparison with state-of-the-art methods on seven benchmark datasets, both visually and via established quantitative measurements. The results demonstrate the superiority of our method over baselines such as t-SNE and UMAP. We also provide out-of-sample extension, large-scale extension and mini-batch extension for our LaptSNE to facilitate dimensionality reduction in various scenarios.
[ { "created": "Mon, 25 Jul 2022 14:10:24 GMT", "version": "v1" }, { "created": "Thu, 15 Sep 2022 06:50:25 GMT", "version": "v2" }, { "created": "Mon, 24 Oct 2022 08:17:48 GMT", "version": "v3" } ]
2022-10-25
[ [ "Sun", "Yan", "" ], [ "Han", "Yi", "" ], [ "Fan", "Jicong", "" ] ]
Dimensionality reduction techniques aim at representing high-dimensional data in low-dimensional spaces to extract hidden and useful information or facilitate visual understanding and interpretation of the data. However, few of them take into consideration the potential cluster information contained implicitly in the high-dimensional data. In this paper, we propose LaptSNE, a new graph-layout nonlinear dimensionality reduction method based on t-SNE, one of the best techniques for visualizing high-dimensional data as 2D scatter plots. Specifically, LaptSNE leverages the eigenvalue information of the graph Laplacian to shrink the potential clusters in the low-dimensional embedding when learning to preserve the local and global structure from high-dimensional space to low-dimensional space. It is nontrivial to solve the proposed model because the eigenvalues of normalized symmetric Laplacian are functions of the decision variable. We provide a majorization-minimization algorithm with convergence guarantee to solve the optimization problem of LaptSNE and show how to calculate the gradient analytically, which may be of broad interest when considering optimization with Laplacian-composited objective. We evaluate our method by a formal comparison with state-of-the-art methods on seven benchmark datasets, both visually and via established quantitative measurements. The results demonstrate the superiority of our method over baselines such as t-SNE and UMAP. We also provide out-of-sample extension, large-scale extension and mini-batch extension for our LaptSNE to facilitate dimensionality reduction in various scenarios.
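The cluster-contractive term this record describes is built from the eigenvalues of the normalized symmetric Laplacian L = I - D^(-1/2) A D^(-1/2) of a similarity graph. A minimal construction of that matrix (NumPy; building the similarity graph A and the t-SNE objective itself are out of scope here):

```python
import numpy as np

def normalized_laplacian(A):
    """Normalized symmetric Laplacian I - D^{-1/2} A D^{-1/2} of a
    non-negative adjacency/similarity matrix A. Its smallest
    eigenvalues encode the cluster structure that Laplacian-based
    terms such as LaptSNE's operate on."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
```

For a single edge between two nodes the eigenvalues are 0 and 2, the extremes of the normalized spectrum; the number of near-zero eigenvalues counts the near-disconnected clusters, which is why shrinking them contracts clusters in the embedding.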
2310.08471
Tom Kelly
Tom Kelly, John Femiani, and Peter Wonka
WinSyn: A High Resolution Testbed for Synthetic Data
cvpr version
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present WinSyn, a unique dataset and testbed for creating high-quality synthetic data with procedural modeling techniques. The dataset contains high-resolution photographs of windows, selected from locations around the world, with 89,318 individual window crops showcasing diverse geometric and material characteristics. We evaluate a procedural model by training semantic segmentation networks on both synthetic and real images and then comparing their performances on a shared test set of real images. Specifically, we measure the difference in mean Intersection over Union (mIoU) and determine the effective number of real images to match synthetic data's training performance. We design a baseline procedural model as a benchmark and provide 21,290 synthetically generated images. By tuning the procedural model, key factors are identified which significantly influence the model's fidelity in replicating real-world scenarios. Importantly, we highlight the challenge of procedural modeling using current techniques, especially in their ability to replicate the spatial semantics of real-world scenarios. This insight is critical because of the potential of procedural models to bridge to hidden scene aspects such as depth, reflectivity, material properties, and lighting conditions.
[ { "created": "Mon, 9 Oct 2023 20:18:10 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2024 13:47:42 GMT", "version": "v2" } ]
2024-03-29
[ [ "Kelly", "Tom", "" ], [ "Femiani", "John", "" ], [ "Wonka", "Peter", "" ] ]
We present WinSyn, a unique dataset and testbed for creating high-quality synthetic data with procedural modeling techniques. The dataset contains high-resolution photographs of windows, selected from locations around the world, with 89,318 individual window crops showcasing diverse geometric and material characteristics. We evaluate a procedural model by training semantic segmentation networks on both synthetic and real images and then comparing their performances on a shared test set of real images. Specifically, we measure the difference in mean Intersection over Union (mIoU) and determine the effective number of real images to match synthetic data's training performance. We design a baseline procedural model as a benchmark and provide 21,290 synthetically generated images. By tuning the procedural model, key factors are identified which significantly influence the model's fidelity in replicating real-world scenarios. Importantly, we highlight the challenge of procedural modeling using current techniques, especially in their ability to replicate the spatial semantics of real-world scenarios. This insight is critical because of the potential of procedural models to bridge to hidden scene aspects such as depth, reflectivity, material properties, and lighting conditions.
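The comparison metric used in this record, mean Intersection over Union, is computed per class from a pixel-level confusion matrix and then averaged. A compact reference implementation (NumPy; details such as ignore labels vary between papers and are not modeled):

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes):
    """Mean Intersection-over-Union across the classes that occur in
    either the ground truth or the prediction, computed from a
    confusion matrix over (flattened) pixel labels."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    present = union > 0  # skip classes absent from both truth and prediction
    return float((inter[present] / union[present]).mean())
```

Comparing the mIoU of a network trained on synthetic images against one trained on real images, on a shared real test set, is exactly the kind of synthetic-to-real gap measurement the testbed is built around.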
1804.07131
Maria Predari
Roland Glantz, Maria Predari, Henning Meyerhenke
Topology-induced Enhancement of Mappings
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a new method to enhance a mapping $\mu(\cdot)$ of a parallel application's computational tasks to the processing elements (PEs) of a parallel computer. The idea behind our method \mswap is to enhance such a mapping by drawing on the observation that many topologies take the form of a partial cube. This class of graphs includes all rectangular and cubic meshes, any such torus with even extensions in each dimension, all hypercubes, and all trees. Following previous work, we represent the parallel application and the parallel computer by graphs $G_a = (V_a, E_a)$ and $G_p = (V_p, E_p)$. $G_p$ being a partial cube allows us to label its vertices, the PEs, by bitvectors such that the cost of exchanging one unit of information between two vertices $u_p$ and $v_p$ of $G_p$ amounts to the Hamming distance between the labels of $u_p$ and $v_p$. By transferring these bitvectors from $V_p$ to $V_a$ via $\mu^{-1}(\cdot)$ and extending them to be unique on $V_a$, we can enhance $\mu(\cdot)$ by swapping labels of $V_a$ in a new way. Pairs of swapped labels are local with respect to the PEs, but not with respect to $G_a$. Moreover, permutations of the bitvectors' entries give rise to a plethora of hierarchies on the PEs. Through these hierarchies we turn \mswap into a hierarchical method for improving $\mu(\cdot)$ that is complementary to state-of-the-art methods for computing $\mu(\cdot)$ in the first place. In our experiments we use \mswap to enhance mappings of complex networks onto rectangular meshes and tori with 256 and 512 nodes, as well as hypercubes with 256 nodes. It turns out that common quality measures of mappings derived from state-of-the-art algorithms can be improved considerably.
[ { "created": "Thu, 19 Apr 2018 13:08:39 GMT", "version": "v1" } ]
2018-04-20
[ [ "Glantz", "Roland", "" ], [ "Predari", "Maria", "" ], [ "Meyerhenke", "Henning", "" ] ]
In this paper we propose a new method to enhance a mapping $\mu(\cdot)$ of a parallel application's computational tasks to the processing elements (PEs) of a parallel computer. The idea behind our method \mswap is to enhance such a mapping by drawing on the observation that many topologies take the form of a partial cube. This class of graphs includes all rectangular and cubic meshes, any such torus with even extensions in each dimension, all hypercubes, and all trees. Following previous work, we represent the parallel application and the parallel computer by graphs $G_a = (V_a, E_a)$ and $G_p = (V_p, E_p)$. $G_p$ being a partial cube allows us to label its vertices, the PEs, by bitvectors such that the cost of exchanging one unit of information between two vertices $u_p$ and $v_p$ of $G_p$ amounts to the Hamming distance between the labels of $u_p$ and $v_p$. By transferring these bitvectors from $V_p$ to $V_a$ via $\mu^{-1}(\cdot)$ and extending them to be unique on $V_a$, we can enhance $\mu(\cdot)$ by swapping labels of $V_a$ in a new way. Pairs of swapped labels are local with respect to the PEs, but not with respect to $G_a$. Moreover, permutations of the bitvectors' entries give rise to a plethora of hierarchies on the PEs. Through these hierarchies we turn \mswap into a hierarchical method for improving $\mu(\cdot)$ that is complementary to state-of-the-art methods for computing $\mu(\cdot)$ in the first place. In our experiments we use \mswap to enhance mappings of complex networks onto rectangular meshes and tori with 256 and 512 nodes, as well as hypercubes with 256 nodes. It turns out that common quality measures of mappings derived from state-of-the-art algorithms can be improved considerably.
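The key property exploited in this record is that in a partial cube the routing cost between two PEs equals the Hamming distance between their bitvector labels. A toy illustration for a 2x2 mesh, with a hypothetical one-bit-per-coordinate labeling (the labeling scheme for general partial cubes is the paper's machinery and is not reproduced here):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two bitvector labels stored as ints."""
    return bin(a ^ b).count("1")

# Hypothetical labels for a 2x2 mesh: the label of PE (x, y) is the
# two-bit string xy, so adjacent PEs differ in exactly one bit.
labels = {(0, 0): 0b00, (0, 1): 0b01, (1, 0): 0b10, (1, 1): 0b11}
```

Neighbouring PEs differ in one bit (cost 1) and diagonally opposite PEs in both bits (cost 2), matching the mesh's shortest-path distances; swapping task labels to shorten these Hamming distances is what improves the mapping.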
2110.00307
Giovanni Colavizza
Giovanni Colavizza, Silvio Peroni, Matteo Romanello
The case for the Humanities Citation Index (HuCI): a citation index by the humanities, for the humanities
null
null
null
null
cs.DL
http://creativecommons.org/licenses/by/4.0/
Citation indexes are by now part of the research infrastructure in use by most scientists: a necessary tool in order to cope with the increasing amounts of scientific literature being published. Commercial citation indexes are designed for the sciences and have uneven coverage and unsatisfactory characteristics for humanities scholars, while no comprehensive citation index is published by a public organization. We argue that an open citation index for the humanities is desirable, for four reasons: it would greatly improve and accelerate the retrieval of sources, it would offer a way to interlink collections across repositories (such as archives and libraries), it would foster the adoption of metadata standards and best practices by all stakeholders (including publishers) and it would contribute research data to fields such as bibliometrics and science studies. We also suggest that the citation index should be informed by a set of requirements relevant to the humanities. We discuss four such requirements: source coverage must be comprehensive, including books and citations to primary sources; there needs to be chronological depth, as scholarship in the humanities remains relevant over time; the index should be collection-driven, leveraging the accumulated thematic collections of specialized research libraries; and it should be rich in context in order to allow for the qualification of each citation, for example by providing citation excerpts. We detail the fit-for-purpose research infrastructure which can make the Humanities Citation Index a reality. Ultimately, we argue that a citation index for the humanities can be created by humanists, via a collaborative, distributed and open effort.
[ { "created": "Fri, 1 Oct 2021 10:41:44 GMT", "version": "v1" }, { "created": "Fri, 18 Feb 2022 10:25:12 GMT", "version": "v2" }, { "created": "Sat, 14 May 2022 07:59:50 GMT", "version": "v3" } ]
2022-05-17
[ [ "Colavizza", "Giovanni", "" ], [ "Peroni", "Silvio", "" ], [ "Romanello", "Matteo", "" ] ]
Citation indexes are by now part of the research infrastructure in use by most scientists: a necessary tool in order to cope with the increasing amounts of scientific literature being published. Commercial citation indexes are designed for the sciences and have uneven coverage and unsatisfactory characteristics for humanities scholars, while no comprehensive citation index is published by a public organization. We argue that an open citation index for the humanities is desirable, for four reasons: it would greatly improve and accelerate the retrieval of sources, it would offer a way to interlink collections across repositories (such as archives and libraries), it would foster the adoption of metadata standards and best practices by all stakeholders (including publishers) and it would contribute research data to fields such as bibliometrics and science studies. We also suggest that the citation index should be informed by a set of requirements relevant to the humanities. We discuss four such requirements: source coverage must be comprehensive, including books and citations to primary sources; there needs to be chronological depth, as scholarship in the humanities remains relevant over time; the index should be collection-driven, leveraging the accumulated thematic collections of specialized research libraries; and it should be rich in context in order to allow for the qualification of each citation, for example by providing citation excerpts. We detail the fit-for-purpose research infrastructure which can make the Humanities Citation Index a reality. Ultimately, we argue that a citation index for the humanities can be created by humanists, via a collaborative, distributed and open effort.
2001.05458
Pinkesh Badjatiya
Pinkesh Badjatiya, Mausoom Sarkar, Abhishek Sinha, Siddharth Singh, Nikaash Puri, Jayakumar Subramanian, Balaji Krishnamurthy
Inducing Cooperative behaviour in Sequential-Social dilemmas through Multi-Agent Reinforcement Learning using Status-Quo Loss
null
null
null
null
cs.AI cs.GT cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In social dilemma situations, individual rationality leads to sub-optimal group outcomes. Several human engagements can be modeled as sequential (multi-step) social dilemmas. However, in contrast to humans, Deep Reinforcement Learning agents trained to optimize individual rewards in sequential social dilemmas converge to selfish, mutually harmful behavior. We introduce a status-quo loss (SQLoss) that encourages an agent to stick to the status quo, rather than repeatedly changing its policy. We show how agents trained with SQLoss evolve cooperative behavior in several social dilemma matrix games. To work with social dilemma games that have visual input, we propose GameDistill. GameDistill uses self-supervision and clustering to automatically extract cooperative and selfish policies from a social dilemma game. We combine GameDistill and SQLoss to show how agents evolve socially desirable cooperative behavior in the Coin Game.
[ { "created": "Wed, 15 Jan 2020 18:10:46 GMT", "version": "v1" }, { "created": "Thu, 13 Feb 2020 09:55:17 GMT", "version": "v2" } ]
2020-02-14
[ [ "Badjatiya", "Pinkesh", "" ], [ "Sarkar", "Mausoom", "" ], [ "Sinha", "Abhishek", "" ], [ "Singh", "Siddharth", "" ], [ "Puri", "Nikaash", "" ], [ "Subramanian", "Jayakumar", "" ], [ "Krishnamurthy", "Balaji", "" ] ]
In social dilemma situations, individual rationality leads to sub-optimal group outcomes. Several human engagements can be modeled as sequential (multi-step) social dilemmas. However, in contrast to humans, Deep Reinforcement Learning agents trained to optimize individual rewards in sequential social dilemmas converge to selfish, mutually harmful behavior. We introduce a status-quo loss (SQLoss) that encourages an agent to stick to the status quo, rather than repeatedly changing its policy. We show how agents trained with SQLoss evolve cooperative behavior in several social dilemma matrix games. To work with social dilemma games that have visual input, we propose GameDistill. GameDistill uses self-supervision and clustering to automatically extract cooperative and selfish policies from a social dilemma game. We combine GameDistill and SQLoss to show how agents evolve socially desirable cooperative behavior in the Coin Game.
1504.05770
Takahiro Wada
Ryota Nishimura, Takahiro Wada, Seiji Sugiyama
Haptic Shared Control in Steering Operation Based on Cooperative Status Between a Driver and a Driver Assistance System
Accepted for International Journal of Human Robot Interaction
null
null
null
cs.RO
http://creativecommons.org/licenses/by/3.0/
Haptic shared control is expected to achieve a smooth collaboration between humans and automated systems, because haptics facilitate mutual communication. A methodology for sharing a given task is important to achieve effective shared control. Therefore, the appropriate cooperative relationship between a human operator and automated system should be considered. This paper proposes a methodology to evaluate the cooperative status between the operator and the automated system in the haptic shared control of a steering operation using a pseudo-power pair of torque from each agent and the vehicle lateral velocity as each agent's contribution to vehicle motion. This method allows us to estimate cooperative status based on two axes: the initiative holder and the intent consistency between the two agents. A control method for a lane-keeping assist system (LKAS) that enables drivers to change lanes smoothly is proposed based on the estimated cooperative status. A gain-tuning control method based on the estimated cooperative status is proposed to decrease the assistance system's pseudo-power when intent inconsistency occurs. A method for switching the followed lane to match the driver's and assistance system's intentions is also proposed. A user study using a driving simulator is conducted to demonstrate the effectiveness of the proposed methods. The results demonstrate that the proposed methods facilitate smooth driver-initiated lane changes without significantly affecting the driver's torque or steering wheel angle while significantly improving lane-keeping performance.
[ { "created": "Wed, 22 Apr 2015 12:56:25 GMT", "version": "v1" } ]
2015-04-23
[ [ "Nishimura", "Ryota", "" ], [ "Wada", "Takahiro", "" ], [ "Sugiyama", "Seiji", "" ] ]
Haptic shared control is expected to achieve a smooth collaboration between humans and automated systems, because haptics facilitate mutual communication. A methodology for sharing a given task is important to achieve effective shared control. Therefore, the appropriate cooperative relationship between a human operator and automated system should be considered. This paper proposes a methodology to evaluate the cooperative status between the operator and the automated system in the haptic shared control of a steering operation using a pseudo-power pair of torque from each agent and the vehicle lateral velocity as each agent's contribution to vehicle motion. This method allows us to estimate cooperative status based on two axes: the initiative holder and the intent consistency between the two agents. A control method for a lane-keeping assist system (LKAS) that enables drivers to change lanes smoothly is proposed based on the estimated cooperative status. A gain-tuning control method based on the estimated cooperative status is proposed to decrease the assistance system's pseudo-power when intent inconsistency occurs. A method for switching the followed lane to match the driver's and assistance system's intentions is also proposed. A user study using a driving simulator is conducted to demonstrate the effectiveness of the proposed methods. The results demonstrate that the proposed methods facilitate smooth driver-initiated lane changes without significantly affecting the driver's torque or steering wheel angle while significantly improving lane-keeping performance.
2101.02275
Sandipan Choudhuri
Sandipan Choudhuri, Riti Paul, Arunabha Sen, Baoxin Li, Hemanth Venkateswara
Partial Domain Adaptation Using Selective Representation Learning For Class-Weight Computation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The generalization power of deep-learning models is dependent on rich-labelled data. This supervision using large-scale annotated information is restrictive in most real-world scenarios where data collection and their annotation involve huge costs. Various domain adaptation techniques exist in literature that bridge this distribution discrepancy. However, a majority of these models require the label sets of both the domains to be identical. To tackle a more practical and challenging scenario, we formulate the problem statement from a partial domain adaptation perspective, where the source label set is a superset of the target label set. Driven by the motivation that image styles are private to each domain, in this work, we develop a method that identifies outlier classes exclusively from image content information and trains a label classifier exclusively on class-content from source images. Additionally, elimination of negative transfer of samples from classes private to the source domain is achieved by transforming the soft class-level weights into two clusters, 0 (outlier source classes) and 1 (shared classes), by maximizing the between-cluster variance between them.
[ { "created": "Wed, 6 Jan 2021 21:37:56 GMT", "version": "v1" } ]
2021-01-08
[ [ "Choudhuri", "Sandipan", "" ], [ "Paul", "Riti", "" ], [ "Sen", "Arunabha", "" ], [ "Li", "Baoxin", "" ], [ "Venkateswara", "Hemanth", "" ] ]
The generalization power of deep-learning models is dependent on rich-labelled data. This supervision using large-scale annotated information is restrictive in most real-world scenarios where data collection and their annotation involve huge costs. Various domain adaptation techniques exist in literature that bridge this distribution discrepancy. However, a majority of these models require the label sets of both the domains to be identical. To tackle a more practical and challenging scenario, we formulate the problem statement from a partial domain adaptation perspective, where the source label set is a superset of the target label set. Driven by the motivation that image styles are private to each domain, in this work, we develop a method that identifies outlier classes exclusively from image content information and trains a label classifier exclusively on class-content from source images. Additionally, elimination of negative transfer of samples from classes private to the source domain is achieved by transforming the soft class-level weights into two clusters, 0 (outlier source classes) and 1 (shared classes), by maximizing the between-cluster variance between them.
2005.14638
Rui Shao
Rui Shao, Pramuditha Perera, Pong C. Yuen, Vishal M. Patel
Federated Face Presentation Attack Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline. A face presentation attack detection model with good generalization can be obtained when it is trained with face images from different input distributions and different types of spoof attacks. In reality, training data (both real face images and spoof images) are not directly shared between data owners due to legal and privacy issues. In this paper, with the motivation of circumventing this challenge, we propose the Federated Face Presentation Attack Detection (FedPAD) framework. FedPAD simultaneously takes advantage of rich fPAD information available at different data owners while preserving data privacy. In the proposed framework, each data owner (referred to as a \textit{data center}) locally trains its own fPAD model. A server learns a global fPAD model by iteratively aggregating model updates from all data centers without accessing private data in each of them. Once the learned global model converges, it is used for fPAD inference. We introduce the experimental setting to evaluate the proposed FedPAD framework and carry out extensive experiments to provide various insights about federated learning for fPAD.
[ { "created": "Fri, 29 May 2020 15:56:01 GMT", "version": "v1" }, { "created": "Tue, 29 Sep 2020 03:01:14 GMT", "version": "v2" } ]
2020-09-30
[ [ "Shao", "Rui", "" ], [ "Perera", "Pramuditha", "" ], [ "Yuen", "Pong C.", "" ], [ "Patel", "Vishal M.", "" ] ]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline. A face presentation attack detection model with good generalization can be obtained when it is trained with face images from different input distributions and different types of spoof attacks. In reality, training data (both real face images and spoof images) are not directly shared between data owners due to legal and privacy issues. In this paper, with the motivation of circumventing this challenge, we propose the Federated Face Presentation Attack Detection (FedPAD) framework. FedPAD simultaneously takes advantage of rich fPAD information available at different data owners while preserving data privacy. In the proposed framework, each data owner (referred to as a \textit{data center}) locally trains its own fPAD model. A server learns a global fPAD model by iteratively aggregating model updates from all data centers without accessing private data in each of them. Once the learned global model converges, it is used for fPAD inference. We introduce the experimental setting to evaluate the proposed FedPAD framework and carry out extensive experiments to provide various insights about federated learning for fPAD.
2404.13393
Thorren Kirschbaum
Thorren Kirschbaum and Annika Bande
Transfer Learning for Molecular Property Predictions from Small Data Sets
null
null
null
null
cs.LG physics.chem-ph
http://creativecommons.org/licenses/by/4.0/
Machine learning has emerged as a new tool in chemistry to bypass expensive experiments or quantum-chemical calculations, for example, in high-throughput screening applications. However, many machine learning studies rely on small data sets, making it difficult to efficiently implement powerful deep learning architectures such as message passing neural networks. In this study, we benchmark common machine learning models for the prediction of molecular properties on small data sets, for which the best results are obtained with the message passing neural network PaiNN, as well as SOAP molecular descriptors concatenated to a set of simple molecular descriptors tailored to gradient boosting with regression trees. To further improve the predictive capabilities of PaiNN, we present a transfer learning strategy that uses large data sets to pre-train the respective models and allows us to obtain more accurate models after fine-tuning on the original data sets. The pre-training labels are obtained from computationally cheap ab initio or semi-empirical models and corrected by simple linear regression on the target data set to obtain labels that are close to those of the original data. This strategy is tested on the Harvard Oxford Photovoltaics data set (HOPV, HOMO-LUMO-gaps), for which excellent results are obtained, and on the Freesolv data set (solvation energies), where this method is unsuccessful due to a complex underlying learning task and the dissimilar methods used to obtain pre-training and fine-tuning labels. Finally, we find that the final training results do not improve monotonically with the size of the pre-training data set, but pre-training with fewer data points can lead to more biased pre-trained models and higher accuracy after fine-tuning.
[ { "created": "Sat, 20 Apr 2024 14:25:34 GMT", "version": "v1" } ]
2024-04-23
[ [ "Kirschbaum", "Thorren", "" ], [ "Bande", "Annika", "" ] ]
Machine learning has emerged as a new tool in chemistry to bypass expensive experiments or quantum-chemical calculations, for example, in high-throughput screening applications. However, many machine learning studies rely on small data sets, making it difficult to efficiently implement powerful deep learning architectures such as message passing neural networks. In this study, we benchmark common machine learning models for the prediction of molecular properties on small data sets, for which the best results are obtained with the message passing neural network PaiNN, as well as SOAP molecular descriptors concatenated to a set of simple molecular descriptors tailored to gradient boosting with regression trees. To further improve the predictive capabilities of PaiNN, we present a transfer learning strategy that uses large data sets to pre-train the respective models and allows us to obtain more accurate models after fine-tuning on the original data sets. The pre-training labels are obtained from computationally cheap ab initio or semi-empirical models and corrected by simple linear regression on the target data set to obtain labels that are close to those of the original data. This strategy is tested on the Harvard Oxford Photovoltaics data set (HOPV, HOMO-LUMO-gaps), for which excellent results are obtained, and on the Freesolv data set (solvation energies), where this method is unsuccessful due to a complex underlying learning task and the dissimilar methods used to obtain pre-training and fine-tuning labels. Finally, we find that the final training results do not improve monotonically with the size of the pre-training data set, but pre-training with fewer data points can lead to more biased pre-trained models and higher accuracy after fine-tuning.
1009.2305
Xiangqiong Shi
Xiangqiong Shi, Dan Schonfeld, Daniela Tuninetti
Message Error Analysis of Loopy Belief Propagation for the Sum-Product Algorithm
36 pages, 10 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Belief propagation is known to perform extremely well in many practical statistical inference and learning problems using graphical models, even in the presence of multiple loops. The iterative use of the belief propagation algorithm on loopy graphs is referred to as Loopy Belief Propagation (LBP). Various sufficient conditions for convergence of LBP have been presented; however, general necessary conditions for its convergence to a unique fixed point remain unknown. Because the approximation of beliefs to true marginal probabilities has been shown to relate to the convergence of LBP, several methods have been explored whose aim is to obtain distance bounds on beliefs when LBP fails to converge. In this paper, we derive uniform and non-uniform error bounds on messages, which are tighter than existing ones in literature, and use these bounds to derive sufficient conditions for the convergence of LBP in terms of the sum-product algorithm. We subsequently use these bounds to study the dynamic behavior of the sum-product algorithm, and analyze the relation between convergence of LBP and sparsity and walk-summability of graphical models. We finally use the bounds derived to investigate the accuracy of LBP, as well as the scheduling priority in asynchronous LBP.
[ { "created": "Mon, 13 Sep 2010 06:32:29 GMT", "version": "v1" }, { "created": "Tue, 11 Oct 2011 20:31:09 GMT", "version": "v2" }, { "created": "Tue, 12 Feb 2013 01:59:00 GMT", "version": "v3" } ]
2013-02-13
[ [ "Shi", "Xiangqiong", "" ], [ "Schonfeld", "Dan", "" ], [ "Tuninetti", "Daniela", "" ] ]
Belief propagation is known to perform extremely well in many practical statistical inference and learning problems using graphical models, even in the presence of multiple loops. The iterative use of the belief propagation algorithm on loopy graphs is referred to as Loopy Belief Propagation (LBP). Various sufficient conditions for convergence of LBP have been presented; however, general necessary conditions for its convergence to a unique fixed point remain unknown. Because the approximation of beliefs to true marginal probabilities has been shown to relate to the convergence of LBP, several methods have been explored whose aim is to obtain distance bounds on beliefs when LBP fails to converge. In this paper, we derive uniform and non-uniform error bounds on messages, which are tighter than existing ones in literature, and use these bounds to derive sufficient conditions for the convergence of LBP in terms of the sum-product algorithm. We subsequently use these bounds to study the dynamic behavior of the sum-product algorithm, and analyze the relation between convergence of LBP and sparsity and walk-summability of graphical models. We finally use the bounds derived to investigate the accuracy of LBP, as well as the scheduling priority in asynchronous LBP.
1510.04589
Michael Meidlinger
Alexios Balatsoukas-Stimming, Michael Meidlinger, Reza Ghanaatian, Gerald Matz, and Andreas Burg
A Fully-Unrolled LDPC Decoder Based on Quantized Message Passing
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a finite alphabet message passing algorithm for LDPC codes that replaces the standard min-sum variable node update rule by a mapping based on generic look-up tables. This mapping is designed in a way that maximizes the mutual information between the decoder messages and the codeword bits. We show that our decoder can deliver the same error rate performance as the conventional decoder with a much smaller message bit-width. Finally, we use the proposed algorithm to design a fully unrolled LDPC decoder hardware architecture.
[ { "created": "Thu, 15 Oct 2015 15:41:09 GMT", "version": "v1" } ]
2015-10-16
[ [ "Balatsoukas-Stimming", "Alexios", "" ], [ "Meidlinger", "Michael", "" ], [ "Ghanaatian", "Reza", "" ], [ "Matz", "Gerald", "" ], [ "Burg", "Andreas", "" ] ]
In this paper, we propose a finite alphabet message passing algorithm for LDPC codes that replaces the standard min-sum variable node update rule by a mapping based on generic look-up tables. This mapping is designed in a way that maximizes the mutual information between the decoder messages and the codeword bits. We show that our decoder can deliver the same error rate performance as the conventional decoder with a much smaller message bit-width. Finally, we use the proposed algorithm to design a fully unrolled LDPC decoder hardware architecture.
1707.03214
Cristina Fern\'andez-C\'ordoba
J. Borges, S. T. Dougherty, C. Fern\'andez-C\'ordoba, R. Ten-Valls
Binary Images of Z2Z4-Additive Cyclic Codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Z2Z4-additive code C is called cyclic if the set of coordinates can be partitioned into two subsets, the set of Z_2 and the set of Z_4 coordinates, such that any cyclic shift of the coordinates of both subsets leaves the code invariant. We study the binary images of Z2Z4-additive cyclic codes. We determine all Z2Z4-additive cyclic codes with odd beta whose Gray images are linear binary codes.
[ { "created": "Tue, 11 Jul 2017 10:56:45 GMT", "version": "v1" } ]
2017-07-12
[ [ "Borges", "J.", "" ], [ "Dougherty", "S. T.", "" ], [ "Fernández-Córdoba", "C.", "" ], [ "Ten-Valls", "R.", "" ] ]
A Z2Z4-additive code C is called cyclic if the set of coordinates can be partitioned into two subsets, the set of Z_2 and the set of Z_4 coordinates, such that any cyclic shift of the coordinates of both subsets leaves the code invariant. We study the binary images of Z2Z4-additive cyclic codes. We determine all Z2Z4-additive cyclic codes with odd beta whose Gray images are linear binary codes.
1808.10639
Ma\"elick Claes
Ma\"elick Claes, Mika M\"antyl\"a, Umar Farooq
On the Use of Emoticons in Open Source Software Development
Short paper to be presented at the 12th International Symposium on Empirical Software Engineering and Measurement (ESEM)
null
10.1145/3239235.3267434
null
cs.SE cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Using sentiment analysis to study software developers' behavior comes with challenges such as the presence of a large amount of technical discussion unlikely to express any positive or negative sentiment. However, emoticons provide information about developer sentiments that can easily be extracted from software repositories. Aim: We investigate how software developers use emoticons differently in issue trackers in order to better understand the differences between developers and determine to what extent emoticons can be used in place of sentiment analysis. Method: We extract emoticons from 1.3M comments from Apache's issue tracker and 4.5M from Mozilla's issue tracker using regular expressions built from a list of emoticons used by SentiStrength and Wikipedia. We check for statistical differences using Mann-Whitney U tests and determine the effect size with Cliff's delta. Results: Overall Mozilla developers rely more on emoticons than Apache developers. While the overall ratio of comments with emoticons is 2% and 3.6% for Apache and Mozilla, some individual developers can have a ratio above 20%. Looking specifically at Mozilla developers, we find that western developers use significantly more emoticons (with a large effect size) than eastern developers. While the majority of emoticons are used to express joy, we find that Mozilla developers use emoticons more frequently to express sadness and surprise than Apache developers. Finally, we find that developers use overall more emoticons during weekends than during weekdays, with the share of sad and surprised emoticons increasing during weekends. Conclusions: While emoticons are primarily used to express joy, the more occasional use of sad and surprised emoticons can potentially be utilized to detect frustration in place of sentiment analysis among developers using emoticons frequently enough.
[ { "created": "Fri, 31 Aug 2018 09:09:34 GMT", "version": "v1" }, { "created": "Tue, 9 Oct 2018 14:10:08 GMT", "version": "v2" } ]
2018-10-10
[ [ "Claes", "Maëlick", "" ], [ "Mäntylä", "Mika", "" ], [ "Farooq", "Umar", "" ] ]
Background: Using sentiment analysis to study software developers' behavior comes with challenges such as the presence of a large amount of technical discussion unlikely to express any positive or negative sentiment. However, emoticons provide information about developer sentiments that can easily be extracted from software repositories. Aim: We investigate how software developers use emoticons differently in issue trackers in order to better understand the differences between developers and determine to what extent emoticons can be used in place of sentiment analysis. Method: We extract emoticons from 1.3M comments from Apache's issue tracker and 4.5M from Mozilla's issue tracker using regular expressions built from a list of emoticons used by SentiStrength and Wikipedia. We check for statistical differences using Mann-Whitney U tests and determine the effect size with Cliff's delta. Results: Overall Mozilla developers rely more on emoticons than Apache developers. While the overall ratio of comments with emoticons is 2% and 3.6% for Apache and Mozilla, some individual developers can have a ratio above 20%. Looking specifically at Mozilla developers, we find that western developers use significantly more emoticons (with a large effect size) than eastern developers. While the majority of emoticons are used to express joy, we find that Mozilla developers use emoticons more frequently to express sadness and surprise than Apache developers. Finally, we find that developers use overall more emoticons during weekends than during weekdays, with the share of sad and surprised emoticons increasing during weekends. Conclusions: While emoticons are primarily used to express joy, the more occasional use of sad and surprised emoticons can potentially be utilized to detect frustration in place of sentiment analysis among developers using emoticons frequently enough.
1502.05786
Arpan Mukhopadhyay
Arpan Mukhopadhyay, A. Karthik, Ravi R. Mazumdar
Randomized Assignment of Jobs to Servers in Heterogeneous Clusters of Shared Servers for Low Delay
null
null
null
null
cs.DC cs.PF cs.SY math.PR stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the job assignment problem in a multi-server system consisting of $N$ parallel processor sharing servers, categorized into $M$ ($\ll N$) different types according to their processing capacity or speed. Jobs of random sizes arrive at the system according to a Poisson process with rate $N \lambda$. Upon each arrival, a small number of servers from each type is sampled uniformly at random. The job is then assigned to one of the sampled servers based on a selection rule. We propose two schemes, each corresponding to a specific selection rule that aims at reducing the mean sojourn time of jobs in the system. We first show that both methods achieve the maximal stability region. We then analyze the system operating under the proposed schemes as $N \to \infty$ which corresponds to the mean field. Our results show that asymptotic independence among servers holds even when $M$ is finite and exchangeability holds only within servers of the same type. We further establish the existence and uniqueness of the stationary solution of the mean field and show that the tail distribution of server occupancy decays doubly exponentially for each server type. When the estimates of arrival rates are not available, the proposed schemes offer simpler alternatives to achieving lower mean sojourn time of jobs, as shown by our numerical studies.
[ { "created": "Fri, 20 Feb 2015 06:51:01 GMT", "version": "v1" } ]
2015-02-23
[ [ "Mukhopadhyay", "Arpan", "" ], [ "Karthik", "A.", "" ], [ "Mazumdar", "Ravi R.", "" ] ]
We consider the job assignment problem in a multi-server system consisting of $N$ parallel processor sharing servers, categorized into $M$ ($\ll N$) different types according to their processing capacity or speed. Jobs of random sizes arrive at the system according to a Poisson process with rate $N \lambda$. Upon each arrival, a small number of servers from each type is sampled uniformly at random. The job is then assigned to one of the sampled servers based on a selection rule. We propose two schemes, each corresponding to a specific selection rule that aims at reducing the mean sojourn time of jobs in the system. We first show that both methods achieve the maximal stability region. We then analyze the system operating under the proposed schemes as $N \to \infty$, which corresponds to the mean field. Our results show that asymptotic independence among servers holds even when $M$ is finite and exchangeability holds only within servers of the same type. We further establish the existence and uniqueness of the stationary solution of the mean field and show that the tail distribution of server occupancy decays doubly exponentially for each server type. When estimates of arrival rates are not available, the proposed schemes offer simpler alternatives to achieving lower mean sojourn time of jobs, as shown by our numerical studies.
1910.00635
J\'an Komara
J\'an Komara and Paul J. Voda
Extraction of Efficient Programs in $I\Sigma_1$-arithmetic
16 pages; reprint of the technical report from July, 2000
null
null
null
cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clausal Language (CL) is a declarative programming and verification system used in our teaching of computer science. CL is an implementation of what we call the $\mathit{PR}{+}I\Sigma_1$ paradigm (primitive recursive functions with $I\Sigma_1$-arithmetic). This paper introduces an extension of $I\Sigma_1$-proofs called extraction proofs, where one can extract from the proofs of $\Pi_2$-specifications primitive recursive programs as efficient as hand-coded ones. This is achieved by having the programming constructs correspond exactly to the proof rules with computational content.
[ { "created": "Tue, 1 Oct 2019 19:47:29 GMT", "version": "v1" } ]
2019-10-03
[ [ "Komara", "Ján", "" ], [ "Voda", "Paul J.", "" ] ]
Clausal Language (CL) is a declarative programming and verification system used in our teaching of computer science. CL is an implementation of what we call the $\mathit{PR}{+}I\Sigma_1$ paradigm (primitive recursive functions with $I\Sigma_1$-arithmetic). This paper introduces an extension of $I\Sigma_1$-proofs called extraction proofs, where one can extract from the proofs of $\Pi_2$-specifications primitive recursive programs as efficient as hand-coded ones. This is achieved by having the programming constructs correspond exactly to the proof rules with computational content.
2208.01298
Sam Thompson
Sam M. Thompson
Conjunctive Queries for Logic-Based Information Extraction
Based on the author's PhD thesis and contains work from two conference publications (arXiv:2104.04758, arXiv:1909.10869) which are joint work with Dominik D. Freydenberger
null
null
null
cs.LO cs.DB cs.FL
http://creativecommons.org/licenses/by/4.0/
This thesis offers two logic-based approaches to conjunctive queries in the context of information extraction. The first and main approach is the introduction of conjunctive query fragments of the logics FC and FC[REG], denoted as FC-CQ and FC[REG]-CQ respectively. FC is a first-order logic based on word equations, where the semantics are defined by limiting the universe to the factors of some finite input word. FC[REG] is FC extended with regular constraints. The second approach is to consider the dynamic complexity of FC.
[ { "created": "Tue, 2 Aug 2022 08:02:40 GMT", "version": "v1" } ]
2022-08-03
[ [ "Thompson", "Sam M.", "" ] ]
This thesis offers two logic-based approaches to conjunctive queries in the context of information extraction. The first and main approach is the introduction of conjunctive query fragments of the logics FC and FC[REG], denoted as FC-CQ and FC[REG]-CQ respectively. FC is a first-order logic based on word equations, where the semantics are defined by limiting the universe to the factors of some finite input word. FC[REG] is FC extended with regular constraints. The second approach is to consider the dynamic complexity of FC.
2308.03580
Ram Krishna Pandey
Akshit Achara, Ram Krishna Pandey
Revealing the Underlying Patterns: Investigating Dataset Similarity, Performance, and Generalization
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Supervised deep learning models require a significant amount of labeled data to achieve acceptable performance on a specific task. However, when tested on unseen data, the models may not perform well. Therefore, the models need to be trained with additional and varying labeled data to improve generalization. In this work, our goal is to understand the models, their performance and generalization. We establish image-image, dataset-dataset, and image-dataset distances to gain insights into the model's behavior. Our proposed distance metric, when combined with model performance, can help in selecting an appropriate model/architecture from a pool of candidate architectures. We have shown that the generalization of these models can be improved by only adding a small number of unseen images (say 1, 3 or 7) into the training set. Our proposed approach reduces training and annotation costs while providing an estimate of model performance on unseen data in dynamic environments.
[ { "created": "Mon, 7 Aug 2023 13:35:53 GMT", "version": "v1" }, { "created": "Sat, 26 Aug 2023 13:39:47 GMT", "version": "v2" }, { "created": "Fri, 29 Dec 2023 15:48:41 GMT", "version": "v3" } ]
2024-01-01
[ [ "Achara", "Akshit", "" ], [ "Pandey", "Ram Krishna", "" ] ]
Supervised deep learning models require a significant amount of labeled data to achieve acceptable performance on a specific task. However, when tested on unseen data, the models may not perform well. Therefore, the models need to be trained with additional and varying labeled data to improve generalization. In this work, our goal is to understand the models, their performance and generalization. We establish image-image, dataset-dataset, and image-dataset distances to gain insights into the model's behavior. Our proposed distance metric, when combined with model performance, can help in selecting an appropriate model/architecture from a pool of candidate architectures. We have shown that the generalization of these models can be improved by only adding a small number of unseen images (say 1, 3 or 7) into the training set. Our proposed approach reduces training and annotation costs while providing an estimate of model performance on unseen data in dynamic environments.
1509.07943
Qingqing Huang
Qingqing Huang, Sham M. Kakade
Super-Resolution Off the Grid
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Super-resolution is the problem of recovering a superposition of point sources using bandlimited measurements, which may be corrupted with noise. This signal processing problem arises in numerous imaging problems, ranging from astronomy to biology to spectroscopy, where it is common to take (coarse) Fourier measurements of an object. Of particular interest is obtaining estimation procedures which are robust to noise, with the following desirable statistical and computational properties: we seek to use coarse Fourier measurements (bounded by some cutoff frequency); we hope to take a (quantifiably) small number of measurements; we desire our algorithm to run quickly. Suppose we have k point sources in d dimensions, where the points are separated by at least \Delta from each other (in Euclidean distance). This work provides an algorithm with the following favorable guarantees: - The algorithm uses Fourier measurements, whose frequencies are bounded by O(1/\Delta) (up to log factors). Previous algorithms require a cutoff frequency which may be as large as \Omega(d/\Delta). - The number of measurements taken by, and the computational complexity of, our algorithm are bounded by a polynomial in both the number of points k and the dimension d, with no dependence on the separation \Delta. In contrast, previous algorithms depended inverse polynomially on the minimal separation and exponentially on the dimension for both of these quantities. Our estimation procedure itself is simple: we take random bandlimited measurements (as opposed to taking an exponential number of measurements on the hyper-grid). Furthermore, our analysis and algorithm are elementary (based on concentration bounds for sampling and the singular value decomposition).
[ { "created": "Sat, 26 Sep 2015 03:49:27 GMT", "version": "v1" } ]
2015-09-29
[ [ "Huang", "Qingqing", "" ], [ "Kakade", "Sham M.", "" ] ]
Super-resolution is the problem of recovering a superposition of point sources using bandlimited measurements, which may be corrupted with noise. This signal processing problem arises in numerous imaging problems, ranging from astronomy to biology to spectroscopy, where it is common to take (coarse) Fourier measurements of an object. Of particular interest is obtaining estimation procedures which are robust to noise, with the following desirable statistical and computational properties: we seek to use coarse Fourier measurements (bounded by some cutoff frequency); we hope to take a (quantifiably) small number of measurements; we desire our algorithm to run quickly. Suppose we have k point sources in d dimensions, where the points are separated by at least \Delta from each other (in Euclidean distance). This work provides an algorithm with the following favorable guarantees: - The algorithm uses Fourier measurements, whose frequencies are bounded by O(1/\Delta) (up to log factors). Previous algorithms require a cutoff frequency which may be as large as \Omega(d/\Delta). - The number of measurements taken by, and the computational complexity of, our algorithm are bounded by a polynomial in both the number of points k and the dimension d, with no dependence on the separation \Delta. In contrast, previous algorithms depended inverse polynomially on the minimal separation and exponentially on the dimension for both of these quantities. Our estimation procedure itself is simple: we take random bandlimited measurements (as opposed to taking an exponential number of measurements on the hyper-grid). Furthermore, our analysis and algorithm are elementary (based on concentration bounds for sampling and the singular value decomposition).
2301.10473
Pasquale Lafiosca
Pasquale Lafiosca, Ip-Shing Fan, Nicolas P. Avdelidis
Aircraft Skin Inspections: Towards a New Model for Dent Evaluation
null
null
10.1784/insi.2023.65.7.378
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The aircraft maintenance, repair and overhaul industry is gradually switching to 3D scanning for dent inspection. High-accuracy devices allow quick and repeatable measurements, which translate into efficient reporting and more objective damage evaluations. However, the potential of 3D scanners is far from being exploited. This is due to the traditional way in which the structural repair manual deals with dents, that is, considering length, width and depth as the only relevant measures. Being equivalent to describing a dent similarly to a box, the current approach discards any information about the actual shape. This causes high degrees of ambiguity, with very different shapes (and corresponding fatigue life) being classified as the same, and nullifies the effort of acquiring such a great amount of information from high-accuracy 3D scanners. In this paper a 7-parameter model is proposed to describe the actual dent shape, thus enabling the exploitation of the high-fidelity data produced by 3D scanners. The compact set of values can then be compared against historical data and structural evaluations based on the same model. The proposed approach has been evaluated on both simulations and point cloud data generated by 8tree's dentCHECK tool, suggesting increased capability to evaluate damage, enabling more targeted interventions and, ultimately, saving costs.
[ { "created": "Wed, 25 Jan 2023 09:20:19 GMT", "version": "v1" }, { "created": "Tue, 11 Jul 2023 06:40:45 GMT", "version": "v2" } ]
2023-07-13
[ [ "Lafiosca", "Pasquale", "" ], [ "Fan", "Ip-Shing", "" ], [ "Avdelidis", "Nicolas P.", "" ] ]
The aircraft maintenance, repair and overhaul industry is gradually switching to 3D scanning for dent inspection. High-accuracy devices allow quick and repeatable measurements, which translate into efficient reporting and more objective damage evaluations. However, the potential of 3D scanners is far from being exploited. This is due to the traditional way in which the structural repair manual deals with dents, that is, considering length, width and depth as the only relevant measures. Being equivalent to describing a dent similarly to a box, the current approach discards any information about the actual shape. This causes high degrees of ambiguity, with very different shapes (and corresponding fatigue life) being classified as the same, and nullifies the effort of acquiring such a great amount of information from high-accuracy 3D scanners. In this paper a 7-parameter model is proposed to describe the actual dent shape, thus enabling the exploitation of the high-fidelity data produced by 3D scanners. The compact set of values can then be compared against historical data and structural evaluations based on the same model. The proposed approach has been evaluated on both simulations and point cloud data generated by 8tree's dentCHECK tool, suggesting increased capability to evaluate damage, enabling more targeted interventions and, ultimately, saving costs.
2403.05021
Yunhao Li
Yunhao Li, Qin Li, Hao Wang, Xue Ma, Jiali Yao, Shaohua Dong, Heng Fan, Libo Zhang
Beyond MOT: Semantic Multi-Object Tracking
Accepted to ECCV2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current multi-object tracking (MOT) aims to predict trajectories of targets (i.e., ''where'') in videos. Yet, knowing merely ''where'' is insufficient in many crucial applications. In comparison, semantic understanding such as fine-grained behaviors, interactions, and overall summarized captions (i.e., ''what'') from videos, associated with ''where'', is highly-desired for comprehensive video analysis. Thus motivated, we introduce Semantic Multi-Object Tracking (SMOT), that aims to estimate object trajectories and meanwhile understand semantic details of associated trajectories including instance captions, instance interactions, and overall video captions, integrating ''where'' and ''what'' for tracking. In order to foster the exploration of SMOT, we propose BenSMOT, a large-scale Benchmark for Semantic MOT. Specifically, BenSMOT comprises 3,292 videos with 151K frames, covering various scenarios for semantic tracking of humans. BenSMOT provides annotations for the trajectories of targets, along with associated instance captions in natural language, instance interactions, and overall caption for each video sequence. To our best knowledge, BenSMOT is the first publicly available benchmark for SMOT. Besides, to encourage future research, we present a novel tracker named SMOTer, which is specially designed and end-to-end trained for SMOT, showing promising performance. By releasing BenSMOT, we expect to go beyond conventional MOT by predicting ''where'' and ''what'' for SMOT, opening up a new direction in tracking for video understanding. We will release BenSMOT and SMOTer at https://github.com/Nathan-Li123/SMOTer.
[ { "created": "Fri, 8 Mar 2024 03:54:22 GMT", "version": "v1" }, { "created": "Mon, 11 Mar 2024 03:03:41 GMT", "version": "v2" }, { "created": "Fri, 26 Jul 2024 02:58:28 GMT", "version": "v3" }, { "created": "Mon, 29 Jul 2024 02:12:15 GMT", "version": "v4" } ]
2024-07-30
[ [ "Li", "Yunhao", "" ], [ "Li", "Qin", "" ], [ "Wang", "Hao", "" ], [ "Ma", "Xue", "" ], [ "Yao", "Jiali", "" ], [ "Dong", "Shaohua", "" ], [ "Fan", "Heng", "" ], [ "Zhang", "Libo", "" ] ]
Current multi-object tracking (MOT) aims to predict trajectories of targets (i.e., ''where'') in videos. Yet, knowing merely ''where'' is insufficient in many crucial applications. In comparison, semantic understanding such as fine-grained behaviors, interactions, and overall summarized captions (i.e., ''what'') from videos, associated with ''where'', is highly-desired for comprehensive video analysis. Thus motivated, we introduce Semantic Multi-Object Tracking (SMOT), that aims to estimate object trajectories and meanwhile understand semantic details of associated trajectories including instance captions, instance interactions, and overall video captions, integrating ''where'' and ''what'' for tracking. In order to foster the exploration of SMOT, we propose BenSMOT, a large-scale Benchmark for Semantic MOT. Specifically, BenSMOT comprises 3,292 videos with 151K frames, covering various scenarios for semantic tracking of humans. BenSMOT provides annotations for the trajectories of targets, along with associated instance captions in natural language, instance interactions, and overall caption for each video sequence. To our best knowledge, BenSMOT is the first publicly available benchmark for SMOT. Besides, to encourage future research, we present a novel tracker named SMOTer, which is specially designed and end-to-end trained for SMOT, showing promising performance. By releasing BenSMOT, we expect to go beyond conventional MOT by predicting ''where'' and ''what'' for SMOT, opening up a new direction in tracking for video understanding. We will release BenSMOT and SMOTer at https://github.com/Nathan-Li123/SMOTer.
2404.17335
Xin Zhang
Xin Zhang, Liangxiu Han, Tam Sobeih, Lianghao Han, and Darren Dancey
A Novel Spike Transformer Network for Depth Estimation from Event Cameras via Cross-modality Knowledge Distillation
16 pages
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Depth estimation is crucial for interpreting complex environments, especially in areas such as autonomous vehicle navigation and robotics. Nonetheless, obtaining accurate depth readings from event camera data remains a formidable challenge. Event cameras operate differently from traditional digital cameras, continuously capturing data and generating asynchronous binary spikes that encode time, location, and light intensity. Yet, the unique sampling mechanisms of event cameras render standard image-based algorithms inadequate for processing spike data. This necessitates the development of innovative, spike-aware algorithms tailored for event cameras, a task compounded by the irregularity, continuity, noise, and spatial and temporal characteristics inherent in spiking data. Harnessing the strong generalization capabilities of transformer neural networks for spatiotemporal data, we propose a purely spike-driven spike transformer network for depth estimation from spiking camera data. To address performance limitations with Spiking Neural Networks (SNNs), we introduce a novel single-stage cross-modality knowledge transfer framework leveraging knowledge from a large vision foundation model of artificial neural networks (ANN) (DINOv2) to enhance the performance of SNNs with limited data. Our experimental results on both synthetic and real datasets show substantial improvements over existing models, with notable gains in Absolute Relative and Square Relative errors (49% and 39.77% improvements over the benchmark model Spike-T, respectively). Besides accuracy, the proposed model also demonstrates reduced power consumption, a critical factor for practical applications.
[ { "created": "Fri, 26 Apr 2024 11:32:53 GMT", "version": "v1" }, { "created": "Wed, 1 May 2024 08:54:54 GMT", "version": "v2" } ]
2024-05-02
[ [ "Zhang", "Xin", "" ], [ "Han", "Liangxiu", "" ], [ "Sobeih", "Tam", "" ], [ "Han", "Lianghao", "" ], [ "Dancey", "Darren", "" ] ]
Depth estimation is crucial for interpreting complex environments, especially in areas such as autonomous vehicle navigation and robotics. Nonetheless, obtaining accurate depth readings from event camera data remains a formidable challenge. Event cameras operate differently from traditional digital cameras, continuously capturing data and generating asynchronous binary spikes that encode time, location, and light intensity. Yet, the unique sampling mechanisms of event cameras render standard image-based algorithms inadequate for processing spike data. This necessitates the development of innovative, spike-aware algorithms tailored for event cameras, a task compounded by the irregularity, continuity, noise, and spatial and temporal characteristics inherent in spiking data. Harnessing the strong generalization capabilities of transformer neural networks for spatiotemporal data, we propose a purely spike-driven spike transformer network for depth estimation from spiking camera data. To address performance limitations with Spiking Neural Networks (SNNs), we introduce a novel single-stage cross-modality knowledge transfer framework leveraging knowledge from a large vision foundation model of artificial neural networks (ANN) (DINOv2) to enhance the performance of SNNs with limited data. Our experimental results on both synthetic and real datasets show substantial improvements over existing models, with notable gains in Absolute Relative and Square Relative errors (49% and 39.77% improvements over the benchmark model Spike-T, respectively). Besides accuracy, the proposed model also demonstrates reduced power consumption, a critical factor for practical applications.
1606.06343
Mark Dredze
Mark Dredze and Manuel Garc\'ia-Herranz and Alex Rutherford and Gideon Mann
Twitter as a Source of Global Mobility Patterns for Social Good
Presented at 2016 ICML Workshop on #Data4Good: Machine Learning in Social Good Applications, New York, NY
null
null
null
cs.SI physics.soc-ph stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data on human spatial distribution and movement is essential for understanding and analyzing social systems. However, existing sources for this data are lacking in various ways: they are difficult to access, biased, have poor geographical or temporal resolution, or are significantly delayed. In this paper, we describe how geolocation data from Twitter can be used to estimate global mobility patterns and address these shortcomings. These findings will inform how this novel data source can be harnessed to address humanitarian and development efforts.
[ { "created": "Mon, 20 Jun 2016 21:39:51 GMT", "version": "v1" } ]
2016-06-22
[ [ "Dredze", "Mark", "" ], [ "García-Herranz", "Manuel", "" ], [ "Rutherford", "Alex", "" ], [ "Mann", "Gideon", "" ] ]
Data on human spatial distribution and movement is essential for understanding and analyzing social systems. However, existing sources for this data are lacking in various ways: they are difficult to access, biased, have poor geographical or temporal resolution, or are significantly delayed. In this paper, we describe how geolocation data from Twitter can be used to estimate global mobility patterns and address these shortcomings. These findings will inform how this novel data source can be harnessed to address humanitarian and development efforts.
1608.04198
Ying Cui
Fan Lai, Feng Qiu, Wenjie Bian, Ying Cui, Edmund Yeh
Scaled VIP Algorithms for Joint Dynamic Forwarding and Caching in Named Data Networks
to appear in ICN 2016. arXiv admin note: substantial text overlap with arXiv:1607.03270, arXiv:1310.5569
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emerging Information-Centric Networking (ICN) architectures seek to optimally utilize both bandwidth and storage for efficient content distribution over the network. The Virtual Interest Packet (VIP) framework has been proposed to enable joint design of forwarding and caching within the Named Data Networking (NDN) architecture. The virtual plane of the VIP framework captures the measured demand for content objects, but does not reflect interest collapse and suppression in the NDN network. We aim to further improve the performance of the existing VIP algorithms by using a modified virtual plane where VIP counts are appropriately scaled to reflect interest suppression effects. We characterize the stability region of the modified virtual plane with VIP scaling, develop a new distributed forwarding and caching algorithm operating on the scaled VIPs, and demonstrate the throughput optimality of the scaled VIP algorithm in the virtual plane. Numerical experiments demonstrate significantly enhanced performance relative to the existing VIP algorithm, as well as a number of other baseline algorithms.
[ { "created": "Mon, 15 Aug 2016 07:50:47 GMT", "version": "v1" } ]
2016-08-16
[ [ "Lai", "Fan", "" ], [ "Qiu", "Feng", "" ], [ "Bian", "Wenjie", "" ], [ "Cui", "Ying", "" ], [ "Yeh", "Edmund", "" ] ]
Emerging Information-Centric Networking (ICN) architectures seek to optimally utilize both bandwidth and storage for efficient content distribution over the network. The Virtual Interest Packet (VIP) framework has been proposed to enable joint design of forwarding and caching within the Named Data Networking (NDN) architecture. The virtual plane of the VIP framework captures the measured demand for content objects, but does not reflect interest collapse and suppression in the NDN network. We aim to further improve the performance of the existing VIP algorithms by using a modified virtual plane where VIP counts are appropriately scaled to reflect interest suppression effects. We characterize the stability region of the modified virtual plane with VIP scaling, develop a new distributed forwarding and caching algorithm operating on the scaled VIPs, and demonstrate the throughput optimality of the scaled VIP algorithm in the virtual plane. Numerical experiments demonstrate significantly enhanced performance relative to the existing VIP algorithm, as well as a number of other baseline algorithms.
1010.0225
Andreas Witzel
Andreas Witzel
Characterizing perfect recall using next-step temporal operators in S5 and sub-S5 Epistemic Temporal Logic
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We review the notion of perfect recall in the literature on interpreted systems, game theory, and epistemic logic. In the context of Epistemic Temporal Logic (ETL), we give a (to our knowledge) novel frame condition for perfect recall, which is local and can straightforwardly be translated to a defining formula in a language that only has next-step temporal operators. This frame condition also gives rise to a complete axiomatization for S5 ETL frames with perfect recall. We then consider how to extend and consolidate the notion of perfect recall in sub-S5 settings, where the various notions discussed are no longer equivalent.
[ { "created": "Fri, 1 Oct 2010 18:02:05 GMT", "version": "v1" }, { "created": "Mon, 14 Mar 2011 16:02:08 GMT", "version": "v2" } ]
2011-03-15
[ [ "Witzel", "Andreas", "" ] ]
We review the notion of perfect recall in the literature on interpreted systems, game theory, and epistemic logic. In the context of Epistemic Temporal Logic (ETL), we give a (to our knowledge) novel frame condition for perfect recall, which is local and can straightforwardly be translated to a defining formula in a language that only has next-step temporal operators. This frame condition also gives rise to a complete axiomatization for S5 ETL frames with perfect recall. We then consider how to extend and consolidate the notion of perfect recall in sub-S5 settings, where the various notions discussed are no longer equivalent.
2307.02730
Yuning Ding
Sheng-Lan Liu, Yu-Ning Ding, Gang Yan, Si-Fan Zhang, Jin-Rong Zhang, Wen-Yue Chen, Xue-Hai Xu
Fine-grained Action Analysis: A Multi-modality and Multi-task Dataset of Figure Skating
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Fine-grained action analysis on existing action datasets is challenged by insufficient action categories, coarse granularity, and limited modalities and tasks. In this paper, we propose a Multi-modality and Multi-task dataset of Figure Skating (MMFS) which was collected from the World Figure Skating Championships. MMFS, which supports action recognition and action quality assessment, captures RGB and skeleton modalities and collects action scores from 11671 clips with 256 categories, including spatial and temporal labels. The key contributions of our dataset fall into three aspects as follows. (1) Independent spatial and temporal categories are first proposed to further explore fine-grained action recognition and quality assessment. (2) MMFS first introduces the skeleton modality for complex fine-grained action quality assessment. (3) Our multi-modality and multi-task dataset encourages more action analysis models. To benchmark our dataset, we adopt RGB-based and skeleton-based baseline methods for action recognition and action quality assessment.
[ { "created": "Thu, 6 Jul 2023 02:30:56 GMT", "version": "v1" }, { "created": "Thu, 11 Jan 2024 08:24:16 GMT", "version": "v2" }, { "created": "Tue, 9 Apr 2024 13:18:22 GMT", "version": "v3" } ]
2024-04-10
[ [ "Liu", "Sheng-Lan", "" ], [ "Ding", "Yu-Ning", "" ], [ "Yan", "Gang", "" ], [ "Zhang", "Si-Fan", "" ], [ "Zhang", "Jin-Rong", "" ], [ "Chen", "Wen-Yue", "" ], [ "Xu", "Xue-Hai", "" ] ]
Fine-grained action analysis on existing action datasets is challenged by insufficient action categories, coarse granularity, and limited modalities and tasks. In this paper, we propose a Multi-modality and Multi-task dataset of Figure Skating (MMFS) which was collected from the World Figure Skating Championships. MMFS, which supports action recognition and action quality assessment, captures RGB and skeleton modalities and collects action scores from 11671 clips with 256 categories, including spatial and temporal labels. The key contributions of our dataset fall into three aspects as follows. (1) Independent spatial and temporal categories are first proposed to further explore fine-grained action recognition and quality assessment. (2) MMFS first introduces the skeleton modality for complex fine-grained action quality assessment. (3) Our multi-modality and multi-task dataset encourages more action analysis models. To benchmark our dataset, we adopt RGB-based and skeleton-based baseline methods for action recognition and action quality assessment.
0712.0840
Leonid (Aryeh) Kontorovich
Leonid (Aryeh) Kontorovich
A Universal Kernel for Learning Regular Languages
7 pages
The 5th International Workshop on Mining and Learning with Graphs, 2007
null
null
cs.LG cs.DM
null
We give a universal kernel that renders all the regular languages linearly separable. We are not able to compute this kernel efficiently and conjecture that it is intractable, but we do have an efficient $\epsilon$-approximation.
[ { "created": "Wed, 5 Dec 2007 22:25:03 GMT", "version": "v1" } ]
2007-12-07
[ [ "Leonid", "", "", "Aryeh" ], [ "Kontorovich", "", "" ] ]
We give a universal kernel that renders all the regular languages linearly separable. We are not able to compute this kernel efficiently and conjecture that it is intractable, but we do have an efficient $\epsilon$-approximation.
2103.12591
Donald Lee
Arash Pakbin, Xiaochen Wang, Bobak J. Mortazavi, Donald K.K. Lee
BoXHED2.0: Scalable boosting of dynamic survival analysis
27 pages
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Modern applications of survival analysis increasingly involve time-dependent covariates. The Python package BoXHED2.0 is a tree-boosted hazard estimator that is fully nonparametric, and is applicable to survival settings far more general than right-censoring, including recurring events and competing risks. BoXHED2.0 is also scalable to the point of being on the same order of speed as parametric boosted survival models, in part because its core is written in C++ and it also supports the use of GPUs and multicore CPUs. BoXHED2.0 is available from PyPI and also from www.github.com/BoXHED.
[ { "created": "Tue, 23 Mar 2021 14:46:09 GMT", "version": "v1" }, { "created": "Thu, 14 Oct 2021 02:17:06 GMT", "version": "v2" }, { "created": "Fri, 15 Oct 2021 02:38:20 GMT", "version": "v3" }, { "created": "Sun, 26 Feb 2023 00:06:29 GMT", "version": "v4" }, { "created": "Wed, 6 Sep 2023 21:24:10 GMT", "version": "v5" } ]
2023-09-08
[ [ "Pakbin", "Arash", "" ], [ "Wang", "Xiaochen", "" ], [ "Mortazavi", "Bobak J.", "" ], [ "Lee", "Donald K. K.", "" ] ]
Modern applications of survival analysis increasingly involve time-dependent covariates. The Python package BoXHED2.0 is a tree-boosted hazard estimator that is fully nonparametric, and is applicable to survival settings far more general than right-censoring, including recurring events and competing risks. BoXHED2.0 is also scalable to the point of being on the same order of speed as parametric boosted survival models, in part because its core is written in C++ and it also supports the use of GPUs and multicore CPUs. BoXHED2.0 is available from PyPI and also from www.github.com/BoXHED.
2210.17004
Aiwei Liu
Aiwei Liu, Honghai Yu, Xuming Hu, Shu'ang Li, Li Lin, Fukun Ma, Yawen Yang, Lijie Wen
Character-level White-Box Adversarial Attacks against Transformers via Attachable Subwords Substitution
13 pages, 3 figures. EMNLP 2022
EMNLP 2022
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We propose the first character-level white-box adversarial attack method against transformer models. The intuition of our method comes from the observation that words are split into subtokens before being fed into the transformer models, and that substituting one subtoken for a close one has a similar effect to modifying a character. Our method mainly contains three steps. First, a gradient-based method is adopted to find the most vulnerable words in the sentence. Then we split the selected words into subtokens to replace the original tokenization result from the transformer tokenizer. Finally, we utilize an adversarial loss to guide the substitution of attachable subtokens, in which the Gumbel-softmax trick is introduced to ensure gradient propagation. Meanwhile, we introduce visual and length constraints in the optimization process to achieve minimal character modifications. Extensive experiments on both sentence-level and token-level tasks demonstrate that our method outperforms previous attack methods in terms of success rate and edit distance. Furthermore, human evaluation verifies that our adversarial examples preserve their original labels.
[ { "created": "Mon, 31 Oct 2022 01:46:29 GMT", "version": "v1" } ]
2022-11-01
[ [ "Liu", "Aiwei", "" ], [ "Yu", "Honghai", "" ], [ "Hu", "Xuming", "" ], [ "Li", "Shu'ang", "" ], [ "Lin", "Li", "" ], [ "Ma", "Fukun", "" ], [ "Yang", "Yawen", "" ], [ "Wen", "Lijie", "" ] ]
We propose the first character-level white-box adversarial attack method against transformer models. The intuition of our method comes from the observation that words are split into subtokens before being fed into the transformer models, and that substituting one subtoken for a close one has a similar effect to modifying a character. Our method mainly contains three steps. First, a gradient-based method is adopted to find the most vulnerable words in the sentence. Then we split the selected words into subtokens to replace the original tokenization result from the transformer tokenizer. Finally, we utilize an adversarial loss to guide the substitution of attachable subtokens, in which the Gumbel-softmax trick is introduced to ensure gradient propagation. Meanwhile, we introduce visual and length constraints in the optimization process to achieve minimal character modifications. Extensive experiments on both sentence-level and token-level tasks demonstrate that our method outperforms previous attack methods in terms of success rate and edit distance. Furthermore, human evaluation verifies that our adversarial examples preserve their original labels.
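The abstract's third step relies on the Gumbel-softmax trick to keep discrete subtoken substitution differentiable. The following is a minimal, illustrative sketch of the trick itself (a hypothetical standalone implementation, not the authors' code; in practice the softmax would run inside an autodiff framework such as PyTorch):

```python
import math
import random

def gumbel_noise(rng):
    # Gumbel(0, 1) sample via inverse transform: -log(-log(U)), U ~ Uniform(0, 1)
    u = rng.uniform(1e-9, 1.0 - 1e-9)
    return -math.log(-math.log(u))

def gumbel_softmax(logits, tau=1.0, seed=0):
    """Relaxed one-hot sample from a categorical distribution: add
    Gumbel noise to each logit, then apply a temperature-scaled
    softmax. Small tau -> nearly one-hot, while (in an autodiff
    framework) gradients can still flow through the logits."""
    rng = random.Random(seed)
    noisy = [(l + gumbel_noise(rng)) / tau for l in logits]
    m = max(noisy)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in noisy]
    z = sum(exps)
    return [e / z for e in exps]

probs = gumbel_softmax([2.0, 0.5, -1.0], tau=0.5)
assert abs(sum(probs) - 1.0) < 1e-9  # valid probability vector
```

With a small temperature `tau`, the output concentrates near a one-hot vector, so the discrete choice of substitution is approximated while the optimization stays differentiable.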
1806.06676
Richard Vogl
Richard Vogl and Gerhard Widmer and Peter Knees
Towards multi-instrument drum transcription
Published in Proceedings of the 21th International Conference on Digital Audio Effects (DAFx18), 4 - 8 September, 2018, Aveiro, Portugal
null
null
null
cs.SD cs.IR cs.NE eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic drum transcription, a subtask of the more general automatic music transcription, deals with extracting drum instrument note onsets from an audio source. Recently, progress in transcription performance has been made using non-negative matrix factorization as well as deep learning methods. However, these works primarily focus on transcribing three drum instruments only: snare drum, bass drum, and hi-hat. Yet, for many applications, the ability to transcribe more drum instruments which make up standard drum kits used in western popular music would be desirable. In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments. First, the shortcomings of publicly available datasets in this context are discussed. To overcome these limitations, a larger synthetic dataset is introduced. Then, methods to train models using the new dataset focusing on generalization to real world data are investigated. Finally, the trained models are evaluated on publicly available datasets and results are discussed. The contributions of this work comprise: (i.) a large-scale synthetic dataset for drum transcription, (ii.) first steps towards an automatic drum transcription system that supports a larger range of instruments by evaluating and discussing training setups and the impact of datasets in this context, and (iii.) a publicly available set of trained models for drum transcription. Additional materials are available at http://ifs.tuwien.ac.at/~vogl/dafx2018
[ { "created": "Mon, 18 Jun 2018 13:45:48 GMT", "version": "v1" }, { "created": "Wed, 3 Oct 2018 11:56:38 GMT", "version": "v2" } ]
2018-10-04
[ [ "Vogl", "Richard", "" ], [ "Widmer", "Gerhard", "" ], [ "Knees", "Peter", "" ] ]
Automatic drum transcription, a subtask of the more general automatic music transcription, deals with extracting drum instrument note onsets from an audio source. Recently, progress in transcription performance has been made using non-negative matrix factorization as well as deep learning methods. However, these works primarily focus on transcribing three drum instruments only: snare drum, bass drum, and hi-hat. Yet, for many applications, the ability to transcribe more drum instruments which make up standard drum kits used in western popular music would be desirable. In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments. First, the shortcomings of publicly available datasets in this context are discussed. To overcome these limitations, a larger synthetic dataset is introduced. Then, methods to train models using the new dataset focusing on generalization to real world data are investigated. Finally, the trained models are evaluated on publicly available datasets and results are discussed. The contributions of this work comprise: (i.) a large-scale synthetic dataset for drum transcription, (ii.) first steps towards an automatic drum transcription system that supports a larger range of instruments by evaluating and discussing training setups and the impact of datasets in this context, and (iii.) a publicly available set of trained models for drum transcription. Additional materials are available at http://ifs.tuwien.ac.at/~vogl/dafx2018
2309.05662
Hongyu Li
Hongyu Li, Snehal Dikhale, Soshi Iba, Nawid Jamali
ViHOPE: Visuotactile In-Hand Object 6D Pose Estimation with Shape Completion
Accepted by RA-L
null
10.1109/LRA.2023.3313941
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this letter, we introduce ViHOPE, a novel framework for estimating the 6D pose of an in-hand object using visuotactile perception. Our key insight is that the accuracy of the 6D object pose estimate can be improved by explicitly completing the shape of the object. To this end, we introduce a novel visuotactile shape completion module that uses a conditional Generative Adversarial Network to complete the shape of an in-hand object based on a volumetric representation. This approach improves over prior works that directly regress visuotactile observations to a 6D pose. By explicitly completing the shape of the in-hand object and jointly optimizing the shape completion and pose estimation tasks, we improve the accuracy of the 6D object pose estimate. We train and test our model on a synthetic dataset and compare it with the state-of-the-art. In the visuotactile shape completion task, we outperform the state-of-the-art by 265% using the Intersection over Union metric and achieve an 88% lower Chamfer Distance. In the visuotactile pose estimation task, we present results that suggest our framework reduces position and angular errors by 35% and 64%, respectively. Furthermore, we ablate our framework to confirm the gain on the 6D object pose estimate from explicitly completing the shape. Ultimately, we show that our framework produces models that are robust to sim-to-real transfer on a real-world robot platform.
[ { "created": "Mon, 11 Sep 2023 17:58:14 GMT", "version": "v1" } ]
2023-09-12
[ [ "Li", "Hongyu", "" ], [ "Dikhale", "Snehal", "" ], [ "Iba", "Soshi", "" ], [ "Jamali", "Nawid", "" ] ]
In this letter, we introduce ViHOPE, a novel framework for estimating the 6D pose of an in-hand object using visuotactile perception. Our key insight is that the accuracy of the 6D object pose estimate can be improved by explicitly completing the shape of the object. To this end, we introduce a novel visuotactile shape completion module that uses a conditional Generative Adversarial Network to complete the shape of an in-hand object based on a volumetric representation. This approach improves over prior works that directly regress visuotactile observations to a 6D pose. By explicitly completing the shape of the in-hand object and jointly optimizing the shape completion and pose estimation tasks, we improve the accuracy of the 6D object pose estimate. We train and test our model on a synthetic dataset and compare it with the state-of-the-art. In the visuotactile shape completion task, we outperform the state-of-the-art by 265% using the Intersection over Union metric and achieve an 88% lower Chamfer Distance. In the visuotactile pose estimation task, we present results that suggest our framework reduces position and angular errors by 35% and 64%, respectively. Furthermore, we ablate our framework to confirm the gain on the 6D object pose estimate from explicitly completing the shape. Ultimately, we show that our framework produces models that are robust to sim-to-real transfer on a real-world robot platform.
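The shape-completion gains above are reported with the Intersection over Union metric on volumetric grids. As a point of reference, IoU for voxel occupancy can be sketched as follows (an illustrative helper, not the paper's evaluation code):

```python
def voxel_iou(a, b):
    """Intersection over Union of two voxel occupancy grids, each
    given as a set of occupied (x, y, z) cells."""
    inter = len(a & b)   # cells occupied in both grids
    union = len(a | b)   # cells occupied in either grid
    return inter / union if union else 1.0

pred = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}
gt   = {(0, 0, 0), (0, 0, 1), (1, 0, 0)}
print(voxel_iou(pred, gt))  # 2 shared cells / 4 total = 0.5
```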
2009.03281
Mohamed Hefeeda
Amgad Ahmed, Suhong Kim, Mohamed Elgharib, Mohamed Hefeeda
User-assisted Video Reflection Removal
null
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reflections in videos are obstructions that often occur when videos are taken behind reflective surfaces like glass. These reflections reduce the quality of such videos, lead to information loss and degrade the accuracy of many computer vision algorithms. A video containing reflections is a combination of background and reflection layers. Thus, reflection removal is equivalent to decomposing the video into two layers. This, however, is a challenging and ill-posed problem as there is an infinite number of valid decompositions. To address this problem, we propose a user-assisted method for video reflection removal. We rely on both spatial and temporal information and utilize sparse user hints to help improve separation. The key idea of the proposed method is to use motion cues to separate the background layer from the reflection layer with minimal user assistance. We show that user-assistance significantly improves the layer separation results. We implement and evaluate the proposed method through quantitative and qualitative results on real and synthetic videos. Our experiments show that the proposed method successfully removes reflection from video sequences, does not introduce visual distortions, and significantly outperforms the state-of-the-art reflection removal methods in the literature.
[ { "created": "Mon, 7 Sep 2020 17:42:40 GMT", "version": "v1" } ]
2020-09-08
[ [ "Ahmed", "Amgad", "" ], [ "Kim", "Suhong", "" ], [ "Elgharib", "Mohamed", "" ], [ "Hefeeda", "Mohamed", "" ] ]
Reflections in videos are obstructions that often occur when videos are taken behind reflective surfaces like glass. These reflections reduce the quality of such videos, lead to information loss and degrade the accuracy of many computer vision algorithms. A video containing reflections is a combination of background and reflection layers. Thus, reflection removal is equivalent to decomposing the video into two layers. This, however, is a challenging and ill-posed problem as there is an infinite number of valid decompositions. To address this problem, we propose a user-assisted method for video reflection removal. We rely on both spatial and temporal information and utilize sparse user hints to help improve separation. The key idea of the proposed method is to use motion cues to separate the background layer from the reflection layer with minimal user assistance. We show that user-assistance significantly improves the layer separation results. We implement and evaluate the proposed method through quantitative and qualitative results on real and synthetic videos. Our experiments show that the proposed method successfully removes reflection from video sequences, does not introduce visual distortions, and significantly outperforms the state-of-the-art reflection removal methods in the literature.
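As a point of comparison for the motion-cue idea, a classic fully automatic baseline separates a static background from a moving reflection by taking a per-pixel temporal median across frames. The sketch below illustrates only that baseline; it is not the user-assisted method proposed in the paper:

```python
from statistics import median

def separate_background(frames):
    """Per-pixel temporal median over grayscale frames (lists of
    rows): if the reflection highlight moves between frames while
    the background stays put, the median recovers the background."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

# Background is a flat value of 10; the reflection adds a bright
# spot at a different pixel in each frame, so each pixel is
# contaminated in at most one of the three frames.
frames = [
    [[60, 10], [10, 10]],
    [[10, 60], [10, 10]],
    [[10, 10], [60, 10]],
]
print(separate_background(frames))  # [[10, 10], [10, 10]]
```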
2109.00969
Robin Haunschild
Robin Haunschild and Lutz Bornmann
Reference Publication Year Spectroscopy (RPYS) in practice: A software tutorial
29 pages, 6 figures, and 5 tables
Scientometrics, 2022
10.1007/s11192-022-04369-8
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the course of organizing Workshop III, entitled "Cited References Analysis Using CRExplorer", at the International Conference of the International Society for Scientometrics and Informetrics (ISSI2021), we prepared three reference publication year spectroscopy (RPYS) analyses: (i) papers published in Journal of Informetrics; (ii) papers regarding the topic altmetrics; and (iii) papers published by Ludo Waltman (we selected this researcher since he received the Derek de Solla Price Memorial Medal during the ISSI2021 conference). The first RPYS analysis was presented live at the workshop, and the second and third analyses were left for the participants to undertake after the workshop. Here, we present the results of all three RPYS analyses. The three analyses revealed quite different seminal papers, with a few overlaps. Many of the foundational papers in the field of scientometrics (e.g., distributions of publications and citations, citation network and co-citation analyses, and citation analysis with the aim of impact measurement and research evaluation) were retrieved as seminal papers of the papers published in Journal of Informetrics. Papers discussing the deficiencies of citation-based impact measurements and comparisons between altmetrics and citations were mainly retrieved as seminal papers of the topic altmetrics. The RPYS analysis of the paper set published by Ludo Waltman mainly retrieved papers about network analyses, citation relations, and citation impact measurement.
[ { "created": "Thu, 2 Sep 2021 14:13:02 GMT", "version": "v1" }, { "created": "Wed, 6 Oct 2021 15:59:58 GMT", "version": "v2" }, { "created": "Wed, 16 Feb 2022 15:17:50 GMT", "version": "v3" } ]
2022-04-29
[ [ "Haunschild", "Robin", "" ], [ "Bornmann", "Lutz", "" ] ]
In the course of organizing Workshop III, entitled "Cited References Analysis Using CRExplorer", at the International Conference of the International Society for Scientometrics and Informetrics (ISSI2021), we prepared three reference publication year spectroscopy (RPYS) analyses: (i) papers published in Journal of Informetrics; (ii) papers regarding the topic altmetrics; and (iii) papers published by Ludo Waltman (we selected this researcher since he received the Derek de Solla Price Memorial Medal during the ISSI2021 conference). The first RPYS analysis was presented live at the workshop, and the second and third analyses were left for the participants to undertake after the workshop. Here, we present the results of all three RPYS analyses. The three analyses revealed quite different seminal papers, with a few overlaps. Many of the foundational papers in the field of scientometrics (e.g., distributions of publications and citations, citation network and co-citation analyses, and citation analysis with the aim of impact measurement and research evaluation) were retrieved as seminal papers of the papers published in Journal of Informetrics. Papers discussing the deficiencies of citation-based impact measurements and comparisons between altmetrics and citations were mainly retrieved as seminal papers of the topic altmetrics. The RPYS analysis of the paper set published by Ludo Waltman mainly retrieved papers about network analyses, citation relations, and citation impact measurement.
2406.13679
Vlad-Andrei Badoiu
Mihai-Valentin Dumitru and Vlad-Andrei B\u{a}doiu and Costin Raiciu
Prose-to-P4: Leveraging High Level Languages
null
null
null
null
cs.NI cs.LG
http://creativecommons.org/licenses/by/4.0/
Languages such as P4 and NPL have enabled a wide and diverse range of networking applications that take advantage of programmable dataplanes. However, software development in these languages is difficult. To address this issue, high-level languages have been designed to offer programmers powerful abstractions that reduce the time, effort and domain-knowledge required for developing networking applications. These languages are then translated by a compiler into P4/NPL code. Inspired by the recent success of Large Language Models (LLMs) in the task of code generation, we propose to raise the level of abstraction even higher, employing LLMs to translate prose into high-level networking code. We analyze the problem, focusing on the motivation and opportunities, as well as the challenges involved and sketch out a roadmap for the development of a system that can generate high-level dataplane code from natural language instructions. We present some promising preliminary results on generating Lucid code from natural language.
[ { "created": "Wed, 19 Jun 2024 16:32:27 GMT", "version": "v1" } ]
2024-06-21
[ [ "Dumitru", "Mihai-Valentin", "" ], [ "Bădoiu", "Vlad-Andrei", "" ], [ "Raiciu", "Costin", "" ] ]
Languages such as P4 and NPL have enabled a wide and diverse range of networking applications that take advantage of programmable dataplanes. However, software development in these languages is difficult. To address this issue, high-level languages have been designed to offer programmers powerful abstractions that reduce the time, effort and domain-knowledge required for developing networking applications. These languages are then translated by a compiler into P4/NPL code. Inspired by the recent success of Large Language Models (LLMs) in the task of code generation, we propose to raise the level of abstraction even higher, employing LLMs to translate prose into high-level networking code. We analyze the problem, focusing on the motivation and opportunities, as well as the challenges involved and sketch out a roadmap for the development of a system that can generate high-level dataplane code from natural language instructions. We present some promising preliminary results on generating Lucid code from natural language.
2308.09546
Shu Wang
Shu Wang, Kun Sun, Qi Li
Compensating Removed Frequency Components: Thwarting Voice Spectrum Reduction Attacks
Accepted by 2024 Network and Distributed System Security Symposium (NDSS'24)
null
10.14722/ndss.2024.23150
null
cs.CR cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic speech recognition (ASR) provides diverse audio-to-text services for humans to communicate with machines. However, recent research reveals ASR systems are vulnerable to various malicious audio attacks. In particular, by removing the non-essential frequency components, a new spectrum reduction attack can generate adversarial audios that can be perceived by humans but cannot be correctly interpreted by ASR systems. It raises a new challenge for content moderation solutions to detect harmful content in audio and video available on social media platforms. In this paper, we propose an acoustic compensation system named ACE to counter the spectrum reduction attacks over ASR systems. Our system design is based on two observations, namely, frequency component dependencies and perturbation sensitivity. First, since the Discrete Fourier Transform computation inevitably introduces spectral leakage and aliasing effects to the audio frequency spectrum, the frequency components with similar frequencies will have a high correlation. Thus, considering the intrinsic dependencies between neighboring frequency components, it is possible to recover more of the original audio by compensating for the removed components based on the remaining ones. Second, since the removed components in the spectrum reduction attacks can be regarded as an inverse of adversarial noise, the attack success rate will decrease when the adversarial audio is replayed in an over-the-air scenario. Hence, we can model the acoustic propagation process to add over-the-air perturbations into the attacked audio. We implement a prototype of ACE and the experiments show ACE can effectively reduce up to 87.9% of ASR inference errors caused by spectrum reduction attacks. Also, by analyzing residual errors, we summarize six general types of ASR inference errors and investigate the error causes and potential mitigation solutions.
[ { "created": "Fri, 18 Aug 2023 13:23:26 GMT", "version": "v1" } ]
2023-08-21
[ [ "Wang", "Shu", "" ], [ "Sun", "Kun", "" ], [ "Li", "Qi", "" ] ]
Automatic speech recognition (ASR) provides diverse audio-to-text services for humans to communicate with machines. However, recent research reveals ASR systems are vulnerable to various malicious audio attacks. In particular, by removing the non-essential frequency components, a new spectrum reduction attack can generate adversarial audios that can be perceived by humans but cannot be correctly interpreted by ASR systems. It raises a new challenge for content moderation solutions to detect harmful content in audio and video available on social media platforms. In this paper, we propose an acoustic compensation system named ACE to counter the spectrum reduction attacks over ASR systems. Our system design is based on two observations, namely, frequency component dependencies and perturbation sensitivity. First, since the Discrete Fourier Transform computation inevitably introduces spectral leakage and aliasing effects to the audio frequency spectrum, the frequency components with similar frequencies will have a high correlation. Thus, considering the intrinsic dependencies between neighboring frequency components, it is possible to recover more of the original audio by compensating for the removed components based on the remaining ones. Second, since the removed components in the spectrum reduction attacks can be regarded as an inverse of adversarial noise, the attack success rate will decrease when the adversarial audio is replayed in an over-the-air scenario. Hence, we can model the acoustic propagation process to add over-the-air perturbations into the attacked audio. We implement a prototype of ACE and the experiments show ACE can effectively reduce up to 87.9% of ASR inference errors caused by spectrum reduction attacks. Also, by analyzing residual errors, we summarize six general types of ASR inference errors and investigate the error causes and potential mitigation solutions.
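The first design observation, that spectral leakage makes neighboring frequency components strongly correlated, can be reproduced in a few lines: a sinusoid whose frequency falls between DFT bins spreads its energy over the adjacent bins. This is an illustration of that observation only, not part of ACE:

```python
import cmath
import math

def dft(x):
    # Naive O(n^2) discrete Fourier transform, sufficient for a demo.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

n = 64
f = 10.5  # falls between bins 10 and 11, so energy must leak
signal = [math.cos(2 * math.pi * f * t / n) for t in range(n)]
mags = [abs(c) for c in dft(signal)]

# Bins 10 and 11 share the peak, and even bin 12 receives far more
# energy than a distant bin: neighboring components are correlated,
# which is the dependency that compensation can exploit.
assert mags[10] > 5 and mags[11] > 5
assert mags[12] > mags[30]
```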
2203.14809
Sarah Winkler
Paolo Felli, Marco Montali, Sarah Winkler
Soundness of Data-Aware Processes with Arithmetic Conditions
null
null
null
null
cs.LO cs.AI
http://creativecommons.org/licenses/by/4.0/
Data-aware processes represent and integrate structural and behavioural constraints in a single model, and are thus increasingly investigated in business process management and information systems engineering. In this spectrum, Data Petri nets (DPNs) have gained increasing popularity thanks to their ability to balance simplicity with expressiveness. The interplay of data and control-flow makes checking the correctness of such models, specifically the well-known property of soundness, crucial and challenging. A major shortcoming of previous approaches for checking soundness of DPNs is that they consider data conditions without arithmetic, an essential feature when dealing with real-world, concrete applications. In this paper, we attack this open problem by providing a foundational and operational framework for assessing soundness of DPNs enriched with arithmetic data conditions. The framework comes with a proof-of-concept implementation that, instead of relying on ad-hoc techniques, employs off-the-shelf established SMT technologies. The implementation is validated on a collection of examples from the literature, and on synthetic variants constructed from such examples.
[ { "created": "Mon, 28 Mar 2022 14:46:10 GMT", "version": "v1" } ]
2022-03-29
[ [ "Felli", "Paolo", "" ], [ "Montali", "Marco", "" ], [ "Winkler", "Sarah", "" ] ]
Data-aware processes represent and integrate structural and behavioural constraints in a single model, and are thus increasingly investigated in business process management and information systems engineering. In this spectrum, Data Petri nets (DPNs) have gained increasing popularity thanks to their ability to balance simplicity with expressiveness. The interplay of data and control-flow makes checking the correctness of such models, specifically the well-known property of soundness, crucial and challenging. A major shortcoming of previous approaches for checking soundness of DPNs is that they consider data conditions without arithmetic, an essential feature when dealing with real-world, concrete applications. In this paper, we attack this open problem by providing a foundational and operational framework for assessing soundness of DPNs enriched with arithmetic data conditions. The framework comes with a proof-of-concept implementation that, instead of relying on ad-hoc techniques, employs off-the-shelf established SMT technologies. The implementation is validated on a collection of examples from the literature, and on synthetic variants constructed from such examples.
1812.03518
Petr Jancar
Petr Jancar
Equivalence of pushdown automata via first-order grammars
version accepted to JCSS
null
10.1016/j.jcss.2020.07.004
null
cs.LO cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A decidability proof for bisimulation equivalence of first-order grammars is given. It is an alternative proof for a result by S\'enizergues (1998, 2005) that subsumes his affirmative solution of the famous decidability question for deterministic pushdown automata. The presented proof is conceptually simpler, and a particular novelty is that it is not given as two semidecision procedures but it provides an explicit algorithm that might be amenable to a complexity analysis.
[ { "created": "Sun, 9 Dec 2018 16:44:37 GMT", "version": "v1" }, { "created": "Mon, 17 Aug 2020 11:22:53 GMT", "version": "v2" } ]
2020-08-18
[ [ "Jancar", "Petr", "" ] ]
A decidability proof for bisimulation equivalence of first-order grammars is given. It is an alternative proof for a result by S\'enizergues (1998, 2005) that subsumes his affirmative solution of the famous decidability question for deterministic pushdown automata. The presented proof is conceptually simpler, and a particular novelty is that it is not given as two semidecision procedures but it provides an explicit algorithm that might be amenable to a complexity analysis.
1909.09481
Yaoyao Zhong
Yaoyao Zhong and Weihong Deng
Adversarial Learning with Margin-based Triplet Embedding Regularization
Accepted by ICCV 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) have achieved great success on a variety of computer vision tasks; however, they are highly vulnerable to adversarial attacks. To address this problem, we propose to improve the local smoothness of the representation space by integrating a margin-based triplet embedding regularization term into the classification objective, so that the obtained model learns to resist adversarial examples. The regularization term consists of a two-step optimization that finds potential perturbations and penalizes them with a large margin in an iterative way. Experimental results on MNIST, CASIA-WebFace, VGGFace2 and MS-Celeb-1M reveal that our approach increases the robustness of the network against both feature and label adversarial attacks in simple object classification and deep face recognition.
[ { "created": "Fri, 20 Sep 2019 13:08:12 GMT", "version": "v1" } ]
2019-09-23
[ [ "Zhong", "Yaoyao", "" ], [ "Deng", "Weihong", "" ] ]
Deep neural networks (DNNs) have achieved great success on a variety of computer vision tasks; however, they are highly vulnerable to adversarial attacks. To address this problem, we propose to improve the local smoothness of the representation space by integrating a margin-based triplet embedding regularization term into the classification objective, so that the obtained model learns to resist adversarial examples. The regularization term consists of a two-step optimization that finds potential perturbations and penalizes them with a large margin in an iterative way. Experimental results on MNIST, CASIA-WebFace, VGGFace2 and MS-Celeb-1M reveal that our approach increases the robustness of the network against both feature and label adversarial attacks in simple object classification and deep face recognition.
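The regularization term described above builds on the standard margin-based triplet loss, max(0, d(a, p) - d(a, n) + margin). A minimal sketch of that loss follows (illustrative only; the paper applies it to perturbed embeddings inside the training loop):

```python
import math

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss that pushes the positive within `margin` of the
    anchor relative to the negative: max(0, d(a,p) - d(a,n) + m)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

a, p, n = [0.0, 0.0], [0.1, 0.0], [1.0, 0.0]
print(triplet_margin_loss(a, p, n))  # 0.1 - 1.0 + 0.2 < 0, so loss is 0.0
```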
1507.08073
Jian Yu
Jian Yu
Communication: Words and Conceptual Systems
13 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Words (phrases or symbols) play a key role in human life. Word (phrase or symbol) representation is the fundamental problem for knowledge representation and understanding. A word (phrase or symbol) usually represents the name of a category. However, it remains a challenge to represent a category in a way that makes it easily understood. In this paper, a new representation for a category is discussed, which can be considered a generalization of the classic set. In order to reduce representation complexity, the economy principle of category representation is proposed. The proposed category representation provides a powerful tool for analyzing conceptual systems, relations between words, communication, knowledge, and situations. More specifically, conceptual systems, word relations, and communication are mathematically defined and classified, e.g., into ideal conceptual systems, perfect communication, and so on; the relation between words and sentences is also studied, which shows that knowledge is words. Furthermore, how conceptual systems and words depend on situations is presented, and how truth is defined is also discussed.
[ { "created": "Wed, 29 Jul 2015 09:21:15 GMT", "version": "v1" }, { "created": "Tue, 15 Sep 2015 09:23:02 GMT", "version": "v10" }, { "created": "Wed, 16 Sep 2015 02:13:24 GMT", "version": "v11" }, { "created": "Wed, 28 Oct 2015 00:56:45 GMT", "version": "v12" }, { "created": "Mon, 16 Nov 2015 02:12:17 GMT", "version": "v13" }, { "created": "Fri, 4 Dec 2015 03:36:06 GMT", "version": "v14" }, { "created": "Sun, 2 Aug 2015 12:13:07 GMT", "version": "v2" }, { "created": "Mon, 24 Aug 2015 14:24:38 GMT", "version": "v3" }, { "created": "Tue, 25 Aug 2015 14:02:14 GMT", "version": "v4" }, { "created": "Wed, 26 Aug 2015 16:58:08 GMT", "version": "v5" }, { "created": "Thu, 27 Aug 2015 14:39:39 GMT", "version": "v6" }, { "created": "Mon, 31 Aug 2015 03:35:03 GMT", "version": "v7" }, { "created": "Sun, 6 Sep 2015 22:23:44 GMT", "version": "v8" }, { "created": "Wed, 9 Sep 2015 09:37:39 GMT", "version": "v9" } ]
2015-12-07
[ [ "Yu", "Jian", "" ] ]
Words (phrases or symbols) play a key role in human life. Word (phrase or symbol) representation is the fundamental problem for knowledge representation and understanding. A word (phrase or symbol) usually represents the name of a category. However, it remains a challenge to represent a category in a way that makes it easily understood. In this paper, a new representation for a category is discussed, which can be considered a generalization of the classic set. In order to reduce representation complexity, the economy principle of category representation is proposed. The proposed category representation provides a powerful tool for analyzing conceptual systems, relations between words, communication, knowledge, and situations. More specifically, conceptual systems, word relations, and communication are mathematically defined and classified into types such as the ideal conceptual system, perfect communication, and so on; the relation between words and sentences is also studied, which shows that knowledge is words. Furthermore, how conceptual systems and words depend on situations is presented, and how truth is defined is also discussed.
1309.3849
Tong-Wook Shinn
Tong-Wook Shinn and Tadao Takaoka
Efficient Graph Algorithms for Network Analysis
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The GC problem is to identify a pre-determined number of center vertices such that the distances or costs from (or to) the centers to (or from) other vertices is minimized. The bottleneck of a path is the minimum capacity of edges on the path. The Bottleneck Paths (BP) problem is to compute the paths that give us the maximum bottleneck values between pairs of vertices. The Graph Bottleneck (GB) problem is to find the minimum bottleneck value out of bottleneck paths for all possible pairs of vertices. We give two similar algorithms that are based on binary search to solve the 1-center GC problem and the GB problem on directed graphs with unit edge costs. We achieve $\tilde{O}(n^{2.373})$ worst case time complexity for both the 1-center GC problem and the GB problem, where $n$ is the number of vertices in the graph. This is better than the straightforward methods of solving the two problems in $O(n^{2.575})$ and $O(n^{2.688})$ time bounds, respectively. We then combine the Bottleneck Paths (BP) problem with the well known Shortest Paths (SP) problem to compute the shortest paths for all possible flow values. We call this problem the Shortest Paths for All Flows (SP-AF) problem. We show that if the flow demand is uncertain, but between two consecutive capacity values, the unique shortest path can be computed to push that flow. If the uncertainty stretches over two intervals, we need to prepare two shortest paths to accommodate the uncertainty, etc. In introducing this new problem, we define a new semi-ring called the distance/flow semi-ring, and show that the well known algorithm by Floyd can be used over the distance/flow semi-ring to solve the All Pairs Shortest Paths for All Flows (APSP-AF) problem.
[ { "created": "Mon, 16 Sep 2013 08:25:21 GMT", "version": "v1" } ]
2013-09-17
[ [ "Shinn", "Tong-Wook", "" ], [ "Takaoka", "Tadao", "" ] ]
The GC problem is to identify a pre-determined number of center vertices such that the distances or costs from (or to) the centers to (or from) other vertices are minimized. The bottleneck of a path is the minimum capacity of edges on the path. The Bottleneck Paths (BP) problem is to compute the paths that give us the maximum bottleneck values between pairs of vertices. The Graph Bottleneck (GB) problem is to find the minimum bottleneck value out of bottleneck paths for all possible pairs of vertices. We give two similar algorithms that are based on binary search to solve the 1-center GC problem and the GB problem on directed graphs with unit edge costs. We achieve $\tilde{O}(n^{2.373})$ worst case time complexity for both the 1-center GC problem and the GB problem, where $n$ is the number of vertices in the graph. This is better than the straightforward methods of solving the two problems in $O(n^{2.575})$ and $O(n^{2.688})$ time bounds, respectively. We then combine the Bottleneck Paths (BP) problem with the well-known Shortest Paths (SP) problem to compute the shortest paths for all possible flow values. We call this problem the Shortest Paths for All Flows (SP-AF) problem. We show that if the flow demand is uncertain, but lies between two consecutive capacity values, the unique shortest path can be computed to push that flow. If the uncertainty stretches over two intervals, we need to prepare two shortest paths to accommodate it, and so on. In introducing this new problem, we define a new semi-ring called the distance/flow semi-ring, and show that the well-known algorithm by Floyd can be used over the distance/flow semi-ring to solve the All Pairs Shortest Paths for All Flows (APSP-AF) problem.
cs/0412017
Salman Abdul Baset
Salman A. Baset and Henning Schulzrinne
An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol
null
null
null
CUCS-039-04
cs.NI cs.MM
null
Skype is a peer-to-peer VoIP client developed by KaZaa in 2003. Skype claims that it can work almost seamlessly across NATs and firewalls and has better voice quality than the MSN and Yahoo IM applications. It encrypts calls end-to-end, and stores user information in a decentralized fashion. Skype also supports instant messaging and conferencing. This report analyzes key Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups. Analysis is performed by careful study of Skype network traffic.
[ { "created": "Sun, 5 Dec 2004 03:56:57 GMT", "version": "v1" } ]
2008-07-09
[ [ "Baset", "Salman A.", "" ], [ "Schulzrinne", "Henning", "" ] ]
Skype is a peer-to-peer VoIP client developed by KaZaa in 2003. Skype claims that it can work almost seamlessly across NATs and firewalls and has better voice quality than the MSN and Yahoo IM applications. It encrypts calls end-to-end, and stores user information in a decentralized fashion. Skype also supports instant messaging and conferencing. This report analyzes key Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups. Analysis is performed by careful study of Skype network traffic.
2212.13329
Hisham A. Kholidy
Mohammed Abuzamak, Hisham Kholidy
UAV Based 5G Network: A Practical Survey Study
null
null
null
null
cs.NI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned aerial vehicles (UAVs) are anticipated to significantly contribute to the development of new wireless networks that could handle high-speed transmissions and enable wireless broadcasts. When compared to communications that rely on permanent infrastructure, UAVs offer a number of advantages, including flexible deployment, dependable line-of-sight (LoS) connection links, and more design degrees of freedom because of controlled mobility. Unmanned aerial vehicles (UAVs) combined with 5G networks and Internet of Things (IoT) components have the potential to completely transform a variety of industries. UAVs may transfer massive volumes of data in real-time by utilizing the low latency and high-speed abilities of 5G networks, opening up a variety of applications like remote sensing, precision farming, and disaster response. This study of UAV communication with regard to 5G/B5G WLANs is presented in this research. The three UAV-assisted MEC network scenarios also include the specifics for the allocation of resources and optimization. We also concentrate on the case where a UAV does task computation in addition to serving as a MEC server to examine wind farm turbines. This paper covers the key implementation difficulties of UAV-assisted MEC, such as optimum UAV deployment, wind models, and coupled trajectory-computation performance optimization, in order to promote widespread implementations of UAV-assisted MEC in practice. The primary problem for 5G and beyond 5G (B5G) is delivering broadband access to various device kinds. Prior to discussing associated research issues faced by the developing integrated network design, we first provide a brief overview of the background information as well as the networks that integrate space, aviation, and land.
[ { "created": "Tue, 27 Dec 2022 00:34:59 GMT", "version": "v1" } ]
2022-12-29
[ [ "Abuzamak", "Mohammed", "" ], [ "Kholidy", "Hisham", "" ] ]
Unmanned aerial vehicles (UAVs) are anticipated to significantly contribute to the development of new wireless networks that can handle high-speed transmissions and enable wireless broadcasts. Compared to communications that rely on fixed infrastructure, UAVs offer a number of advantages, including flexible deployment, dependable line-of-sight (LoS) connection links, and more degrees of design freedom owing to controlled mobility. UAVs combined with 5G networks and Internet of Things (IoT) components have the potential to transform a variety of industries. UAVs can transfer massive volumes of data in real time by exploiting the low latency and high speed of 5G networks, opening up applications such as remote sensing, precision farming, and disaster response. This paper presents a survey study of UAV communication with regard to 5G/B5G WLANs. For three UAV-assisted MEC network scenarios, it also details resource allocation and optimization. We further concentrate on the case where a UAV performs task computation, in addition to serving as a MEC server, to examine wind farm turbines. This paper covers the key implementation difficulties of UAV-assisted MEC, such as optimal UAV deployment, wind models, and coupled trajectory-computation performance optimization, in order to promote widespread practical implementations of UAV-assisted MEC. The primary challenge for 5G and beyond-5G (B5G) networks is delivering broadband access to diverse device types. Before discussing the research issues faced by the emerging integrated network design, we first provide a brief overview of the relevant background as well as networks that integrate space, air, and land.
2407.19286
Mikko Heikkil\"a
Mikko A. Heikkil\"a
On Joint Noise Scaling in Differentially Private Federated Learning with Multiple Local Steps
14 pages with appendix, 3 figures, 1 table
null
null
null
cs.LG cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
Federated learning is a distributed learning setting where the main aim is to train machine learning models without having to share raw data but only what is required for learning. To guarantee training data privacy and high-utility models, differential privacy and secure aggregation techniques are often combined with federated learning. However, with fine-grained protection granularities the currently existing techniques require the parties to communicate for each local optimisation step, if they want to fully benefit from the secure aggregation in terms of the resulting formal privacy guarantees. In this paper, we show how a simple new analysis allows the parties to perform multiple local optimisation steps while still benefiting from joint noise scaling when using secure aggregation. We show that our analysis enables higher utility models with guaranteed privacy protection under limited number of communication rounds.
[ { "created": "Sat, 27 Jul 2024 15:54:58 GMT", "version": "v1" } ]
2024-07-30
[ [ "Heikkilä", "Mikko A.", "" ] ]
Federated learning is a distributed learning setting where the main aim is to train machine learning models without having to share raw data, but only what is required for learning. To guarantee training data privacy and high-utility models, differential privacy and secure aggregation techniques are often combined with federated learning. However, with fine-grained protection granularities, the currently existing techniques require the parties to communicate for each local optimisation step if they want to fully benefit from secure aggregation in terms of the resulting formal privacy guarantees. In this paper, we show how a simple new analysis allows the parties to perform multiple local optimisation steps while still benefiting from joint noise scaling when using secure aggregation. We show that our analysis enables higher-utility models with guaranteed privacy protection under a limited number of communication rounds.
2305.04891
Chen Zhu
Chen Zhu, Liang Du, Hong Chen, Shuang Zhao, Zixun Sun, Xin Wang, Wenwu Zhu
DELTA: Dynamic Embedding Learning with Truncated Conscious Attention for CTR Prediction
null
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Click-Through Rate (CTR) prediction is a pivotal task in product and content recommendation, where learning effective feature embeddings is of great significance. However, traditional methods typically learn fixed feature representations without dynamically refining feature representations according to the context information, leading to suboptimal performance. Some recent approaches attempt to address this issue by learning bit-wise weights or augmented embeddings for feature representations, but suffer from uninformative or redundant features in the context. To tackle this problem, inspired by the Global Workspace Theory in conscious processing, which posits that only a specific subset of the product features are pertinent while the rest can be noisy and even detrimental to human-click behaviors, we propose a CTR model that enables Dynamic Embedding Learning with Truncated Conscious Attention for CTR prediction, termed DELTA. DELTA contains two key components: (I) conscious truncation module (CTM), which utilizes curriculum learning to apply adaptive truncation on attention weights to select the most critical feature in the context; (II) explicit embedding optimization (EEO), which applies an auxiliary task during training that directly and independently propagates the gradient from the loss layer to the embedding layer, thereby optimizing the embedding explicitly via linear feature crossing. Extensive experiments on five challenging CTR datasets demonstrate that DELTA achieves new state-of-art performance among current CTR methods.
[ { "created": "Wed, 3 May 2023 12:34:45 GMT", "version": "v1" }, { "created": "Tue, 13 Jun 2023 09:17:04 GMT", "version": "v2" }, { "created": "Tue, 5 Sep 2023 07:24:00 GMT", "version": "v3" } ]
2023-09-06
[ [ "Zhu", "Chen", "" ], [ "Du", "Liang", "" ], [ "Chen", "Hong", "" ], [ "Zhao", "Shuang", "" ], [ "Sun", "Zixun", "" ], [ "Wang", "Xin", "" ], [ "Zhu", "Wenwu", "" ] ]
Click-Through Rate (CTR) prediction is a pivotal task in product and content recommendation, where learning effective feature embeddings is of great significance. However, traditional methods typically learn fixed feature representations without dynamically refining them according to the context information, leading to suboptimal performance. Some recent approaches attempt to address this issue by learning bit-wise weights or augmented embeddings for feature representations, but suffer from uninformative or redundant features in the context. To tackle this problem, inspired by the Global Workspace Theory in conscious processing, which posits that only a specific subset of the product features are pertinent while the rest can be noisy and even detrimental to human-click behaviors, we propose a CTR model that enables Dynamic Embedding Learning with Truncated Conscious Attention for CTR prediction, termed DELTA. DELTA contains two key components: (I) a conscious truncation module (CTM), which utilizes curriculum learning to apply adaptive truncation on attention weights to select the most critical features in the context; (II) explicit embedding optimization (EEO), which applies an auxiliary task during training that directly and independently propagates the gradient from the loss layer to the embedding layer, thereby optimizing the embedding explicitly via linear feature crossing. Extensive experiments on five challenging CTR datasets demonstrate that DELTA achieves new state-of-the-art performance among current CTR methods.
1211.4441
Santhanakrishnan Boopalan
B. Santhana Krishnan and Animesh Kumar and D. Manjunath and Bikash K. Dey
On the Separability of Targets Using Binary Proximity Sensors
17 pages, 3 figures, Submitted to IEEE TMC
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem where a network of sensors has to detect the presence of targets at any of $n$ possible locations in a finite region. All such locations may not be occupied by a target. The data from sensors is fused to determine the set of locations that have targets. We term this the separability problem. In this paper, we address the separability of an asymptotically large number of static target locations by using binary proximity sensors. Two models for target locations are considered: (i) when target locations lie on a uniformly spaced grid; and, (ii) when target locations are i.i.d. uniformly distributed in the area. Sensor locations are i.i.d uniformly distributed in the same finite region, independent of target locations. We derive conditions on the sensing radius and the number of sensors required to achieve separability. Order-optimal scaling laws, on the number of sensors as a function of the number of target locations, for two types of separability requirements are derived. The robustness or security aspects of the above problem is also addressed. It is shown that in the presence of adversarial sensors, which toggle their sensed reading and inject binary noise, the scaling laws for separability remain unaffected.
[ { "created": "Mon, 19 Nov 2012 14:39:56 GMT", "version": "v1" } ]
2012-11-20
[ [ "Krishnan", "B. Santhana", "" ], [ "Kumar", "Animesh", "" ], [ "Manjunath", "D.", "" ], [ "Dey", "Bikash K.", "" ] ]
We consider the problem where a network of sensors has to detect the presence of targets at any of $n$ possible locations in a finite region. Not all such locations may be occupied by a target. The data from sensors is fused to determine the set of locations that have targets. We term this the separability problem. In this paper, we address the separability of an asymptotically large number of static target locations by using binary proximity sensors. Two models for target locations are considered: (i) when target locations lie on a uniformly spaced grid; and (ii) when target locations are i.i.d. uniformly distributed in the area. Sensor locations are i.i.d. uniformly distributed in the same finite region, independent of target locations. We derive conditions on the sensing radius and the number of sensors required to achieve separability. Order-optimal scaling laws, on the number of sensors as a function of the number of target locations, for two types of separability requirements are derived. The robustness and security aspects of the above problem are also addressed. It is shown that in the presence of adversarial sensors, which toggle their sensed readings and inject binary noise, the scaling laws for separability remain unaffected.
1305.4738
ManjuPrasad B
Manju Prasad, Andhe Dharani
A QoI based energy efficient clustering for dense wireless sensor network
9 Pages
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a wireless sensor network Quality of Information (QoI), Energy Efficiency, Redundant data avoidance, congestion control are the important metrics that affect the performance of wireless sensor network. As many approaches were proposed to increase the performance of a wireless sensor network among them clustering is one of the efficient approaches in sensor network. Many clustering algorithms concentrate mainly on power Optimization like FSCH, LEACH, and EELBCRP. There is necessity of the above metrics in wireless sensor network where nodes are densely deployed in a given network area. As the nodes are deployed densely there is maximum possibility of nodes appear in the sensing region of other nodes. So there exists an option that nodes have to send the information that is already reached the base station by its own cluster members or by members of other clusters. This mechanism will affect the QoI, Energy factor and congestion control of the wireless sensor networks. Even though clustering uses TDMA (Time Division Multiple Access) for avoiding congestion control for intra clustering data transmission, but it may fail in some critical situation. This paper proposed a energy efficient clustering which avoid data redundancy in a dense sensor network until the network becomes sparse and hence uses the TDMA efficiently during high density of the nodes.
[ { "created": "Tue, 21 May 2013 07:19:19 GMT", "version": "v1" } ]
2013-05-22
[ [ "Prasad", "Manju", "" ], [ "Dharani", "Andhe", "" ] ]
In a wireless sensor network, Quality of Information (QoI), energy efficiency, redundant data avoidance, and congestion control are important metrics that affect network performance. Many approaches have been proposed to increase the performance of a wireless sensor network; among them, clustering is one of the most efficient. Many clustering algorithms, such as FSCH, LEACH, and EELBCRP, concentrate mainly on power optimization. The above metrics are essential in wireless sensor networks where nodes are densely deployed in a given network area. As the nodes are deployed densely, there is a high probability that nodes appear in the sensing regions of other nodes. Consequently, nodes may send information that has already reached the base station through their own cluster members or through members of other clusters. This mechanism affects the QoI, energy efficiency, and congestion control of the wireless sensor network. Although clustering uses TDMA (Time Division Multiple Access) to avoid congestion in intra-cluster data transmission, it may fail in some critical situations. This paper proposes an energy-efficient clustering scheme that avoids data redundancy in a dense sensor network until the network becomes sparse and hence uses TDMA efficiently when node density is high.
1509.08973
Tadahiro Taniguchi
Tadahiro Taniguchi, Takayuki Nagai, Tomoaki Nakamura, Naoto Iwahashi, Tetsuya Ogata, and Hideki Asoh
Symbol Emergence in Robotics: A Survey
submitted to Advanced Robotics
Advanced Robotics, 30:11-12, 706-728, 2016
10.1080/01691864.2016.1164622
null
cs.AI cs.CL cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding human social interactions and developing a robot that can smoothly communicate with human users in the long term, requires an understanding of the dynamics of symbol systems and is crucially important. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-art research topics concerning SER, e.g., multimodal categorization, word discovery, and a double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory--motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.
[ { "created": "Tue, 29 Sep 2015 23:16:48 GMT", "version": "v1" } ]
2023-01-18
[ [ "Taniguchi", "Tadahiro", "" ], [ "Nagai", "Takayuki", "" ], [ "Nakamura", "Tomoaki", "" ], [ "Iwahashi", "Naoto", "" ], [ "Ogata", "Tetsuya", "" ], [ "Asoh", "Hideki", "" ] ]
Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important for understanding human social interactions and for developing a robot that can smoothly communicate with human users in the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.