Dataset schema (field: type, observed range):
- id: string, 9-10 chars
- submitter: string, 1-64 chars
- authors: string, 4-20.7k chars
- title: string, 4-246 chars
- comments: string, 1-523 chars
- journal-ref: string, 4-404 chars
- doi: string, 11-153 chars
- report-no: string, 2-254 chars
- categories: string, 5-98 chars
- license: string, 9 classes
- orig_abstract: string, 14-3.35k chars
- versions: list, 1-60 items
- update_date: string, 10 chars
- authors_parsed: list, 1-1.35k items
- abstract: string, 11-3.34k chars (identical to orig_abstract in the rows shown)
1805.01965
Zhongxing Yu
Zhongxing Yu, Chenggang Bai, Lionel Seinturier, Martin Monperrus
Characterizing the Usage, Evolution and Impact of Java Annotations in Practice
To appear in IEEE Transactions on Software Engineering
IEEE Transactions on Software Engineering, 2019
10.1109/TSE.2019.2910516
null
cs.SE cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Annotations were formally introduced into Java with Java 5. Since then, annotations have been widely used by the Java community for different purposes, such as compiler guidance and runtime processing. Despite this ever-growing use, there is still limited empirical knowledge about the actual usage of annotations in practice, the changes made to annotations during software evolution, and the potential impact of annotations on code quality. To fill this gap, we perform the first large-scale empirical study of Java annotations on 1,094 notable open-source projects hosted on GitHub. Our study systematically investigates annotation usage, annotation evolution, and annotation impact, and generates 10 novel and important findings. We also present the implications of our findings, which offer guidance to developers, researchers, tool builders, and language or library designers seeking to improve all facets of Java annotation engineering.
[ { "created": "Fri, 4 May 2018 23:29:19 GMT", "version": "v1" }, { "created": "Fri, 5 Apr 2019 15:02:59 GMT", "version": "v2" } ]
2019-04-16
[ [ "Yu", "Zhongxing", "" ], [ "Bai", "Chenggang", "" ], [ "Seinturier", "Lionel", "" ], [ "Monperrus", "Martin", "" ] ]
cs/0601135
Robert Brijder
Robert Brijder, Hendrik Jan Hoogeboom, Michael Muskulus
Strategies of Loop Recombination in Ciliates
22 pages, 14 figures
Discrete Applied Mathematics, v. 156, 1736-1753, 2008
10.1016/j.dam.2007.08.032
LIACS Technical Report 2006-01
cs.LO q-bio.GN
null
Gene assembly in ciliates is an extremely involved DNA transformation process that transforms one nucleus, the micronucleus, into another, functionally different nucleus, the macronucleus. In this paper we characterize which loop recombination operations (one of the three types of molecular operations that accomplish gene assembly) can possibly be applied in the transformation of a given gene from its micronuclear form to its macronuclear form. We also characterize the order in which these loop recombination operations are applicable. This is done in the abstract and more general setting of so-called legal strings.
[ { "created": "Tue, 31 Jan 2006 17:43:36 GMT", "version": "v1" } ]
2014-03-26
[ [ "Brijder", "Robert", "" ], [ "Hoogeboom", "Hendrik Jan", "" ], [ "Muskulus", "Michael", "" ] ]
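To make the formal setting concrete, here is a minimal Python sketch of legal strings and one loop recombination step, under the simplifying assumption that loop recombination acts as the string negative rule (deleting a direct repeat pp); the pointer alphabet and reduced model are illustrative, not the paper's exact formalism.

```python
# Minimal sketch: a legal string contains each pointer exactly twice; one
# loop recombination step is modeled as deleting a direct repeat 'pp'.
from collections import Counter

def is_legal(s: str) -> bool:
    """A legal string contains each pointer (case-insensitive) exactly twice."""
    return all(n == 2 for n in Counter(c.lower() for c in s).values())

def loop_recombinations(s: str):
    """Yield every string reachable by one loop recombination (delete 'pp')."""
    for i in range(len(s) - 1):
        if s[i] == s[i + 1]:
            yield s[:i] + s[i + 2:]

assert is_legal("2332")
print(list(loop_recombinations("2332")))  # ['22'] -- the repeat '33' is removed
```

Enumerating these steps over all reachable strings gives exactly the kind of applicability-and-order questions the abstract describes.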
1912.09539
Hamidreza Kasaei
S. Hamidreza Kasaei
Interactive Open-Ended Learning for 3D Object Recognition
PhD thesis
null
null
null
cs.RO cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This thesis contributes in several important ways to the research area of 3D object category learning and recognition. To cope with the limitations of existing approaches, we look at human cognition, in particular at the fact that human beings learn to recognize object categories ceaselessly over time. This ability to refine knowledge from the set of accumulated experiences facilitates adaptation to new environments. Inspired by this capability, we seek to create a cognitive object perception and perceptual learning architecture that can learn 3D object categories in an open-ended fashion. In this context, "open-ended" implies that the set of categories to be learned is not known in advance, and the training instances are extracted from the actual experiences of a robot, and thus become gradually available rather than being available from the beginning of the learning process. In particular, this architecture provides perception capabilities that allow robots to incrementally learn object categories from the set of accumulated experiences and reason about how to perform complex tasks. The framework integrates detection, tracking, teaching, learning, and recognition of objects. An extensive set of systematic experiments, in multiple experimental settings, was carried out to thoroughly evaluate the described learning approaches. Experimental results show that the proposed system is able to interact with human users, learn new object categories over time, and perform complex tasks. The contributions presented in this thesis have been fully implemented and evaluated on different standard object and scene datasets and empirically evaluated on different robotic platforms.
[ { "created": "Thu, 19 Dec 2019 20:46:51 GMT", "version": "v1" } ]
2019-12-23
[ [ "Kasaei", "S. Hamidreza", "" ] ]
2312.15242
Rashik Shrestha
Rashik Shrestha, Bishad Koju, Abhigyan Bhusal, Danda Pani Paudel, François Rameau
CaLDiff: Camera Localization in NeRF via Pose Diffusion
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the widespread use of NeRF-based implicit 3D representations, the need for camera localization within the same representation becomes apparent. Doing so not only simplifies the localization process -- by avoiding a separate localization pipeline outside the NeRF -- but also has the potential to offer enhanced localization. This paper studies the problem of localizing cameras in NeRF using a diffusion model for camera pose adjustment. More specifically, given a pre-trained NeRF model, we train a diffusion model that iteratively updates randomly initialized camera poses, conditioned on the image to be localized. At test time, a new camera is localized in two steps: first, coarse localization using the proposed pose diffusion process, followed by local refinement steps of a pose inversion process in NeRF. In fact, the proposed camera localization by pose diffusion (CaLDiff) method also integrates the pose inversion steps within the diffusion process. Such integration offers significantly better localization, thanks to our downstream refinement-aware diffusion process. Our exhaustive experiments on challenging real-world data validate our method, providing significantly better results than the compared methods and the established baselines. Our source code will be made publicly available.
[ { "created": "Sat, 23 Dec 2023 12:36:01 GMT", "version": "v1" } ]
2023-12-27
[ [ "Shrestha", "Rashik", "" ], [ "Koju", "Bishad", "" ], [ "Bhusal", "Abhigyan", "" ], [ "Paudel", "Danda Pani", "" ], [ "Rameau", "François", "" ] ]
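A schematic of the two-step procedure, assuming hypothetical interfaces: `denoiser(pose, image, t)` stands for the trained pose-diffusion network and `render(nerf, pose)` for a differentiable NeRF renderer. This is a sketch of the idea only, not the authors' implementation (which additionally interleaves inversion steps inside the diffusion process).

```python
import torch

def localize(image, nerf, denoiser, render, n_diffusion=50, n_refine=100):
    pose = torch.randn(6)                     # random init, se(3)-style vector

    # Step 1: coarse localization -- reverse diffusion over the pose,
    # conditioned on the query image.
    with torch.no_grad():
        for t in reversed(range(n_diffusion)):
            pose = denoiser(pose, image, t)

    # Step 2: pose inversion -- gradient refinement of the photometric
    # error between the NeRF render and the query image.
    pose = pose.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=1e-3)
    for _ in range(n_refine):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(render(nerf, pose), image)
        loss.backward()
        opt.step()
    return pose.detach()
```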
2010.05119
Adín Ramírez Rivera
Adín Ramírez Rivera, Adil Khan, Imad E. I. Bekkouch, Taimoor S. Sheikh
Anomaly Detection based on Zero-Shot Outlier Synthesis and Hierarchical Feature Distillation
To appear in IEEE Trans. on Neural Networks and Learning Systems
null
10.1109/TNNLS.2020.3027667
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anomaly detection suffers from unbalanced data because anomalies are quite rare. Synthetically generated anomalies are a solution to such ill-defined or incompletely defined data. However, synthesis requires an expressive representation to guarantee the quality of the generated data. In this paper, we propose a two-level hierarchical latent space representation that distills inliers' feature descriptors (through autoencoders) into more robust representations based on a variational family of distributions (through a variational autoencoder) for zero-shot anomaly generation. From the learned latent distributions, we select those that lie on the outskirts of the training data as synthetic-outlier generators. We then synthesize from them, i.e., generate negative samples without having seen them before, to train binary classifiers. We found that the use of the proposed hierarchical structure for feature distillation and fusion creates robust and general representations that allow us to synthesize pseudo-outlier samples and, in turn, to train robust binary classifiers for true outlier detection (without the need for actual outliers during training). We demonstrate the performance of our proposal on several benchmarks for anomaly detection.
[ { "created": "Sat, 10 Oct 2020 23:34:02 GMT", "version": "v1" } ]
2020-10-13
[ [ "Rivera", "Adín Ramírez", "" ], [ "Khan", "Adil", "" ], [ "Bekkouch", "Imad E. I.", "" ], [ "Sheikh", "Taimoor S.", "" ] ]
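The outskirts-sampling idea can be sketched as follows. The trained (variational) autoencoder is abstracted away, latents are random stand-ins, and the `scale` heuristic for what counts as the "outskirts" is an assumption of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def synthesize_outlier_latents(z_in, n, scale=3.0, seed=0):
    """Sample latents on the 'outskirts' of the inlier latent distribution
    by inflating its empirical spread (heuristic choice of 'outskirts')."""
    rng = np.random.default_rng(seed)
    mu, sigma = z_in.mean(axis=0), z_in.std(axis=0)
    return mu + scale * sigma * rng.standard_normal((n, z_in.shape[1]))

z_in = np.random.default_rng(1).normal(size=(500, 8))  # stand-in for encoded inliers
z_out = synthesize_outlier_latents(z_in, 500)
X = np.vstack([z_in, z_out])        # a full pipeline would decode latents first
y = np.array([0] * len(z_in) + [1] * len(z_out))
clf = LogisticRegression(max_iter=1000).fit(X, y)   # inlier-vs-outlier classifier
```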
1809.05762
John Kingston
John KC Kingston
Using Artificial Intelligence to Support Compliance with the General Data Protection Regulation
null
Artificial Intelligence and Law (2017) 25, 429 - 443
10.1007/s10506-017-9206-9
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The General Data Protection Regulation (GDPR) is a European Union regulation that will replace the existing Data Protection Directive on 25 May 2018. The most significant change is a huge increase in the maximum fine that can be levied for breaches of the regulation. Yet fewer than half of UK companies are fully aware of GDPR -- and a number of those who were preparing for it stopped doing so when the Brexit vote was announced. A last-minute rush to become compliant is therefore expected, and numerous companies are starting to offer advice, checklists and consultancy on how to comply with GDPR. In such an environment, artificial intelligence technologies ought to be able to assist by providing best advice; asking all and only the relevant questions; monitoring activities; and carrying out assessments. The paper considers four areas of GDPR compliance where rule-based technologies and/or machine learning techniques may be relevant: (i) following compliance checklists and codes of conduct; (ii) supporting risk assessments; (iii) complying with the new regulations regarding technologies that perform automated profiling; (iv) complying with the new regulations concerning recognising and reporting breaches of security. It concludes that AI technology can support each of these four areas. The requirements that GDPR (or organisations that must comply with it) sets for explanation and justification of reasoning imply that rule-based approaches are likely to be more helpful than machine learning approaches. However, there may be good business reasons to take a different approach in some circumstances.
[ { "created": "Sat, 15 Sep 2018 19:57:02 GMT", "version": "v1" } ]
2018-09-18
[ [ "Kingston", "John KC", "" ] ]
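To illustrate why rule-based approaches lend themselves to explanation, here is a toy rule check in Python. The rules are loose paraphrases of GDPR Articles 33 and 35, and the fact names are invented for the example; this is not an encoding of the actual regulation text.

```python
# Each rule pairs a condition over case facts with a human-readable
# justification, so every piece of advice is traceable to the rule that
# produced it -- the explainability property argued for above.
RULES = [
    ("notify_authority",
     lambda f: f["personal_data_breach"] and f["risk_to_individuals"],
     "Art. 33-style rule: notify the supervisory authority within 72 hours."),
    ("dpia_required",
     lambda f: f["automated_profiling"] and f["large_scale"],
     "Art. 35-style rule: carry out a data protection impact assessment."),
]

def advise(facts):
    return [(name, why) for name, cond, why in RULES if cond(facts)]

facts = {"personal_data_breach": True, "risk_to_individuals": True,
         "automated_profiling": False, "large_scale": False}
for name, why in advise(facts):
    print(name, "->", why)          # the fired rule doubles as the explanation
```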
2406.18060
Yifan Yang
Yifan Yang, Kai Zhen, Ershad Banijamal, Athanasios Mouchtaris, Zheng Zhang
AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tuning
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Fine-tuning large language models (LLMs) has achieved remarkable performance across various natural language processing tasks, yet it demands ever more memory as model sizes keep growing. To address this issue, the recently proposed memory-efficient zeroth-order (MeZO) methods attempt to fine-tune LLMs using only forward passes, thereby avoiding the need for a backpropagation graph. However, significant performance drops and a high risk of divergence have limited their widespread adoption. In this paper, we propose the Adaptive Zeroth-order Tensor-Train Adaption (AdaZeta) framework, specifically designed to improve the performance and convergence of ZO methods. To enhance dimension-dependent ZO estimation accuracy, we introduce a fast-forward, low-parameter tensorized adapter. To tackle the frequently observed divergence issue in large-scale ZO fine-tuning tasks, we propose an adaptive query number schedule that guarantees convergence. Detailed theoretical analysis and extensive experimental results on RoBERTa-Large and Llama-2-7B models substantiate the efficacy of our AdaZeta framework in terms of accuracy, memory efficiency, and convergence speed.
[ { "created": "Wed, 26 Jun 2024 04:33:13 GMT", "version": "v1" } ]
2024-06-27
[ [ "Yang", "Yifan", "" ], [ "Zhen", "Kai", "" ], [ "Banijamal", "Ershad", "" ], [ "Mouchtaris", "Athanasios", "" ], [ "Zhang", "Zheng", "" ] ]
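For context, a minimal sketch of the MeZO-style zeroth-order update that AdaZeta builds on: the gradient is estimated from paired forward passes along random directions, so no backward graph is ever built. The growing query schedule here is a simplified stand-in for the paper's adaptive schedule, and the tensorized adapter is omitted.

```python
import torch

def zo_step(params, loss_fn, step, lr=1e-4, eps=1e-3):
    """One zeroth-order SGD step using only forward passes (loss_fn is a
    closure over the model and the current batch)."""
    n_queries = 1 + step // 100               # assumed growing query schedule
    grads = [torch.zeros_like(p) for p in params]
    with torch.no_grad():
        for _ in range(n_queries):
            zs = [torch.randn_like(p) for p in params]
            for p, z in zip(params, zs):      # theta + eps * z
                p.add_(eps * z)
            loss_plus = loss_fn()
            for p, z in zip(params, zs):      # theta - eps * z
                p.sub_(2 * eps * z)
            loss_minus = loss_fn()
            for p, z in zip(params, zs):      # restore theta
                p.add_(eps * z)
            coeff = (loss_plus - loss_minus) / (2 * eps * n_queries)
            for g, z in zip(grads, zs):       # average the directional estimates
                g.add_(coeff * z)
        for p, g in zip(params, grads):       # plain SGD update
            p.sub_(lr * g)
```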
2109.08295
EPTCS
Tobias Grubenmann (SDA Research Group, Department of Computer Science, University of Bonn, Germany), Jens Lehmann (SDA Research Group, Department of Computer Science, University of Bonn, Germany)
Geolog: Scalable Logic Programming on Spatial Data
In Proceedings ICLP 2021, arXiv:2109.07914
EPTCS 345, 2021, pp. 191-204
10.4204/EPTCS.345.34
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Spatial data is ubiquitous in our data-driven society. The Logic Programming community has been investigating the use of spatial data in different settings. Despite the success of this research, the Geographic Information System (GIS) community has rarely made use of these new approaches. There are two main reasons for this. First, there is a lack of tools that tightly integrate logical reasoning into state-of-the-art GIS software. Second, the scalability of solutions has often not been tested; hence, some solutions might work on toy examples but do not scale well to real-world settings. The two main contributions of this paper are (1) the Relation Based Programming paradigm, which expresses rules on relations instead of individual entities, and (2) Geolog, a tool for spatio-logical reasoning that can be installed on top of ArcMap, an industry-standard GIS. We evaluate our new Relation Based Programming paradigm in four real-world scenarios and show that up to two orders of magnitude in performance gain can be achieved compared to the prevalent Entity Based Programming paradigm.
[ { "created": "Fri, 17 Sep 2021 01:49:06 GMT", "version": "v1" } ]
2021-09-20
[ [ "Grubenmann", "Tobias", "", "SDA Research Group, Department of Computer Science,\n University of Bonn, Germany" ], [ "Lehmann", "Jens", "", "SDA Research Group, Department of\n Computer Science, University of Bonn, Germany" ] ]
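A toy contrast between the two paradigms on a spatial "overlaps" rule, using axis-aligned boxes in place of real geometries. The pruning in the relation-based version illustrates the kind of set-at-a-time evaluation behind the reported speedups; it is not Geolog's actual algorithm.

```python
def overlaps(a, b):
    """Axis-aligned boxes (x1, y1, x2, y2) standing in for geometries."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def entity_based(parcels, zones):
    """Rule fires entity by entity: O(|parcels| * |zones|) checks."""
    return [(p, z) for p in parcels for z in zones if overlaps(p, z)]

def relation_based(parcels, zones):
    """Rule over whole relations: sort once, prune by x-extent."""
    zones = sorted(zones)                     # ordered by left edge x1
    out = []
    for p in parcels:
        for z in zones:
            if z[0] >= p[2]:                  # later zones cannot overlap p
                break
            if overlaps(p, z):
                out.append((p, z))
    return out
```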
2104.01482
Edgar A. Bernal
Edgar A. Bernal
Training Deep Normalizing Flow Models in Highly Incomplete Data Scenarios with Prior Regularization
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep generative frameworks, including GANs and normalizing flow models, have proven successful at filling in missing values in partially observed data samples by effectively learning -- either explicitly or implicitly -- complex, high-dimensional statistical distributions. In tasks where the data available for learning is only partially observed, however, their performance decays monotonically as a function of the data missingness rate. In regimes with high rates of missing data (e.g., 60% and above), it has been observed that state-of-the-art models tend to break down and produce unrealistic and/or semantically inaccurate data. We propose a novel framework to facilitate the learning of data distributions in high-paucity scenarios that is inspired by traditional formulations of solutions to ill-posed problems. The proposed framework naturally stems from posing the process of learning from incomplete data as a joint optimization task over the parameters of the model being learned and the missing data values. The method involves enforcing a prior regularization term that seamlessly integrates with objectives used to train explicit and tractable deep generative frameworks such as deep normalizing flow models. We demonstrate via extensive experimental validation that the proposed framework outperforms competing techniques, particularly as the rate of data paucity approaches unity.
[ { "created": "Sat, 3 Apr 2021 20:57:57 GMT", "version": "v1" } ]
2021-04-06
[ [ "Bernal", "Edgar A.", "" ] ]
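A conceptual sketch of the joint optimization, assuming a flow object exposing `log_prob` (as in most normalizing-flow libraries) and treating missing entries as free variables. The Gaussian-style penalty below is a placeholder for the paper's prior regularizer.

```python
import torch

def fit_with_prior(flow, x_obs, mask, steps=1000, lam=0.1, lr=1e-3):
    """mask: 1 where observed, 0 where missing. Model parameters and the
    missing entries are optimized jointly, as described above."""
    x_miss = torch.zeros_like(x_obs, requires_grad=True)  # imputation variables
    opt = torch.optim.Adam(list(flow.parameters()) + [x_miss], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = mask * x_obs + (1 - mask) * x_miss
        nll = -flow.log_prob(x).mean()                    # flow fit term
        prior = ((1 - mask) * x_miss).pow(2).mean()       # placeholder prior term
        (nll + lam * prior).backward()
        opt.step()
    return flow, x_miss.detach()
```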
2402.07108
Wenzhi Gao
Wenzhi Gao, Chunlin Sun, Chenyu Xue, Dongdong Ge, Yinyu Ye
Decoupling Learning and Decision-Making: Breaking the $\mathcal{O}(\sqrt{T})$ Barrier in Online Resource Allocation with First-Order Methods
null
null
null
null
cs.LG math.OC
http://creativecommons.org/licenses/by/4.0/
Online linear programming plays an important role in both revenue management and resource allocation, and recent research has focused on developing efficient first-order online learning algorithms. Despite the empirical success of first-order methods, they typically achieve a regret no better than $\mathcal{O}(\sqrt{T})$, which is suboptimal compared to the $\mathcal{O}(\log T)$ bound guaranteed by the state-of-the-art linear programming (LP)-based online algorithms. This paper establishes several important facts about online linear programming that unveil the challenge first-order-method-based online algorithms face in achieving regret beyond $\mathcal{O}(\sqrt{T})$. To address the challenge, we introduce a new algorithmic framework that decouples learning from decision-making. For the first time, we show that first-order methods can attain regret $\mathcal{O}(T^{1/3})$ within this new framework.
[ { "created": "Sun, 11 Feb 2024 05:35:50 GMT", "version": "v1" }, { "created": "Tue, 28 May 2024 20:43:21 GMT", "version": "v2" } ]
2024-05-30
[ [ "Gao", "Wenzhi", "" ], [ "Sun", "Chunlin", "" ], [ "Xue", "Chenyu", "" ], [ "Ge", "Dongdong", "" ], [ "Ye", "Yinyu", "" ] ]
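For reference, the standard first-order baseline the abstract alludes to can be sketched as a dual-price (projected subgradient) scheme for the online LP: maximize the sum of accepted rewards r_t x_t subject to resource constraints sum a_t x_t <= b with x_t in {0, 1}. The paper's decoupled framework, which improves on this, is not reproduced here.

```python
import numpy as np

def first_order_online_lp(rewards, costs, budget, eta=None):
    """rewards[t]: scalar r_t; costs[t]: vector a_t; budget: vector b."""
    T = len(rewards)
    eta = eta if eta is not None else 1.0 / np.sqrt(T)  # classic step size
    p = np.zeros_like(budget, dtype=float)              # dual prices
    rho = budget / T                                    # per-round resource rate
    spent, total = np.zeros_like(p), 0.0
    for t in range(T):
        r_t, a_t = rewards[t], np.asarray(costs[t], dtype=float)
        # accept iff the reward beats the current shadow price of resources
        accept = r_t > p @ a_t and np.all(spent + a_t <= budget)
        if accept:
            total += r_t
            spent += a_t
        # projected subgradient step on the dual
        p = np.maximum(p + eta * ((a_t if accept else 0.0) - rho), 0.0)
    return total
```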
2201.11674
Xin Du
Xin Du, Benedicte Legastelois, Bhargavi Ganesh, Ajitha Rajan, Hana Chockler, Vaishak Belle, Stuart Anderson, Subramanian Ramamoorthy
Vision Checklist: Towards Testable Error Analysis of Image Models to Help System Designers Interrogate Model Capabilities
17 pages, 18 figures
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Using large pre-trained models for image recognition tasks is becoming increasingly common, owing to the well-acknowledged success of recent models like vision transformers and CNN-based models like VGG and ResNet. The high accuracy of these models on benchmark tasks has translated into their practical use across many domains, including safety-critical applications like autonomous driving and medical diagnostics. Despite their widespread use, image models have been shown to be fragile to changes in the operating environment, bringing their robustness into question. There is an urgent need for methods that systematically characterise and quantify the capabilities of these models to help designers understand and provide guarantees about their safety and robustness. In this paper, we propose Vision Checklist, a framework aimed at interrogating the capabilities of a model in order to produce a report that can be used by a system designer for robustness evaluations. This framework proposes a set of perturbation operations that can be applied to the underlying data to generate test samples of different types. The perturbations reflect potential changes in operating environments, and interrogate various properties ranging from the strictly quantitative to the more qualitative. Our framework is evaluated on multiple datasets, including Tiny ImageNet, CIFAR-10, CIFAR-100 and Camelyon17, and on models such as ViT and ResNet. Our Vision Checklist proposes a specific set of evaluations that can be integrated into the previously proposed concept of a model card. Robustness evaluations like our checklist will be crucial in future safety evaluations of visual perception modules, and useful for a wide range of stakeholders, including designers, deployers, and regulators involved in the certification of these systems. The source code of Vision Checklist will be open for public use.
[ { "created": "Thu, 27 Jan 2022 17:20:16 GMT", "version": "v1" }, { "created": "Fri, 28 Jan 2022 13:48:59 GMT", "version": "v2" }, { "created": "Mon, 31 Jan 2022 11:09:19 GMT", "version": "v3" } ]
2022-02-01
[ [ "Du", "Xin", "" ], [ "Legastelois", "Benedicte", "" ], [ "Ganesh", "Bhargavi", "" ], [ "Rajan", "Ajitha", "" ], [ "Chockler", "Hana", "" ], [ "Belle", "Vaishak", "" ], [ "Anderson", "Stuart", "" ], [ "Ramamoorthy", "Subramanian", "" ] ]
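The perturbation-based testing idea reduces to applying a library of operators to an image and scoring prediction stability; the operators below are illustrative stand-ins, not the paper's exact set.

```python
import numpy as np

def occlude(img, size=8, seed=0):
    """Zero out a random square patch (images assumed float in [0, 1])."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    y, x = rng.integers(0, h - size), rng.integers(0, w - size)
    out[y:y + size, x:x + size] = 0
    return out

def gaussian_noise(img, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def rotate90(img):
    return np.rot90(img).copy()

PERTURBATIONS = [occlude, gaussian_noise, rotate90]

def checklist_score(model, img, label):
    """Fraction of perturbed variants on which the prediction survives."""
    return sum(model(op(img)) == label for op in PERTURBATIONS) / len(PERTURBATIONS)
```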
2112.08458
Onofrio Semeraro
Alessandro Bucci, Onofrio Semeraro, Alexandre Allauzen, Sergio Chibbaro and Lionel Mathelin
Curriculum learning for data-driven modeling of dynamical systems
null
Eur. Phys. J. E 46, 12 (2023)
10.1140/epje/s10189-023-00269-8
null
cs.LG nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The reliable prediction of the temporal behavior of complex systems is key in numerous scientific fields. This strong interest is, however, hindered by modeling issues: often, the governing equations describing the physics of the system under consideration are not accessible or, if known, their solution might require a computational time incompatible with the prediction time constraints. Not surprisingly, approximating complex systems in a generic functional format and informing it ex nihilo from available observations has become common practice in the age of machine learning, as illustrated by the numerous successful examples based on deep neural networks. However, the generalizability of the models, margins of guarantee, and the impact of data are often overlooked or examined mainly by relying on prior knowledge of the physics. We tackle these issues from a different viewpoint, by adopting a curriculum learning strategy. In curriculum learning, the dataset is structured such that the training process starts from simple samples and moves toward more complex ones, in order to favor convergence and generalization. The concept has been developed and successfully applied in robotics and the control of systems. Here, we apply this concept to the learning of complex dynamical systems in a systematic way. First, leveraging insights from ergodic theory, we assess the amount of data sufficient to guarantee a priori a faithful model of the physical system, and we thoroughly investigate the impact of the training set and its structure on the quality of long-term predictions. Based on that, we consider entropy as a metric of the complexity of the dataset; we show how an informed design of the training set based on an analysis of the entropy significantly improves the resulting models in terms of generalizability, and we provide insights on the amount and choice of data required for effective data-driven modeling.
[ { "created": "Wed, 15 Dec 2021 20:09:20 GMT", "version": "v1" }, { "created": "Mon, 25 Apr 2022 09:56:54 GMT", "version": "v2" }, { "created": "Tue, 22 Nov 2022 17:02:30 GMT", "version": "v3" }, { "created": "Tue, 14 Feb 2023 21:06:53 GMT", "version": "v4" } ]
2023-05-29
[ [ "Bucci", "Alessandro", "" ], [ "Semeraro", "Onofrio", "" ], [ "Allauzen", "Alexandre", "" ], [ "Chibbaro", "Sergio", "" ], [ "Mathelin", "Lionel", "" ] ]
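A sketch of an entropy-ordered curriculum over trajectory data. The histogram-entropy proxy and the staging scheme are assumptions of this sketch, not the paper's exact entropy analysis.

```python
import numpy as np

def entropy_proxy(traj, bins=32):
    """Histogram entropy of a 1-D trajectory as a crude complexity score."""
    hist, _ = np.histogram(traj, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log(p)).sum())

def curriculum_stages(trajectories, n_stages=4):
    """Yield growing training sets, ordered from simple to complex."""
    ordered = sorted(trajectories, key=entropy_proxy)
    step = max(1, len(ordered) // n_stages)
    for k in range(1, n_stages + 1):
        yield ordered[: k * step]   # each stage adds the next complexity slice
```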
2203.08512
Wenpeng Yin
Wenpeng Yin, Jia Li, Caiming Xiong
ConTinTin: Continual Learning from Task Instructions
ACL'2022 camera-ready
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The mainstream machine learning paradigms for NLP often work with two underlying presumptions. First, the target task is predefined and static; a system merely needs to learn to solve it exclusively. Second, the supervision of a task mainly comes from a set of labeled examples. A question arises: how can we build a system that keeps learning new tasks from their instructions? This work defines a new learning paradigm, ConTinTin (Continual Learning from Task Instructions), in which a system should learn a sequence of new tasks one by one, where each task is explained by a piece of textual instruction. The system is required to (i) generate the expected outputs of a new task by learning from its instruction, (ii) transfer the knowledge acquired from upstream tasks to help solve downstream tasks (i.e., forward-transfer), and (iii) retain or even improve the performance on earlier tasks after learning new tasks (i.e., backward-transfer). This new problem is studied on a stream of more than 60 tasks, each equipped with an instruction. Technically, our method InstructionSpeak contains two strategies that make full use of task instructions to improve forward-transfer and backward-transfer: one is to learn from negative outputs, the other is to re-visit instructions of previous tasks. To our knowledge, this is the first study of ConTinTin in NLP. In addition to the problem formulation and our promising approach, this work also contributes rich analyses to help the community better understand this novel learning problem.
[ { "created": "Wed, 16 Mar 2022 10:27:18 GMT", "version": "v1" }, { "created": "Fri, 18 Mar 2022 19:15:47 GMT", "version": "v2" } ]
2022-03-22
[ [ "Yin", "Wenpeng", "" ], [ "Li", "Jia", "" ], [ "Xiong", "Caiming", "" ] ]
0901.3990
Bernard Jacquemin
Bernard Jacquemin (LIMSI), Sabine Ploux (L2C2)
Du corpus au dictionnaire (From Corpus to Dictionary)
null
Cahiers de Linguistique. Revue de sociolinguistique et de sociologie de la langue française 33, 1 (2008) 63-84
null
null
cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we propose an automatic process to build multilingual lexico-semantic resources. The goal of these resources is to semantically browse textual information contained in texts of different languages. This method uses a mathematical model called Atlas sémantiques to represent the different senses of each word. It uses the linguistic relations between words to create graphs that are projected into a semantic space. These projections constitute semantic maps that denote the sense trends of each given word. This model is fed with syntactic relations between words extracted from a corpus. The lexico-semantic resource produced therefore describes all the words and all their meanings observed in the corpus. The sense trends are expressed by syntactic contexts that are typical for a given meaning. The link between each sense trend and the utterances used to build it is also stored in an index, so all the instances of a word in a particular sense are linked and can be browsed easily. By using several corpora of different languages, several resources are built that correspond with each other across languages. This makes it possible to browse information across languages thanks to translations of syntactic contexts (even if some of them are partial).
[ { "created": "Mon, 26 Jan 2009 15:52:21 GMT", "version": "v1" } ]
2009-01-27
[ [ "Jacquemin", "Bernard", "", "LIMSI" ], [ "Ploux", "Sabine", "", "L2C2" ] ]
2304.06167
Ravi Sahita
Ravi Sahita, Atish Patra, Vedvyas Shanbhogue, Samuel Ortiz, Andrew Bresticker, Dylan Reid, Atul Khare, Rajnesh Kanwal
CoVE: Towards Confidential Computing on RISC-V Platforms
null
null
null
null
cs.CR cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-tenant computing platforms are typically composed of several software and hardware components, including platform firmware, the host operating system kernel, the virtualization monitor, and the actual tenant payloads that run on them (typically in a virtual machine, container, or application). This model is well established in large-scale commercial deployment, but the downside is that all platform components and operators are in the Trusted Computing Base (TCB) of the tenant. This aspect is ill-suited for privacy-oriented workloads that aim to minimize the TCB footprint. Confidential computing presents a good stepping-stone towards providing a quantifiable TCB for computing. Confidential computing [1] requires the use of HW-attested Trusted Execution Environments for data-in-use protection. The RISC-V architecture presents a strong foundation for meeting the requirements for confidential computing and other security paradigms in a clean-slate manner. This paper describes a reference architecture and discusses ISA, non-ISA, and system-on-chip (SoC) requirements for confidential computing on RISC-V platforms. It discusses the proposed ISA and non-ISA extensions for Confidential Virtual Machines on RISC-V platforms, referred to as CoVE.
[ { "created": "Wed, 12 Apr 2023 21:35:44 GMT", "version": "v1" } ]
2023-04-14
[ [ "Sahita", "Ravi", "" ], [ "Patra", "Atish", "" ], [ "Shanbhogue", "Vedvyas", "" ], [ "Ortiz", "Samuel", "" ], [ "Bresticker", "Andrew", "" ], [ "Reid", "Dylan", "" ], [ "Khare", "Atul", "" ], [ "Kanwal", "Rajnesh", "" ] ]
1608.02784
Nikos Papasarantopoulos
Nikos Papasarantopoulos, Helen Jiang, Shay B. Cohen
Canonical Correlation Inference for Mapping Abstract Scenes to Text
10 pages, accepted to AAAI 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a technique for structured prediction, based on canonical correlation analysis. Our learning algorithm finds two projections for the input and the output spaces that aim at projecting a given input and its correct output into points close to each other. We demonstrate our technique on a language-vision problem, namely the problem of giving a textual description to an "abstract scene".
[ { "created": "Tue, 9 Aug 2016 12:26:19 GMT", "version": "v1" }, { "created": "Fri, 17 Nov 2017 19:53:13 GMT", "version": "v2" } ]
2017-11-21
[ [ "Papasarantopoulos", "Nikos", "" ], [ "Jiang", "Helen", "" ], [ "Cohen", "Shay B.", "" ] ]
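The inference recipe can be sketched with scikit-learn's CCA: learn projections for both spaces, then pick the candidate text whose projection is nearest to the projected input. Features here are synthetic stand-ins for real scene and text representations.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                        # scene features (stand-in)
Y = X @ rng.normal(size=(50, 30)) + 0.1 * rng.normal(size=(200, 30))  # text features

cca = CCA(n_components=10).fit(X, Y)
Xc, Yc = cca.transform(X, Y)                          # both sides in the shared space

def predict(x_new):
    """Index of the candidate text whose projection is nearest to the scene."""
    xc = cca.transform(x_new.reshape(1, -1))
    return int(np.argmin(np.linalg.norm(Yc - xc, axis=1)))

print(predict(X[3]))    # should typically recover candidate 3
```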
2210.03246
Joseph Gatto
Joseph Gatto, Parker Seegmiller, Garrett Johnston, Sarah M. Preum
HealthE: Classifying Entities in Online Textual Health Advice
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The processing of entities in natural language is essential to many medical NLP systems. Unfortunately, existing datasets vastly under-represent the entities required to model public-health-relevant texts such as the health advice often found on sites like WebMD. People rely on such information for personal health management and clinically relevant decision making. In this work, we release a new annotated dataset, HealthE, consisting of 6,756 health-advice statements. HealthE has a more granular label space compared to existing medical NER corpora and contains annotations for diverse health phrases. Additionally, we introduce a new health entity classification model, EP S-BERT, which leverages textual context patterns in the classification of entity classes. EP S-BERT provides a 4-point increase in F1 score over the nearest baseline and a 34-point increase in F1 when compared to off-the-shelf medical NER tools trained to extract disease and medication mentions from clinical texts. All code and data are publicly available on GitHub.
[ { "created": "Thu, 6 Oct 2022 23:18:24 GMT", "version": "v1" } ]
2022-10-10
[ [ "Gatto", "Joseph", "" ], [ "Seegmiller", "Parker", "" ], [ "Johnston", "Garrett", "" ], [ "Preum", "Sarah M.", "" ] ]
1609.05135
Mark Vousden
Mark Vousden, Marc-Antonio Bisotti, Maximilian Albert, Hans Fangohr
Virtual Micromagnetics: A Framework for Accessible and Reproducible Micromagnetic Simulation
12 pages, 1 figure
Journal of Open Research Software, 4(1), p.e41 (2016)
10.5334/jors.141
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational micromagnetics requires numerical solution of partial differential equations to resolve complex interactions in magnetic nanomaterials. The Virtual Micromagnetics project described here provides virtual machine simulation environments to run open-source micromagnetic simulation packages. These environments allow easy access to simulation packages that are often difficult to compile and install, and enable simulations and their data to be shared and stored in a single virtual hard disk file, which encourages reproducible research. Virtual Micromagnetics can be extended to automate the installation of micromagnetic simulation packages on non-virtual machines, and to support closed-source and new open-source simulation packages, including packages from disciplines other than micromagnetics, encouraging reuse. Virtual Micromagnetics is stored in a public GitHub repository under a three-clause Berkeley Software Distribution (BSD) license.
[ { "created": "Thu, 11 Aug 2016 10:59:40 GMT", "version": "v1" }, { "created": "Fri, 25 Nov 2016 10:15:16 GMT", "version": "v2" } ]
2016-11-28
[ [ "Vousden", "Mark", "" ], [ "Bisotti", "Marc-Antonio", "" ], [ "Albert", "Maximilian", "" ], [ "Fangohr", "Hans", "" ] ]
2212.04316
Martino Sorbaro
Francesco L\"assig, Pau Vilimelis Aceituno, Martino Sorbaro, Benjamin F. Grewe
Bio-Inspired, Task-Free Continual Learning through Activity Regularization
null
null
null
null
cs.NE cs.CV q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, but it represents a major challenge for the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world, where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feed-forward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves performance similar to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.
[ { "created": "Thu, 8 Dec 2022 15:14:20 GMT", "version": "v1" } ]
2022-12-09
[ [ "Lässig", "Francesco", "" ], [ "Aceituno", "Pau Vilimelis", "" ], [ "Sorbaro", "Martino", "" ], [ "Grewe", "Benjamin F.", "" ] ]
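The winner-take-all sparsity mechanism can be sketched in a few lines of PyTorch; the DFC credit-assignment scheme and the lateral recurrence are beyond this snippet.

```python
import torch

def k_winner_take_all(h: torch.Tensor, k: int) -> torch.Tensor:
    """h: (batch, units). Keep the k largest activations per sample, zero the rest,
    so different inputs/contexts recruit mostly non-overlapping units."""
    mask = torch.zeros_like(h).scatter_(1, torch.topk(h, k, dim=1).indices, 1.0)
    return h * mask

h = torch.randn(4, 100)
sparse_h = k_winner_take_all(h, k=10)     # 10% of units stay active per sample
```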
2405.01324
Philipp Meyer
Philipp Meyer, Timo H\"ackel, Teresa L\"ubeck, Franz Korf, Thomas C. Schmidt
A Framework for the Systematic Assessment of Anomaly Detectors in Time-Sensitive Automotive Networks
null
null
null
null
cs.NI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Connected cars are susceptible to cyberattacks. Security and safety of future vehicles highly depend on a holistic protection of automotive components, of which the time-sensitive backbone network takes a significant role. These onboard Time-Sensitive Networks (TSNs) require monitoring for safety and -- as versatile platforms to host Network Anomaly Detection Systems (NADSs) -- for security. Still, a thorough evaluation of anomaly detection methods in the context of hard real-time operations, automotive protocol stacks, and domain-specific attack vectors is missing, along with appropriate input datasets. In this paper, we present an assessment framework that allows for reproducible, comparable, and rapid evaluation of detection algorithms. It is based on a simulation toolchain, which contributes configurable topologies, traffic streams, anomalies, attacks, and detectors. We demonstrate the assessment of NADSs in a comprehensive in-vehicle network with its communication flows, on which we model traffic anomalies. We evaluate exemplary detection mechanisms and reveal how the detection performance is influenced by different combinations of TSN traffic flows and anomaly types. Our approach translates to other real-time Ethernet domains, such as industrial facilities, airplanes, and UAVs.
[ { "created": "Thu, 2 May 2024 14:29:42 GMT", "version": "v1" } ]
2024-05-03
[ [ "Meyer", "Philipp", "" ], [ "Häckel", "Timo", "" ], [ "Lübeck", "Teresa", "" ], [ "Korf", "Franz", "" ], [ "Schmidt", "Thomas C.", "" ] ]
Connected cars are susceptible to cyberattacks. Security and safety of future vehicles highly depend on a holistic protection of automotive components, of which the time-sensitive backbone network takes a significant role. These onboard Time-Sensitive Networks (TSNs) require monitoring for safety and -- as versatile platforms to host Network Anomaly Detection Systems (NADSs) -- for security. Still, a thorough evaluation of anomaly detection methods in the context of hard real-time operations, automotive protocol stacks, and domain-specific attack vectors is missing, along with appropriate input datasets. In this paper, we present an assessment framework that allows for reproducible, comparable, and rapid evaluation of detection algorithms. It is based on a simulation toolchain, which contributes configurable topologies, traffic streams, anomalies, attacks, and detectors. We demonstrate the assessment of NADSs in a comprehensive in-vehicle network with its communication flows, on which we model traffic anomalies. We evaluate exemplary detection mechanisms and reveal how the detection performance is influenced by different combinations of TSN traffic flows and anomaly types. Our approach translates to other real-time Ethernet domains, such as industrial facilities, airplanes, and UAVs.
1403.0783
Antoine Amarilli
Antoine Amarilli, Yael Amsterdamer, Tova Milo
Uncertainty in Crowd Data Sourcing under Structural Constraints
8 pages, vision paper. To appear at UnCrowd 2014
null
10.1007/978-3-662-43984-5_27
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Applications extracting data from crowdsourcing platforms must deal with the uncertainty of crowd answers in two different ways: first, by deriving estimates of the correct value from the answers; second, by choosing crowd questions whose answers are expected to minimize this uncertainty relative to the overall data collection goal. Such problems are already challenging when we assume that questions are unrelated and answers are independent, but they are even more complicated when we assume that the unknown values follow hard structural constraints (such as monotonicity). In this vision paper, we examine how to formally address this issue with an approach inspired by [Amsterdamer et al., 2013]. We describe a generalized setting where we model constraints as linear inequalities, and use them to guide the choice of crowd questions and the processing of answers. We present the main challenges arising in this setting, and propose directions to solve them.
[ { "created": "Tue, 4 Mar 2014 13:21:39 GMT", "version": "v1" } ]
2016-07-19
[ [ "Amarilli", "Antoine", "" ], [ "Amsterdamer", "Yael", "" ], [ "Milo", "Tova", "" ] ]
Applications extracting data from crowdsourcing platforms must deal with the uncertainty of crowd answers in two different ways: first, by deriving estimates of the correct value from the answers; second, by choosing crowd questions whose answers are expected to minimize this uncertainty relative to the overall data collection goal. Such problems are already challenging when we assume that questions are unrelated and answers are independent, but they are even more complicated when we assume that the unknown values follow hard structural constraints (such as monotonicity). In this vision paper, we examine how to formally address this issue with an approach inspired by [Amsterdamer et al., 2013]. We describe a generalized setting where we model constraints as linear inequalities, and use them to guide the choice of crowd questions and the processing of answers. We present the main challenges arising in this setting, and propose directions to solve them.
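A minimal sketch of the monotonicity example mentioned above: projecting noisy crowd estimates onto the constraint y_1 <= ... <= y_n, here via isotonic regression (an L2 projection). The data values are made up, and the paper's general setting handles arbitrary linear inequalities rather than this special case.

```python
# Project noisy aggregated crowd answers onto a non-decreasing sequence.
import numpy as np
from sklearn.isotonic import IsotonicRegression

x = np.arange(6)                                   # question index
crowd = np.array([0.2, 0.5, 0.4, 0.7, 0.65, 0.9])  # noisy aggregated answers
fitted = IsotonicRegression(increasing=True).fit_transform(x, crowd)
print(fitted)  # the non-decreasing estimates closest (in L2) to the answers
```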
2003.06705
Djordje Batic
Djordje Batic, Dubravko Culibrk
Identifying Individual Dogs in Social Media Images
Presented at BMVC 2019: Workshop on Visual AI and Entrepreneurship, Cardiff, UK
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the results of an initial study focused on developing a visual AI solution able to recognize individual dogs in unconstrained (wild) images occurring on social media. The work described here is part of a joint project with Pet2Net, a social network focused on pets and their owners. In order to detect and recognize individual dogs, we combine transfer learning and object detection approaches based on the Inception v3 and SSD Inception v2 architectures, respectively, and evaluate the proposed pipeline using a new data set containing real data that users uploaded to the Pet2Net platform. We show that it can achieve 94.59% accuracy in identifying individual dogs. Our approach has been designed with simplicity in mind and the goal of easy deployment on all the images uploaded to the Pet2Net platform. A purely visual approach to identifying dogs in images will enhance Pet2Net features aimed at finding lost dogs, as well as form the basis of future work focused on identifying social relationships between dogs, which cannot be inferred from other data collected by the platform.
[ { "created": "Sat, 14 Mar 2020 21:11:02 GMT", "version": "v1" } ]
2020-03-17
[ [ "Batic", "Djordje", "" ], [ "Culibrk", "Dubravko", "" ] ]
We present the results of an initial study focused on developing a visual AI solution able to recognize individual dogs in unconstrained (wild) images occurring on social media. The work described here is part of a joint project with Pet2Net, a social network focused on pets and their owners. In order to detect and recognize individual dogs, we combine transfer learning and object detection approaches based on the Inception v3 and SSD Inception v2 architectures, respectively, and evaluate the proposed pipeline using a new data set containing real data that users uploaded to the Pet2Net platform. We show that it can achieve 94.59% accuracy in identifying individual dogs. Our approach has been designed with simplicity in mind and the goal of easy deployment on all the images uploaded to the Pet2Net platform. A purely visual approach to identifying dogs in images will enhance Pet2Net features aimed at finding lost dogs, as well as form the basis of future work focused on identifying social relationships between dogs, which cannot be inferred from other data collected by the platform.
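The paper does not publish code, but a transfer-learning head on Inception v3 of the kind described can be sketched as follows; the number of dog identities and the head layout are assumptions, and the Pet2Net data is not public.

```python
# Reuse an ImageNet-pretrained Inception v3 backbone, train a new identity head.
import tensorflow as tf

num_dogs = 50  # hypothetical number of individual dogs to recognize
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # freeze the backbone, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(num_dogs, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with a tf.data pipeline
```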
2006.08456
Dimitrios Michael Manias
Dimitrios Michael Manias, Hassan Hawilo, Abdallah Shami
A Machine Learning-Based Migration Strategy for Virtual Network Function Instances
Accepted - Future Technologies Conference 2020
null
null
null
cs.NI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the growing demand for data connectivity, network service providers are faced with the task of reducing their capital and operational expenses while simultaneously improving network performance and addressing the increased demand. Although Network Function Virtualization (NFV) has been identified as a promising solution, several challenges must be addressed to ensure its feasibility. In this paper, we address the Virtual Network Function (VNF) migration problem by developing the VNF Neural Network for Instance Migration (VNNIM), a migration strategy for VNF instances. The performance of VNNIM is further improved through the optimization of the learning rate hyperparameter via particle swarm optimization. Results show that the VNNIM is very effective in predicting the post-migration server, exhibiting a binary accuracy of 99.07% and a delay difference distribution that is centered around a mean of zero when compared to the optimization model. The greatest advantage of VNNIM, however, is its run-time efficiency, highlighted through a run-time analysis.
[ { "created": "Mon, 15 Jun 2020 15:03:27 GMT", "version": "v1" } ]
2020-06-16
[ [ "Manias", "Dimitrios Michael", "" ], [ "Hawilo", "Hassan", "" ], [ "Shami", "Abdallah", "" ] ]
With the growing demand for data connectivity, network service providers are faced with the task of reducing their capital and operational expenses while simultaneously improving network performance and addressing the increased demand. Although Network Function Virtualization (NFV) has been identified as a promising solution, several challenges must be addressed to ensure its feasibility. In this paper, we address the Virtual Network Function (VNF) migration problem by developing the VNF Neural Network for Instance Migration (VNNIM), a migration strategy for VNF instances. The performance of VNNIM is further improved through the optimization of the learning rate hyperparameter via particle swarm optimization. Results show that the VNNIM is very effective in predicting the post-migration server, exhibiting a binary accuracy of 99.07% and a delay difference distribution that is centered around a mean of zero when compared to the optimization model. The greatest advantage of VNNIM, however, is its run-time efficiency, highlighted through a run-time analysis.
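As a hedged sketch of the hyperparameter step described above, a bare-bones particle swarm optimizer over the learning rate might look like this; the stand-in objective and the PSO coefficients are textbook defaults, not values from the paper.

```python
# Tune a learning rate with particle swarm optimization (PSO).
# The objective is a stand-in quadratic with a known optimum; in VNNIM it
# would be the validation loss of the migration network.
import numpy as np

def validation_loss(lr):                 # hypothetical stand-in objective
    return (np.log10(lr) + 3.0) ** 2     # minimized at lr = 1e-3

rng = np.random.default_rng(0)
pos = 10 ** rng.uniform(-6, 0, size=8)   # 8 particles, lr in [1e-6, 1]
vel = np.zeros_like(pos)
best_p = pos.copy()
best_p_val = np.array([validation_loss(p) for p in pos])
best_g = best_p[best_p_val.argmin()]

for _ in range(50):
    r1, r2 = rng.random(8), rng.random(8)
    vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (best_g - pos)
    pos = np.clip(pos + vel, 1e-6, 1.0)
    vals = np.array([validation_loss(p) for p in pos])
    improved = vals < best_p_val
    best_p[improved], best_p_val[improved] = pos[improved], vals[improved]
    best_g = best_p[best_p_val.argmin()]

print(f"best learning rate found: {best_g:.2e}")  # converges near 1e-3
```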
2312.11500
Mathew Walter
Mathew J. Walter, Aaron Barrett and Kimberly Tam
A Red Teaming Framework for Securing AI in Maritime Autonomous Systems
null
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Artificial intelligence (AI) is being ubiquitously adopted to automate processes in science and industry. However, due to its often intricate and opaque nature, AI has been shown to possess inherent vulnerabilities which can be maliciously exploited with adversarial AI, potentially putting AI users and developers at both cyber and physical risk. In addition, there is insufficient comprehension of the real-world effects of adversarial AI and an inadequacy of AI security examinations; therefore, the growing threat landscape is unknown for many AI solutions. To mitigate this issue, we propose one of the first red team frameworks for evaluating the AI security of maritime autonomous systems (MAS). The framework provides operators with a proactive (secure by design) and reactive (post-deployment evaluation) response to securing AI technology today and in the future. This framework is a multi-part checklist, which can be tailored to different systems and requirements. We demonstrate that the framework is highly effective for a red team, allowing it to uncover numerous vulnerabilities within a real-world MAS AI, ranging from poisoning to adversarial patch attacks. The lessons learned from systematic AI red teaming can help prevent MAS-related catastrophic events in a world with increasing uptake and reliance on mission-critical AI.
[ { "created": "Fri, 8 Dec 2023 14:59:07 GMT", "version": "v1" } ]
2023-12-20
[ [ "Walter", "Mathew J.", "" ], [ "Barrett", "Aaron", "" ], [ "Tam", "Kimberly", "" ] ]
Artificial intelligence (AI) is being ubiquitously adopted to automate processes in science and industry. However, due to its often intricate and opaque nature, AI has been shown to possess inherent vulnerabilities which can be maliciously exploited with adversarial AI, potentially putting AI users and developers at both cyber and physical risk. In addition, there is insufficient comprehension of the real-world effects of adversarial AI and an inadequacy of AI security examinations; therefore, the growing threat landscape is unknown for many AI solutions. To mitigate this issue, we propose one of the first red team frameworks for evaluating the AI security of maritime autonomous systems (MAS). The framework provides operators with a proactive (secure by design) and reactive (post-deployment evaluation) response to securing AI technology today and in the future. This framework is a multi-part checklist, which can be tailored to different systems and requirements. We demonstrate that the framework is highly effective for a red team, allowing it to uncover numerous vulnerabilities within a real-world MAS AI, ranging from poisoning to adversarial patch attacks. The lessons learned from systematic AI red teaming can help prevent MAS-related catastrophic events in a world with increasing uptake and reliance on mission-critical AI.
2009.06855
Madison Elliott
Madison Elliott, Christine Nothelfer, Cindy Xiong and Danielle Szafir
A Design Space of Vision Science Methods for Visualization Research
11 pages, 6 figures
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A growing number of efforts aim to understand what people see when using a visualization. These efforts provide scientific grounding to complement design intuitions, leading to more effective visualization practice. However, published visualization research currently reflects a limited set of available methods for understanding how people process visualized data. Alternative methods from vision science offer a rich suite of tools for understanding visualizations, but no curated collection of these methods exists in either perception or visualization research. We introduce a design space of experimental methods for empirically investigating the perceptual processes involved with viewing data visualizations to ultimately inform visualization design guidelines. This paper provides a shared lexicon for facilitating experimental visualization research. We discuss popular experimental paradigms, adjustment types, response types, and dependent measures used in vision science research, rooting each in visualization examples. We then discuss the advantages and limitations of each technique. Researchers can use this design space to create innovative studies and progress scientific understanding of design choices and evaluations in visualization. We highlight a history of collaborative success between visualization and vision science research and advocate for a deeper relationship between the two fields that can elaborate on and extend the methodological design space for understanding visualization and vision.
[ { "created": "Tue, 15 Sep 2020 03:51:15 GMT", "version": "v1" } ]
2020-09-16
[ [ "Elliott", "Madison", "" ], [ "Nothelfer", "Christine", "" ], [ "Xiong", "Cindy", "" ], [ "Szafir", "Danielle", "" ] ]
A growing number of efforts aim to understand what people see when using a visualization. These efforts provide scientific grounding to complement design intuitions, leading to more effective visualization practice. However, published visualization research currently reflects a limited set of available methods for understanding how people process visualized data. Alternative methods from vision science offer a rich suite of tools for understanding visualizations, but no curated collection of these methods exists in either perception or visualization research. We introduce a design space of experimental methods for empirically investigating the perceptual processes involved with viewing data visualizations to ultimately inform visualization design guidelines. This paper provides a shared lexicon for facilitating experimental visualization research. We discuss popular experimental paradigms, adjustment types, response types, and dependent measures used in vision science research, rooting each in visualization examples. We then discuss the advantages and limitations of each technique. Researchers can use this design space to create innovative studies and progress scientific understanding of design choices and evaluations in visualization. We highlight a history of collaborative success between visualization and vision science research and advocate for a deeper relationship between the two fields that can elaborate on and extend the methodological design space for understanding visualization and vision.
1906.02040
Yifan Hu
Yifan Hu and Yefeng Zheng
A GLCM Embedded CNN Strategy for Computer-aided Diagnosis in Intracerebral Hemorrhage
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer-aided diagnosis (CADx) systems have been shown to assist radiologists by providing classifications of all kinds of medical images, such as Computed Tomography (CT) and Magnetic Resonance (MR). Currently, convolutional neural networks (CNNs) play an important role in CADx. However, since a CNN model requires a square-like input, it is usually difficult to directly apply CNN algorithms to the irregular segmented regions of interest (ROIs) that radiologists focus on. In this paper, we propose a new approach to construct the model by extracting and converting the information of the irregular region into a fixed-size Gray-Level Co-occurrence Matrix (GLCM) and then utilizing the GLCM as one input of our CNN model. In this way, as a useful complement to the original CNN, a set of GLCM-based features is also extracted by the CNN. Meanwhile, the network pays more attention to the important lesion area and achieves a higher classification accuracy. Experiments are performed on three classification databases: Hemorrhage, BraTS18, and Cervix, to validate the generality of our model. In conclusion, the proposed framework outperforms the corresponding state-of-the-art algorithms on each database, with both test loss and classification accuracy as the evaluation criteria.
[ { "created": "Wed, 5 Jun 2019 14:12:21 GMT", "version": "v1" } ]
2019-06-06
[ [ "Hu", "Yifan", "" ], [ "Zheng", "Yefeng", "" ] ]
Computer-aided diagnosis (CADx) systems have been shown to assist radiologists by providing classifications of all kinds of medical images, such as Computed Tomography (CT) and Magnetic Resonance (MR). Currently, convolutional neural networks (CNNs) play an important role in CADx. However, since a CNN model requires a square-like input, it is usually difficult to directly apply CNN algorithms to the irregular segmented regions of interest (ROIs) that radiologists focus on. In this paper, we propose a new approach to construct the model by extracting and converting the information of the irregular region into a fixed-size Gray-Level Co-occurrence Matrix (GLCM) and then utilizing the GLCM as one input of our CNN model. In this way, as a useful complement to the original CNN, a set of GLCM-based features is also extracted by the CNN. Meanwhile, the network pays more attention to the important lesion area and achieves a higher classification accuracy. Experiments are performed on three classification databases: Hemorrhage, BraTS18, and Cervix, to validate the generality of our model. In conclusion, the proposed framework outperforms the corresponding state-of-the-art algorithms on each database, with both test loss and classification accuracy as the evaluation criteria.
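A hedged sketch of the fixed-size GLCM input described above, using scikit-image (spelled `greycomatrix` in versions before 0.19). The rectangular stand-in ROI and the 32-level quantization are assumptions; the paper works with irregular segmented regions.

```python
# Compute a fixed-size Gray-Level Co-occurrence Matrix from an ROI,
# yielding a 32x32 array that could be fed to a CNN alongside the image.
import numpy as np
from skimage.feature import graycomatrix

roi = (np.random.rand(40, 40) * 255).astype(np.uint8)  # stand-in ROI pixels
roi_q = (roi // 8).astype(np.uint8)                    # quantize to 32 levels

# co-occurrence counts for horizontally adjacent pixel pairs
glcm = graycomatrix(roi_q, distances=[1], angles=[0], levels=32,
                    symmetric=True, normed=True)
glcm_input = glcm[:, :, 0, 0]              # fixed-size 32x32 array
print(glcm_input.shape, glcm_input.sum())  # (32, 32) ~1.0
```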
2402.03283
Rohit Verma
Rohit Verma, Arun Raghunath
Towards a Flexible Scale-out Framework for Efficient Visual Data Query Processing
null
null
null
null
cs.DB cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
There is growing interest in visual data management systems that support queries with specialized operations, ranging from resizing an image to running complex machine learning models. With a plethora of such operations, the basic need to receive query responses in minimal time takes a hit, especially when the client desires to run multiple such operations in a single query. Existing systems provide an ad hoc approach where different solutions are pieced together to provide an end-to-end visual data management system. Unlike such solutions, the Visual Data Management System (VDMS) natively executes queries with multiple operations, thus providing an end-to-end solution. However, a fixed subset of native operations and a synchronous threading architecture limit its generality and scalability. In this paper, we develop VDMS-Async, which adds the capability to run user-defined operations with VDMS and execute operations within a query on a remote server. VDMS-Async utilizes an event-driven architecture to create an efficient pipeline for executing operations within a query. Our experiments have shown that VDMS-Async reduces the query execution time by 2-3X compared to existing state-of-the-art systems. Further, remote operations coupled with an event-driven architecture enable VDMS-Async to scale query execution time linearly with the addition of every new remote server. We demonstrate a 64X reduction in query execution time when adding 64 remote servers.
[ { "created": "Mon, 5 Feb 2024 18:39:04 GMT", "version": "v1" } ]
2024-02-06
[ [ "Verma", "Rohit", "" ], [ "Raghunath", "Arun", "" ] ]
There is growing interest in visual data management systems that support queries with specialized operations, ranging from resizing an image to running complex machine learning models. With a plethora of such operations, the basic need to receive query responses in minimal time takes a hit, especially when the client desires to run multiple such operations in a single query. Existing systems provide an ad hoc approach where different solutions are pieced together to provide an end-to-end visual data management system. Unlike such solutions, the Visual Data Management System (VDMS) natively executes queries with multiple operations, thus providing an end-to-end solution. However, a fixed subset of native operations and a synchronous threading architecture limit its generality and scalability. In this paper, we develop VDMS-Async, which adds the capability to run user-defined operations with VDMS and execute operations within a query on a remote server. VDMS-Async utilizes an event-driven architecture to create an efficient pipeline for executing operations within a query. Our experiments have shown that VDMS-Async reduces the query execution time by 2-3X compared to existing state-of-the-art systems. Further, remote operations coupled with an event-driven architecture enable VDMS-Async to scale query execution time linearly with the addition of every new remote server. We demonstrate a 64X reduction in query execution time when adding 64 remote servers.
2111.10882
Rishabh Garg
Rishabh Garg, Ruohan Gao, Kristen Grauman
Geometry-Aware Multi-Task Learning for Binaural Audio Generation from Video
Published in BMVC 2021, project page: http://vision.cs.utexas.edu/projects/geometry-aware-binaural/
null
null
null
cs.CV cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Binaural audio provides human listeners with an immersive spatial sound experience, but most existing videos lack binaural audio recordings. We propose an audio spatialization method that draws on visual information in videos to convert their monaural (single-channel) audio to binaural audio. Whereas existing approaches leverage visual features extracted directly from video frames, our approach explicitly disentangles the geometric cues present in the visual stream to guide the learning process. In particular, we develop a multi-task framework that learns geometry-aware features for binaural audio generation by accounting for the underlying room impulse response, the visual stream's coherence with the sound source(s) positions, and the consistency in geometry of the sounding objects over time. Furthermore, we introduce a new large video dataset with realistic binaural audio simulated for real-world scanned environments. On two datasets, we demonstrate the efficacy of our method, which achieves state-of-the-art results.
[ { "created": "Sun, 21 Nov 2021 19:26:45 GMT", "version": "v1" } ]
2021-11-23
[ [ "Garg", "Rishabh", "" ], [ "Gao", "Ruohan", "" ], [ "Grauman", "Kristen", "" ] ]
Binaural audio provides human listeners with an immersive spatial sound experience, but most existing videos lack binaural audio recordings. We propose an audio spatialization method that draws on visual information in videos to convert their monaural (single-channel) audio to binaural audio. Whereas existing approaches leverage visual features extracted directly from video frames, our approach explicitly disentangles the geometric cues present in the visual stream to guide the learning process. In particular, we develop a multi-task framework that learns geometry-aware features for binaural audio generation by accounting for the underlying room impulse response, the visual stream's coherence with the sound source(s) positions, and the consistency in geometry of the sounding objects over time. Furthermore, we introduce a new large video dataset with realistic binaural audio simulated for real-world scanned environments. On two datasets, we demonstrate the efficacy of our method, which achieves state-of-the-art results.
1410.5387
M\'aria Svore\v{n}ov\'a
Maria Svorenova, Jan Kretinsky, Martin Chmelik, Krishnendu Chatterjee, Ivana Cerna, Calin Belta
Temporal Logic Control for Stochastic Linear Systems using Abstraction Refinement of Probabilistic Games
Technical report accompanying HSCC'15 paper
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of computing the set of initial states of a dynamical system such that there exists a control strategy to ensure that the trajectories satisfy a temporal logic specification with probability 1 (almost-surely). We focus on discrete-time, stochastic linear dynamics and specifications given as formulas of the Generalized Reactivity(1) fragment of Linear Temporal Logic over linear predicates in the states of the system. We propose a solution based on iterative abstraction-refinement, and turn-based 2-player probabilistic games. While the theoretical guarantee of our algorithm after any finite number of iterations is only a partial solution, we show that if our algorithm terminates, then the result is the set of satisfying initial states. Moreover, for any (partial) solution our algorithm synthesizes witness control strategies to ensure almost-sure satisfaction of the temporal logic specification. We demonstrate our approach on an illustrative case study.
[ { "created": "Mon, 20 Oct 2014 18:45:55 GMT", "version": "v1" }, { "created": "Wed, 22 Oct 2014 12:01:39 GMT", "version": "v2" }, { "created": "Mon, 23 Feb 2015 11:54:41 GMT", "version": "v3" } ]
2015-02-24
[ [ "Svorenova", "Maria", "" ], [ "Kretinsky", "Jan", "" ], [ "Chmelik", "Martin", "" ], [ "Chatterjee", "Krishnendu", "" ], [ "Cerna", "Ivana", "" ], [ "Belta", "Calin", "" ] ]
We consider the problem of computing the set of initial states of a dynamical system such that there exists a control strategy to ensure that the trajectories satisfy a temporal logic specification with probability 1 (almost-surely). We focus on discrete-time, stochastic linear dynamics and specifications given as formulas of the Generalized Reactivity(1) fragment of Linear Temporal Logic over linear predicates in the states of the system. We propose a solution based on iterative abstraction-refinement, and turn-based 2-player probabilistic games. While the theoretical guarantee of our algorithm after any finite number of iterations is only a partial solution, we show that if our algorithm terminates, then the result is the set of satisfying initial states. Moreover, for any (partial) solution our algorithm synthesizes witness control strategies to ensure almost-sure satisfaction of the temporal logic specification. We demonstrate our approach on an illustrative case study.
1610.01175
Guangwu Xu
Guangwu Xu, Bao Li
On the Algorithmic Significance and Analysis of the Method of DaYan Deriving One
9 Pages, in Chinese
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The modular inverse is an important arithmetic operation. Many famous algorithms in public-key cryptography require computing modular inverses. It is argued that the method of DaYan deriving one of Jiushao Qin provides the most concise and transparent way of computing the modular inverse. Based on the rule of taking the least positive remainder in division, this paper presents a more precise algorithmic description of the method of DaYan deriving one to reflect Qin's original idea. Our form of the algorithm is straightforward and different from the ones in the literature. Some additional information can be revealed easily from the process of DaYan deriving one, e.g., the invariance property of the permanent of the state and a natural connection to continued fractions. A comparison of Qin's algorithm with the modern form of the extended Euclidean algorithm is also given. Since DaYan deriving one is the key technical ingredient of Jiushao Qin's DaYan aggregation method (aka the Chinese Remainder Theorem), we include some explanation of the latter as well.
[ { "created": "Tue, 4 Oct 2016 20:04:12 GMT", "version": "v1" }, { "created": "Thu, 31 Aug 2017 19:50:59 GMT", "version": "v2" } ]
2017-09-04
[ [ "Xu", "Guangwu", "" ], [ "Li", "Bao", "" ] ]
The modular inverse is an important arithmetic operation. Many famous algorithms in public-key cryptography require computing modular inverses. It is argued that the method of DaYan deriving one of Jiushao Qin provides the most concise and transparent way of computing the modular inverse. Based on the rule of taking the least positive remainder in division, this paper presents a more precise algorithmic description of the method of DaYan deriving one to reflect Qin's original idea. Our form of the algorithm is straightforward and different from the ones in the literature. Some additional information can be revealed easily from the process of DaYan deriving one, e.g., the invariance property of the permanent of the state and a natural connection to continued fractions. A comparison of Qin's algorithm with the modern form of the extended Euclidean algorithm is also given. Since DaYan deriving one is the key technical ingredient of Jiushao Qin's DaYan aggregation method (aka the Chinese Remainder Theorem), we include some explanation of the latter as well.
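For comparison, the modern extended Euclidean algorithm mentioned in the abstract computes the same modular inverse; this is a plain sketch of that algorithm, not a transcription of Qin's least-positive-remainder formulation.

```python
# Extended Euclidean algorithm, tracking only the coefficient of a.
def mod_inverse(a: int, m: int) -> int:
    """Return x with (a * x) % m == 1, assuming gcd(a, m) == 1."""
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo m")
    return old_s % m

print(mod_inverse(7, 40))  # 23, since 7 * 23 = 161 = 4 * 40 + 1
```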
2211.09334
Koki Kizawa
Koki Kizawa, Ryoichi Shinkuma, Gabriele Trovato
Estimation of physical activities of people in offices from time-series point-cloud data
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes an edge computing system that estimates the physical activities of people in offices from time-series point-cloud data obtained with a light-detection-and-ranging (LIDAR) sensor network. The paper shows that the proposed system successfully constructs a model for estimating the number of typed characters from time-series point-cloud data, through an experiment using real LIDAR sensors.
[ { "created": "Thu, 17 Nov 2022 04:49:51 GMT", "version": "v1" } ]
2022-11-18
[ [ "Kizawa", "Koki", "" ], [ "Shinkuma", "Ryoichi", "" ], [ "Trovato", "Gabriele", "" ] ]
This paper proposes an edge computing system that estimates the physical activities of people in offices from time-series point-cloud data obtained with a light-detection-and-ranging (LIDAR) sensor network. The paper shows that the proposed system successfully constructs a model for estimating the number of typed characters from time-series point-cloud data, through an experiment using real LIDAR sensors.
2304.14791
Naif Mehanna
Naif Mehanna (CRIStAL, CNRS, SPIRALS), Walter Rudametkin (UR, IUF, CNRS, IRISA, DiverSe)
Caught in the Game: On the History and Evolution of Web Browser Gaming
null
TheWebConference 2023, Apr 2023, Austin (TX), United States
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Web browsers have come a long way since their inception, evolving from a simple means of displaying text documents over the network to complex software stacks with advanced graphics and network capabilities. As personal computers grew in popularity, developers jumped at the opportunity to deploy cross-platform games with centralized management and a low barrier to entry. Simply going to the right address is now enough to start a game. From text-based to GPU-powered 3D games, browser gaming has evolved to become a strong alternative to traditional console and mobile-based gaming, targeting both casual and advanced gamers. Browser technology has also evolved to accommodate more demanding applications, sometimes even supplanting functions typically left to the operating system. Today, websites display rich, computationally intensive, hardware-accelerated graphics, allowing developers to build ever-more impressive applications and games. In this paper, we present the evolution of browser gaming and the technologies that enabled it, from the release of the first text-based games in the early 1990s to current open-world and game-engine-powered browser games. We discuss the societal impact of browser gaming and how it has allowed a new target audience to access digital gaming. Finally, we review the potential future evolution of the browser gaming industry.
[ { "created": "Fri, 28 Apr 2023 12:02:16 GMT", "version": "v1" } ]
2023-05-01
[ [ "Mehanna", "Naif", "", "CRIStAL, CNRS, SPIRALS" ], [ "Rudametkin", "Walter", "", "UR, IUF,\n CNRS, IRISA, DiverSe" ] ]
Web browsers have come a long way since their inception, evolving from a simple means of displaying text documents over the network to complex software stacks with advanced graphics and network capabilities. As personal computers grew in popularity, developers jumped at the opportunity to deploy cross-platform games with centralized management and a low barrier to entry. Simply going to the right address is now enough to start a game. From text-based to GPU-powered 3D games, browser gaming has evolved to become a strong alternative to traditional console and mobile-based gaming, targeting both casual and advanced gamers. Browser technology has also evolved to accommodate more demanding applications, sometimes even supplanting functions typically left to the operating system. Today, websites display rich, computationally intensive, hardware-accelerated graphics, allowing developers to build ever-more impressive applications and games. In this paper, we present the evolution of browser gaming and the technologies that enabled it, from the release of the first text-based games in the early 1990s to current open-world and game-engine-powered browser games. We discuss the societal impact of browser gaming and how it has allowed a new target audience to access digital gaming. Finally, we review the potential future evolution of the browser gaming industry.
2102.02502
Sebastian Bullinger
Sebastian Bullinger, Christoph Bodensteiner, Michael Arens
3D Surface Reconstruction From Multi-Date Satellite Images
Accepted at ISPRS Congress 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The reconstruction of accurate three-dimensional environment models is one of the most fundamental goals in the field of photogrammetry. Since satellite images provide suitable properties for obtaining large-scale environment reconstructions, there exist a variety of Stereo Matching based methods to reconstruct point clouds for satellite image pairs. Recently, the first Structure from Motion (SfM) based approach has been proposed, which allows reconstructing point clouds from multiple satellite images. In this work, we propose an extension of this SfM based pipeline that allows us to reconstruct not only point clouds but also watertight meshes including texture information. We provide a detailed description of several steps that are mandatory to exploit state-of-the-art mesh reconstruction algorithms in the context of satellite imagery. This includes a decomposition of finite projective camera calibration matrices, a skew correction of corresponding depth maps and input images, as well as the recovery of real-world depth maps from reparameterized depth values. The paper presents an extensive quantitative evaluation on multi-date satellite images demonstrating that the proposed pipeline combined with current meshing algorithms outperforms state-of-the-art point cloud reconstruction algorithms in terms of completeness and median error. We make the source code of our pipeline publicly available.
[ { "created": "Thu, 4 Feb 2021 09:23:21 GMT", "version": "v1" }, { "created": "Sat, 3 Apr 2021 12:50:05 GMT", "version": "v2" } ]
2021-04-06
[ [ "Bullinger", "Sebastian", "" ], [ "Bodensteiner", "Christoph", "" ], [ "Arens", "Michael", "" ] ]
The reconstruction of accurate three-dimensional environment models is one of the most fundamental goals in the field of photogrammetry. Since satellite images provide suitable properties for obtaining large-scale environment reconstructions, there exist a variety of Stereo Matching based methods to reconstruct point clouds for satellite image pairs. Recently, the first Structure from Motion (SfM) based approach has been proposed, which allows reconstructing point clouds from multiple satellite images. In this work, we propose an extension of this SfM based pipeline that allows us to reconstruct not only point clouds but also watertight meshes including texture information. We provide a detailed description of several steps that are mandatory to exploit state-of-the-art mesh reconstruction algorithms in the context of satellite imagery. This includes a decomposition of finite projective camera calibration matrices, a skew correction of corresponding depth maps and input images, as well as the recovery of real-world depth maps from reparameterized depth values. The paper presents an extensive quantitative evaluation on multi-date satellite images demonstrating that the proposed pipeline combined with current meshing algorithms outperforms state-of-the-art point cloud reconstruction algorithms in terms of completeness and median error. We make the source code of our pipeline publicly available.
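One step listed above, the decomposition of a finite projective camera matrix P ~ K[R|t], can be sketched with an RQ decomposition; the synthetic camera below is an assumption for the demonstration, and the paper's satellite-specific steps (skew correction, depth reparameterization) are not shown.

```python
# Decompose a 3x4 camera matrix P ~ K [R | t] via RQ decomposition.
import numpy as np
from scipy.linalg import rq

def decompose_camera(P):
    M = P[:, :3]
    K, R = rq(M)                      # M = K @ R, K upper triangular
    D = np.diag(np.sign(np.diag(K)))  # fix signs so K has a positive diagonal
    K, R = K @ D, D @ R               # D @ D = I keeps the product unchanged
    t = np.linalg.solve(K, P[:, 3])   # since P[:, 3] = K @ t
    # (a full implementation would also enforce det(R) = +1)
    return K / K[2, 2], R, t

# synthetic check: rebuild P from a known K, R, t and recover them
K0 = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R0, t0 = np.eye(3), np.array([0.1, -0.2, 2.0])
P = K0 @ np.hstack([R0, t0[:, None]])
K, R, t = decompose_camera(P)
print(np.allclose(K, K0), np.allclose(R, R0), np.allclose(t, t0))  # True x3
```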
1906.01602
Sarabjot Singh
Sarabjot Singh
On Provisioning Cellular Networks for Distributed Inference
null
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless traffic attributable to machine learning (ML) inference workloads is increasing with the proliferation of applications and smart wireless devices leveraging ML inference. Owing to limited compute capabilities at these "edge" devices, achieving high inference accuracy often requires coordination with a remote compute node or "cloud" over the wireless cellular network. The accuracy of this distributed inference is, thus, impacted by the communication rate and reliability offered by the cellular network. In this paper, an analytical framework is proposed to characterize inference accuracy as a function of cellular network design. Using the developed framework, it is shown that the cellular network should be provisioned with a minimum density of access points (APs) to guarantee a target inference accuracy, and that the inference accuracy achievable at asymptotically high AP density is limited by the air-interface bandwidth. Furthermore, the minimum accuracy required of edge inference to deliver a target inference accuracy is shown to be inversely proportional to the density of APs and the bandwidth.
[ { "created": "Tue, 4 Jun 2019 17:31:13 GMT", "version": "v1" } ]
2019-06-05
[ [ "Singh", "Sarabjot", "" ] ]
Wireless traffic attributable to machine learning (ML) inference workloads is increasing with the proliferation of applications and smart wireless devices leveraging ML inference. Owing to limited compute capabilities at these "edge" devices, achieving high inference accuracy often requires coordination with a remote compute node or "cloud" over the wireless cellular network. The accuracy of this distributed inference is, thus, impacted by the communication rate and reliability offered by the cellular network. In this paper, an analytical framework is proposed to characterize inference accuracy as a function of cellular network design. Using the developed framework, it is shown that the cellular network should be provisioned with a minimum density of access points (APs) to guarantee a target inference accuracy, and that the inference accuracy achievable at asymptotically high AP density is limited by the air-interface bandwidth. Furthermore, the minimum accuracy required of edge inference to deliver a target inference accuracy is shown to be inversely proportional to the density of APs and the bandwidth.
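As a toy numeric illustration only: assuming a functional form consistent with the abstract's conclusions (an accuracy shortfall inversely proportional to AP density times bandwidth, with made-up constants), the provisioning rule can be inverted as follows; the paper derives the actual expressions from an analytical model.

```python
# Toy provisioning example under an ASSUMED accuracy model; all constants
# (a_ceiling, c) and units are made up for illustration.
def achievable_accuracy(lam, W, a_ceiling=0.95, c=2.0):
    # assumed: accuracy shortfall inversely proportional to lam * W
    return a_ceiling - c / (lam * W)

def min_ap_density(a_target, W, a_ceiling=0.95, c=2.0):
    # invert the assumed model for the smallest density meeting the target
    if a_target >= a_ceiling:
        raise ValueError("target exceeds the bandwidth-limited ceiling")
    return c / ((a_ceiling - a_target) * W)

W = 20.0                                  # air-interface bandwidth (toy units)
lam = min_ap_density(a_target=0.90, W=W)  # APs per unit area (toy units)
print(lam, achievable_accuracy(lam, W))   # 2.0 0.90
```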
1909.00333
Hangfeng He
Hangfeng He, Qiang Ning, Dan Roth
QuASE: Question-Answer Driven Sentence Encoding
null
null
null
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Question-answering (QA) data often encodes essential information in many facets. This paper studies a natural question: Can we get supervision from QA data for other tasks (typically, non-QA ones)? For example, {\em can we use QAMR (Michael et al., 2017) to improve named entity recognition?} We suggest that simply further pre-training BERT is often not the best option, and propose the {\em question-answer driven sentence encoding (QuASE)} framework. QuASE learns representations from QA data, using BERT or other state-of-the-art contextual language models. In particular, we observe the need to distinguish between two types of sentence encodings, depending on whether the target task is a single- or multi-sentence input; in both cases, the resulting encoding is shown to be an easy-to-use plugin for many downstream tasks. This work may point out an alternative way to supervise NLP tasks.
[ { "created": "Sun, 1 Sep 2019 06:30:57 GMT", "version": "v1" }, { "created": "Mon, 4 May 2020 15:40:12 GMT", "version": "v2" }, { "created": "Thu, 3 Dec 2020 21:12:24 GMT", "version": "v3" } ]
2020-12-07
[ [ "He", "Hangfeng", "" ], [ "Ning", "Qiang", "" ], [ "Roth", "Dan", "" ] ]
Question-answering (QA) data often encodes essential information in many facets. This paper studies a natural question: Can we get supervision from QA data for other tasks (typically, non-QA ones)? For example, {\em can we use QAMR (Michael et al., 2017) to improve named entity recognition?} We suggest that simply further pre-training BERT is often not the best option, and propose the {\em question-answer driven sentence encoding (QuASE)} framework. QuASE learns representations from QA data, using BERT or other state-of-the-art contextual language models. In particular, we observe the need to distinguish between two types of sentence encodings, depending on whether the target task is a single- or multi-sentence input; in both cases, the resulting encoding is shown to be an easy-to-use plugin for many downstream tasks. This work may point out an alternative way to supervise NLP tasks.
1711.05073
Wei He
Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, Haifeng Wang
DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications
10 pages, ACL 2018 MRQA Workshop camera-ready version
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces DuReader, a new large-scale, open-domain Chinese machine reading comprehension (MRC) dataset, designed to address real-world MRC. DuReader has three advantages over previous MRC datasets: (1) data sources: questions and documents are based on Baidu Search and Baidu Zhidao; answers are manually generated. (2) question types: it provides rich annotations for more question types, especially yes-no and opinion questions, which leaves more opportunity for the research community. (3) scale: it contains 200K questions, 420K answers and 1M documents; it is the largest Chinese MRC dataset so far. Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements. To help the community make these improvements, both DuReader and baseline systems have been posted online. We also organize a shared competition to encourage the exploration of more models. Since the release of the task, there have been significant improvements over the baselines.
[ { "created": "Tue, 14 Nov 2017 12:13:44 GMT", "version": "v1" }, { "created": "Wed, 15 Nov 2017 11:45:41 GMT", "version": "v2" }, { "created": "Wed, 23 May 2018 12:07:19 GMT", "version": "v3" }, { "created": "Mon, 11 Jun 2018 03:26:30 GMT", "version": "v4" } ]
2018-06-12
[ [ "He", "Wei", "" ], [ "Liu", "Kai", "" ], [ "Liu", "Jing", "" ], [ "Lyu", "Yajuan", "" ], [ "Zhao", "Shiqi", "" ], [ "Xiao", "Xinyan", "" ], [ "Liu", "Yuan", "" ], [ "Wang", "Yizhong", "" ], [ "Wu", "Hua", "" ], [ "She", "Qiaoqiao", "" ], [ "Liu", "Xuan", "" ], [ "Wu", "Tian", "" ], [ "Wang", "Haifeng", "" ] ]
This paper introduces DuReader, a new large-scale, open-domain Chinese machine reading comprehension (MRC) dataset, designed to address real-world MRC. DuReader has three advantages over previous MRC datasets: (1) data sources: questions and documents are based on Baidu Search and Baidu Zhidao; answers are manually generated. (2) question types: it provides rich annotations for more question types, especially yes-no and opinion questions, which leaves more opportunity for the research community. (3) scale: it contains 200K questions, 420K answers and 1M documents; it is the largest Chinese MRC dataset so far. Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements. To help the community make these improvements, both DuReader and baseline systems have been posted online. We also organize a shared competition to encourage the exploration of more models. Since the release of the task, there have been significant improvements over the baselines.
2203.16796
Nikhil Tripathi
Nikhil Tripathi
Delays have Dangerous Ends: Slow HTTP/2 DoS attacks into the Wild and their Real-Time Detection using Event Sequence Analysis
11 pages, 8 figures
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The robustness principle, written by Jon Postel in an early TCP specification, states that communicating entities should be liberal in the data they accept. Several entities on the Internet do follow this principle. For instance, in this work, we show that many popular web servers on the Internet are generous, as they wait for a substantial time period to receive the remaining portion of an incomplete web request. Unfortunately, this behavior also makes them vulnerable to a class of cyber attacks commonly known as Slow Rate DoS attacks. HTTP/2, the recent version of HTTP, was recently found to be vulnerable to these attacks. However, the impact of Slow HTTP/2 DoS attacks on real web servers on the Internet has not been studied yet. Also, to the best of our knowledge, there is no defense scheme known to detect Slow Rate DoS attacks against HTTP/2 in real time. To bridge these gaps, we first test the behavior of HTTP/2-supporting web servers on the Internet against Slow HTTP/2 DoS attacks. Subsequently, we propose a scheme to detect these attacks in real time. We show that the proposed detection scheme can detect attacks in real time with high accuracy and marginal computational overhead.
[ { "created": "Thu, 31 Mar 2022 04:53:35 GMT", "version": "v1" } ]
2022-04-01
[ [ "Tripathi", "Nikhil", "" ] ]
The robustness principle, written by Jon Postel in an early TCP specification, states that communicating entities should be liberal in the data they accept. Several entities on the Internet do follow this principle. For instance, in this work, we show that many popular web servers on the Internet are generous, as they wait for a substantial time period to receive the remaining portion of an incomplete web request. Unfortunately, this behavior also makes them vulnerable to a class of cyber attacks commonly known as Slow Rate DoS attacks. HTTP/2, the recent version of HTTP, was recently found to be vulnerable to these attacks. However, the impact of Slow HTTP/2 DoS attacks on real web servers on the Internet has not been studied yet. Also, to the best of our knowledge, there is no defense scheme known to detect Slow Rate DoS attacks against HTTP/2 in real time. To bridge these gaps, we first test the behavior of HTTP/2-supporting web servers on the Internet against Slow HTTP/2 DoS attacks. Subsequently, we propose a scheme to detect these attacks in real time. We show that the proposed detection scheme can detect attacks in real time with high accuracy and marginal computational overhead.
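A hedged sketch of the detection idea, not the paper's scheme: per-connection event-sequence monitoring that flags clients trickling bytes to keep a request open; the thresholds and event representation are illustrative assumptions.

```python
# Flag connections that keep a request pending by sending frames too slowly.
import time
from collections import defaultdict

MAX_IDLE_GAP = 5.0      # seconds between frames before a connection is suspect
MAX_PENDING_AGE = 30.0  # seconds an unfinished request may stay open

last_event = {}         # conn_id -> timestamp of the last received frame
started = {}            # conn_id -> timestamp the pending request began
suspects = defaultdict(int)

def on_frame(conn_id, now=None, request_complete=False):
    """Feed one received frame/event; return True if the connection looks slow."""
    now = time.monotonic() if now is None else now
    started.setdefault(conn_id, now)
    gap = now - last_event.get(conn_id, now)
    last_event[conn_id] = now
    if request_complete:
        started.pop(conn_id, None)
        last_event.pop(conn_id, None)
        return False
    if gap > MAX_IDLE_GAP or now - started[conn_id] > MAX_PENDING_AGE:
        suspects[conn_id] += 1
        return True
    return False

# simulated trickle: one tiny frame every 8 seconds, never completing a request
for t in (0.0, 8.0, 16.0):
    print(on_frame("attacker", now=t))  # False, True, True
```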
2203.04305
Jiang Lianlian
Lianlian Jiang, Yuexuan Wang, Wenyi Zheng, Chao Jin, Zengxiang Li, Sin G. Teo
LSTMSPLIT: Effective SPLIT Learning based LSTM on Sequential Time-Series Data
null
null
null
null
cs.LG cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) and split learning (SL) are two popular distributed machine learning (ML) approaches that provide some data privacy protection mechanisms. In the time-series classification problem, many researchers typically use 1D convolutional neural networks (1DCNNs) based on the SL approach with a single client to reduce the computational overhead at the client side while still preserving data privacy. Another method, the recurrent neural network (RNN), is utilized on sequentially partitioned data, where segments of multiple-segment sequential data are distributed across various clients. However, to the best of our knowledge, little work has been done on SL with long short-term memory (LSTM) networks, even though LSTM networks are practically effective in processing time-series data. In this work, we propose a new approach, LSTMSPLIT, that uses an SL architecture with an LSTM network to classify time-series data with multiple clients. Differential privacy (DP) is applied to mitigate data privacy leakage. The proposed method, LSTMSPLIT, has achieved better or comparable accuracy to the Split-1DCNN method on the electrocardiogram dataset and the human activity recognition dataset. Furthermore, the proposed method can also achieve good accuracy after applying differential privacy to preserve user privacy at the cut layer of LSTMSPLIT.
[ { "created": "Tue, 8 Mar 2022 11:44:12 GMT", "version": "v1" } ]
2022-03-10
[ [ "Jiang", "Lianlian", "" ], [ "Wang", "Yuexuan", "" ], [ "Zheng", "Wenyi", "" ], [ "Jin", "Chao", "" ], [ "Li", "Zengxiang", "" ], [ "Teo", "Sin G.", "" ] ]
Federated learning (FL) and split learning (SL) are two popular distributed machine learning (ML) approaches that provide some data privacy protection mechanisms. In the time-series classification problem, many researchers typically use 1D convolutional neural networks (1DCNNs) based on the SL approach with a single client to reduce the computational overhead at the client side while still preserving data privacy. Another method, the recurrent neural network (RNN), is utilized on sequentially partitioned data, where segments of multiple-segment sequential data are distributed across various clients. However, to the best of our knowledge, little work has been done on SL with long short-term memory (LSTM) networks, even though LSTM networks are practically effective in processing time-series data. In this work, we propose a new approach, LSTMSPLIT, that uses an SL architecture with an LSTM network to classify time-series data with multiple clients. Differential privacy (DP) is applied to mitigate data privacy leakage. The proposed method, LSTMSPLIT, has achieved better or comparable accuracy to the Split-1DCNN method on the electrocardiogram dataset and the human activity recognition dataset. Furthermore, the proposed method can also achieve good accuracy after applying differential privacy to preserve user privacy at the cut layer of LSTMSPLIT.
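A minimal PyTorch sketch of the split layout described above: the client runs an LSTM up to the cut layer, Gaussian noise stands in for the DP mechanism, and the server holds the classifier head. Sizes and the noise scale are assumptions (real DP would also require clipping and calibrated noise).

```python
# Split learning with an LSTM: client-side encoder, server-side classifier.
import torch
import torch.nn as nn

class ClientLSTM(nn.Module):
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
    def forward(self, x):
        _, (h_n, _) = self.lstm(x)  # final hidden state = cut-layer output
        return h_n[-1]

class ServerHead(nn.Module):
    def __init__(self, hidden=64, n_classes=5):
        super().__init__()
        self.fc = nn.Linear(hidden, n_classes)
    def forward(self, smashed):
        return self.fc(smashed)

client, server = ClientLSTM(), ServerHead()
x = torch.randn(16, 128, 1)                          # batch of univariate series
smashed = client(x)                                  # sent to the server
smashed = smashed + 0.1 * torch.randn_like(smashed)  # DP-style Gaussian noise
logits = server(smashed)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 5, (16,)))
loss.backward()                                      # gradients flow back to the client
print(logits.shape)                                  # torch.Size([16, 5])
```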
1308.1779
Christoph Lange
Marco B. Caminati, Manfred Kerber, Christoph Lange, Colin Rowat
Proving soundness of combinatorial Vickrey auctions and generating verified executable code
null
null
null
null
cs.GT cs.CE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using mechanised reasoning we prove that combinatorial Vickrey auctions are soundly specified in that they associate a unique outcome (allocation and transfers) to any valid input (bids). Having done so, we auto-generate verified executable code from the formally defined auction. This removes a source of error in implementing the auction design. We intend to use formal methods to verify new auction designs. Here, our contribution is to introduce and demonstrate the use of formal methods for auction verification in the familiar setting of a well-known auction.
[ { "created": "Thu, 8 Aug 2013 08:00:55 GMT", "version": "v1" }, { "created": "Mon, 2 Sep 2013 10:47:25 GMT", "version": "v2" } ]
2013-09-03
[ [ "Caminati", "Marco B.", "" ], [ "Kerber", "Manfred", "" ], [ "Lange", "Christoph", "" ], [ "Rowat", "Colin", "" ] ]
Using mechanised reasoning we prove that combinatorial Vickrey auctions are soundly specified in that they associate a unique outcome (allocation and transfers) to any valid input (bids). Having done so, we auto-generate verified executable code from the formally defined auction. This removes a source of error in implementing the auction design. We intend to use formal methods to verify new auction designs. Here, our contribution is to introduce and demonstrate the use of formal methods for auction verification in the familiar setting of a well-known auction.
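The paper's contribution is the mechanised proof and the generated verified code; as an informal sketch of the combinatorial Vickrey (VCG) outcome it reasons about, a brute-force Python version with made-up bids follows (it ignores ties, which the formal development handles).

```python
# Brute-force combinatorial Vickrey auction: welfare-maximizing allocation
# plus externality payments. Two items, two bidders, made-up bundle bids.
from itertools import product

items = ("A", "B")
bidders = ("1", "2")
bids = {
    "1": {(): 0, ("A",): 10, ("B",): 3, ("A", "B"): 12},
    "2": {(): 0, ("A",): 6, ("B",): 8, ("A", "B"): 13},
}

def best_welfare(active):
    """Exhaustively find the welfare-maximizing assignment of items to `active`."""
    best, best_alloc = -1, None
    for owners in product(active, repeat=len(items)):
        alloc = {b: tuple(i for i, o in zip(items, owners) if o == b)
                 for b in active}
        w = sum(bids[b][alloc[b]] for b in active)
        if w > best:
            best, best_alloc = w, alloc
    return best, best_alloc

w_star, alloc_star = best_welfare(bidders)
for b in bidders:
    others = tuple(x for x in bidders if x != b)
    w_without_b, _ = best_welfare(others)
    payment = w_without_b - (w_star - bids[b][alloc_star[b]])  # VCG externality
    print(f"bidder {b} wins {alloc_star[b]} and pays {payment}")
# bidder 1 wins ('A',) and pays 5; bidder 2 wins ('B',) and pays 2
```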
1806.10756
Bin Li
Chaoqiong Fan, Bin Li, Jia Hou, Yi Wu, Weisi Guo, Chenglin Zhao
Robust Fuzzy-Learning For Partially Overlapping Channels Allocation In UAV Communication Networks
null
null
null
null
cs.NI cs.GT cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper, we consider mesh-structured unmanned aerial vehicle (UAV) networks exploiting partially overlapping channels (POCs). For general data-collection tasks in UAV networks, we aim to optimize the network throughput with constraints on transmission power and quality of service (QoS). Unfortunately, for highly mobile and constantly changing UAV networks, most existing methods rely on precise information that is vulnerable to the dynamic environment, rendering system performance less effective. In order to combat the dynamic topology and varying interference of UAV networks, a robust and distributed learning scheme is proposed. Rather than assuming perfect channel state information (CSI), we introduce uncertainties to characterize the dynamic channel gains among UAV nodes, which are then interpreted with fuzzy numbers. Instead of the traditional observation space where the channel capacity is a crisp reward, we implement the learning and decision process in a mapped fuzzy space. This allows the system to achieve smoother and more robust performance by optimizing in an alternate space. To this end, we design a fuzzy payoffs function (FPF) to describe the fluctuating utility, and the problem of POC assignment is formulated as a fuzzy payoffs game (FPG). Assisted by an attractive property of fuzzy bi-matrix games, the existence of a fuzzy Nash equilibrium (FNE) for our formulated FPG is proved. Our robust fuzzy-learning algorithm can reach the equilibrium solution via a least-deviation method. Finally, numerical simulations are provided to demonstrate the advantages of our new scheme over the existing one.
[ { "created": "Thu, 28 Jun 2018 03:35:09 GMT", "version": "v1" } ]
2018-06-29
[ [ "Fan", "Chaoqiong", "" ], [ "Li", "Bin", "" ], [ "Hou", "Jia", "" ], [ "Wu", "Yi", "" ], [ "Guo", "Weisi", "" ], [ "Zhao", "Chenglin", "" ] ]
In this paper, we consider mesh-structured unmanned aerial vehicle (UAV) networks exploiting partially overlapping channels (POCs). For general data-collection tasks in UAV networks, we aim to optimize the network throughput under constraints on transmission power and quality of service (QoS). For highly mobile and constantly changing UAV networks, unfortunately, most existing methods rely on definite information that is vulnerable to the dynamic environment, rendering system performance less effective. To combat the dynamic topology and varying interference of UAV networks, a robust and distributed learning scheme is proposed. Rather than assuming perfect channel state information (CSI), we introduce uncertainties to characterize the dynamic channel gains among UAV nodes, which are then interpreted as fuzzy numbers. Instead of the traditional observation space, where the channel capacity is a crisp reward, we implement the learning and decision process in a mapped fuzzy space. This allows the system to achieve smoother and more robust performance by optimizing in an alternate space. To this end, we design a fuzzy payoff function (FPF) to describe the fluctuating utility, and the problem of POC assignment is formulated as a fuzzy payoff game (FPG). Assisted by an attractive property of fuzzy bi-matrix games, the existence of a fuzzy Nash equilibrium (FNE) for our formulated FPG is proved. Our robust fuzzy-learning algorithm can reach the equilibrium solution via a least-deviation method. Finally, numerical simulations are provided to demonstrate the advantages of our new scheme over the existing one.
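As a concrete illustration of the fuzzy-number machinery described above, the sketch below represents an uncertain channel gain as a triangular fuzzy number and defuzzifies it (here by the centroid) so candidate channels can be ranked by a crisp payoff. Everything here, including the Shannon-style utility and the parameter values, is an illustrative assumption rather than the paper's FPG formulation.

```python
# Hedged sketch: triangular fuzzy channel gains, ranked by a defuzzified
# (centroid) capacity-like payoff. Illustrative only, not the FPG algorithm.
import math
from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    lo: float    # smallest plausible gain
    mode: float  # most plausible gain
    hi: float    # largest plausible gain

    def centroid(self) -> float:
        # Centroid of a triangular membership function.
        return (self.lo + self.mode + self.hi) / 3.0

def fuzzy_payoff(gain: TriangularFuzzy, power: float, interference: float) -> float:
    # Defuzzify the gain, then score the channel with a capacity-like utility.
    return math.log2(1.0 + power * gain.centroid() / (1.0 + interference))

# Rank two candidate channels under uncertainty.
c1 = TriangularFuzzy(0.2, 0.5, 0.9)
c2 = TriangularFuzzy(0.4, 0.45, 0.5)
best = max([c1, c2], key=lambda c: fuzzy_payoff(c, power=1.0, interference=0.3))
```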
2311.16473
Zhihao Liang
Zhihao Liang, Qi Zhang, Ying Feng, Ying Shan, Kui Jia
GS-IR: 3D Gaussian Splatting for Inverse Rendering
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS) that leverages forward mapping volume rendering to achieve photorealistic novel view synthesis and relighting results. Unlike previous works that use implicit neural representations and volume rendering (e.g. NeRF), which suffer from low expressive power and high computational complexity, we extend GS, a top-performing representation for novel view synthesis, to estimate scene geometry, surface material, and environment illumination from multi-view images captured under unknown lighting conditions. There are two main problems when introducing GS to inverse rendering: 1) GS does not natively support producing plausible normals; 2) forward mapping (e.g. rasterization and splatting) cannot trace occlusion the way backward mapping (e.g. ray tracing) can. To address these challenges, our GS-IR proposes an efficient optimization scheme that incorporates a depth-derivation-based regularization for normal estimation and baking-based occlusion to model indirect lighting. The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction, photorealistic novel view synthesis, and effective physically-based rendering. We demonstrate the superiority of our method over baseline methods through qualitative and quantitative evaluations on various challenging scenes.
[ { "created": "Sun, 26 Nov 2023 02:35:09 GMT", "version": "v1" }, { "created": "Mon, 4 Dec 2023 10:35:53 GMT", "version": "v2" }, { "created": "Thu, 28 Mar 2024 05:47:24 GMT", "version": "v3" } ]
2024-03-29
[ [ "Liang", "Zhihao", "" ], [ "Zhang", "Qi", "" ], [ "Feng", "Ying", "" ], [ "Shan", "Ying", "" ], [ "Jia", "Kui", "" ] ]
We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS) that leverages forward mapping volume rendering to achieve photorealistic novel view synthesis and relighting results. Unlike previous works that use implicit neural representations and volume rendering (e.g. NeRF), which suffer from low expressive power and high computational complexity, we extend GS, a top-performing representation for novel view synthesis, to estimate scene geometry, surface material, and environment illumination from multi-view images captured under unknown lighting conditions. There are two main problems when introducing GS to inverse rendering: 1) GS does not natively support producing plausible normals; 2) forward mapping (e.g. rasterization and splatting) cannot trace occlusion the way backward mapping (e.g. ray tracing) can. To address these challenges, our GS-IR proposes an efficient optimization scheme that incorporates a depth-derivation-based regularization for normal estimation and baking-based occlusion to model indirect lighting. The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction, photorealistic novel view synthesis, and effective physically-based rendering. We demonstrate the superiority of our method over baseline methods through qualitative and quantitative evaluations on various challenging scenes.
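One ingredient of GS-IR that is easy to make concrete is normal estimation from depth. The sketch below backprojects a depth map to camera-space points and takes finite differences to obtain per-pixel normals; the intrinsics-based backprojection and the function name are assumptions for illustration, not the paper's released code.

```python
# Minimal sketch of depth-derived normals: backproject pixels with pinhole
# intrinsics, then cross finite differences to approximate the tangent plane.
import numpy as np

def normals_from_depth(depth: np.ndarray, fx: float, fy: float,
                       cx: float, cy: float) -> np.ndarray:
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Backproject pixels to camera-space points.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1)
    # Finite differences along the image axes span the local surface.
    dx = np.gradient(pts, axis=1)
    dy = np.gradient(pts, axis=0)
    n = np.cross(dx, dy)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)
```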
1907.12850
Jongho Im
Jongho Im, Taikgun Song, Youngsu Lee, Jewoo Kim
Confirmatory Aspect-based Opinion Mining Processes
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
A new opinion extraction method is proposed to summarize unstructured, user-generated content (i.e., online customer reviews) in fixed topic domains. To differentiate the current approach from other opinion extraction approaches, which are often exposed to a sparsity problem and a lack of sentiment scores, a confirmatory aspect-based opinion mining framework is introduced along with its practical algorithm, called DiSSBUS. In this procedure, 1) each customer review is disintegrated into a set of clauses; 2) each clause is summarized to a bi-term (a topic word and an evaluation word) using a part-of-speech (POS) tagger; and 3) each bi-term is matched to a pre-specified topic relevant to a specific domain. The proposed processes have two primary advantages over existing methods: 1) they can decompose a single review into a set of bi-terms related to pre-specified topics in the domain of interest and, therefore, 2) allow identification of the reviewer's opinions on the topics via the evaluation words within the set of bi-terms. The proposed aspect-based opinion mining is applied to customer reviews of restaurants in Hawaii obtained from TripAdvisor, and the empirical findings validate the effectiveness of the method. Keywords: Clause-based sentiment analysis, Customer review, Opinion mining, Topic modeling, User-generated content.
[ { "created": "Tue, 30 Jul 2019 12:00:03 GMT", "version": "v1" } ]
2019-07-31
[ [ "Im", "Jongho", "" ], [ "Song", "Taikgun", "" ], [ "Lee", "Youngsu", "" ], [ "Kim", "Jewoo", "" ] ]
A new opinion extraction method is proposed to summarize unstructured, user-generated content (i.e., online customer reviews) in fixed topic domains. To differentiate the current approach from other opinion extraction approaches, which are often exposed to a sparsity problem and a lack of sentiment scores, a confirmatory aspect-based opinion mining framework is introduced along with its practical algorithm, called DiSSBUS. In this procedure, 1) each customer review is disintegrated into a set of clauses; 2) each clause is summarized to a bi-term (a topic word and an evaluation word) using a part-of-speech (POS) tagger; and 3) each bi-term is matched to a pre-specified topic relevant to a specific domain. The proposed processes have two primary advantages over existing methods: 1) they can decompose a single review into a set of bi-terms related to pre-specified topics in the domain of interest and, therefore, 2) allow identification of the reviewer's opinions on the topics via the evaluation words within the set of bi-terms. The proposed aspect-based opinion mining is applied to customer reviews of restaurants in Hawaii obtained from TripAdvisor, and the empirical findings validate the effectiveness of the method. Keywords: Clause-based sentiment analysis, Customer review, Opinion mining, Topic modeling, User-generated content.
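To make the three-step procedure tangible, here is a toy version of the clause-to-bi-term idea, with a hand-made lexicon standing in for a real POS tagger; the word lists, clause-splitting rule, and function names are all illustrative assumptions, not the DiSSBUS implementation.

```python
# Toy sketch of the pipeline: split a review into clauses, extract
# (topic word, evaluation word) bi-terms, and match against domain topics.
import re

TOPIC_NOUNS = {"food", "service", "price", "ambiance"}
EVAL_ADJS = {"great", "slow", "cheap", "cozy", "terrible"}

def clauses(review: str):
    # Clause boundaries approximated by punctuation and "but"/"and".
    return [c.strip() for c in re.split(r"[,.;!?]| but | and ", review.lower()) if c.strip()]

def biterms(review: str):
    out = []
    for clause in clauses(review):
        words = clause.split()
        topic = next((w for w in words if w in TOPIC_NOUNS), None)
        evalw = next((w for w in words if w in EVAL_ADJS), None)
        if topic and evalw:
            out.append((topic, evalw))
    return out

print(biterms("The food was great, but the service was slow."))
# [('food', 'great'), ('service', 'slow')]
```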
1105.0826
Radu Arsinte
Radu Arsinte, Eugen Lupu
Streaming Multimedia Information Using the Features of the DVB-S Card
4 pages, 5 figures
Scientific Bulletin of the "Politehnica" University Timi\c{s}oara, Transaction on Electronics and Telecomunications, Tom 51(65), Fascicola 1-2, pag. 181-184, 2006
null
null
cs.MM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a study of audio-video streaming using the additional possibilities of a DVB-S card. The board used for the experiments (Technisat SkyStar 2) is one of the most frequently used cards for this purpose. Using the main blocks of the board's software support, it is possible to implement a really useful and fully functional system for audio-video streaming. Streaming can be implemented either for the decoded MPEG stream or for the transport stream. In the latter case it is possible to view not only one program, but any program from the same multiplex. This allows us to implement
[ { "created": "Wed, 4 May 2011 13:46:31 GMT", "version": "v1" } ]
2011-05-05
[ [ "Arsinte", "Radu", "" ], [ "Lupu", "Eugen", "" ] ]
This paper presents a study of audio-video streaming using the additional possibilities of a DVB-S card. The board used for the experiments (Technisat SkyStar 2) is one of the most frequently used cards for this purpose. Using the main blocks of the board's software support, it is possible to implement a really useful and fully functional system for audio-video streaming. Streaming can be implemented either for the decoded MPEG stream or for the transport stream. In the latter case it is possible to view not only one program, but any program from the same multiplex. This allows us to implement
1312.5946
Kathrin Bujna
Johannes Bl\"omer and Kathrin Bujna
Adaptive Seeding for Gaussian Mixture Models
This is a preprint of a paper that has been accepted for publication in the Proceedings of the 20th Pacific Asia Conference on Knowledge Discovery and Data Mining (PAKDD) 2016. The final publication is available at link.springer.com (http://link.springer.com/chapter/10.1007/978-3-319-31750-2 24)
null
10.1007/978-3-319-31750-2_24
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present new initialization methods for the expectation-maximization algorithm for multivariate Gaussian mixture models. Our methods are adaptations of the well-known $K$-means++ initialization and the Gonzalez algorithm. Thereby we aim to close the gap between simple random methods, e.g. uniform sampling, and complex methods that crucially depend on the right choice of hyperparameters. Our extensive experiments indicate the usefulness of our methods compared to common techniques and methods which, e.g., apply the original $K$-means++ and Gonzalez directly, with respect to artificial as well as real-world data sets.
[ { "created": "Fri, 20 Dec 2013 14:08:48 GMT", "version": "v1" }, { "created": "Mon, 1 Aug 2016 08:33:13 GMT", "version": "v2" }, { "created": "Tue, 30 May 2017 07:44:37 GMT", "version": "v3" } ]
2017-05-31
[ [ "Blömer", "Johannes", "" ], [ "Bujna", "Kathrin", "" ] ]
We present new initialization methods for the expectation-maximization algorithm for multivariate Gaussian mixture models. Our methods are adaptations of the well-known $K$-means++ initialization and the Gonzalez algorithm. Thereby we aim to close the gap between simple random methods, e.g. uniform sampling, and complex methods that crucially depend on the right choice of hyperparameters. Our extensive experiments indicate the usefulness of our methods compared to common techniques and methods which, e.g., apply the original $K$-means++ and Gonzalez directly, with respect to artificial as well as real-world data sets.
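For readers unfamiliar with the seeding idea, the sketch below implements the plain $K$-means++ rule in NumPy and shows one convenient way to hand the resulting seeds to an EM implementation; it is a generic sketch of the underlying rule, not the authors' adapted variant.

```python
# Plain K-means++ seeding: each new seed is drawn with probability
# proportional to the squared distance to the nearest existing seed.
import numpy as np

def kmeanspp_seeds(X: np.ndarray, k: int, rng=np.random.default_rng(0)) -> np.ndarray:
    n = X.shape[0]
    seeds = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # Squared distance of every point to its nearest chosen seed.
        d2 = np.min(((X[:, None, :] - np.array(seeds)[None, :, :]) ** 2).sum(-1), axis=1)
        seeds.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.array(seeds)

# Usage with scikit-learn's EM implementation (illustrative):
# from sklearn.mixture import GaussianMixture
# gmm = GaussianMixture(n_components=k, means_init=kmeanspp_seeds(X, k)).fit(X)
```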
1008.5073
Everardo Barcenas
Everardo Barcenas (INRIA Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble), Pierre Geneves (INRIA Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble), Nabil Layaida (INRIA Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble), Alan Schmitt (INRIA Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble)
On the Count of Trees
null
null
null
RR-7251
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regular tree grammars and regular path expressions constitute core constructs widely used in programming languages and type systems. Nevertheless, there has been little research so far on frameworks for reasoning about path expressions where node cardinality constraints occur along a path in a tree. We present a logic capable of expressing deep counting along paths which may include arbitrary recursive forward and backward navigation. The counting extensions can be seen as a generalization of graded modalities that count immediate successor nodes. While the combination of graded modalities, nominals, and inverse modalities yields undecidable logics over graphs, we show that these features can be combined in a decidable tree logic whose main features can be decided in exponential time. Our logic being closed under negation, it may be used to decide typical problems on XPath queries such as satisfiability, type checking with relation to regular types, containment, or equivalence.
[ { "created": "Mon, 30 Aug 2010 12:58:17 GMT", "version": "v1" } ]
2010-08-31
[ [ "Barcenas", "Everardo", "", "INRIA Rhône-Alpes / LIG Laboratoire\n d'Informatique de Grenoble" ], [ "Geneves", "Pierre", "", "INRIA Rhône-Alpes / LIG\n Laboratoire d'Informatique de Grenoble" ], [ "Layaida", "Nabil", "", "INRIA Rhône-Alpes /\n LIG Laboratoire d'Informatique de Grenoble" ], [ "Schmitt", "Alan", "", "INRIA\n Rhône-Alpes / LIG Laboratoire d'Informatique de Grenoble" ] ]
Regular tree grammars and regular path expressions constitute core constructs widely used in programming languages and type systems. Nevertheless, there has been little research so far on frameworks for reasoning about path expressions where node cardinality constraints occur along a path in a tree. We present a logic capable of expressing deep counting along paths which may include arbitrary recursive forward and backward navigation. The counting extensions can be seen as a generalization of graded modalities that count immediate successor nodes. While the combination of graded modalities, nominals, and inverse modalities yields undecidable logics over graphs, we show that these features can be combined in a decidable tree logic whose main features can be decided in exponential time. Our logic being closed under negation, it may be used to decide typical problems on XPath queries such as satisfiability, type checking with relation to regular types, containment, or equivalence.
2305.11347
Elise Bishoff
Elise Bishoff, Charles Godfrey, Myles McKay, Eleanor Byler
Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
In overhead image segmentation tasks, including additional spectral bands beyond the traditional RGB channels can improve model performance. However, it is still unclear how incorporating this additional data impacts model robustness to adversarial attacks and natural perturbations. For adversarial robustness, the additional information could improve the model's ability to distinguish malicious inputs, or simply provide new attack avenues and vulnerabilities. For natural perturbations, the additional information could better inform model decisions and weaken perturbation effects, or have no significant influence at all. In this work, we seek to characterize the performance and robustness of a multispectral (RGB and near infrared) image segmentation model subjected to adversarial attacks and natural perturbations. While existing adversarial and natural robustness research has focused primarily on digital perturbations, we prioritize creating realistic perturbations designed with physical world conditions in mind. For adversarial robustness, we focus on data poisoning attacks, whereas for natural robustness, we focus on extending ImageNet-C common corruptions for fog and snow that coherently and self-consistently perturb the input data. Overall, we find both RGB and multispectral models are vulnerable to data poisoning attacks regardless of input or fusion architectures, and that while physically realizable natural perturbations still degrade model performance, the impact differs based on fusion architecture and input data.
[ { "created": "Thu, 18 May 2023 23:43:33 GMT", "version": "v1" } ]
2023-05-22
[ [ "Bishoff", "Elise", "" ], [ "Godfrey", "Charles", "" ], [ "McKay", "Myles", "" ], [ "Byler", "Eleanor", "" ] ]
In overhead image segmentation tasks, including additional spectral bands beyond the traditional RGB channels can improve model performance. However, it is still unclear how incorporating this additional data impacts model robustness to adversarial attacks and natural perturbations. For adversarial robustness, the additional information could improve the model's ability to distinguish malicious inputs, or simply provide new attack avenues and vulnerabilities. For natural perturbations, the additional information could better inform model decisions and weaken perturbation effects, or have no significant influence at all. In this work, we seek to characterize the performance and robustness of a multispectral (RGB and near infrared) image segmentation model subjected to adversarial attacks and natural perturbations. While existing adversarial and natural robustness research has focused primarily on digital perturbations, we prioritize creating realistic perturbations designed with physical world conditions in mind. For adversarial robustness, we focus on data poisoning attacks, whereas for natural robustness, we focus on extending ImageNet-C common corruptions for fog and snow that coherently and self-consistently perturb the input data. Overall, we find both RGB and multispectral models are vulnerable to data poisoning attacks regardless of input or fusion architectures, and that while physically realizable natural perturbations still degrade model performance, the impact differs based on fusion architecture and input data.
2311.17200
Steve Huntsman
Steve Huntsman
Greybox fuzzing time-intensive programs
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine (directed) greybox fuzzing from a geometrical perspective, viewing dissimilarities on inputs and on control flow graphs (with dynamical statistics) as primitive objects of interest. We prototype and evaluate GoExploreFuzz, a greybox fuzzer for time-intensive programs that incorporates this perspective. The results indicate useful capabilities for greybox fuzzing that have hitherto been underutilized, notably quantifying the diversity of paths and autonomously tuning the "bandwidth" of mutations.
[ { "created": "Tue, 28 Nov 2023 20:10:38 GMT", "version": "v1" } ]
2023-11-30
[ [ "Huntsman", "Steve", "" ] ]
We examine (directed) greybox fuzzing from a geometrical perspective, viewing dissimilarities on inputs and on control flow graphs (with dynamical statistics) as primitive objects of interest. We prototype and evaluate GoExploreFuzz, a greybox fuzzer for time-intensive programs that incorporates this perspective. The results indicate useful capabilities for greybox fuzzing that have hitherto been underutilized, notably quantifying the diversity of paths and autonomously tuning the "bandwidth" of mutations.
1906.01926
Yoshinari Fujinuma
Yoshinari Fujinuma, Jordan Boyd-Graber, Michael J. Paul
A Resource-Free Evaluation Metric for Cross-Lingual Word Embeddings Based on Graph Modularity
Accepted to ACL 2019, camera-ready
null
10.18653/v1/P19-1489
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-lingual word embeddings encode the meaning of words from different languages into a shared low-dimensional space. An important requirement for many downstream tasks is that word similarity should be independent of language - i.e., word vectors within one language should not be more similar to each other than to words in another language. We measure this characteristic using modularity, a network measure of the strength of clusters in a graph. Modularity has a moderate to strong correlation with three downstream tasks, even though modularity is based only on the structure of embeddings and does not require any external resources. We show through experiments that modularity can serve as an intrinsic validation metric to improve unsupervised cross-lingual word embeddings, particularly on distant language pairs in low-resource settings.
[ { "created": "Wed, 5 Jun 2019 10:34:56 GMT", "version": "v1" } ]
2022-03-24
[ [ "Fujinuma", "Yoshinari", "" ], [ "Boyd-Graber", "Jordan", "" ], [ "Paul", "Michael J.", "" ] ]
Cross-lingual word embeddings encode the meaning of words from different languages into a shared low-dimensional space. An important requirement for many downstream tasks is that word similarity should be independent of language - i.e., word vectors within one language should not be more similar to each other than to words in another language. We measure this characteristic using modularity, a network measure of the strength of clusters in a graph. Modularity has a moderate to strong correlation with three downstream tasks, even though modularity is based only on the structure of embeddings and does not require any external resources. We show through experiments that modularity can serve as an intrinsic validation metric to improve unsupervised cross-lingual word embeddings, particularly on distant language pairs in low-resource settings.
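The metric is straightforward to reproduce: build a nearest-neighbor graph over the joint embedding space and compute the modularity of the partition induced by language labels (high modularity means embeddings cluster by language, which is undesirable here). The sketch below does this with scikit-learn and NetworkX; the kNN construction is one reasonable reading of "the structure of embeddings", not necessarily the exact graph used in the paper.

```python
# Modularity of the language partition in a kNN graph over embeddings.
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors

def language_modularity(vectors: np.ndarray, langs: list[str], k: int = 5) -> float:
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(vectors)
    _, idx = nbrs.kneighbors(vectors)
    G = nx.Graph()
    G.add_nodes_from(range(len(langs)))
    for i, row in enumerate(idx):
        for j in row[1:]:          # skip the self-neighbor
            G.add_edge(i, int(j))
    groups = [{i for i, l in enumerate(langs) if l == lang} for lang in set(langs)]
    return nx.algorithms.community.modularity(G, groups)
```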
2011.06235
Weiming Zhi
Weiming Zhi, Tin Lai, Lionel Ott, Fabio Ramos
Anticipatory Navigation in Crowds by Probabilistic Prediction of Pedestrian Future Movements
null
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
Critical for the coexistence of humans and robots in dynamic environments is the capability for agents to understand each other's actions and anticipate their movements. This paper presents Stochastic Process Anticipatory Navigation (SPAN), a framework that enables nonholonomic robots to navigate in environments with crowds, while anticipating and accounting for the motion patterns of pedestrians. To this end, we learn a predictive model that outputs continuous-time stochastic processes modeling the future movement of pedestrians. Anticipated pedestrian positions are used to conduct chance-constrained collision-checking, and are incorporated into a time-to-collision control problem. An occupancy map is also integrated to allow for probabilistic collision-checking with static obstacles. We demonstrate the capability of SPAN in crowded simulation environments, as well as with a real-world pedestrian dataset.
[ { "created": "Thu, 12 Nov 2020 07:18:20 GMT", "version": "v1" } ]
2020-11-13
[ [ "Zhi", "Weiming", "" ], [ "Lai", "Tin", "" ], [ "Ott", "Lionel", "" ], [ "Ramos", "Fabio", "" ] ]
Critical for the coexistence of humans and robots in dynamic environments is the capability for agents to understand each other's actions and anticipate their movements. This paper presents Stochastic Process Anticipatory Navigation (SPAN), a framework that enables nonholonomic robots to navigate in environments with crowds, while anticipating and accounting for the motion patterns of pedestrians. To this end, we learn a predictive model that outputs continuous-time stochastic processes modeling the future movement of pedestrians. Anticipated pedestrian positions are used to conduct chance-constrained collision-checking, and are incorporated into a time-to-collision control problem. An occupancy map is also integrated to allow for probabilistic collision-checking with static obstacles. We demonstrate the capability of SPAN in crowded simulation environments, as well as with a real-world pedestrian dataset.
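Chance-constrained collision-checking against a probabilistic pedestrian forecast can be illustrated very simply: if a predicted position is Gaussian, the collision probability inside the robot's safety radius can be estimated and thresholded. The Monte Carlo estimator below is an illustrative stand-in; SPAN itself works with analytic machinery on continuous-time processes.

```python
# Accept a waypoint only if the estimated collision probability with a
# Gaussian-predicted pedestrian position stays below delta.
import numpy as np

def collision_prob(mu, Sigma, waypoint, radius, n=10_000,
                   rng=np.random.default_rng(0)) -> float:
    samples = rng.multivariate_normal(mu, Sigma, size=n)
    return float(np.mean(np.linalg.norm(samples - waypoint, axis=1) <= radius))

mu, Sigma = np.array([2.0, 0.0]), np.diag([0.3, 0.1])
ok = collision_prob(mu, Sigma, waypoint=np.array([1.0, 0.0]), radius=0.4) < 0.05
```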
2401.14520
Cori Faklaris
Cori Faklaris
Mitigating Smishing: Challenges and Future Work
5 pages. In submission to ConPro: 8th Workshop on Technology and Consumer Protection, co-located with the 45th IEEE Symposium on Security and Privacy, San Francisco, CA USA
null
null
null
cs.CR cs.CY cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
This paper describes three principal challenges in smishing mitigation - limitations of device affordances, complexity of infrastructure, and cognitive and contextual factors of mobile device use. We give a high-level overview of ideas that can mitigate smishing and work around these challenges.
[ { "created": "Thu, 25 Jan 2024 21:26:36 GMT", "version": "v1" } ]
2024-01-29
[ [ "Faklaris", "Cori", "" ] ]
This paper describes three principal challenges in smishing mitigation - limitations of device affordances, complexity of infrastructure, and cognitive and contextual factors of mobile device use. We give a high-level overview of ideas that can mitigate smishing and work around these challenges.
1306.5702
Vijay Manikandan Janakiraman
Vijay Manikandan Janakiraman, XuanLong Nguyen, Jeff Sterniak, and Dennis Assanis
Modeling The Stable Operating Envelope For Partially Stable Combustion Engines Using Class Imbalance Learning
In a Journal review
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advanced combustion technologies such as homogeneous charge compression ignition (HCCI) engines have a narrow stable operating region defined by complex control strategies such as exhaust gas recirculation (EGR) and variable valve timing, among others. For such systems, it is important to identify the operating envelope, or the boundary of stable operation, for diagnostics and control purposes. Obtaining a good model of the operating envelope using physics becomes intractable owing to engine transient effects. In this paper, a machine learning based approach is employed to identify the stable operating boundary of HCCI combustion directly from experimental data. Owing to the imbalance in class proportions in the data, two approaches are considered. A re-sampling (under-sampling, over-sampling) based approach is used to develop models using existing algorithms, while a cost-sensitive approach is used to modify the learning algorithm without modifying the data set. Support vector machines and recently developed extreme learning machines are used for model development, and results compared against linear classification methods show that cost-sensitive versions of the ELM and SVM algorithms are well suited to modeling the HCCI operating envelope. The prediction results indicate that the models have the potential to be used for predicting HCCI instability based on sensor measurement history.
[ { "created": "Mon, 24 Jun 2013 18:34:28 GMT", "version": "v1" } ]
2013-06-25
[ [ "Janakiraman", "Vijay Manikandan", "" ], [ "Nguyen", "XuanLong", "" ], [ "Sterniak", "Jeff", "" ], [ "Assanis", "Dennis", "" ] ]
Advanced combustion technologies such as homogeneous charge compression ignition (HCCI) engines have a narrow stable operating region defined by complex control strategies such as exhaust gas recirculation (EGR) and variable valve timing, among others. For such systems, it is important to identify the operating envelope, or the boundary of stable operation, for diagnostics and control purposes. Obtaining a good model of the operating envelope using physics becomes intractable owing to engine transient effects. In this paper, a machine learning based approach is employed to identify the stable operating boundary of HCCI combustion directly from experimental data. Owing to the imbalance in class proportions in the data, two approaches are considered. A re-sampling (under-sampling, over-sampling) based approach is used to develop models using existing algorithms, while a cost-sensitive approach is used to modify the learning algorithm without modifying the data set. Support vector machines and recently developed extreme learning machines are used for model development, and results compared against linear classification methods show that cost-sensitive versions of the ELM and SVM algorithms are well suited to modeling the HCCI operating envelope. The prediction results indicate that the models have the potential to be used for predicting HCCI instability based on sensor measurement history.
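The two strategies in the abstract map directly onto standard tooling. Below is a compact sketch with scikit-learn: a random under-sampler for the resampling route and a class-weighted SVM for the cost-sensitive route; the specific weights and the helper name are illustrative choices, not the paper's tuned settings.

```python
# Two generic ways to handle class imbalance for a stability classifier.
import numpy as np
from sklearn.svm import SVC

def undersample(X, y, rng=np.random.default_rng(0)):
    # Keep all minority samples, subsample the majority class to match.
    minority = y == np.bincount(y).argmin()
    keep = rng.choice(np.flatnonzero(~minority), size=minority.sum(), replace=False)
    idx = np.concatenate([np.flatnonzero(minority), keep])
    return X[idx], y[idx]

# (a) resampling, then a plain SVM:
# clf = SVC().fit(*undersample(X, y))
# (b) cost-sensitive: penalize mistakes on the rare (unstable) class more:
# clf = SVC(class_weight={0: 1.0, 1: 10.0}).fit(X, y)
```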
2008.07958
Fran Casino
Lamprini Zarpala and Fran Casino
A blockchain-based Forensic Model for Financial Crime Investigation: The Embezzlement Scenario
Digit Finance (2021)
null
10.1007/s42521-021-00035-5
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The financial crime landscape is evolving along with the digitization of financial services. In this context, laws and regulations cannot efficiently cope with a fast-moving industry such as finance, which translates into late adoption of measures and legal voids, providing a fruitful landscape for malicious actors. In parallel, blockchain technology and its promising features, such as immutability, verifiability, and authentication, enhance the opportunities for financial forensics. In this paper, we focus on an embezzlement scheme and provide a forensic-by-design methodology for its investigation. In addition, our approach is feasible and adaptable, and can be extended to embrace digital investigations of other types of schemes. We provide a functional implementation based on smart contracts and integrate standardised forensic flows and chain-of-custody preservation mechanisms. Finally, we discuss the benefits and challenges of the symbiotic relationship between blockchain and financial investigations, along with future research directions.
[ { "created": "Tue, 18 Aug 2020 14:38:01 GMT", "version": "v1" }, { "created": "Sun, 23 Aug 2020 15:27:59 GMT", "version": "v2" }, { "created": "Mon, 19 Jul 2021 11:42:41 GMT", "version": "v3" } ]
2021-07-20
[ [ "Zarpala", "Lamprini", "" ], [ "Casino", "Fran", "" ] ]
The financial crime landscape is evolving along with the digitization of financial services. In this context, laws and regulations cannot efficiently cope with a fast-moving industry such as finance, which translates into late adoption of measures and legal voids, providing a fruitful landscape for malicious actors. In parallel, blockchain technology and its promising features, such as immutability, verifiability, and authentication, enhance the opportunities for financial forensics. In this paper, we focus on an embezzlement scheme and provide a forensic-by-design methodology for its investigation. In addition, our approach is feasible and adaptable, and can be extended to embrace digital investigations of other types of schemes. We provide a functional implementation based on smart contracts and integrate standardised forensic flows and chain-of-custody preservation mechanisms. Finally, we discuss the benefits and challenges of the symbiotic relationship between blockchain and financial investigations, along with future research directions.
1401.3448
Robert Mateescu
Robert Mateescu, Rina Dechter, Radu Marinescu
AND/OR Multi-Valued Decision Diagrams (AOMDDs) for Graphical Models
null
Journal Of Artificial Intelligence Research, Volume 33, pages 465-519, 2008
10.1613/jair.2605
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by the recently introduced framework of AND/OR search spaces for graphical models, we propose to augment Multi-Valued Decision Diagrams (MDD) with AND nodes, in order to capture function decomposition structure and to extend these compiled data structures to general weighted graphical models (e.g., probabilistic models). We present the AND/OR Multi-Valued Decision Diagram (AOMDD) which compiles a graphical model into a canonical form that supports polynomial (e.g., solution counting, belief updating) or constant time (e.g. equivalence of graphical models) queries. We provide two algorithms for compiling the AOMDD of a graphical model. The first is search-based, and works by applying reduction rules to the trace of the memory intensive AND/OR search algorithm. The second is inference-based and uses a Bucket Elimination schedule to combine the AOMDDs of the input functions via the APPLY operator. For both algorithms, the compilation time and the size of the AOMDD are, in the worst case, exponential in the treewidth of the graphical model, rather than pathwidth as is known for ordered binary decision diagrams (OBDDs). We introduce the concept of semantic treewidth, which helps explain why the size of a decision diagram is often much smaller than the worst case bound. We provide an experimental evaluation that demonstrates the potential of AOMDDs.
[ { "created": "Wed, 15 Jan 2014 05:09:35 GMT", "version": "v1" } ]
2014-01-16
[ [ "Mateescu", "Robert", "" ], [ "Dechter", "Rina", "" ], [ "Marinescu", "Radu", "" ] ]
Inspired by the recently introduced framework of AND/OR search spaces for graphical models, we propose to augment Multi-Valued Decision Diagrams (MDD) with AND nodes, in order to capture function decomposition structure and to extend these compiled data structures to general weighted graphical models (e.g., probabilistic models). We present the AND/OR Multi-Valued Decision Diagram (AOMDD) which compiles a graphical model into a canonical form that supports polynomial (e.g., solution counting, belief updating) or constant time (e.g. equivalence of graphical models) queries. We provide two algorithms for compiling the AOMDD of a graphical model. The first is search-based, and works by applying reduction rules to the trace of the memory intensive AND/OR search algorithm. The second is inference-based and uses a Bucket Elimination schedule to combine the AOMDDs of the input functions via the APPLY operator. For both algorithms, the compilation time and the size of the AOMDD are, in the worst case, exponential in the treewidth of the graphical model, rather than pathwidth as is known for ordered binary decision diagrams (OBDDs). We introduce the concept of semantic treewidth, which helps explain why the size of a decision diagram is often much smaller than the worst case bound. We provide an experimental evaluation that demonstrates the potential of AOMDDs.
2111.00160
Xuxi Chen
Xuxi Chen, Tianlong Chen, Weizhu Chen, Ahmed Hassan Awadallah, Zhangyang Wang, Yu Cheng
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models
Accepted by ACL 2023
null
null
null
cs.LG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persist for this paradigm: (a) as the pre-trained models grow bigger (e.g., 175B parameters for GPT-3), even the fine-tuning process can be time-consuming and computationally expensive; (b) the fine-tuned model has the same size as its starting point by default, which is neither sensible due to its more specialized functionality, nor practical since many fine-tuned models will be deployed in resource-constrained environments. To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights. Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter efficient fine-tuning - by enforcing sparsity-aware low-rank updates on top of the pre-trained weights; and (ii) resource-efficient inference - by encouraging a sparse weight structure towards the final fine-tuned model. We leverage sparsity in these two directions by exploiting both unstructured and structured sparse patterns in pre-trained language models via a unified approach. Extensive experiments and in-depth investigations, with diverse network backbones (i.e., BERT, RoBERTa, and GPT-2) on dozens of datasets, consistently demonstrate impressive parameter-/inference-efficiency, while maintaining competitive downstream performance. For instance, DSEE saves about 25% inference FLOPs while achieving comparable performance, with 0.5% trainable parameters on BERT. Code is available at https://github.com/VITA-Group/DSEE.
[ { "created": "Sat, 30 Oct 2021 03:29:47 GMT", "version": "v1" }, { "created": "Sun, 31 Jul 2022 16:30:56 GMT", "version": "v2" }, { "created": "Wed, 24 May 2023 02:29:37 GMT", "version": "v3" } ]
2023-05-25
[ [ "Chen", "Xuxi", "" ], [ "Chen", "Tianlong", "" ], [ "Chen", "Weizhu", "" ], [ "Awadallah", "Ahmed Hassan", "" ], [ "Wang", "Zhangyang", "" ], [ "Cheng", "Yu", "" ] ]
Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persist for this paradigm: (a) as the pre-trained models grow bigger (e.g., 175B parameters for GPT-3), even the fine-tuning process can be time-consuming and computationally expensive; (b) the fine-tuned model has the same size as its starting point by default, which is neither sensible due to its more specialized functionality, nor practical since many fine-tuned models will be deployed in resource-constrained environments. To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights. Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter efficient fine-tuning - by enforcing sparsity-aware low-rank updates on top of the pre-trained weights; and (ii) resource-efficient inference - by encouraging a sparse weight structure towards the final fine-tuned model. We leverage sparsity in these two directions by exploiting both unstructured and structured sparse patterns in pre-trained language models via a unified approach. Extensive experiments and in-depth investigations, with diverse network backbones (i.e., BERT, RoBERTa, and GPT-2) on dozens of datasets, consistently demonstrate impressive parameter-/inference-efficiency, while maintaining competitive downstream performance. For instance, DSEE saves about 25% inference FLOPs while achieving comparable performance, with 0.5% trainable parameters on BERT. Code is available at https://github.com/VITA-Group/DSEE.
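The "dually sparse" idea, a low-rank update plus a sparse correction on top of frozen weights, can be sketched numerically. The decomposition below (SVD truncation plus magnitude pruning of the residual) is a loose illustration of the concept under stated assumptions; the released code at the linked repository is the authoritative implementation.

```python
# Conceptual sketch: decompose a dense candidate update into a low-rank
# product plus a sparse residual, leaving the pre-trained weight W frozen.
import numpy as np

def sparse_lowrank_delta(G: np.ndarray, rank: int = 8, keep: float = 0.01):
    # G is a dense candidate update (e.g., an accumulated gradient step).
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    resid = G - lowrank
    # Keep only the largest-magnitude residual entries.
    thresh = np.quantile(np.abs(resid), 1.0 - keep)
    S = np.where(np.abs(resid) >= thresh, resid, 0.0)
    return lowrank, S

# The fine-tuned forward pass would use W + lowrank + S with W frozen.
```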
1812.05901
Antoine Deleforge
Romain Lebarbenchon, Ewen Camberlein, Diego di Carlo, Cl\'ement Gaultier, Antoine Deleforge, Nancy Bertin
Evaluation of an open-source implementation of the SRP-PHAT algorithm within the 2018 LOCATA challenge
In Proceedings of the LOCATA Challenge Workshop - a satellite event of IWAENC 2018 (arXiv:1811.08482 )
null
null
LOCATAchallenge/2018/01
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This short paper presents an efficient, flexible implementation of the SRP-PHAT multichannel sound source localization method. The method is evaluated on the single-source tasks of the LOCATA 2018 development dataset, and an associated Matlab toolbox is made available online.
[ { "created": "Fri, 14 Dec 2018 13:15:45 GMT", "version": "v1" } ]
2018-12-17
[ [ "Lebarbenchon", "Romain", "" ], [ "Camberlein", "Ewen", "" ], [ "di Carlo", "Diego", "" ], [ "Gaultier", "Clément", "" ], [ "Deleforge", "Antoine", "" ], [ "Bertin", "Nancy", "" ] ]
This short paper presents an efficient, flexible implementation of the SRP-PHAT multichannel sound source localization method. The method is evaluated on the single-source tasks of the LOCATA 2018 development dataset, and an associated Matlab toolbox is made available online.
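The core of SRP-PHAT is the PHAT-weighted generalized cross-correlation between microphone pairs; a full localizer sums this quantity over pairs for each candidate source position. Below is a minimal NumPy sketch of the GCC-PHAT step only, not the Matlab toolbox released with the paper; the function name and the small regularization constant are illustrative choices.

```python
# GCC-PHAT: whiten the cross-spectrum so only phase carries timing
# information, then pick the delay maximizing the correlation.
import numpy as np

def gcc_phat(x: np.ndarray, y: np.ndarray, fs: float) -> float:
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                # PHAT weighting: unit magnitude
    cc = np.fft.irfft(R, n=n)
    cc = np.concatenate([cc[-n // 2:], cc[:n // 2]])  # center zero lag
    lag = np.argmax(np.abs(cc)) - n // 2
    return lag / fs                        # time-difference-of-arrival in seconds
```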
2306.06811
Samuel Reinders
Samuel Reinders, Swamy Ananthanarayan, Matthew Butler, Kim Marriott
Designing Conversational Multimodal 3D Printed Models with People who are Blind
To appear in ACM Designing Interactive Systems Conference (DIS '23), July 10-14, 2023, Pittsburgh, PA, USA
null
10.1145/3563657.3595989
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D printed models have been used to improve access to graphical information by people who are blind, offering benefits over conventional accessible graphics. Here we investigate an interactive 3D printed model (I3M) that combines a conversational interface with haptic vibration and touch to provide more natural and accessible experiences. Specifically, we co-designed a multimodal model of the Solar System with nine blind people and evaluated the prototype with another seven blind participants. We discuss our journey from a design perspective, focusing on touch, conversational and multimodal interactions. Based on our experience, we suggest design recommendations that consider blind users' desire for independence and control, customisation, comfort and use of prior experience.
[ { "created": "Mon, 12 Jun 2023 00:44:57 GMT", "version": "v1" } ]
2023-06-13
[ [ "Reinders", "Samuel", "" ], [ "Ananthanarayan", "Swamy", "" ], [ "Butler", "Matthew", "" ], [ "Marriott", "Kim", "" ] ]
3D printed models have been used to improve access to graphical information by people who are blind, offering benefits over conventional accessible graphics. Here we investigate an interactive 3D printed model (I3M) that combines a conversational interface with haptic vibration and touch to provide more natural and accessible experiences. Specifically, we co-designed a multimodal model of the Solar System with nine blind people and evaluated the prototype with another seven blind participants. We discuss our journey from a design perspective, focusing on touch, conversational and multimodal interactions. Based on our experience, we suggest design recommendations that consider blind users' desire for independence and control, customisation, comfort and use of prior experience.
1807.03546
Oskar Schirmer
Oskar Schirmer
Parallel Architecture Hardware and General Purpose Operating System Co-design
66 pages, 30 figures and tables
null
null
null
cs.DC cs.OS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Because most optimisations to achieve higher computational performance are eventually limited, parallelism that scales is required. Parallelised hardware alone is not sufficient; software that matches the architecture is required to gain the best performance. For decades now, hardware design has been guided by the basic design of existing software, to avoid the higher cost of redesigning the latter. In doing so, however, quite a variety of superior concepts is excluded a priori. Consequently, co-design of both hardware and software is crucial where highest performance is the goal. For special purpose applications, this co-design is common practice. For general purpose applications, however, a precondition for usability of a computer system is an operating system which is both comprehensive and dynamic. As no such operating system has ever been designed, a sketch for a comprehensive dynamic operating system is presented, based on a straightforward hardware architecture, to demonstrate how design decisions regarding software and hardware coexist and harmonise.
[ { "created": "Tue, 10 Jul 2018 09:24:33 GMT", "version": "v1" } ]
2018-07-11
[ [ "Schirmer", "Oskar", "" ] ]
Because most optimisations to achieve higher computational performance are eventually limited, parallelism that scales is required. Parallelised hardware alone is not sufficient; software that matches the architecture is required to gain the best performance. For decades now, hardware design has been guided by the basic design of existing software, to avoid the higher cost of redesigning the latter. In doing so, however, quite a variety of superior concepts is excluded a priori. Consequently, co-design of both hardware and software is crucial where highest performance is the goal. For special purpose applications, this co-design is common practice. For general purpose applications, however, a precondition for usability of a computer system is an operating system which is both comprehensive and dynamic. As no such operating system has ever been designed, a sketch for a comprehensive dynamic operating system is presented, based on a straightforward hardware architecture, to demonstrate how design decisions regarding software and hardware coexist and harmonise.
2405.00172
David Liu
David Liu, Arjun Seshadri, Tina Eliassi-Rad, Johan Ugander
Re-visiting Skip-Gram Negative Sampling: Dimension Regularization for More Efficient Dissimilarity Preservation in Graph Embeddings
null
null
null
null
cs.LG cs.SI stat.ML
http://creativecommons.org/licenses/by/4.0/
A wide range of graph embedding objectives decompose into two components: one that attracts the embeddings of nodes that are perceived as similar, and another that repels embeddings of nodes that are perceived as dissimilar. Because real-world graphs are sparse and the number of dissimilar pairs grows quadratically with the number of nodes, Skip-Gram Negative Sampling (SGNS) has emerged as a popular and efficient repulsion approach. SGNS repels each node from a sample of dissimilar nodes, as opposed to all dissimilar nodes. In this work, we show that node-wise repulsion is, in aggregate, an approximate re-centering of the node embedding dimensions. Such dimension operations are much more scalable than node operations. The dimension approach, in addition to being more efficient, yields a simpler geometric interpretation of the repulsion. Our result extends findings from the self-supervised learning literature to the skip-gram model, establishing a connection between skip-gram node contrast and dimension regularization. We show that in the limit of large graphs, under mild regularity conditions, the original node repulsion objective converges to optimization with dimension regularization. We use this observation to propose an algorithm augmentation framework that speeds up any existing algorithm, supervised or unsupervised, using SGNS. The framework prioritizes node attraction and replaces SGNS with dimension regularization. We instantiate this generic framework for LINE and node2vec and show that the augmented algorithms preserve downstream performance while dramatically increasing efficiency.
[ { "created": "Tue, 30 Apr 2024 19:43:01 GMT", "version": "v1" } ]
2024-05-02
[ [ "Liu", "David", "" ], [ "Seshadri", "Arjun", "" ], [ "Eliassi-Rad", "Tina", "" ], [ "Ugander", "Johan", "" ] ]
A wide range of graph embedding objectives decompose into two components: one that attracts the embeddings of nodes that are perceived as similar, and another that repels embeddings of nodes that are perceived as dissimilar. Because real-world graphs are sparse and the number of dissimilar pairs grows quadratically with the number of nodes, Skip-Gram Negative Sampling (SGNS) has emerged as a popular and efficient repulsion approach. SGNS repels each node from a sample of dissimilar nodes, as opposed to all dissimilar nodes. In this work, we show that node-wise repulsion is, in aggregate, an approximate re-centering of the node embedding dimensions. Such dimension operations are much more scalable than node operations. The dimension approach, in addition to being more efficient, yields a simpler geometric interpretation of the repulsion. Our result extends findings from the self-supervised learning literature to the skip-gram model, establishing a connection between skip-gram node contrast and dimension regularization. We show that in the limit of large graphs, under mild regularity conditions, the original node repulsion objective converges to optimization with dimension regularization. We use this observation to propose an algorithm augmentation framework that speeds up any existing algorithm, supervised or unsupervised, using SGNS. The framework prioritizes node attraction and replaces SGNS with dimension regularization. We instantiate this generic framework for LINE and node2vec and show that the augmented algorithms preserve downstream performance while dramatically increasing efficiency.
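The headline observation, that aggregate SGNS repulsion approximates re-centering of the embedding dimensions, suggests a very cheap substitute for the repulsion term. The sketch below shows the mean-subtraction step in isolation; the update scale `lam` and the exact placement inside a training loop are assumptions, not the paper's full augmentation framework.

```python
# Dimension regularization: nudge every embedding away from the global
# centroid. O(nd) work, versus per-node negative sampling.
import numpy as np

def dimension_regularize(E: np.ndarray, lam: float = 0.1) -> np.ndarray:
    # In the large-graph limit this plays the role of the sampled repulsion.
    return E - lam * E.mean(axis=0, keepdims=True)

E = np.random.default_rng(0).normal(size=(1000, 64))
E = dimension_regularize(E)   # interleave with attraction-only updates
```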
2403.06734
Keshara Weerasinghe
Keshara Weerasinghe, Saahith Janapati, Xueren Ge, Sion Kim, Sneha Iyer, John A. Stankovic, Homa Alemzadeh
Real-Time Multimodal Cognitive Assistant for Emergency Medical Services
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.AI cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emergency Medical Services (EMS) responders often operate under time-sensitive conditions, facing cognitive overload and inherent risks, requiring essential skills in critical thinking and rapid decision-making. This paper presents CognitiveEMS, an end-to-end wearable cognitive assistant system that can act as a collaborative virtual partner engaging in the real-time acquisition and analysis of multimodal data from an emergency scene and interacting with EMS responders through Augmented Reality (AR) smart glasses. CognitiveEMS processes the continuous streams of data in real-time and leverages edge computing to provide assistance in EMS protocol selection and intervention recognition. We address key technical challenges in real-time cognitive assistance by introducing three novel components: (i) a Speech Recognition model that is fine-tuned for real-world medical emergency conversations using simulated EMS audio recordings, augmented with synthetic data generated by large language models (LLMs); (ii) an EMS Protocol Prediction model that combines state-of-the-art (SOTA) tiny language models with EMS domain knowledge using graph-based attention mechanisms; (iii) an EMS Action Recognition module which leverages multimodal audio and video data and protocol predictions to infer the intervention/treatment actions taken by the responders at the incident scene. Our results show that for speech recognition we achieve superior performance compared to SOTA (WER of 0.290 vs. 0.618) on conversational data. Our protocol prediction component also significantly outperforms SOTA (top-3 accuracy of 0.800 vs. 0.200) and the action recognition achieves an accuracy of 0.727, while maintaining an end-to-end latency of 3.78s for protocol prediction on the edge and 0.31s on the server.
[ { "created": "Mon, 11 Mar 2024 13:56:57 GMT", "version": "v1" } ]
2024-03-12
[ [ "Weerasinghe", "Keshara", "" ], [ "Janapati", "Saahith", "" ], [ "Ge", "Xueren", "" ], [ "Kim", "Sion", "" ], [ "Iyer", "Sneha", "" ], [ "Stankovic", "John A.", "" ], [ "Alemzadeh", "Homa", "" ] ]
Emergency Medical Services (EMS) responders often operate under time-sensitive conditions, facing cognitive overload and inherent risks, requiring essential skills in critical thinking and rapid decision-making. This paper presents CognitiveEMS, an end-to-end wearable cognitive assistant system that can act as a collaborative virtual partner engaging in the real-time acquisition and analysis of multimodal data from an emergency scene and interacting with EMS responders through Augmented Reality (AR) smart glasses. CognitiveEMS processes the continuous streams of data in real-time and leverages edge computing to provide assistance in EMS protocol selection and intervention recognition. We address key technical challenges in real-time cognitive assistance by introducing three novel components: (i) a Speech Recognition model that is fine-tuned for real-world medical emergency conversations using simulated EMS audio recordings, augmented with synthetic data generated by large language models (LLMs); (ii) an EMS Protocol Prediction model that combines state-of-the-art (SOTA) tiny language models with EMS domain knowledge using graph-based attention mechanisms; (iii) an EMS Action Recognition module which leverages multimodal audio and video data and protocol predictions to infer the intervention/treatment actions taken by the responders at the incident scene. Our results show that for speech recognition we achieve superior performance compared to SOTA (WER of 0.290 vs. 0.618) on conversational data. Our protocol prediction component also significantly outperforms SOTA (top-3 accuracy of 0.800 vs. 0.200) and the action recognition achieves an accuracy of 0.727, while maintaining an end-to-end latency of 3.78s for protocol prediction on the edge and 0.31s on the server.
2001.05119
Zan Gojcic
Zan Gojcic, Caifa Zhou, Jan D. Wegner, Leonidas J. Guibas, Tolga Birdal
Learning multiview 3D point cloud registration
CVPR2020 - Camera Ready
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel, end-to-end learnable, multiview 3D point cloud registration algorithm. Registration of multiple scans typically follows a two-stage pipeline: the initial pairwise alignment and the globally consistent refinement. The former is often ambiguous due to the low overlap of neighboring point clouds, symmetries and repetitive scene parts. Therefore, the latter global refinement aims at establishing the cyclic consistency across multiple scans and helps in resolving the ambiguous cases. In this paper we propose, to the best of our knowledge, the first end-to-end algorithm for joint learning of both parts of this two-stage problem. Experimental evaluation on well accepted benchmark datasets shows that our approach outperforms the state-of-the-art by a significant margin, while being end-to-end trainable and computationally less costly. Moreover, we present detailed analysis and an ablation study that validate the novel components of our approach. The source code and pretrained models are publicly available under https://github.com/zgojcic/3D_multiview_reg.
[ { "created": "Wed, 15 Jan 2020 03:42:14 GMT", "version": "v1" }, { "created": "Tue, 31 Mar 2020 07:53:36 GMT", "version": "v2" } ]
2020-04-01
[ [ "Gojcic", "Zan", "" ], [ "Zhou", "Caifa", "" ], [ "Wegner", "Jan D.", "" ], [ "Guibas", "Leonidas J.", "" ], [ "Birdal", "Tolga", "" ] ]
We present a novel, end-to-end learnable, multiview 3D point cloud registration algorithm. Registration of multiple scans typically follows a two-stage pipeline: the initial pairwise alignment and the globally consistent refinement. The former is often ambiguous due to the low overlap of neighboring point clouds, symmetries and repetitive scene parts. Therefore, the latter global refinement aims at establishing the cyclic consistency across multiple scans and helps in resolving the ambiguous cases. In this paper we propose, to the best of our knowledge, the first end-to-end algorithm for joint learning of both parts of this two-stage problem. Experimental evaluation on well accepted benchmark datasets shows that our approach outperforms the state-of-the-art by a significant margin, while being end-to-end trainable and computationally less costly. Moreover, we present detailed analysis and an ablation study that validate the novel components of our approach. The source code and pretrained models are publicly available under https://github.com/zgojcic/3D_multiview_reg.
1802.07384
Xin Zhang
Xin Zhang, Armando Solar-Lezama, and Rishabh Singh
Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections
24 pages
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new algorithm to generate minimal, stable, and symbolic corrections to an input that will cause a neural network with ReLU activations to change its output. We argue that such a correction is a useful way to provide feedback to a user when the network's output is different from a desired output. Our algorithm generates such a correction by solving a series of linear constraint satisfaction problems. The technique is evaluated on three neural network models: one predicting whether an applicant will pay a mortgage, one predicting whether a first-order theorem can be proved efficiently by a solver using certain heuristics, and the final one judging whether a drawing is an accurate rendition of a canonical drawing of a cat.
[ { "created": "Wed, 21 Feb 2018 00:47:32 GMT", "version": "v1" }, { "created": "Thu, 30 Aug 2018 21:33:26 GMT", "version": "v2" } ]
2018-09-03
[ [ "Zhang", "Xin", "" ], [ "Solar-Lezama", "Armando", "" ], [ "Singh", "Rishabh", "" ] ]
We present a new algorithm to generate minimal, stable, and symbolic corrections to an input that will cause a neural network with ReLU activations to change its output. We argue that such a correction is a useful way to provide feedback to a user when the network's output is different from a desired output. Our algorithm generates such a correction by solving a series of linear constraint satisfaction problems. The technique is evaluated on three neural network models: one predicting whether an applicant will pay a mortgage, one predicting whether a first-order theorem can be proved efficiently by a solver using certain heuristics, and the final one judging whether a drawing is an accurate rendition of a canonical drawing of a cat.
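To see why linear constraint solving suffices within one activation region of a ReLU network (where the network is affine), consider the toy below: it finds the minimal L1-norm change to an input that pushes a linear score past a margin, using scipy's LP solver. The affine stand-in and the variable-splitting encoding are illustrative simplifications of the paper's series of constraint problems.

```python
# Minimal L1 correction for an affine score w.x + b: split delta = p - q
# with p, q >= 0 so the L1 objective becomes linear.
import numpy as np
from scipy.optimize import linprog

def minimal_correction(w: np.ndarray, b: float, x0: np.ndarray, margin: float = 0.0):
    d = len(x0)
    c = np.ones(2 * d)                       # minimize sum(p) + sum(q)
    A_ub = np.concatenate([-w, w])[None, :]  # -w.(p - q) <= w.x0 + b - margin
    b_ub = np.array([w @ x0 + b - margin])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # variables nonnegative by default
    p, q = res.x[:d], res.x[d:]
    return p - q

w, b = np.array([1.0, -2.0]), -1.0
x0 = np.array([0.5, 0.5])                    # score 0.5 - 1.0 - 1.0 = -1.5 (rejected)
delta = minimal_correction(w, b, x0, margin=0.1)  # cheapest flip: lower feature 2
```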
1909.03877
Ting Yao
Fuchen Long and Ting Yao and Zhaofan Qiu and Xinmei Tian and Jiebo Luo and Tao Mei
Gaussian Temporal Awareness Networks for Action Localization
CVPR 2019 Oral
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporally localizing actions in a video is a fundamental challenge in video understanding. Most existing approaches have often drawn inspiration from image object detection and extended the advances, e.g., SSD and Faster R-CNN, to produce temporal locations of an action in a 1D sequence. Nevertheless, the results can suffer from a robustness problem due to the design of predetermined temporal scales, which overlooks the temporal structure of an action and limits the utility in detecting actions with complex variations. In this paper, we propose to address the problem by introducing Gaussian kernels to dynamically optimize the temporal scale of each action proposal. Specifically, we present Gaussian Temporal Awareness Networks (GTAN) --- a new architecture that integrates the exploitation of temporal structure into a one-stage action localization framework. Technically, GTAN models the temporal structure through learning a set of Gaussian kernels, each for a cell in the feature maps. Each Gaussian kernel corresponds to a particular interval of an action proposal, and a mixture of Gaussian kernels could further characterize action proposals with various lengths. Moreover, the values in each Gaussian curve reflect the contextual contributions to the localization of an action proposal. Extensive experiments are conducted on both THUMOS14 and ActivityNet v1.3 datasets, and superior results are reported when compared to state-of-the-art approaches. More remarkably, GTAN achieves 1.9% and 1.1% improvements in mAP on the testing sets of the two datasets.
[ { "created": "Mon, 9 Sep 2019 14:13:48 GMT", "version": "v1" } ]
2019-09-10
[ [ "Long", "Fuchen", "" ], [ "Yao", "Ting", "" ], [ "Qiu", "Zhaofan", "" ], [ "Tian", "Xinmei", "" ], [ "Luo", "Jiebo", "" ], [ "Mei", "Tao", "" ] ]
Temporally localizing actions in a video is a fundamental challenge in video understanding. Most existing approaches have often drawn inspiration from image object detection and extended the advances, e.g., SSD and Faster R-CNN, to produce temporal locations of an action in a 1D sequence. Nevertheless, the results can suffer from a robustness problem due to the design of predetermined temporal scales, which overlooks the temporal structure of an action and limits the utility in detecting actions with complex variations. In this paper, we propose to address the problem by introducing Gaussian kernels to dynamically optimize the temporal scale of each action proposal. Specifically, we present Gaussian Temporal Awareness Networks (GTAN) --- a new architecture that integrates the exploitation of temporal structure into a one-stage action localization framework. Technically, GTAN models the temporal structure through learning a set of Gaussian kernels, each for a cell in the feature maps. Each Gaussian kernel corresponds to a particular interval of an action proposal, and a mixture of Gaussian kernels could further characterize action proposals with various lengths. Moreover, the values in each Gaussian curve reflect the contextual contributions to the localization of an action proposal. Extensive experiments are conducted on both THUMOS14 and ActivityNet v1.3 datasets, and superior results are reported when compared to state-of-the-art approaches. More remarkably, GTAN achieves 1.9% and 1.1% improvements in mAP on the testing sets of the two datasets.
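A hedged reading of the core mechanism in code: each proposal carries a Gaussian over time whose curve weights contextual contributions when pooling features. Shapes, normalization, and names are assumptions for illustration, not the GTAN layer itself.

import numpy as np

def gaussian_pool(features, centers, sigmas):
    """features: (T, C) temporal feature map; centers, sigmas: (N,) per proposal.
    Returns (N, C) proposal features pooled under each Gaussian curve."""
    t = np.arange(features.shape[0])[None, :]                  # (1, T) time axis
    w = np.exp(-0.5 * ((t - centers[:, None]) / sigmas[:, None]) ** 2)
    w /= w.sum(axis=1, keepdims=True)                          # normalize each curve
    return w @ features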
2011.04349
Hieu Phung
Hieu Trong Phung (1 and 2), Anh Tuan Vu (1), Tung Dinh Nguyen (1), Lam Thanh Do (1 and 2), Giang Nam Ngo (1), Trung Thanh Tran (1) and Ngoc C. L\^e (1 and 2) ((1) PIXTA Vietnam, Hanoi, Vietnam. (2) Hanoi University of Science and Technology, Ha Noi, Viet Nam.)
MAGNeto: An Efficient Deep Learning Method for the Extractive Tags Summarization Problem
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we study a new image annotation task named Extractive Tags Summarization (ETS). The goal is to extract important tags from the context lying in an image and its corresponding tags. We adjust some state-of-the-art deep learning models to utilize both visual and textual information. Our proposed solution consists of different widely used blocks like convolutional and self-attention layers, together with a novel idea of combining auxiliary loss functions and the gating mechanism to glue and elevate these fundamental components and form a unified architecture. Besides, we introduce a loss function that aims to reduce the imbalance of the training data and a simple but effective data augmentation technique dedicated to alleviating the effect of outliers on the final results. Last but not least, we explore an unsupervised pre-training strategy to further boost the performance of the model by making use of the abundant amount of available unlabeled data. Our model achieves good results: a 90% $F_\text{1}$ score on the public NUS-WIDE benchmark, and a 50% $F_\text{1}$ score on a noisy large-scale real-world private dataset. Source code for reproducing the experiments is publicly available at: https://github.com/pixta-dev/labteam
[ { "created": "Mon, 9 Nov 2020 11:34:21 GMT", "version": "v1" } ]
2020-11-10
[ [ "Phung", "Hieu Trong", "", "1 and 2" ], [ "Vu", "Anh Tuan", "", "1 and 2" ], [ "Nguyen", "Tung Dinh", "", "1 and 2" ], [ "Do", "Lam Thanh", "", "1 and 2" ], [ "Ngo", "Giang Nam", "", "1 and 2" ], [ "Tran", "Trung Thanh", "", "1 and 2" ], [ "Lê", "Ngoc C.", "", "1 and 2" ] ]
In this work, we study a new image annotation task named Extractive Tags Summarization (ETS). The goal is to extract important tags from the context lying in an image and its corresponding tags. We adjust some state-of-the-art deep learning models to utilize both visual and textual information. Our proposed solution consists of different widely used blocks like convolutional and self-attention layers, together with a novel idea of combining auxiliary loss functions and the gating mechanism to glue and elevate these fundamental components and form a unified architecture. Besides, we introduce a loss function that aims to reduce the imbalance of the training data and a simple but effective data augmentation technique dedicated to alleviating the effect of outliers on the final results. Last but not least, we explore an unsupervised pre-training strategy to further boost the performance of the model by making use of the abundant amount of available unlabeled data. Our model achieves good results: a 90% $F_\text{1}$ score on the public NUS-WIDE benchmark, and a 50% $F_\text{1}$ score on a noisy large-scale real-world private dataset. Source code for reproducing the experiments is publicly available at: https://github.com/pixta-dev/labteam
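The abstract mentions a loss designed to counter label imbalance. A focal-style reweighting of binary cross-entropy, sketched below in PyTorch, is one standard way to do that; whether MAGNeto uses exactly this form is an assumption.

import torch
import torch.nn.functional as F

def focal_bce(logits, targets, gamma=2.0):
    """Binary cross-entropy down-weighted on easy examples (focal-style)."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    hardness = (1 - p) * targets + p * (1 - targets)   # small when the model is already right
    return (hardness.pow(gamma) * ce).mean()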
1905.04284
Mohsen Joneidi
Mohsen Joneidi, Nazanin Rahnavard
Primary User Localization and Online Radio Cartography via Structured Tensor Decomposition
Submitted to the 2019 IEEE Global Communications Conference (GLOBECOM)
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Source localization and radio cartography using a multi-way representation of the spectrum is the subject of study in this paper. A joint matrix factorization and tensor decomposition problem is proposed and solved using an iterative algorithm. The multi-way measured spectrum is organized in a tensor, and it is modeled by the multiplication of a propagation tensor and a channel gain matrix. The tensor indicates the propagating power from each location and each frequency over time, and the channel matrix links the propagating tensor to the sensed spectrum. We utilize sparsity and other intrinsic characteristics of the spectrum to identify the solution of the proposed problem. Moreover, the online implementation of the proposed framework results in online radio cartography, which is a powerful tool for efficient spectrum awareness and utilization. The simulation results show that our algorithm is a promising technique for dynamic primary user localization and online radio cartography.
[ { "created": "Fri, 10 May 2019 17:51:39 GMT", "version": "v1" } ]
2019-05-13
[ [ "Joneidi", "Mohsen", "" ], [ "Rahnavard", "Nazanin", "" ] ]
Source localization and radio cartography using a multi-way representation of the spectrum is the subject of study in this paper. A joint matrix factorization and tensor decomposition problem is proposed and solved using an iterative algorithm. The multi-way measured spectrum is organized in a tensor, and it is modeled by the multiplication of a propagation tensor and a channel gain matrix. The tensor indicates the propagating power from each location and each frequency over time, and the channel matrix links the propagating tensor to the sensed spectrum. We utilize sparsity and other intrinsic characteristics of the spectrum to identify the solution of the proposed problem. Moreover, the online implementation of the proposed framework results in online radio cartography, which is a powerful tool for efficient spectrum awareness and utilization. The simulation results show that our algorithm is a promising technique for dynamic primary user localization and online radio cartography.
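The stated measurement model -- sensed spectrum as the product of a propagation tensor and a channel gain matrix -- reduces to a single tensor contraction. A hedged sketch with assumed index conventions (location x frequency x time for the tensor, sensor x location for the gains):

import numpy as np

def sensed_spectrum(P, G):
    """P: (L, F, T) propagating power per location/frequency/time;
    G: (S, L) channel gains from locations to sensors.
    Returns the (S, F, T) spectrum observed at the sensors."""
    return np.einsum('sl,lft->sft', G, P)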
1605.00802
Aseem Sharma
Aseem Sharma, Krishna Jagannathan, Lav R. Varshney
Queuing Approaches to Principal-Agent Communication under Information Overload
33 pages excluding the main page, 5 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the information overload regime, human communication tasks such as responding to email are well-modeled as priority queues, where priority is determined by a mix of intrinsic motivation and extrinsic motivation corresponding to the task's importance to the sender. We view priority queuing from a principal-agent perspective, and characterize the effect of priority-misalignment and information asymmetry between task senders and task receivers in both single-agent and multi-agent settings. In the single-agent setting, we find that discipline can override misalignment. Although variation in human interests leads to performance loss in the single-agent setting, the same variability is useful to the principal with optimal routing of tasks, if the principal has suitable information about agents' priorities. Our approach starts to quantitatively address the effect of human dynamics in routine communication tasks.
[ { "created": "Tue, 3 May 2016 09:23:01 GMT", "version": "v1" } ]
2016-05-04
[ [ "Sharma", "Aseem", "" ], [ "Jagannathan", "Krishna", "" ], [ "Varshney", "Lav R.", "" ] ]
In the information overload regime, human communication tasks such as responding to email are well-modeled as priority queues, where priority is determined by a mix of intrinsic motivation and extrinsic motivation corresponding to the task's importance to the sender. We view priority queuing from a principal-agent perspective, and characterize the effect of priority-misalignment and information asymmetry between task senders and task receivers in both single-agent and multi-agent settings. In the single-agent setting, we find that discipline can override misalignment. Although variation in human interests leads to performance loss in the single-agent setting, the same variability is useful to the principal with optimal routing of tasks, if the principal has suitable information about agents' priorities. Our approach starts to quantitatively address the effect of human dynamics in routine communication tasks.
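A toy version of the priority-queue model in this abstract: each task's priority mixes intrinsic and extrinsic value, and the agent always serves the highest-priority task first. The linear mixing weight alpha is an illustrative assumption.

import heapq

def service_order(tasks, alpha=0.5):
    """tasks: iterable of (intrinsic, extrinsic) values.
    Returns task indices in the order served under
    priority = alpha * intrinsic + (1 - alpha) * extrinsic."""
    heap = [(-(alpha * i + (1 - alpha) * e), idx) for idx, (i, e) in enumerate(tasks)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]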
2001.01211
Lizhao Gao
Lizhao Gao, Haihua Xu, Chong Sun, Junling Liu, Yu-Wing Tai
Spatial-Scale Aligned Network for Fine-Grained Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing approaches for fine-grained visual recognition focus on learning marginal region-based representations while neglecting the spatial and scale misalignments, leading to inferior performance. In this paper, we propose the spatial-scale aligned network (SSANET) and implicitly address misalignments during the recognition process. Specifically, SSANET consists of 1) a self-supervised proposal mining formula with Morphological Alignment Constraints; 2) a discriminative scale mining (DSM) module, which exploits the feature pyramid via a circulant matrix and provides a Fourier solver for fast scale alignments; and 3) an oriented pooling (OP) module that performs the pooling operation in several pre-defined orientations. Each orientation defines one kind of spatial alignment, and the network automatically determines the optimal alignment through learning. With the proposed two modules, our algorithm can automatically determine accurate local proposal regions and generate more robust target representations that are invariant to various appearance variations. Extensive experiments verify that SSANET is competent at learning better spatial-scale invariant target representations, yielding superior performance on the fine-grained recognition task on several benchmarks.
[ { "created": "Sun, 5 Jan 2020 11:12:08 GMT", "version": "v1" } ]
2020-01-07
[ [ "Gao", "Lizhao", "" ], [ "Xu", "Haihua", "" ], [ "Sun", "Chong", "" ], [ "Liu", "Junling", "" ], [ "Tai", "Yu-Wing", "" ] ]
Existing approaches for fine-grained visual recognition focus on learning marginal region-based representations while neglecting the spatial and scale misalignments, leading to inferior performance. In this paper, we propose the spatial-scale aligned network (SSANET) and implicitly address misalignments during the recognition process. Specifically, SSANET consists of 1) a self-supervised proposal mining formula with Morphological Alignment Constraints; 2) a discriminative scale mining (DSM) module, which exploits the feature pyramid via a circulant matrix and provides a Fourier solver for fast scale alignments; and 3) an oriented pooling (OP) module that performs the pooling operation in several pre-defined orientations. Each orientation defines one kind of spatial alignment, and the network automatically determines the optimal alignment through learning. With the proposed two modules, our algorithm can automatically determine accurate local proposal regions and generate more robust target representations that are invariant to various appearance variations. Extensive experiments verify that SSANET is competent at learning better spatial-scale invariant target representations, yielding superior performance on the fine-grained recognition task on several benchmarks.
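The Fourier trick behind the DSM module as described: a circulant matrix diagonalizes under the DFT, so its matrix-vector products can be computed with FFTs in O(n log n). This shows the generic identity, not SSANET's exact solver.

import numpy as np

def circulant_matvec(c, x):
    """Multiply circ(c) @ x, where c is the first column of the circulant
    matrix, via the convolution theorem."""
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real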
1809.03470
Marek Wydmuch
Marek Wydmuch, Micha{\l} Kempka, Wojciech Ja\'skowski
ViZDoom Competitions: Playing Doom from Pixels
null
null
10.1109/TG.2018.2877047
null
cs.AI cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the first two editions of the Visual Doom AI Competition, held in 2016 and 2017. The challenge was to create bots that compete in a multi-player deathmatch in a first-person shooter (FPS) game, Doom. The bots had to make their decisions based solely on visual information, i.e., a raw screen buffer. To play well, the bots needed to understand their surroundings, navigate, explore, and handle the opponents at the same time. These aspects, together with the competitive multi-agent aspect of the game, make the competition a unique platform for evaluating state-of-the-art reinforcement learning algorithms. The paper discusses the rules, solutions, results, and statistics that give insight into the agents' behaviors. Best-performing agents are described in more detail. The results of the competition lead to the conclusion that, although reinforcement learning can produce capable Doom bots, they are not yet able to successfully compete against humans in this game. The paper also revisits the ViZDoom environment, which is a flexible, easy to use, and efficient 3D platform for research on vision-based reinforcement learning, based on the well-recognized first-person perspective game Doom.
[ { "created": "Mon, 10 Sep 2018 17:41:39 GMT", "version": "v1" } ]
2022-07-28
[ [ "Wydmuch", "Marek", "" ], [ "Kempka", "Michał", "" ], [ "Jaśkowski", "Wojciech", "" ] ]
This paper presents the first two editions of the Visual Doom AI Competition, held in 2016 and 2017. The challenge was to create bots that compete in a multi-player deathmatch in a first-person shooter (FPS) game, Doom. The bots had to make their decisions based solely on visual information, i.e., a raw screen buffer. To play well, the bots needed to understand their surroundings, navigate, explore, and handle the opponents at the same time. These aspects, together with the competitive multi-agent aspect of the game, make the competition a unique platform for evaluating state-of-the-art reinforcement learning algorithms. The paper discusses the rules, solutions, results, and statistics that give insight into the agents' behaviors. Best-performing agents are described in more detail. The results of the competition lead to the conclusion that, although reinforcement learning can produce capable Doom bots, they are not yet able to successfully compete against humans in this game. The paper also revisits the ViZDoom environment, which is a flexible, easy to use, and efficient 3D platform for research on vision-based reinforcement learning, based on the well-recognized first-person perspective game Doom.
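A minimal interaction loop against the public ViZDoom Python interface described in the paper (DoomGame, get_state, make_action); the scenario config path and the random policy are placeholders, not a competition agent.

import random
from vizdoom import DoomGame

game = DoomGame()
game.load_config("basic.cfg")                 # placeholder scenario config
game.init()
actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # one-hot button combinations
for _ in range(3):
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()              # raw screen buffer lives in the state
        reward = game.make_action(random.choice(actions))
game.close()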
2310.16252
Arnab Maiti
Arnab Maiti, Ross Boczar, Kevin Jamieson, Lillian J. Ratliff
Near-Optimal Pure Exploration in Matrix Games: A Generalization of Stochastic Bandits & Dueling Bandits
22 pages, 5 figures
null
null
null
cs.LG cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the sample complexity of identifying the pure strategy Nash equilibrium (PSNE) in a two-player zero-sum matrix game with noise. Formally, we are given a stochastic model where any learner can sample an entry $(i,j)$ of the input matrix $A\in[-1,1]^{n\times m}$ and observe $A_{i,j}+\eta$ where $\eta$ is a zero-mean 1-sub-Gaussian noise. The aim of the learner is to identify the PSNE of $A$, whenever it exists, with high probability while taking as few samples as possible. Zhou et al. (2017) present an instance-dependent sample complexity lower bound that depends only on the entries in the row and column in which the PSNE lies. We design a near-optimal algorithm whose sample complexity matches the lower bound, up to log factors. The problem of identifying the PSNE also generalizes the problem of pure exploration in stochastic multi-armed bandits and dueling bandits, and our result matches the optimal bounds, up to log factors, in both settings.
[ { "created": "Wed, 25 Oct 2023 00:05:37 GMT", "version": "v1" }, { "created": "Mon, 27 Nov 2023 21:33:05 GMT", "version": "v2" } ]
2023-11-29
[ [ "Maiti", "Arnab", "" ], [ "Boczar", "Ross", "" ], [ "Jamieson", "Kevin", "" ], [ "Ratliff", "Lillian J.", "" ] ]
We study the sample complexity of identifying the pure strategy Nash equilibrium (PSNE) in a two-player zero-sum matrix game with noise. Formally, we are given a stochastic model where any learner can sample an entry $(i,j)$ of the input matrix $A\in[-1,1]^{n\times m}$ and observe $A_{i,j}+\eta$ where $\eta$ is a zero-mean 1-sub-Gaussian noise. The aim of the learner is to identify the PSNE of $A$, whenever it exists, with high probability while taking as few samples as possible. Zhou et al. (2017) present an instance-dependent sample complexity lower bound that depends only on the entries in the row and column in which the PSNE lies. We design a near-optimal algorithm whose sample complexity matches the lower bound, up to log factors. The problem of identifying the PSNE also generalizes the problem of pure exploration in stochastic multi-armed bandits and dueling bandits, and our result matches the optimal bounds, up to log factors, in both settings.
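The noiseless version of the search target is easy to state in code: a PSNE of a zero-sum matrix A (row player maximizing) is a saddle point, maximal in its column and minimal in its row. The sampling and noise machinery, which is the paper's actual subject, is omitted.

import numpy as np

def find_psne(A):
    """Return (i, j) with A[i, j] a saddle point of A, or None if none exists."""
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] == A[i, :].min() == A[:, j].max():
                return i, j
    return None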
2403.17249
Theodoros Stouraitis
Lei Yan, Theodoros Stouraitis, Jo\~ao Moura, Wenfu Xu, Michael Gienger, and Sethu Vijayakumar
Impact-Aware Bimanual Catching of Large-Momentum Objects
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates one of the most challenging tasks in dynamic manipulation -- catching large-momentum moving objects. Beyond the realm of quasi-static manipulation, dealing with highly dynamic objects can significantly improve the robot's capability of interacting with its surrounding environment. Yet, the inevitable motion mismatch between the fast moving object and the approaching robot will result in large impulsive forces, which lead to unstable contacts and irreversible damage to both the object and the robot. To address the above problems, we propose an online optimization framework to: 1) estimate and predict the linear and angular motion of the object; 2) search and select the optimal contact locations across every surface of the object to mitigate impact through sequential quadratic programming (SQP); 3) simultaneously optimize the end-effector motion, stiffness, and contact force for both robots using multi-mode trajectory optimization (MMTO); and 4) realise the impact-aware catching motion on the compliant robotic system based on an indirect force controller. We validate the impulse distribution, contact selection, and impact-aware MMTO algorithms in simulation and demonstrate the benefits of the proposed framework in real-world experiments, including catching large-momentum moving objects with well-defined motion, constrained motion and free-flying motion.
[ { "created": "Mon, 25 Mar 2024 22:51:27 GMT", "version": "v1" } ]
2024-03-27
[ [ "Yan", "Lei", "" ], [ "Stouraitis", "Theodoros", "" ], [ "Moura", "João", "" ], [ "Xu", "Wenfu", "" ], [ "Gienger", "Michael", "" ], [ "Vijayakumar", "Sethu", "" ] ]
This paper investigates one of the most challenging tasks in dynamic manipulation -- catching large-momentum moving objects. Beyond the realm of quasi-static manipulation, dealing with highly dynamic objects can significantly improve the robot's capability of interacting with its surrounding environment. Yet, the inevitable motion mismatch between the fast moving object and the approaching robot will result in large impulsive forces, which lead to unstable contacts and irreversible damage to both the object and the robot. To address the above problems, we propose an online optimization framework to: 1) estimate and predict the linear and angular motion of the object; 2) search and select the optimal contact locations across every surface of the object to mitigate impact through sequential quadratic programming (SQP); 3) simultaneously optimize the end-effector motion, stiffness, and contact force for both robots using multi-mode trajectory optimization (MMTO); and 4) realise the impact-aware catching motion on the compliant robotic system based on an indirect force controller. We validate the impulse distribution, contact selection, and impact-aware MMTO algorithms in simulation and demonstrate the benefits of the proposed framework in real-world experiments, including catching large-momentum moving objects with well-defined motion, constrained motion and free-flying motion.
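A toy version of the contact-selection criterion suggested by this abstract: for a rigid collision, the impact impulse scales with the effective mass times the relative normal velocity, so candidate contact locations can be ranked by that product. The effective-mass model is a simplification for illustration, not the paper's SQP search.

import numpy as np

def best_contact(candidates):
    """candidates: list of (m_eff, v_rel_normal) per candidate contact location.
    Returns the index with the smallest predicted impact impulse."""
    impulses = [m * abs(v) for m, v in candidates]
    return int(np.argmin(impulses))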
2006.05679
Bertrand Jouve
Djellabi Mehdi, Jouve Bertrand, Amblard Fr\'ed\'eric
Dense and sparse vertex connectivity in networks
null
null
null
null
cs.SI cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The different approaches developed to analyze the structure of complex networks have generated a large number of studies. In the field of social networks at least, studies mainly address the detection and analysis of communities. In this paper, we challenge these approaches and focus on nodes that have meaningful local interactions able to identify the internal organization of communities or the way communities are assembled. We propose an algorithm, ItRich, to identify this type of nodes, based on the decomposition of a graph into successive, less and less dense, layers. Our method is tested on synthetic and real data sets and meshes well with other methods such as community detection or k-core decomposition.
[ { "created": "Wed, 10 Jun 2020 06:27:07 GMT", "version": "v1" } ]
2020-06-11
[ [ "Mehdi", "Djellabi", "" ], [ "Bertrand", "Jouve", "" ], [ "Frédéric", "Amblard", "" ] ]
The different approaches developed to analyze the structure of complex networks have generated a large number of studies. In the field of social networks at least, studies mainly address the detection and analysis of communities. In this paper, we challenge these approaches and focus on nodes that have meaningful local interactions able to identify the internal organization of communities or the way communities are assembled. We propose an algorithm, ItRich, to identify this type of nodes, based on the decomposition of a graph into successive, less and less dense, layers. Our method is tested on synthetic and real data sets and meshes well with other methods such as community detection or k-core decomposition.
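ItRich builds on peeling a graph into successively denser layers; classic k-core decomposition, shown here with networkx, is the closest standard primitive and not the paper's exact algorithm.

import networkx as nx

G = nx.karate_club_graph()
core = nx.core_number(G)        # node -> deepest (densest) core it survives into
layers = {}
for node, k in core.items():
    layers.setdefault(k, []).append(node)
print({k: sorted(v) for k, v in sorted(layers.items())})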
1001.2625
Arnab Bhattacharya
Arnab Bhattacharya, Abhishek Bhowmick, Ambuj K. Singh
Finding top-k similar pairs of objects annotated with terms from an ontology
17 pages, 13 figures
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the growing focus on semantic searches and interpretations, an increasing number of standardized vocabularies and ontologies are being designed and used to describe data. We investigate the querying of objects described by a tree-structured ontology. Specifically, we consider the case of finding the top-k best pairs of objects that have been annotated with terms from such an ontology when the object descriptions are available only at runtime. We consider three distance measures. The first one defines the object distance as the minimum pairwise distance between the sets of terms describing them, and the second one defines the distance as the average pairwise term distance. The third and most useful distance measure, earth mover's distance, finds the best way of matching the terms and computes the distance corresponding to this best matching. We develop lower bounds that can be aggregated progressively and utilize them to speed up the search for top-k object pairs when the earth mover's distance is used. For the minimum pairwise distance, we devise an algorithm that runs in O(D + Tk log k) time, where D is the total information size and T is the total number of terms in the ontology. We also develop a novel best-first search strategy for the average pairwise distance that utilizes lower bounds generated in an ordered manner. Experiments on real and synthetic datasets demonstrate the practicality and scalability of our algorithms.
[ { "created": "Fri, 15 Jan 2010 07:01:37 GMT", "version": "v1" }, { "created": "Sat, 6 Mar 2010 11:23:28 GMT", "version": "v2" } ]
2010-03-09
[ [ "Bhattacharya", "Arnab", "" ], [ "Bhowmick", "Abhishek", "" ], [ "Singh", "Ambuj K.", "" ] ]
With the growing focus on semantic searches and interpretations, an increasing number of standardized vocabularies and ontologies are being designed and used to describe data. We investigate the querying of objects described by a tree-structured ontology. Specifically, we consider the case of finding the top-k best pairs of objects that have been annotated with terms from such an ontology when the object descriptions are available only at runtime. We consider three distance measures. The first one defines the object distance as the minimum pairwise distance between the sets of terms describing them, and the second one defines the distance as the average pairwise term distance. The third and most useful distance measure, earth mover's distance, finds the best way of matching the terms and computes the distance corresponding to this best matching. We develop lower bounds that can be aggregated progressively and utilize them to speed up the search for top-k object pairs when the earth mover's distance is used. For the minimum pairwise distance, we devise an algorithm that runs in O(D + Tk log k) time, where D is the total information size and T is the total number of terms in the ontology. We also develop a novel best-first search strategy for the average pairwise distance that utilizes lower bounds generated in an ordered manner. Experiments on real and synthetic datasets demonstrate the practicality and scalability of our algorithms.
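A hedged brute-force baseline for the task as stated: top-k object pairs under the minimum pairwise term distance. The paper's progressive lower bounds and pruning are deliberately omitted; term_dist is assumed to be the tree distance on the ontology.

import heapq
from itertools import combinations

def topk_pairs(objects, term_dist, k):
    """objects: dict name -> set of ontology terms. Returns the k closest pairs."""
    def set_dist(a, b):
        return min(term_dist(s, t) for s in a for t in b)
    scored = ((set_dist(objects[u], objects[v]), (u, v))
              for u, v in combinations(objects, 2))
    return heapq.nsmallest(k, scored)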
2009.08371
Nils Eckstein
Nils Eckstein and Julia Buhmann and Matthew Cook and Jan Funke
Microtubule Tracking in Electron Microscopy Volumes
Accepted at MICCAI 2020
null
null
null
cs.CV cs.LG eess.IV q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for microtubule tracking in electron microscopy volumes. Our method first identifies a sparse set of voxels that likely belong to microtubules. Similar to prior work, we then enumerate potential edges between these voxels, which we represent in a candidate graph. Tracks of microtubules are found by selecting nodes and edges in the candidate graph by solving a constrained optimization problem incorporating biological priors on microtubule structure. For this, we present a novel integer linear programming formulation, which results in speed-ups of three orders of magnitude and an increase of 53% in accuracy compared to prior art (evaluated on three 1.2 x 4 x 4$\mu$m volumes of Drosophila neural tissue). We also propose a scheme to solve the optimization problem in a block-wise fashion, which allows distributed tracking and is necessary to process very large electron microscopy volumes. Finally, we release a benchmark dataset for microtubule tracking, here used for training, testing and validation, consisting of eight 30 x 1000 x 1000 voxel blocks (1.2 x 4 x 4$\mu$m) of densely annotated microtubules in the CREMI data set (https://github.com/nilsec/micron).
[ { "created": "Thu, 17 Sep 2020 15:37:30 GMT", "version": "v1" } ]
2020-09-18
[ [ "Eckstein", "Nils", "" ], [ "Buhmann", "Julia", "" ], [ "Cook", "Matthew", "" ], [ "Funke", "Jan", "" ] ]
We present a method for microtubule tracking in electron microscopy volumes. Our method first identifies a sparse set of voxels that likely belong to microtubules. Similar to prior work, we then enumerate potential edges between these voxels, which we represent in a candidate graph. Tracks of microtubules are found by selecting nodes and edges in the candidate graph by solving a constrained optimization problem incorporating biological priors on microtubule structure. For this, we present a novel integer linear programming formulation, which results in speed-ups of three orders of magnitude and an increase of 53% in accuracy compared to prior art (evaluated on three 1.2 x 4 x 4$\mu$m volumes of Drosophila neural tissue). We also propose a scheme to solve the optimization problem in a block-wise fashion, which allows distributed tracking and is necessary to process very large electron microscopy volumes. Finally, we release a benchmark dataset for microtubule tracking, here used for training, testing and validation, consisting of eight 30 x 1000 x 1000 voxel blocks (1.2 x 4 x 4$\mu$m) of densely annotated microtubules in the CREMI data set (https://github.com/nilsec/micron).
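A hedged sketch of the generic selection ILP this abstract describes, written with PuLP: choose nodes and edges of the candidate graph to minimize a cost, with an edge usable only when both endpoints are selected. The costs (negative for evidence-supported candidates, otherwise nothing is picked) and the biological priors are placeholders, not the paper's formulation.

from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

def track_ilp(nodes, edges, node_cost, edge_cost):
    prob = LpProblem("microtubule_tracks", LpMinimize)
    x = {n: LpVariable(f"x_{n}", cat=LpBinary) for n in nodes}
    y = {(u, v): LpVariable(f"y_{u}_{v}", cat=LpBinary) for (u, v) in edges}
    prob += lpSum(node_cost[n] * x[n] for n in nodes) + \
            lpSum(edge_cost[e] * y[e] for e in edges)
    for (u, v) in edges:                 # an edge implies both of its endpoints
        prob += y[(u, v)] <= x[u]
        prob += y[(u, v)] <= x[v]
    prob.solve()
    return [e for e in edges if y[e].value() == 1]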
2406.17548
Lachlan Gunn
Vasisht Duddu, Oskari J\"arvinen, Lachlan J Gunn, N Asokan
Laminator: Verifiable ML Property Cards using Hardware-assisted Attestations
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regulations increasingly call for various assurances from machine learning (ML) model providers about their training data, training process, and the behavior of resulting models during inference. For better transparency, companies (e.g., Huggingface and Google) have adopted model cards and datasheets which describe different properties of the training datasets and models. In the same vein, we introduce the notion of an inference card to describe the properties of a given inference (e.g., binding output to the model and its corresponding input). We collectively refer to these as ML property cards. A malicious model provider can include false information in ML property cards, raising a need for verifiable ML property cards. We show how to realize them using property attestations, technical mechanisms by which a prover (e.g., a model provider) can attest different ML properties during training and inference to a verifier (e.g., an auditor). However, prior attestation mechanisms based purely on cryptography are often narrowly focused (lacking versatility) and inefficient. There is a need to efficiently attest different types of properties across the ML model training and inference pipeline. Recent developments make it possible to run and even train models inside hardware-assisted trusted execution environments (TEEs), which can provide highly efficient attestation. We propose Laminator, the first framework for verifiable ML property cards, using hardware-assisted ML property attestations to efficiently furnish attestations for various ML properties for training and inference. It scales to multiple verifiers and is independent of the model configuration.
[ { "created": "Tue, 25 Jun 2024 13:36:53 GMT", "version": "v1" } ]
2024-06-26
[ [ "Duddu", "Vasisht", "" ], [ "Järvinen", "Oskari", "" ], [ "Gunn", "Lachlan J", "" ], [ "Asokan", "N", "" ] ]
Regulations increasingly call for various assurances from machine learning (ML) model providers about their training data, training process, and the behavior of resulting models during inference. For better transparency, companies (e.g., Huggingface and Google) have adopted model cards and datasheets which describe different properties of the training datasets and models. In the same vein, we introduce the notion of an inference card to describe the properties of a given inference (e.g., binding output to the model and its corresponding input). We collectively refer to these as ML property cards. A malicious model provider can include false information in ML property cards, raising a need for verifiable ML property cards. We show how to realize them using property attestations, technical mechanisms by which a prover (e.g., a model provider) can attest different ML properties during training and inference to a verifier (e.g., an auditor). However, prior attestation mechanisms based purely on cryptography are often narrowly focused (lacking versatility) and inefficient. There is a need to efficiently attest different types of properties across the ML model training and inference pipeline. Recent developments make it possible to run and even train models inside hardware-assisted trusted execution environments (TEEs), which can provide highly efficient attestation. We propose Laminator, the first framework for verifiable ML property cards, using hardware-assisted ML property attestations to efficiently furnish attestations for various ML properties for training and inference. It scales to multiple verifiers and is independent of the model configuration.
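A toy of what an inference card could bind together: digests tying an output to the model and its input, to be signed from inside the TEE. The field layout is an illustrative assumption, not Laminator's actual format.

import hashlib, json

def inference_card(model_digest: str, input_bytes: bytes, output_bytes: bytes):
    card = {
        "model": model_digest,
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output_sha256": hashlib.sha256(output_bytes).hexdigest(),
    }
    # digest over the canonicalized card; a TEE attestation would sign this
    card["binding"] = hashlib.sha256(json.dumps(card, sort_keys=True).encode()).hexdigest()
    return card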
2009.07769
Sarah Alnegheimish
Alexander Geiger, Dongyu Liu, Sarah Alnegheimish, Alfredo Cuesta-Infante, Kalyan Veeramachaneni
TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks
Alexander Geiger and Dongyu Liu contributed equally. To appear in the proceedings of IEEE International Conference on Big Data
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time series anomalies can offer information relevant to critical situations facing various fields, from finance and aerospace to the IT, security, and medical domains. However, detecting anomalies in time series data is particularly challenging due to the vague definition of anomalies and said data's frequent lack of labels and highly complex temporal correlations. Current state-of-the-art unsupervised machine learning methods for anomaly detection suffer from scalability and portability issues, and may have high false positive rates. In this paper, we propose TadGAN, an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs). To capture the temporal correlations of time series distributions, we use LSTM Recurrent Neural Networks as base models for Generators and Critics. TadGAN is trained with cycle consistency loss to allow for effective time-series data reconstruction. We further propose several novel methods to compute reconstruction errors, as well as different approaches to combine reconstruction errors and Critic outputs to compute anomaly scores. To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one. We compare our approach to 8 baseline anomaly detection methods on 11 datasets from multiple reputable sources such as NASA, Yahoo, Numenta, Amazon, and Twitter. The results show that our approach can effectively detect anomalies and outperform baseline methods in most cases (6 out of 11). Notably, our method has the highest averaged F1 score across all the datasets. Our code is open source and is available as a benchmarking tool.
[ { "created": "Wed, 16 Sep 2020 15:52:04 GMT", "version": "v1" }, { "created": "Sat, 19 Sep 2020 23:25:06 GMT", "version": "v2" }, { "created": "Sat, 14 Nov 2020 23:05:29 GMT", "version": "v3" } ]
2020-11-17
[ [ "Geiger", "Alexander", "" ], [ "Liu", "Dongyu", "" ], [ "Alnegheimish", "Sarah", "" ], [ "Cuesta-Infante", "Alfredo", "" ], [ "Veeramachaneni", "Kalyan", "" ] ]
Time series anomalies can offer information relevant to critical situations facing various fields, from finance and aerospace to the IT, security, and medical domains. However, detecting anomalies in time series data is particularly challenging due to the vague definition of anomalies and said data's frequent lack of labels and highly complex temporal correlations. Current state-of-the-art unsupervised machine learning methods for anomaly detection suffer from scalability and portability issues, and may have high false positive rates. In this paper, we propose TadGAN, an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs). To capture the temporal correlations of time series distributions, we use LSTM Recurrent Neural Networks as base models for Generators and Critics. TadGAN is trained with cycle consistency loss to allow for effective time-series data reconstruction. We further propose several novel methods to compute reconstruction errors, as well as different approaches to combine reconstruction errors and Critic outputs to compute anomaly scores. To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one. We compare our approach to 8 baseline anomaly detection methods on 11 datasets from multiple reputable sources such as NASA, Yahoo, Numenta, Amazon, and Twitter. The results show that our approach can effectively detect anomalies and outperform baseline methods in most cases (6 out of 11). Notably, our method has the highest averaged F1 score across all the datasets. Our code is open source and is available as a benchmarking tool.
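One plausible instance of the scoring recipe this abstract mentions -- combining reconstruction errors with Critic outputs: standardize both signals and mix them. The z-scoring and the convex weight alpha are assumptions about one of the tested variants, not the reported best.

import numpy as np

def anomaly_score(x, x_hat, critic, alpha=0.5):
    """x, x_hat, critic: 1D arrays over time. Higher score = more anomalous."""
    recon = np.abs(x - x_hat)                           # pointwise reconstruction error
    z = lambda s: (s - s.mean()) / (s.std() + 1e-8)     # standardize each signal
    return alpha * z(recon) + (1 - alpha) * z(-critic)  # low critic score -> anomalous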
1912.05759
Hanchi Liu
Bin Liu, Yuxiao Ren, Hanchi Liu, Hui Xu, Zhengfang Wang, Anthony G. Cohn, and Peng Jiang
GPRInvNet: Deep Learning-Based Ground Penetrating Radar Data Inversion for Tunnel Lining
null
IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 10, pp. 8305-8325, Oct. 2021
10.1109/TGRS.2020.3046454
null
cs.CV cs.LG eess.IV physics.geo-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A DNN architecture referred to as GPRInvNet was proposed to tackle the challenges of mapping the ground-penetrating radar (GPR) B-Scan data to complex permittivity maps of subsurface structures. The GPRInvNet consisted of a trace-to-trace encoder and a decoder. It was specially designed to take into account the characteristics of GPR inversion when faced with complex GPR B-Scan data, as well as addressing the spatial alignment issues between time-series B-Scan data and spatial permittivity maps. It displayed the ability to fuse features from several adjacent traces on the B-Scan data to enhance each trace, and then further condense the features of each trace separately. As a result, the sensitive zones on the permittivity maps spatially aligned to the enhanced trace could be reconstructed accurately. The GPRInvNet has been utilized to reconstruct the permittivity map of tunnel linings. A diverse range of dielectric models of tunnel linings containing complex defects has been reconstructed using GPRInvNet. The results have demonstrated that the GPRInvNet is capable of effectively reconstructing complex tunnel lining defects with clear boundaries. Comparative results with existing baseline methods also demonstrated the superiority of the GPRInvNet. For the purpose of generalizing the GPRInvNet to real GPR data, some background noise patches recorded from practical model testing were integrated into the synthetic GPR data to retrain the GPRInvNet. The model testing has been conducted for validation, and experimental results revealed that the GPRInvNet had also achieved satisfactory results with regard to the real data.
[ { "created": "Thu, 12 Dec 2019 03:43:09 GMT", "version": "v1" }, { "created": "Fri, 13 Dec 2019 03:35:45 GMT", "version": "v2" }, { "created": "Sun, 26 Sep 2021 08:15:44 GMT", "version": "v3" } ]
2021-09-28
[ [ "Liu", "Bin", "" ], [ "Ren", "Yuxiao", "" ], [ "Liu", "Hanchi", "" ], [ "Xu", "Hui", "" ], [ "Wang", "Zhengfang", "" ], [ "Cohn", "Anthony G.", "" ], [ "Jiang", "Peng", "" ] ]
A DNN architecture referred to as GPRInvNet was proposed to tackle the challenges of mapping the ground-penetrating radar (GPR) B-Scan data to complex permittivity maps of subsurface structures. The GPRInvNet consisted of a trace-to-trace encoder and a decoder. It was specially designed to take into account the characteristics of GPR inversion when faced with complex GPR B-Scan data, as well as addressing the spatial alignment issues between time-series B-Scan data and spatial permittivity maps. It displayed the ability to fuse features from several adjacent traces on the B-Scan data to enhance each trace, and then further condense the features of each trace separately. As a result, the sensitive zones on the permittivity maps spatially aligned to the enhanced trace could be reconstructed accurately. The GPRInvNet has been utilized to reconstruct the permittivity map of tunnel linings. A diverse range of dielectric models of tunnel linings containing complex defects has been reconstructed using GPRInvNet. The results have demonstrated that the GPRInvNet is capable of effectively reconstructing complex tunnel lining defects with clear boundaries. Comparative results with existing baseline methods also demonstrated the superiority of the GPRInvNet. For the purpose of generalizing the GPRInvNet to real GPR data, some background noise patches recorded from practical model testing were integrated into the synthetic GPR data to retrain the GPRInvNet. The model testing has been conducted for validation, and experimental results revealed that the GPRInvNet had also achieved satisfactory results with regard to the real data.
2203.03216
Beiduo Chen
Beiduo Chen, Jun-Yu Ma, Jiajun Qi, Wu Guo, Zhen-Hua Ling, Quan Liu
USTC-NELSLIP at SemEval-2022 Task 11: Gazetteer-Adapted Integration Network for Multilingual Complex Named Entity Recognition
Winner system (USTC-NELSLIP) of SemEval 2022 MultiCoNER shared task on 3 tracks (Chinese, Bangla, Code-mixed)
null
10.18653/v1/2022.semeval-1.223
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the system developed by the USTC-NELSLIP team for SemEval-2022 Task 11 Multilingual Complex Named Entity Recognition (MultiCoNER). We propose a gazetteer-adapted integration network (GAIN) to improve the performance of language models for recognizing complex named entities. The method first adapts the representations of gazetteer networks to those of language models by minimizing the KL divergence between them. After adaptation, these two networks are then integrated for backend supervised named entity recognition (NER) training. The proposed method is applied to several state-of-the-art Transformer-based NER models with a gazetteer built from Wikidata, and shows great generalization ability across them. The final predictions are derived from an ensemble of these trained models. Experimental results and detailed analysis verify the effectiveness of the proposed method. The official results show that our system ranked 1st on three tracks (Chinese, Code-mixed and Bangla) and 2nd on the other ten tracks in this task.
[ { "created": "Mon, 7 Mar 2022 09:05:37 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2022 04:46:07 GMT", "version": "v2" } ]
2023-05-09
[ [ "Chen", "Beiduo", "" ], [ "Ma", "Jun-Yu", "" ], [ "Qi", "Jiajun", "" ], [ "Guo", "Wu", "" ], [ "Ling", "Zhen-Hua", "" ], [ "Liu", "Quan", "" ] ]
This paper describes the system developed by the USTC-NELSLIP team for SemEval-2022 Task 11 Multilingual Complex Named Entity Recognition (MultiCoNER). We propose a gazetteer-adapted integration network (GAIN) to improve the performance of language models for recognizing complex named entities. The method first adapts the representations of gazetteer networks to those of language models by minimizing the KL divergence between them. After adaptation, these two networks are then integrated for backend supervised named entity recognition (NER) training. The proposed method is applied to several state-of-the-art Transformer-based NER models with a gazetteer built from Wikidata, and shows great generalization ability across them. The final predictions are derived from an ensemble of these trained models. Experimental results and detailed analysis verify the effectiveness of the proposed method. The official results show that our system ranked 1st on three tracks (Chinese, Code-mixed and Bangla) and 2nd on the other ten tracks in this task.
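A hedged sketch of the adaptation step named in this abstract: align the gazetteer network's per-token distributions with the language model's by minimizing a KL divergence. The direction of the KL and the detached LM target are assumptions.

import torch.nn.functional as F

def gain_adaptation_loss(gazetteer_logits, lm_logits):
    """Both tensors: (batch, seq_len, num_labels)."""
    log_q = F.log_softmax(gazetteer_logits, dim=-1)
    p = F.softmax(lm_logits.detach(), dim=-1)      # language-model side as fixed target
    return F.kl_div(log_q, p, reduction="batchmean")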
2306.07699
Haozhen Zhang
Haozhen Zhang, Xueting Han, Xi Xiao, Jing Bai
Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs
Accepted by CIKM 2023. The code is available at https://github.com/ViktorAxelsen/TGSL
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporal Graph Learning, which aims to model the time-evolving nature of graphs, has gained increasing attention and achieved remarkable performance recently. However, in reality, graph structures are often incomplete and noisy, which hinders temporal graph networks (TGNs) from learning informative representations. Graph contrastive learning uses data augmentation to generate plausible variations of existing data and learn robust representations. However, rule-based augmentation approaches may be suboptimal as they lack learnability and fail to leverage rich information from downstream tasks. To address these issues, we propose a Time-aware Graph Structure Learning (TGSL) approach via sequence prediction on temporal graphs, which learns better graph structures for downstream tasks through adding potential temporal edges. In particular, it predicts a time-aware context embedding based on previously observed interactions and uses Gumbel-Top-K to select the candidate edges closest to this context embedding. Additionally, several candidate sampling strategies are proposed to ensure both efficiency and diversity. Furthermore, we jointly learn the graph structure and TGNs in an end-to-end manner and perform inference on the refined graph. Extensive experiments on temporal link prediction benchmarks demonstrate that TGSL yields significant gains for popular TGNs such as TGAT and GraphMixer, and it outperforms other contrastive learning methods on temporal graphs. We release the code at https://github.com/ViktorAxelsen/TGSL.
[ { "created": "Tue, 13 Jun 2023 11:34:36 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2023 09:03:39 GMT", "version": "v2" } ]
2023-08-16
[ [ "Zhang", "Haozhen", "" ], [ "Han", "Xueting", "" ], [ "Xiao", "Xi", "" ], [ "Bai", "Jing", "" ] ]
Temporal Graph Learning, which aims to model the time-evolving nature of graphs, has gained increasing attention and achieved remarkable performance recently. However, in reality, graph structures are often incomplete and noisy, which hinders temporal graph networks (TGNs) from learning informative representations. Graph contrastive learning uses data augmentation to generate plausible variations of existing data and learn robust representations. However, rule-based augmentation approaches may be suboptimal as they lack learnability and fail to leverage rich information from downstream tasks. To address these issues, we propose a Time-aware Graph Structure Learning (TGSL) approach via sequence prediction on temporal graphs, which learns better graph structures for downstream tasks through adding potential temporal edges. In particular, it predicts a time-aware context embedding based on previously observed interactions and uses Gumbel-Top-K to select the candidate edges closest to this context embedding. Additionally, several candidate sampling strategies are proposed to ensure both efficiency and diversity. Furthermore, we jointly learn the graph structure and TGNs in an end-to-end manner and perform inference on the refined graph. Extensive experiments on temporal link prediction benchmarks demonstrate that TGSL yields significant gains for popular TGNs such as TGAT and GraphMixer, and it outperforms other contrastive learning methods on temporal graphs. We release the code at https://github.com/ViktorAxelsen/TGSL.
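Gumbel-Top-K itself is compact enough to show: perturb candidate scores with Gumbel noise and keep the K largest, which samples K items without replacement in proportion to their softmax probabilities. How the scores are computed from the context embedding is TGSL's contribution and assumed away here.

import torch

def gumbel_top_k(scores, k):
    """scores: 1D tensor of candidate-edge scores. Returns k sampled indices."""
    u = torch.rand_like(scores).clamp_(1e-9, 1 - 1e-9)
    gumbel = -torch.log(-torch.log(u))           # Gumbel(0, 1) noise
    return torch.topk(scores + gumbel, k).indices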
2210.09220
David Sinclair D.Phil Oxon
Willem.T.Pye, David.A.Sinclair
A Saccaded Visual Transformer for General Object Spotting
11 pages, mostly figures; the central idea is to train on the distance a patch is from a labelled feature
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the novel combination of a visual-transformer-style patch classifier with saccaded local attention. A novel optimisation paradigm for training object models is also presented: rather than minimising class-membership probability error, the network is trained to estimate the normalised distance to the centroid of labelled objects. This approach builds a degree of translational invariance directly into the model and allows fast saccaded search with gradient ascent to find object centroids. The resulting saccaded visual transformer is demonstrated on human faces.
[ { "created": "Mon, 17 Oct 2022 16:17:02 GMT", "version": "v1" } ]
2022-10-18
[ [ "Pye", "Willem. T.", "" ], [ "Sinclair", "David. A.", "" ] ]
This paper presents the novel combination of a visual-transformer-style patch classifier with saccaded local attention. A novel optimisation paradigm for training object models is also presented: rather than minimising class-membership probability error, the network is trained to estimate the normalised distance to the centroid of labelled objects. This approach builds a degree of translational invariance directly into the model and allows fast saccaded search with gradient ascent to find object centroids. The resulting saccaded visual transformer is demonstrated on human faces.
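A toy of the saccaded search described: given a map of predicted distance-to-centroid per patch, hill-descend over neighbouring patches until a local minimum, i.e. an estimated centroid. The 8-neighbourhood and the precomputed distance map are illustrative assumptions.

import numpy as np

def saccade(dist_map, start):
    """dist_map: (H, W) predicted normalised distance to the object centroid."""
    y, x = start
    while True:
        nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < dist_map.shape[0] and 0 <= x + dx < dist_map.shape[1]]
        best = min(nbrs, key=lambda p: dist_map[p])
        if best == (y, x):
            return y, x                  # local minimum: estimated centroid
        y, x = best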
2406.06385
Yelysei Bondarenko
Yelysei Bondarenko, Riccardo Del Chiaro, Markus Nagel
Low-Rank Quantization-Aware Training for LLMs
null
null
null
null
cs.LG cs.AI cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large language models (LLMs) are omnipresent; however, their practical deployment is challenging due to their ever increasing computational and memory demands. Quantization is one of the most effective ways to make them more compute and memory efficient. Quantization-aware training (QAT) methods generally produce the best quantized performance, but this comes at the cost of potentially long training time and excessive memory usage, making QAT impractical for LLMs. Inspired by the parameter-efficient fine-tuning (PEFT) and low-rank adaptation (LoRA) literature, we propose LR-QAT -- a lightweight and memory-efficient QAT algorithm for LLMs. LR-QAT employs several components to save memory without sacrificing predictive performance: (a) low-rank auxiliary weights that are aware of the quantization grid; (b) a downcasting operator using fixed-point or double-packed integers; and (c) checkpointing. Unlike most related work, our method (i) is inference-efficient, leading to no additional overhead compared to traditional PTQ; (ii) can be seen as a general extended pretraining framework, meaning that the resulting model can still be utilized for any downstream task afterwards; and (iii) can be applied across a wide range of quantization settings, such as different choices of quantization granularity and activation quantization, and can be seamlessly combined with many PTQ techniques. We apply LR-QAT to the LLaMA-2/3 and Mistral model families and validate its effectiveness on several downstream tasks. Our method outperforms common post-training quantization (PTQ) approaches and reaches the same model performance as full-model QAT at a fraction of its memory usage. Specifically, we can train a 7B LLM on a single consumer grade GPU with 24GB of memory.
[ { "created": "Mon, 10 Jun 2024 15:44:22 GMT", "version": "v1" }, { "created": "Thu, 20 Jun 2024 15:18:50 GMT", "version": "v2" } ]
2024-06-21
[ [ "Bondarenko", "Yelysei", "" ], [ "Del Chiaro", "Riccardo", "" ], [ "Nagel", "Markus", "" ] ]
Large language models (LLMs) are omnipresent; however, their practical deployment is challenging due to their ever increasing computational and memory demands. Quantization is one of the most effective ways to make them more compute and memory efficient. Quantization-aware training (QAT) methods generally produce the best quantized performance, but this comes at the cost of potentially long training time and excessive memory usage, making QAT impractical for LLMs. Inspired by the parameter-efficient fine-tuning (PEFT) and low-rank adaptation (LoRA) literature, we propose LR-QAT -- a lightweight and memory-efficient QAT algorithm for LLMs. LR-QAT employs several components to save memory without sacrificing predictive performance: (a) low-rank auxiliary weights that are aware of the quantization grid; (b) a downcasting operator using fixed-point or double-packed integers; and (c) checkpointing. Unlike most related work, our method (i) is inference-efficient, leading to no additional overhead compared to traditional PTQ; (ii) can be seen as a general extended pretraining framework, meaning that the resulting model can still be utilized for any downstream task afterwards; and (iii) can be applied across a wide range of quantization settings, such as different choices of quantization granularity and activation quantization, and can be seamlessly combined with many PTQ techniques. We apply LR-QAT to the LLaMA-2/3 and Mistral model families and validate its effectiveness on several downstream tasks. Our method outperforms common post-training quantization (PTQ) approaches and reaches the same model performance as full-model QAT at a fraction of its memory usage. Specifically, we can train a 7B LLM on a single consumer grade GPU with 24GB of memory.
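A hedged sketch of the flavour of such a forward pass: frozen base weights plus trainable low-rank factors, passed through a fake-quantizer with a straight-through gradient. Scale handling, the downcasting operator, and checkpointing are simplified away; this is not the LR-QAT algorithm itself.

import torch

def fake_quant(w, scale, bits=4):
    """Round to the signed integer grid and rescale, straight-through gradient."""
    qmax = 2 ** (bits - 1) - 1
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return w + (q * scale - w).detach()

def lowrank_qat_linear(x, W0, A, B, scale):
    """W0: frozen (out, in); A: (out, r), B: (r, in) trainable low-rank factors."""
    W = fake_quant(W0 + A @ B, scale)   # low-rank update lives on the quantization grid
    return x @ W.t()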
2303.13712
Ruqing Xu
Ruqing Xu, Sarah Dean
Decision-aid or Controller? Steering Human Decision Makers with Algorithms
null
null
null
null
cs.AI cs.CY cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithms are used to aid human decision makers by making predictions and recommending decisions. Currently, these algorithms are trained to optimize prediction accuracy. What if they were optimized to control final decisions? In this paper, we study a decision-aid algorithm that learns about the human decision maker and provides ''personalized recommendations'' to influence final decisions. We first consider fixed human decision functions which map observable features and the algorithm's recommendations to final decisions. We characterize the conditions under which perfect control over final decisions is attainable. Under fairly general assumptions, the parameters of the human decision function can be identified from past interactions between the algorithm and the human decision maker, even when the algorithm was constrained to make truthful recommendations. We then consider a decision maker who is aware of the algorithm's manipulation and responds strategically. By posing the setting as a variation of the cheap talk game [Crawford and Sobel, 1982], we show that all equilibria are partition equilibria where only coarse information is shared: the algorithm recommends an interval containing the ideal decision. We discuss the potential applications of such algorithms and their social implications.
[ { "created": "Thu, 23 Mar 2023 23:24:26 GMT", "version": "v1" } ]
2023-03-27
[ [ "Xu", "Ruqing", "" ], [ "Dean", "Sarah", "" ] ]
Algorithms are used to aid human decision makers by making predictions and recommending decisions. Currently, these algorithms are trained to optimize prediction accuracy. What if they were optimized to control final decisions? In this paper, we study a decision-aid algorithm that learns about the human decision maker and provides ''personalized recommendations'' to influence final decisions. We first consider fixed human decision functions which map observable features and the algorithm's recommendations to final decisions. We characterize the conditions under which perfect control over final decisions is attainable. Under fairly general assumptions, the parameters of the human decision function can be identified from past interactions between the algorithm and the human decision maker, even when the algorithm was constrained to make truthful recommendations. We then consider a decision maker who is aware of the algorithm's manipulation and responds strategically. By posing the setting as a variation of the cheap talk game [Crawford and Sobel, 1982], we show that all equilibria are partition equilibria where only coarse information is shared: the algorithm recommends an interval containing the ideal decision. We discuss the potential applications of such algorithms and their social implications.
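The identification claim in the abstract above -- that the parameters of a fixed human decision function can be recovered from past interactions -- has a particularly simple instance when the human responds linearly to features and recommendations. The sketch below fits that linear case by least squares; the linear model and variable names are our assumptions for illustration, not the paper's general setting.

```python
import numpy as np

def identify_human(X, rec, decisions):
    """Fit decision = X @ beta + gamma * rec + noise from logged interactions.

    X: (n, d) observable features; rec: (n,) past recommendations;
    decisions: (n,) the human's final decisions.
    """
    A = np.column_stack([X, rec])
    coef, *_ = np.linalg.lstsq(A, decisions, rcond=None)
    beta, gamma = coef[:-1], coef[-1]
    return beta, gamma   # gamma measures how strongly the human follows advice
```

Once gamma is known, the algorithm can back out which recommendation steers the final decision to any target value, which is exactly the control question the paper studies.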
1612.05156
Emil Solsb{\ae}k Ottosen M.Sc.
Emil Solsb{\ae}k Ottosen and Monika D\"orfler
A Phase Vocoder based on Nonstationary Gabor Frames
10 pages, 6 figures
null
10.1109/TASLP.2017.2750767
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new algorithm for time stretching music signals based on the theory of nonstationary Gabor frames (NSGFs). The algorithm extends the techniques of the classical phase vocoder (PV) by incorporating adaptive time-frequency (TF) representations and adaptive phase locking. The adaptive TF representations imply good time resolution for the onsets of attack transients and good frequency resolution for the sinusoidal components. We estimate the phase values only at peak channels and the remaining phases are then locked to the values of the peaks in an adaptive manner. During attack transients we keep the stretch factor equal to one and we propose a new strategy for determining which channels are relevant for reinitializing the corresponding phase values. In contrast to previously published algorithms, we use a non-uniform NSGF to obtain a low redundancy of the corresponding TF representation. We show that with just three times as many TF coefficients as signal samples, artifacts such as phasiness and transient smearing can be greatly reduced compared to the classical PV. The proposed algorithm is tested on both synthetic and real-world signals and compared with state-of-the-art algorithms in a reproducible manner.
[ { "created": "Thu, 15 Dec 2016 19:43:54 GMT", "version": "v1" }, { "created": "Wed, 6 Sep 2017 11:35:59 GMT", "version": "v2" } ]
2017-09-14
[ [ "Ottosen", "Emil Solsbæk", "" ], [ "Dörfler", "Monika", "" ] ]
We propose a new algorithm for time stretching music signals based on the theory of nonstationary Gabor frames (NSGFs). The algorithm extends the techniques of the classical phase vocoder (PV) by incorporating adaptive time-frequency (TF) representations and adaptive phase locking. The adaptive TF representations imply good time resolution for the onsets of attack transients and good frequency resolution for the sinusoidal components. We estimate the phase values only at peak channels and the remaining phases are then locked to the values of the peaks in an adaptive manner. During attack transients we keep the stretch factor equal to one and we propose a new strategy for determining which channels are relevant for reinitializing the corresponding phase values. In contrast to previously published algorithms, we use a non-uniform NSGF to obtain a low redundancy of the corresponding TF representation. We show that with just three times as many TF coefficients as signal samples, artifacts such as phasiness and transient smearing can be greatly reduced compared to the classical PV. The proposed algorithm is tested on both synthetic and real-world signals and compared with state-of-the-art algorithms in a reproducible manner.
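For readers unfamiliar with the baseline being extended, the classical phase vocoder mentioned in the abstract above can be written compactly with uniform windows. The sketch below is the textbook PV (analysis, phase accumulation, overlap-add synthesis); it deliberately omits the paper's NSGF adaptivity, peak-channel phase locking, and transient handling.

```python
import numpy as np

def stretch(x, rate, n_fft=2048, hop=512):
    """Classical phase-vocoder time stretch; rate > 1 speeds the signal up."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    S = np.array([np.fft.rfft(f) for f in frames])
    omega = 2 * np.pi * np.arange(S.shape[1]) * hop / n_fft   # expected phase advance
    phase = np.angle(S[0])
    out, t = [], 0.0
    while t < len(S) - 1:
        i = int(t)
        dphi = np.angle(S[i + 1]) - np.angle(S[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))      # wrap to [-pi, pi]
        phase += omega + dphi                                 # accumulate true frequency
        # Magnitude is taken from frame i only; interpolation is omitted here.
        out.append(np.fft.irfft(np.abs(S[i]) * np.exp(1j * phase)) * win)
        t += 1.0 / rate
    y = np.zeros(len(out) * hop + n_fft)
    for k, frame in enumerate(out):
        y[k * hop:k * hop + n_fft] += frame                   # overlap-add
    return y
```

Phasiness in this baseline comes from every channel accumulating phase independently; the paper's adaptive phase locking ties non-peak channels to their nearest peak instead.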
2407.16485
Baiyu Peng
Baiyu Peng, Aude Billard
Learning General Continuous Constraint from Demonstrations via Positive-Unlabeled Learning
null
null
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Planning for a wide range of real-world tasks requires knowing and writing down all constraints. However, instances exist where these constraints are either unknown or challenging to specify accurately. A possible solution is to infer the unknown constraints from expert demonstrations. The majority of prior works limit themselves to learning simple linear constraints, or require strong knowledge of the true constraint parameterization or environmental model. To mitigate these problems, this paper presents a positive-unlabeled (PU) learning approach to infer a continuous, arbitrary, and possibly nonlinear constraint from demonstrations. From a PU learning view, we treat all data in the demonstrations as positive (feasible) data, and learn a (sub)-optimal policy to generate high-reward-winning but potentially infeasible trajectories, which serve as unlabeled data containing both feasible and infeasible states. Under an assumption on the data distribution, a feasible-infeasible classifier (i.e., constraint model) is learned from the two datasets through a postprocessing PU learning technique. The entire method employs an iterative framework alternating between updating the policy, which generates and selects higher-reward policies, and updating the constraint model. Additionally, a memory buffer is introduced to record and reuse samples from previous iterations to prevent forgetting. The effectiveness of the proposed method is validated in two MuJoCo environments, successfully inferring continuous nonlinear constraints and outperforming a baseline method in terms of constraint accuracy and policy safety.
[ { "created": "Tue, 23 Jul 2024 14:00:18 GMT", "version": "v1" } ]
2024-07-24
[ [ "Peng", "Baiyu", "" ], [ "Billard", "Aude", "" ] ]
Planning for a wide range of real-world tasks requires knowing and writing down all constraints. However, instances exist where these constraints are either unknown or challenging to specify accurately. A possible solution is to infer the unknown constraints from expert demonstrations. The majority of prior works limit themselves to learning simple linear constraints, or require strong knowledge of the true constraint parameterization or environmental model. To mitigate these problems, this paper presents a positive-unlabeled (PU) learning approach to infer a continuous, arbitrary, and possibly nonlinear constraint from demonstrations. From a PU learning view, we treat all data in the demonstrations as positive (feasible) data, and learn a (sub)-optimal policy to generate high-reward-winning but potentially infeasible trajectories, which serve as unlabeled data containing both feasible and infeasible states. Under an assumption on the data distribution, a feasible-infeasible classifier (i.e., constraint model) is learned from the two datasets through a postprocessing PU learning technique. The entire method employs an iterative framework alternating between updating the policy, which generates and selects higher-reward policies, and updating the constraint model. Additionally, a memory buffer is introduced to record and reuse samples from previous iterations to prevent forgetting. The effectiveness of the proposed method is validated in two MuJoCo environments, successfully inferring continuous nonlinear constraints and outperforming a baseline method in terms of constraint accuracy and policy safety.
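One standard way to realize the "postprocessing PU learning technique" mentioned in the abstract above is the Elkan-Noto recipe: train a classifier to separate positive from unlabeled samples, then rescale its scores by the average score on held-out positives. Whether the paper uses exactly this estimator is not stated here, so treat the sketch as one plausible instantiation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pu_constraint_model(pos_states, unlabeled_states):
    """Learn p(feasible | state) from positive (demonstrated) and unlabeled states."""
    X = np.vstack([pos_states, unlabeled_states])
    s = np.concatenate([np.ones(len(pos_states)), np.zeros(len(unlabeled_states))])
    g = LogisticRegression(max_iter=1000).fit(X, s)          # "labeled vs unlabeled"
    c = g.predict_proba(pos_states)[:, 1].mean()             # estimate P(labeled | feasible)
    def p_feasible(x):
        return np.clip(g.predict_proba(x)[:, 1] / c, 0.0, 1.0)
    return p_feasible
```

States with low p_feasible would then be flagged as constraint violations when the learned constraint model is handed to the planner.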
2010.04363
Zhiwei Xu
Zhiwei Xu, Thalaiyasingam Ajanthan, Richard Hartley
Refining Semantic Segmentation with Superpixel by Transparent Initialization and Sparse Encoder
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although deep learning greatly improves the performance of semantic segmentation, its success mainly lies in the central areas of objects rather than in accurate edges. As superpixels are a popular and effective auxiliary to preserve object edges, in this paper, we jointly learn semantic segmentation with trainable superpixels. We achieve this with fully-connected layers with Transparent Initialization (TI) and efficient logit consistency using a sparse encoder. The proposed TI preserves the effects of the learned parameters of pretrained networks. This avoids a significant increase in the loss of pretrained networks, which otherwise may be caused by inappropriate parameter initialization of the additional layers. Meanwhile, consistent pixel labels in each superpixel are guaranteed by logit consistency. The sparse encoder with sparse matrix operations substantially reduces both the memory requirement and the computational complexity. We demonstrate the superiority of TI over other parameter initialization methods and test its numerical stability. The effectiveness of our proposal is validated on PASCAL VOC 2012, ADE20K, and PASCAL Context, showing enhanced semantic segmentation edges. With quantitative evaluations of segmentation edges using the performance ratio and F-measure, our method outperforms the state of the art.
[ { "created": "Fri, 9 Oct 2020 04:20:54 GMT", "version": "v1" }, { "created": "Sat, 7 Nov 2020 23:36:29 GMT", "version": "v2" }, { "created": "Tue, 24 Nov 2020 10:14:58 GMT", "version": "v3" } ]
2020-11-25
[ [ "Xu", "Zhiwei", "" ], [ "Ajanthan", "Thalaiyasingam", "" ], [ "Hartley", "Richard", "" ] ]
Although deep learning greatly improves the performance of semantic segmentation, its success mainly lies in the central areas of objects rather than in accurate edges. As superpixels are a popular and effective auxiliary to preserve object edges, in this paper, we jointly learn semantic segmentation with trainable superpixels. We achieve this with fully-connected layers with Transparent Initialization (TI) and efficient logit consistency using a sparse encoder. The proposed TI preserves the effects of the learned parameters of pretrained networks. This avoids a significant increase in the loss of pretrained networks, which otherwise may be caused by inappropriate parameter initialization of the additional layers. Meanwhile, consistent pixel labels in each superpixel are guaranteed by logit consistency. The sparse encoder with sparse matrix operations substantially reduces both the memory requirement and the computational complexity. We demonstrate the superiority of TI over other parameter initialization methods and test its numerical stability. The effectiveness of our proposal is validated on PASCAL VOC 2012, ADE20K, and PASCAL Context, showing enhanced semantic segmentation edges. With quantitative evaluations of segmentation edges using the performance ratio and F-measure, our method outperforms the state of the art.
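The essence of Transparent Initialization as described in the abstract above is that the extra fully-connected layers start out as an identity map, so the pretrained network's outputs -- and hence its loss -- are initially unchanged. A minimal single-layer sketch is below; the paper's actual TI construction involves more than one layer and nonlinearities, so this only conveys the starting intuition.

```python
import torch.nn as nn

def transparent_linear(dim: int) -> nn.Linear:
    """A linear layer initialized to the identity: output == input at step 0."""
    layer = nn.Linear(dim, dim)
    nn.init.eye_(layer.weight)   # identity weight matrix
    nn.init.zeros_(layer.bias)
    return layer
```

Compared with random initialization, stacking such a layer onto a pretrained model leaves the initial loss exactly where fine-tuning left it.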
2310.17894
Haiqin Yang
Weixu Zhang, Yifei Wang, Yuanfeng Song, Victor Junqiu Wei, Yuxing Tian, Yiyan Qi, Jonathan H. Chan, Raymond Chi-Wing Wong, Haiqin Yang
Natural Language Interfaces for Tabular Data Querying and Visualization: A Survey
20 pages, 4 figures, 5 tables. Accepted by IEEE TKDE
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The emergence of natural language processing has revolutionized the way users interact with tabular data, enabling a shift from traditional query languages and manual plotting to more intuitive, language-based interfaces. The rise of large language models (LLMs) such as ChatGPT and its successors has further advanced this field, opening new avenues for natural language processing techniques. This survey presents a comprehensive overview of natural language interfaces for tabular data querying and visualization, which allow users to interact with data using natural language queries. We introduce the fundamental concepts and techniques underlying these interfaces with a particular emphasis on semantic parsing, the key technology facilitating the translation from natural language to SQL queries or data visualization commands. We then delve into the recent advancements in Text-to-SQL and Text-to-Vis problems from the perspectives of datasets, methodologies, metrics, and system designs. This includes a deep dive into the influence of LLMs, highlighting their strengths, limitations, and potential for future improvements. Through this survey, we aim to provide a roadmap for researchers and practitioners interested in developing and applying natural language interfaces for data interaction in the era of large language models.
[ { "created": "Fri, 27 Oct 2023 05:01:20 GMT", "version": "v1" }, { "created": "Sat, 11 May 2024 09:44:35 GMT", "version": "v2" }, { "created": "Mon, 20 May 2024 02:45:37 GMT", "version": "v3" } ]
2024-05-21
[ [ "Zhang", "Weixu", "" ], [ "Wang", "Yifei", "" ], [ "Song", "Yuanfeng", "" ], [ "Wei", "Victor Junqiu", "" ], [ "Tian", "Yuxing", "" ], [ "Qi", "Yiyan", "" ], [ "Chan", "Jonathan H.", "" ], [ "Wong", "Raymond Chi-Wing", "" ], [ "Yang", "Haiqin", "" ] ]
The emergence of natural language processing has revolutionized the way users interact with tabular data, enabling a shift from traditional query languages and manual plotting to more intuitive, language-based interfaces. The rise of large language models (LLMs) such as ChatGPT and its successors has further advanced this field, opening new avenues for natural language processing techniques. This survey presents a comprehensive overview of natural language interfaces for tabular data querying and visualization, which allow users to interact with data using natural language queries. We introduce the fundamental concepts and techniques underlying these interfaces with a particular emphasis on semantic parsing, the key technology facilitating the translation from natural language to SQL queries or data visualization commands. We then delve into the recent advancements in Text-to-SQL and Text-to-Vis problems from the perspectives of datasets, methodologies, metrics, and system designs. This includes a deep dive into the influence of LLMs, highlighting their strengths, limitations, and potential for future improvements. Through this survey, we aim to provide a roadmap for researchers and practitioners interested in developing and applying natural language interfaces for data interaction in the era of large language models.
1903.00100
Shaojie Xu
Shaojie Xu, Anvesha Amaravati, Justin Romberg, Arijit Raychowdhury
Appearance-based Gesture recognition in the compressed domain
arXiv admin note: text overlap with arXiv:1605.08313
2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, 2017, pp. 1722-1726
10.1109/ICASSP.2017.7952451
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel appearance-based gesture recognition algorithm using compressed domain signal processing techniques. Gesture features are extracted directly from the compressed measurements, which are the block averages and the coded linear combinations of the image sensor's pixel values. We also improve both the computational efficiency and the memory requirement of the previous DTW-based K-NN gesture classifiers. Both simulation testing and hardware implementation strongly support the proposed algorithm.
[ { "created": "Tue, 19 Feb 2019 06:05:12 GMT", "version": "v1" } ]
2019-03-04
[ [ "Xu", "Shaojie", "" ], [ "Amaravati", "Anvesha", "" ], [ "Romberg", "Justin", "" ], [ "Raychowdhury", "Arijit", "" ] ]
We propose a novel appearance-based gesture recognition algorithm using compressed domain signal processing techniques. Gesture features are extracted directly from the compressed measurements, which are the block averages and the coded linear combinations of the image sensor's pixel values. We also improve both the computational efficiency and the memory requirement of the previous DTW-based K-NN gesture classifiers. Both simulation testing and hardware implementation strongly support the proposed algorithm.
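The "block averages" used as compressed measurements in the abstract above are just an average pooling of the sensor frame, computable without reconstructing the image. A minimal NumPy version:

```python
import numpy as np

def block_average(frame, b=8):
    """Average each non-overlapping b x b block of a 2-D frame."""
    h, w = frame.shape
    cropped = frame[:h - h % b, :w - w % b]
    return cropped.reshape(h // b, b, w // b, b).mean(axis=(1, 3))
```

Gesture features for the DTW-based k-NN classifier are then sequences of these small pooled frames rather than full-resolution images, which is where the computational and memory savings come from.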
2106.05633
Arthur Brack
Arthur Brack and Anett Hoppe and Ralph Ewerth
Citation Recommendation for Research Papers via Knowledge Graphs
Accepted for publication in 25th International Conference on Theory and Practice of Digital Libraries (TPDL), 2021
null
null
null
cs.DL cs.IR
http://creativecommons.org/licenses/by/4.0/
Citation recommendation for research papers is a valuable task that can help researchers improve the quality of their work by suggesting relevant related work. Current approaches for this task rely primarily on the text of the papers and the citation network. In this paper, we propose to exploit an additional source of information, namely research knowledge graphs (KG) that interlink research papers based on mentioned scientific concepts. Our experimental results demonstrate that the combination of information from research KGs with existing state-of-the-art approaches is beneficial. Experimental results are presented for the STM-KG (STM: Science, Technology, Medicine), which is an automatically populated knowledge graph based on the scientific concepts extracted from papers of ten domains. The proposed approach outperforms the state of the art with a mean average precision of 20.6% (+0.8) for the top-50 retrieved results.
[ { "created": "Thu, 10 Jun 2021 10:16:51 GMT", "version": "v1" } ]
2021-06-11
[ [ "Brack", "Arthur", "" ], [ "Hoppe", "Anett", "" ], [ "Ewerth", "Ralph", "" ] ]
Citation recommendation for research papers is a valuable task that can help researchers improve the quality of their work by suggesting relevant related work. Current approaches for this task rely primarily on the text of the papers and the citation network. In this paper, we propose to exploit an additional source of information, namely research knowledge graphs (KG) that interlink research papers based on mentioned scientific concepts. Our experimental results demonstrate that the combination of information from research KGs with existing state-of-the-art approaches is beneficial. Experimental results are presented for the STM-KG (STM: Science, Technology, Medicine), which is an automatically populated knowledge graph based on the scientific concepts extracted from papers of ten domains. The proposed approach outperforms the state of the art with a mean average precision of 20.6% (+0.8) for the top-50 retrieved results.
2403.18416
Thomas Leyssens
Thomas Leyssens, Michel Henry, Jonathan Lambrechts, Jean-Francois Remacle
A Delaunay Refinement Algorithm for the Particle Finite Element Method applied to Free Surface Flows
null
null
null
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
This paper proposes two contributions to the calculation of free surface flows using the particle finite element method (PFEM). The PFEM is based on a Lagrangian approach: a set of particles defines the fluid. Then, unlike in a pure Lagrangian method, all the particles are connected by a triangular mesh. The difficulty lies in locating the free surface from this mesh. It is a matter of deciding which of the elements in the mesh are part of the fluid domain, and of defining a boundary - the free surface. Then, the incompressible Navier-Stokes equations are solved on the fluid domain and the particles' positions are updated using the resulting velocity vector. Our first contribution is an approach to adapt the mesh with theoretical guarantees of quality: the mesh generation community has acquired a lot of experience and understanding about mesh adaptation approaches with guarantees of quality on the final mesh. We use here a Delaunay refinement strategy, which allows nodes to be inserted and removed while gradually improving mesh quality. We show that this allows the creation of stable and smooth free surface geometries. Our PFEM approach models the topological evolution of one fluid. It is nevertheless necessary to apply conditions on the domain boundaries. When a boundary is a free surface, the flow on the other side is not modelled; it is represented by an external pressure. On the external free surface boundary, atmospheric pressure can be imposed. Nevertheless, there may be internal free surfaces: the fluid can fully encapsulate cavities to form bubbles. The pressure required to maintain the volume of those bubbles is a priori unknown. We propose a multi-point constraint approach to enforce the global incompressibility of those empty bubbles. This approach makes it possible to accurately model bubbly flows that involve two fluids with large density differences, while only modelling the heavier fluid.
[ { "created": "Wed, 27 Mar 2024 10:08:48 GMT", "version": "v1" } ]
2024-03-28
[ [ "Leyssens", "Thomas", "" ], [ "Henry", "Michel", "" ], [ "Lambrechts", "Jonathan", "" ], [ "Remacle", "Jean-Francois", "" ] ]
This paper proposes two contributions to the calculation of free surface flows using the particle finite element method (PFEM). The PFEM is based on a Lagrangian approach: a set of particles defines the fluid. Then, unlike in a pure Lagrangian method, all the particles are connected by a triangular mesh. The difficulty lies in locating the free surface from this mesh. It is a matter of deciding which of the elements in the mesh are part of the fluid domain, and of defining a boundary - the free surface. Then, the incompressible Navier-Stokes equations are solved on the fluid domain and the particles' positions are updated using the resulting velocity vector. Our first contribution is an approach to adapt the mesh with theoretical guarantees of quality: the mesh generation community has acquired a lot of experience and understanding about mesh adaptation approaches with guarantees of quality on the final mesh. We use here a Delaunay refinement strategy, which allows nodes to be inserted and removed while gradually improving mesh quality. We show that this allows the creation of stable and smooth free surface geometries. Our PFEM approach models the topological evolution of one fluid. It is nevertheless necessary to apply conditions on the domain boundaries. When a boundary is a free surface, the flow on the other side is not modelled; it is represented by an external pressure. On the external free surface boundary, atmospheric pressure can be imposed. Nevertheless, there may be internal free surfaces: the fluid can fully encapsulate cavities to form bubbles. The pressure required to maintain the volume of those bubbles is a priori unknown. We propose a multi-point constraint approach to enforce the global incompressibility of those empty bubbles. This approach makes it possible to accurately model bubbly flows that involve two fluids with large density differences, while only modelling the heavier fluid.
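PFEM implementations typically answer the "which elements are fluid" question in the abstract above with an alpha-shape test on the Delaunay triangles: an element is kept if its circumradius is small relative to the local mesh size. The sketch below shows that standard test in 2-D; the scipy calls are real, but the threshold value and its exact role in this particular paper are our assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def fluid_elements(points, alpha=1.4, h=1.0):
    """Keep Delaunay triangles whose circumradius is below alpha * h."""
    tri = Delaunay(points)                    # points: (n, 2) particle positions
    keep = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(a - c), np.linalg.norm(a - b)
        s = 0.5 * (la + lb + lc)
        area = max(np.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0)), 1e-12)
        R = la * lb * lc / (4.0 * area)       # circumradius of the triangle
        if R < alpha * h:
            keep.append(simplex)
    return np.array(keep)
```

The free surface is then the set of edges belonging to exactly one kept triangle, which is where the boundary conditions (external or bubble pressure) are applied.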
2301.07068
Davide Corsi
Luca Marzari, Davide Corsi, Ferdinando Cicalese and Alessandro Farinelli
The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks
Accepted in the International Joint Conference on Artificial Intelligence (IJCAI), 2023. [Marzari and Corsi contributed equally]
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Neural Networks are increasingly adopted in critical tasks that require a high level of safety, e.g., autonomous driving. While state-of-the-art verifiers can be employed to check whether a DNN is unsafe w.r.t. some given property (i.e., whether there is at least one unsafe input configuration), their yes/no output is not informative enough for other purposes, such as shielding, model selection, or training improvements. In this paper, we introduce the #DNN-Verification problem, which involves counting the number of input configurations of a DNN that result in a violation of a particular safety property. We analyze the complexity of this problem and propose a novel approach that returns the exact count of violations. Due to the #P-completeness of the problem, we also propose a randomized, approximate method that provides a provable probabilistic bound of the correct count while significantly reducing computational requirements. We present experimental results on a set of safety-critical benchmarks that demonstrate the effectiveness of our approximate method and evaluate the tightness of the bound.
[ { "created": "Tue, 17 Jan 2023 18:32:01 GMT", "version": "v1" }, { "created": "Tue, 9 May 2023 09:02:59 GMT", "version": "v2" }, { "created": "Mon, 22 May 2023 07:58:42 GMT", "version": "v3" }, { "created": "Mon, 19 Jun 2023 13:13:38 GMT", "version": "v4" } ]
2023-06-21
[ [ "Marzari", "Luca", "" ], [ "Corsi", "Davide", "" ], [ "Cicalese", "Ferdinando", "" ], [ "Farinelli", "Alessandro", "" ] ]
Deep Neural Networks are increasingly adopted in critical tasks that require a high level of safety, e.g., autonomous driving. While state-of-the-art verifiers can be employed to check whether a DNN is unsafe w.r.t. some given property (i.e., whether there is at least one unsafe input configuration), their yes/no output is not informative enough for other purposes, such as shielding, model selection, or training improvements. In this paper, we introduce the #DNN-Verification problem, which involves counting the number of input configurations of a DNN that result in a violation of a particular safety property. We analyze the complexity of this problem and propose a novel approach that returns the exact count of violations. Due to the #P-completeness of the problem, we also propose a randomized, approximate method that provides a provable probabilistic bound of the correct count while significantly reducing computational requirements. We present experimental results on a set of safety-critical benchmarks that demonstrate the effectiveness of our approximate method and evaluate the tightness of the bound.
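The generic route to a "provable probabilistic bound" on a count, as promised in the abstract above, is Monte Carlo estimation with a concentration inequality. The sketch below bounds the violation rate with Hoeffding's inequality; the paper's approximate method is more refined, so read this only as the baseline idea.

```python
import math

def estimate_violation_rate(net, sampler, is_unsafe, n=100_000, delta=1e-3):
    """Estimate P(unsafe input) with a (1 - delta)-confidence Hoeffding interval."""
    hits = sum(bool(is_unsafe(net(sampler()))) for _ in range(n))
    p_hat = hits / n
    eps = math.sqrt(math.log(2 / delta) / (2 * n))   # |p_hat - p| <= eps w.p. >= 1 - delta
    return p_hat, (max(p_hat - eps, 0.0), min(p_hat + eps, 1.0))
```

Multiplying the interval by the measure of the input domain turns the rate bound into a bound on the count of unsafe configurations.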
2304.12652
Peng Dai
Peng Dai, Yinda Zhang, Xin Yu, Xiaoyang Lyu, Xiaojuan Qi
Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rendering novel view images is highly desirable for many applications. Despite recent progress, it remains challenging to render high-fidelity and view-consistent novel views of large-scale scenes from in-the-wild images with inevitable artifacts (e.g., motion blur). To this end, we develop a hybrid neural rendering model that makes image-based representation and neural 3D representation join forces to render high-quality, view-consistent images. Besides, images captured in the wild inevitably contain artifacts, such as motion blur, which deteriorates the quality of rendered images. Accordingly, we propose strategies to simulate blur effects on the rendered images to mitigate the negative influence of blurry images and reduce their importance during training based on precomputed quality-aware weights. Extensive experiments on real and synthetic data demonstrate our model surpasses state-of-the-art point-based methods for novel view synthesis. The code is available at https://daipengwa.github.io/Hybrid-Rendering-ProjectPage.
[ { "created": "Tue, 25 Apr 2023 08:36:33 GMT", "version": "v1" }, { "created": "Sun, 9 Jul 2023 13:45:44 GMT", "version": "v2" } ]
2023-07-11
[ [ "Dai", "Peng", "" ], [ "Zhang", "Yinda", "" ], [ "Yu", "Xin", "" ], [ "Lyu", "Xiaoyang", "" ], [ "Qi", "Xiaojuan", "" ] ]
Rendering novel view images is highly desirable for many applications. Despite recent progress, it remains challenging to render high-fidelity and view-consistent novel views of large-scale scenes from in-the-wild images with inevitable artifacts (e.g., motion blur). To this end, we develop a hybrid neural rendering model that makes image-based representation and neural 3D representation join forces to render high-quality, view-consistent images. Besides, images captured in the wild inevitably contain artifacts, such as motion blur, which deteriorates the quality of rendered images. Accordingly, we propose strategies to simulate blur effects on the rendered images to mitigate the negative influence of blurry images and reduce their importance during training based on precomputed quality-aware weights. Extensive experiments on real and synthetic data demonstrate our model surpasses state-of-the-art point-based methods for novel view synthesis. The code is available at https://daipengwa.github.io/Hybrid-Rendering-ProjectPage.
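The "simulate blur, then down-weight" strategy in the abstract above has a compact core: convolve the rendered image with a motion-blur kernel before comparing it to a blurry ground truth, and scale each sample's loss by a precomputed quality weight. A schematic sketch, where the kernel shape and the source of the quality weights are our assumptions:

```python
import torch
import torch.nn.functional as F

def motion_blur(img, length=9):
    """Apply a simple horizontal motion-blur kernel to a (B, C, H, W) image."""
    c = img.shape[1]
    k = torch.zeros(c, 1, length, length, dtype=img.dtype, device=img.device)
    k[:, 0, length // 2, :] = 1.0 / length          # horizontal streak
    return F.conv2d(img, k, padding=length // 2, groups=c)

def quality_weighted_loss(rendered, target, quality):
    """quality: precomputed per-sample sharpness weight in [0, 1]."""
    blurred = motion_blur(rendered)                 # match the capture artifacts
    per_sample = (blurred - target).abs().mean(dim=(1, 2, 3))
    return (quality * per_sample).mean()
```

Blurry training views thus supervise a blurred version of the render instead of corrupting the sharp scene representation, and low-quality views contribute less.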
1201.6567
Ravi Kumar
Bahman Bahmani, Ravi Kumar, Sergei Vassilvitskii
Densest Subgraph in Streaming and MapReduce
VLDB2012
Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 5, pp. 454-465 (2012)
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of finding locally dense components of a graph is an important primitive in data analysis, with wide-ranging applications from community mining to spam detection and the discovery of biological network modules. In this paper we present new algorithms for finding the densest subgraph in the streaming model. For any epsilon>0, our algorithms make O((log n)/log (1+epsilon)) passes over the input and find a subgraph whose density is guaranteed to be within a factor 2(1+epsilon) of the optimum. Our algorithms are also easily parallelizable and we illustrate this by realizing them in the MapReduce model. In addition we perform extensive experimental evaluation on massive real-world graphs showing the performance and scalability of our algorithms in practice.
[ { "created": "Tue, 31 Jan 2012 15:10:03 GMT", "version": "v1" } ]
2012-02-01
[ [ "Bahmani", "Bahman", "" ], [ "Kumar", "Ravi", "" ], [ "Vassilvitskii", "Sergei", "" ] ]
The problem of finding locally dense components of a graph is an important primitive in data analysis, with wide-ranging applications from community mining to spam detection and the discovery of biological network modules. In this paper we present new algorithms for finding the densest subgraph in the streaming model. For any epsilon>0, our algorithms make O((log n)/log (1+epsilon)) passes over the input and find a subgraph whose density is guaranteed to be within a factor 2(1+epsilon) of the optimum. Our algorithms are also easily parallelizable and we illustrate this by realizing them in the MapReduce model. In addition we perform extensive experimental evaluation on massive real-world graphs showing the performance and scalability of our algorithms in practice.
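The peeling rule behind the abstract above is simple enough to state directly: in each pass, delete every node whose degree is at most 2(1+epsilon) times the current density, and remember the densest prefix seen. Here is an in-memory rendering of that rule (the paper's point is that each pass also streams and parallelizes):

```python
def densest_subgraph(adj, eps=0.1):
    """adj: dict node -> set of neighbours (undirected, no self-loops)."""
    adj = {u: set(vs) for u, vs in adj.items()}
    best, best_density = set(adj), 0.0
    while adj:
        m = sum(len(vs) for vs in adj.values()) / 2
        rho = m / len(adj)                            # current density |E| / |V|
        if rho > best_density:
            best, best_density = set(adj), rho
        # At least one node always qualifies, since the average degree is 2*rho.
        doomed = [u for u, vs in adj.items() if len(vs) <= 2 * (1 + eps) * rho]
        for u in doomed:
            for v in adj[u]:
                if v in adj:
                    adj[v].discard(u)
            del adj[u]
    return best, best_density
```

Because the node count shrinks geometrically across passes, the number of passes is bounded, which is where the O((log n)/log(1+epsilon)) guarantee quoted in the abstract comes from.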
1603.02655
Varun Nagpal
Varun Nagpal
Study and evaluation of an Irregular Graph Algorithm on Multicore and GPU Processor Architectures
null
null
null
1115-1213Nagpal
cs.DC cs.PF
http://creativecommons.org/licenses/by-nc-sa/4.0/
One area of computing applications that poses a significant performance-scalability challenge on chip multiprocessors (CMPs) is irregular applications. Such applications have very little computation and unpredictable memory access patterns, making them memory-bound, in contrast to compute-bound applications. Since the gap between processor and memory performance continues to exist, the difficulty of hiding and decreasing this gap is one of the important factors behind the poor performance of these applications on CMPs. The goal of this thesis is to overcome many of the challenges posed during the performance acceleration of an irregular graph algorithm called Triad Census. We accelerated the Triad Census algorithm on two significantly different chip multiprocessors: a dual-socket Intel Xeon multicore (8 hardware threads per socket) and a 240-core NVIDIA Tesla C1060 GPGPU (128 hardware threads per core). The experimental results obtained on the Intel multicore Xeon system show performance speedups (w.r.t. the sequential baseline) of 56x maximum, 33x average, and 8.3x minimum for real-world graph data sets. On the NVIDIA Tesla C1060 GPGPU, we were able to almost match the multicore results: 58.4x maximum, 32.8x average, and 4.2x minimum speedups w.r.t. the sequential baseline. In terms of raw performance, for the graph data set called the Patents network, our results on the Intel Xeon multicore (16 hardware threads) were 1.27x faster than previous results on the Cray XMT (16 hardware threads), while the results achieved on the GPGPU were comparatively slower (0.72x). To the best of our knowledge, this algorithm had previously been accelerated only on a supercomputer-class machine, the Cray XMT, and no work exists that demonstrates the performance evaluation and comparison of this algorithm on relatively lower-cost multicore and GPGPU-based platforms.
[ { "created": "Tue, 8 Mar 2016 20:07:31 GMT", "version": "v1" } ]
2016-03-09
[ [ "Nagpal", "Varun", "" ] ]
One area of computing applications that poses a significant performance-scalability challenge on chip multiprocessors (CMPs) is irregular applications. Such applications have very little computation and unpredictable memory access patterns, making them memory-bound, in contrast to compute-bound applications. Since the gap between processor and memory performance continues to exist, the difficulty of hiding and decreasing this gap is one of the important factors behind the poor performance of these applications on CMPs. The goal of this thesis is to overcome many of the challenges posed during the performance acceleration of an irregular graph algorithm called Triad Census. We accelerated the Triad Census algorithm on two significantly different chip multiprocessors: a dual-socket Intel Xeon multicore (8 hardware threads per socket) and a 240-core NVIDIA Tesla C1060 GPGPU (128 hardware threads per core). The experimental results obtained on the Intel multicore Xeon system show performance speedups (w.r.t. the sequential baseline) of 56x maximum, 33x average, and 8.3x minimum for real-world graph data sets. On the NVIDIA Tesla C1060 GPGPU, we were able to almost match the multicore results: 58.4x maximum, 32.8x average, and 4.2x minimum speedups w.r.t. the sequential baseline. In terms of raw performance, for the graph data set called the Patents network, our results on the Intel Xeon multicore (16 hardware threads) were 1.27x faster than previous results on the Cray XMT (16 hardware threads), while the results achieved on the GPGPU were comparatively slower (0.72x). To the best of our knowledge, this algorithm had previously been accelerated only on a supercomputer-class machine, the Cray XMT, and no work exists that demonstrates the performance evaluation and comparison of this algorithm on relatively lower-cost multicore and GPGPU-based platforms.
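For context, the Triad Census classifies every triple of nodes in a directed graph into one of 16 isomorphism classes. A naive reference version -- the cubic computation the thesis accelerates -- can be sketched by coding each dyad as mutual/asymmetric/null. Note that the sorted dyad multiset separates most but not all of the 16 classes (e.g., 030T vs 030C additionally need an orientation check), so this is a deliberate simplification.

```python
from itertools import combinations

def dyad_code(g, u, v):
    """g: dict node -> set of successors. Returns M(utual), A(symmetric) or N(ull)."""
    uv, vu = v in g.get(u, set()), u in g.get(v, set())
    return "M" if uv and vu else ("A" if uv or vu else "N")

def coarse_triad_census(g):
    counts = {}
    for u, v, w in combinations(g, 3):
        key = "".join(sorted(dyad_code(g, a, b) for a, b in ((u, v), (u, w), (v, w))))
        counts[key] = counts.get(key, 0) + 1
    return counts
```

The triple enumeration and the pointer-chasing through adjacency sets are exactly the irregular, memory-bound access pattern the thesis is concerned with.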
0710.4819
EDA Publishing Association
S. Lopez, G. M. Callico, J. F. Lopez, R. Sarmiento
A High Quality/Low Computational Cost Technique for Block Matching Motion Estimation
Submitted on behalf of EDAA (http://www.edaa.com/)
Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)
null
null
cs.MM
null
Motion estimation is the most critical process in video coding systems. First of all, it has a definitive impact on the rate-distortion performance given by the video encoder. Secondly, it is the most computationally intensive process within the encoding loop. For these reasons, the design of high-performance low-cost motion estimators is a crucial task in the video compression field. An adaptive cost block matching (ACBM) motion estimation technique is presented in this paper, featuring an excellent tradeoff between the quality of the reconstructed video sequences and the computational effort. Simulation results demonstrate that the ACBM algorithm achieves slightly better rate-distortion performance than that given by the well-known full search block matching algorithm, with reductions of up to 95% in the computational load.
[ { "created": "Thu, 25 Oct 2007 12:03:15 GMT", "version": "v1" } ]
2011-11-09
[ [ "Lopez", "S.", "" ], [ "Callico", "G. M.", "" ], [ "Lopez", "J. F.", "" ], [ "Sarmiento", "R.", "" ] ]
Motion estimation is the most critical process in video coding systems. First of all, it has a definitive impact on the rate-distortion performance given by the video encoder. Secondly, it is the most computationally intensive process within the encoding loop. For these reasons, the design of high-performance low-cost motion estimators is a crucial task in the video compression field. An adaptive cost block matching (ACBM) motion estimation technique is presented in this paper, featuring an excellent tradeoff between the quality of the reconstructed video sequences and the computational effort. Simulation results demonstrate that the ACBM algorithm achieves slightly better rate-distortion performance than that given by the well-known full search block matching algorithm, with reductions of up to 95% in the computational load.
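The full search baseline that ACBM is compared against in the abstract above is easy to state: for each block, exhaustively test every motion vector in a search window under a sum-of-absolute-differences (SAD) cost. A reference sketch of that baseline (not the ACBM method itself):

```python
import numpy as np

def full_search(ref, cur, block=16, radius=8):
    """Exhaustive SAD block matching from frame `cur` into reference frame `ref`."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tgt = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(ref[y:y + block, x:x + block].astype(np.int32) - tgt).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```

The up-to-95% computational savings claimed for ACBM come from adaptively pruning this inner candidate loop; the exact cost adaptation is described in the paper.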
2302.05499
Sumyeong Ahn
Sumyeong Ahn, Jongwoo Ko, Se-Young Yun
CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition
ICLR'23 Spotlight, 23 pages
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Class imbalance problems frequently occur in real-world tasks, and conventional deep learning algorithms are well known for performance degradation on imbalanced training datasets. To mitigate this problem, many approaches have aimed to balance among given classes by re-weighting or re-sampling training samples. These re-balancing methods increase the impact of minority classes and reduce the influence of majority classes on the output of models. However, the extracted representations may be of poor quality owing to the limited number of minority samples. To handle this restriction, several methods have been developed that increase the representations of minority samples by leveraging the features of the majority samples. Despite extensive recent studies, no deep analysis has been conducted on determining which classes should be augmented and how strong the augmentation should be. In this study, we first investigate the correlation between the degree of augmentation and class-wise performance, and find that the proper degree of augmentation must be allocated for each class to mitigate class imbalance problems. Motivated by this finding, we propose CUDA: CUrriculum of Data Augmentation for long-tailed recognition, a simple and efficient novel curriculum designed to find the appropriate per-class strength of data augmentation. CUDA can simply be integrated into existing long-tailed recognition methods. We present the results of experiments showing that CUDA effectively achieves better generalization performance compared to the state-of-the-art method on various imbalanced datasets such as CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018.
[ { "created": "Fri, 10 Feb 2023 20:30:22 GMT", "version": "v1" } ]
2023-02-14
[ [ "Ahn", "Sumyeong", "" ], [ "Ko", "Jongwoo", "" ], [ "Yun", "Se-Young", "" ] ]
Class imbalance problems frequently occur in real-world tasks, and conventional deep learning algorithms are well known for performance degradation on imbalanced training datasets. To mitigate this problem, many approaches have aimed to balance among given classes by re-weighting or re-sampling training samples. These re-balancing methods increase the impact of minority classes and reduce the influence of majority classes on the output of models. However, the extracted representations may be of poor quality owing to the limited number of minority samples. To handle this restriction, several methods have been developed that increase the representations of minority samples by leveraging the features of the majority samples. Despite extensive recent studies, no deep analysis has been conducted on determining which classes should be augmented and how strong the augmentation should be. In this study, we first investigate the correlation between the degree of augmentation and class-wise performance, and find that the proper degree of augmentation must be allocated for each class to mitigate class imbalance problems. Motivated by this finding, we propose CUDA: CUrriculum of Data Augmentation for long-tailed recognition, a simple and efficient novel curriculum designed to find the appropriate per-class strength of data augmentation. CUDA can simply be integrated into existing long-tailed recognition methods. We present the results of experiments showing that CUDA effectively achieves better generalization performance compared to the state-of-the-art method on various imbalanced datasets such as CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018.
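The curriculum in the abstract above boils down to a per-class feedback loop: raise a class's augmentation strength while the model still handles samples augmented at that strength, lower it otherwise. The sketch below is a schematic of that loop; the threshold, step size, and the augmentations themselves are placeholders rather than the paper's exact schedule.

```python
def update_strengths(strengths, acc_under_aug, threshold=0.6, max_level=30):
    """strengths: per-class integer augmentation levels.
    acc_under_aug: per-class accuracy measured on samples augmented
    at each class's current level."""
    for c, acc in enumerate(acc_under_aug):
        if acc >= threshold:
            strengths[c] = min(strengths[c] + 1, max_level)   # class copes: augment harder
        else:
            strengths[c] = max(strengths[c] - 1, 0)           # class struggles: ease off
    return strengths
```

Each epoch, training samples of class c are then augmented at a magnitude indexed by strengths[c], which is how the per-class degree of augmentation gets allocated automatically.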
2105.01861
Pawe{\l} Parys
Pawe{\l} Parys
Higher-Order Model Checking Step by Step
This is an extended version of a paper published on the ICALP 2021 conference
null
null
null
cs.LO cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show a new simple algorithm that solves the model-checking problem for recursion schemes: check whether the tree generated by a given higher-order recursion scheme is accepted by a given alternating parity automaton. The algorithm amounts to a procedure that transforms a recursion scheme of order $n$ to a recursion scheme of order $n-1$, preserving acceptance, and increasing the size only exponentially. After repeating the procedure $n$ times, we obtain a recursion scheme of order $0$, for which the problem boils down to solving a finite parity game. Since the size grows exponentially at each step, the overall complexity is $n$-EXPTIME, which is known to be optimal. More precisely, the transformation is linear in the size of the recursion scheme, assuming that the arity of employed nonterminals and the size of the automaton are bounded by a constant; this results in an FPT algorithm for the model-checking problem. Our transformation is a generalization of a previous transformation of the author (2020), working for reachability automata in place of parity automata. The step-by-step approach can be contrasted with previous algorithms, which solve the considered problem "in one step" and are necessarily more complicated.
[ { "created": "Wed, 5 May 2021 04:21:31 GMT", "version": "v1" } ]
2021-05-06
[ [ "Parys", "Paweł", "" ] ]
We show a new simple algorithm that solves the model-checking problem for recursion schemes: check whether the tree generated by a given higher-order recursion scheme is accepted by a given alternating parity automaton. The algorithm amounts to a procedure that transforms a recursion scheme of order $n$ to a recursion scheme of order $n-1$, preserving acceptance, and increasing the size only exponentially. After repeating the procedure $n$ times, we obtain a recursion scheme of order $0$, for which the problem boils down to solving a finite parity game. Since the size grows exponentially at each step, the overall complexity is $n$-EXPTIME, which is known to be optimal. More precisely, the transformation is linear in the size of the recursion scheme, assuming that the arity of employed nonterminals and the size of the automaton are bounded by a constant; this results in an FPT algorithm for the model-checking problem. Our transformation is a generalization of a previous transformation of the author (2020), working for reachability automata in place of parity automata. The step-by-step approach can be contrasted with previous algorithms, which solve the considered problem "in one step" and are necessarily more complicated.
2404.00358
Duosheng Chen
Duosheng Chen, Shihao Zhou, Jinshan Pan, Jinglei Shi, Lishen Qu and Jufeng Yang
Spread Your Wings: A Radial Strip Transformer for Image Deblurring
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploring motion information is important for the motion deblurring task. Recently, window-based transformer approaches have achieved decent performance in image deblurring. Note that the motion causing blurry results is usually composed of translation and rotation movements, while the window-shift operation in the Cartesian coordinate system used by window-based transformer approaches only directly explores translation motion in orthogonal directions. Thus, these methods are limited in modeling the rotation component. To alleviate this problem, we introduce a polar coordinate-based transformer, which uses angles and distances to explore rotation and translation information together. In this paper, we propose the Radial Strip Transformer (RST), a transformer-based architecture that restores blurred images in a polar coordinate system instead of a Cartesian one. RST contains a dynamic radial embedding module (DRE) to extract shallow features with a radial deformable convolution. We design a polar mask layer to generate the offsets for the deformable convolution, which can reshape the convolution kernel along the radius to better capture rotation motion information. Furthermore, we propose a radial strip attention solver (RSAS) for deep feature extraction, in which the relationship between windows is organized by azimuth and radius. This attention module contains radial strip windows that reweight image features in polar coordinates, preserving more useful information about rotation and translation motion together for better recovery of sharp images. Experimental results on six synthetic and real-world datasets show that our method performs favorably against other SOTA methods for the image deblurring task.
[ { "created": "Sat, 30 Mar 2024 13:20:04 GMT", "version": "v1" }, { "created": "Sun, 19 May 2024 03:19:52 GMT", "version": "v2" }, { "created": "Wed, 22 May 2024 02:50:58 GMT", "version": "v3" } ]
2024-05-24
[ [ "Chen", "Duosheng", "" ], [ "Zhou", "Shihao", "" ], [ "Pan", "Jinshan", "" ], [ "Shi", "Jinglei", "" ], [ "Qu", "Lishen", "" ], [ "Yang", "Jufeng", "" ] ]
Exploring motion information is important for the motion deblurring task. Recently, window-based transformer approaches have achieved decent performance in image deblurring. Note that the motion causing blurry results is usually composed of translation and rotation movements, while the window-shift operation in the Cartesian coordinate system used by window-based transformer approaches only directly explores translation motion in orthogonal directions. Thus, these methods are limited in modeling the rotation component. To alleviate this problem, we introduce a polar coordinate-based transformer, which uses angles and distances to explore rotation and translation information together. In this paper, we propose the Radial Strip Transformer (RST), a transformer-based architecture that restores blurred images in a polar coordinate system instead of a Cartesian one. RST contains a dynamic radial embedding module (DRE) to extract shallow features with a radial deformable convolution. We design a polar mask layer to generate the offsets for the deformable convolution, which can reshape the convolution kernel along the radius to better capture rotation motion information. Furthermore, we propose a radial strip attention solver (RSAS) for deep feature extraction, in which the relationship between windows is organized by azimuth and radius. This attention module contains radial strip windows that reweight image features in polar coordinates, preserving more useful information about rotation and translation motion together for better recovery of sharp images. Experimental results on six synthetic and real-world datasets show that our method performs favorably against other SOTA methods for the image deblurring task.
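The geometric move in the abstract above -- working in a polar rather than Cartesian coordinate system -- corresponds to resampling the feature map on a (radius, angle) grid, after which radial strips become plain rows. A minimal PyTorch sketch of that resampling; the grid sizes and the centered, unit-radius layout are our choices, not the paper's modules:

```python
import math
import torch
import torch.nn.functional as F

def to_polar(feat, n_r=32, n_theta=64):
    """Resample a (B, C, H, W) feature map onto a polar (radius, angle) grid."""
    b = feat.shape[0]
    r = torch.linspace(0.0, 1.0, n_r)
    t = torch.linspace(0.0, 2.0 * math.pi, n_theta)
    R, T = torch.meshgrid(r, t, indexing="ij")
    # grid_sample expects normalized (x, y) coordinates in [-1, 1].
    grid = torch.stack([R * torch.cos(T), R * torch.sin(T)], dim=-1)
    return F.grid_sample(feat, grid.expand(b, -1, -1, -1), align_corners=True)
```

Window attention over rows of the polar map then attends along radii, which is the "radial strip" behavior the RSAS module builds on.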
2205.15172
Guilherme Moraes Rosa
Guilherme Moraes Rosa and Luiz Bonifacio and Vitor Jeronymo and Hugo Abonizio and Roberto Lotufo and Rodrigo Nogueira
Billions of Parameters Are Worth More Than In-domain Training Data: A case study in the Legal Case Entailment Task
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has shown that language models scaled to billions of parameters, such as GPT-3, perform remarkably well in zero-shot and few-shot scenarios. In this work, we experiment with zero-shot models in the legal case entailment task of the COLIEE 2022 competition. Our experiments show that scaling the number of parameters in a language model improves the F1 score of our previous zero-shot result by more than 6 points, suggesting that stronger zero-shot capability may be a characteristic of larger models, at least for this task. Our 3B-parameter zero-shot model outperforms all models, including ensembles, in the COLIEE 2021 test set and also achieves the best performance of a single model in the COLIEE 2022 competition, second only to the ensemble composed of the 3B model itself and a smaller version of the same model. Despite the challenges posed by large language models, mainly due to latency constraints in real-time applications, we provide a demonstration of our zero-shot monoT5-3b model being used in production as a search engine, including for legal documents. The code for our submission and the demo of our system are available at https://github.com/neuralmind-ai/coliee and https://neuralsearchx.neuralmind.ai, respectively.
[ { "created": "Mon, 30 May 2022 15:21:26 GMT", "version": "v1" } ]
2022-05-31
[ [ "Rosa", "Guilherme Moraes", "" ], [ "Bonifacio", "Luiz", "" ], [ "Jeronymo", "Vitor", "" ], [ "Abonizio", "Hugo", "" ], [ "Lotufo", "Roberto", "" ], [ "Nogueira", "Rodrigo", "" ] ]
Recent work has shown that language models scaled to billions of parameters, such as GPT-3, perform remarkably well in zero-shot and few-shot scenarios. In this work, we experiment with zero-shot models in the legal case entailment task of the COLIEE 2022 competition. Our experiments show that scaling the number of parameters in a language model improves the F1 score of our previous zero-shot result by more than 6 points, suggesting that stronger zero-shot capability may be a characteristic of larger models, at least for this task. Our 3B-parameter zero-shot model outperforms all models, including ensembles, in the COLIEE 2021 test set and also achieves the best performance of a single model in the COLIEE 2022 competition, second only to the ensemble composed of the 3B model itself and a smaller version of the same model. Despite the challenges posed by large language models, mainly due to latency constraints in real-time applications, we provide a demonstration of our zero-shot monoT5-3b model being used in production as a search engine, including for legal documents. The code for our submission and the demo of our system are available at https://github.com/neuralmind-ai/coliee and https://neuralsearchx.neuralmind.ai, respectively.
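The zero-shot scoring pattern used by monoT5-style models, as in the abstract above, feeds a "Query: ... Document: ... Relevant:" prompt to T5 and compares the logits of the "true" and "false" tokens. A hedged sketch with Hugging Face transformers follows; the checkpoint name is our guess at a public monoT5-3B model, not necessarily the authors' exact weights.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/monot5-3b-msmarco"          # assumed public checkpoint
tok = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name).eval()

@torch.no_grad()
def relevance(query: str, doc: str) -> float:
    prompt = f"Query: {query} Document: {doc} Relevant:"
    ids = tok(prompt, return_tensors="pt", truncation=True).input_ids
    # T5's decoder starts from the pad token (id 0).
    out = model(input_ids=ids, decoder_input_ids=torch.zeros_like(ids[:, :1]))
    logits = out.logits[0, 0]
    true_id, false_id = tok.encode("true")[0], tok.encode("false")[0]
    return torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
```

For the entailment task, each candidate paragraph is scored against the query case this way and candidates are ranked by the resulting probability.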
1912.08776
Byungsoo Kim
Simon Biland, Vinicius C. Azevedo, Byungsoo Kim and Barbara Solenthaler
Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks
Submitted to Eurographics2020
Eurographics 2020 - Short Papers
10.2312/egs.20201019
null
cs.LG cs.GR physics.comp-ph stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1-loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for higher bands. This directly correlates with the quality of the perceived results, since missing high frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training time. We show that our approach improves reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.
[ { "created": "Wed, 18 Dec 2019 18:13:22 GMT", "version": "v1" } ]
2020-05-29
[ [ "Biland", "Simon", "" ], [ "Azevedo", "Vinicius C.", "" ], [ "Kim", "Byungsoo", "" ], [ "Solenthaler", "Barbara", "" ] ]
Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1-loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for higher bands. This directly correlates with the quality of the perceived results, since missing high frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training time. We show that our approach improves reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.
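The abstract above argues for a loss that can focus on chosen frequency bands. The shortest way to express that is to compare prediction and target in the Fourier domain under a per-bin weight mask; the sketch below does exactly that, with the mask construction left as an assumption since the paper's band schedule is not reproduced here.

```python
import torch

def frequency_aware_l1(pred, target, band_weights):
    """L1 loss in the 2-D Fourier domain, reweighted per frequency bin.

    band_weights must broadcast against the rfft2 output shape
    (..., H, W // 2 + 1); larger weights emphasize those bands.
    """
    P = torch.fft.rfft2(pred)
    T = torch.fft.rfft2(target)
    return (band_weights * (P - T).abs()).mean()
```

Setting band_weights high on mid-frequency bins reproduces the emphasis where the paper reports its reconstruction gains.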
1706.08106
Christophe Guyeux
Wiem Elghazel, Kamal Medjaher, Nourredine Zerhouni, Jacques Bahi, Ahamd Farhat, Christophe Guyeux, and Mourad Hakem
Random Forests for Industrial Device Functioning Diagnostics Using Wireless Sensor Networks
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, random forests are proposed for the diagnostics of operating devices in the presence of a variable number of features. In various contexts, like large or difficult-to-access monitored areas, wired sensor networks providing the features needed for diagnostics are either very costly to use or impossible to deploy. Using a wireless sensor network can solve this problem, but the latter is more subject to failures. Furthermore, the network's topology often changes, leading to variability in the quality of coverage of the targeted area. Diagnostics at the sink level must take into consideration that both the number and the quality of the provided features are not constant, and that policies like scheduling or data aggregation may be deployed across the network. The aim of this article is ($1$) to show that random forests are relevant in this context, due to their flexibility and robustness, and ($2$) to provide first examples of the use of this method for diagnostics based on data provided by a wireless sensor network.
[ { "created": "Sun, 25 Jun 2017 13:54:33 GMT", "version": "v1" } ]
2017-06-27
[ [ "Elghazel", "Wiem", "" ], [ "Medjaher", "Kamal", "" ], [ "Zerhouni", "Nourredine", "" ], [ "Bahi", "Jacques", "" ], [ "Farhat", "Ahamd", "" ], [ "Guyeux", "Christophe", "" ], [ "Hakem", "Mourad", "" ] ]
In this paper, random forests are proposed for the diagnostics of operating devices in the presence of a variable number of features. In various contexts, like large or difficult-to-access monitored areas, wired sensor networks providing the features needed for diagnostics are either very costly to use or impossible to deploy. Using a wireless sensor network can solve this problem, but the latter is more subject to failures. Furthermore, the network's topology often changes, leading to variability in the quality of coverage of the targeted area. Diagnostics at the sink level must take into consideration that both the number and the quality of the provided features are not constant, and that policies like scheduling or data aggregation may be deployed across the network. The aim of this article is ($1$) to show that random forests are relevant in this context, due to their flexibility and robustness, and ($2$) to provide first examples of the use of this method for diagnostics based on data provided by a wireless sensor network.
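The variable-feature setting described in the abstract above can be emulated directly: train a forest on full sensor vectors, then, at diagnosis time, impute whatever readings the wireless network failed to deliver. The mean-imputation choice below is ours, picked for brevity; it is only one of several ways to exploit the robustness of random forests to degraded inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train(X, y):
    """X may itself contain NaNs for sensors that did not report during training."""
    means = np.nanmean(X, axis=0)
    X_filled = np.where(np.isnan(X), means, X)
    return RandomForestClassifier(n_estimators=200).fit(X_filled, y), means

def diagnose(clf, means, partial_reading):
    """partial_reading: 1-D array with NaN where a sensor did not report."""
    x = np.where(np.isnan(partial_reading), means, partial_reading)
    return clf.predict(x.reshape(1, -1))[0]
```

The sink runs diagnose on every round of (possibly incomplete) measurements, which matches the paper's premise that neither the number nor the quality of features is constant.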
1911.09996
Vacit Oguz Yazici
Vacit Oguz Yazici, Abel Gonzalez-Garcia, Arnau Ramisa, Bartlomiej Twardowski, Joost van de Weijer
Orderless Recurrent Models for Multi-label Classification
Accepted to CVPR 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recurrent neural networks (RNN) are popular for many computer vision tasks, including multi-label classification. Since RNNs produce sequential outputs, labels need to be ordered for the multi-label classification task. Current approaches sort labels according to their frequency, typically ordering them either rare-first or frequent-first. These imposed orderings do not take into account that the natural order in which to generate the labels can change for each image, e.g.\ first the dominant object, before enumerating the smaller objects in the image. Therefore, in this paper, we propose ways to dynamically order the ground truth labels based on the predicted label sequence. This allows for the faster training of more optimal LSTM models for multi-label classification. Our analysis shows that our method does not suffer from duplicate generation, something which is common for other models. Furthermore, it outperforms other CNN-RNN models, and we show that a standard architecture of an image encoder and language decoder trained with our proposed loss obtains state-of-the-art results on the challenging MS-COCO, WIDER Attribute, and PA-100K datasets, and competitive results on NUS-WIDE.
[ { "created": "Fri, 22 Nov 2019 12:25:14 GMT", "version": "v1" }, { "created": "Mon, 25 Nov 2019 11:16:41 GMT", "version": "v2" }, { "created": "Thu, 12 Mar 2020 17:10:18 GMT", "version": "v3" } ]
2020-03-13
[ [ "Yazici", "Vacit Oguz", "" ], [ "Gonzalez-Garcia", "Abel", "" ], [ "Ramisa", "Arnau", "" ], [ "Twardowski", "Bartlomiej", "" ], [ "van de Weijer", "Joost", "" ] ]
Recurrent neural networks (RNNs) are popular for many computer vision tasks, including multi-label classification. Since RNNs produce sequential outputs, labels need to be ordered for the multi-label classification task. Current approaches sort labels according to their frequency, typically in either rare-first or frequent-first order. These imposed orderings do not take into account that the natural order in which to generate the labels can change from image to image, e.g.\ first the dominant object, followed by the smaller objects in the image. Therefore, in this paper, we propose ways to dynamically order the ground-truth labels according to the predicted label sequence. This allows for faster training of better LSTM models for multi-label classification. Our analysis shows that our method does not suffer from duplicate generation, which is common in other models. Furthermore, it outperforms other CNN-RNN models, and we show that a standard architecture of an image encoder and a language decoder trained with our proposed loss obtains state-of-the-art results on the challenging MS-COCO, WIDER Attribute, and PA-100K benchmarks, and competitive results on NUS-WIDE.
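A minimal sketch of the dynamic-ordering idea described above: at each decoding step, the training target is the not-yet-emitted ground-truth label that the model currently scores highest, so the label order adapts to each image. This illustrates the general idea only and is not the authors' implementation; the random scores stand in for LSTM step outputs.

import numpy as np

def order_targets(step_scores, gt_labels):
    """Order ground-truth labels by the model's own step-wise scores.

    step_scores: array of shape (num_steps, num_labels) of predicted scores.
    gt_labels: iterable of ground-truth label ids for one image.
    """
    remaining = set(gt_labels)
    targets = []
    for scores in step_scores:
        if not remaining:
            break
        # Target = the not-yet-emitted ground-truth label scored highest now.
        best = max(remaining, key=lambda label: scores[label])
        targets.append(best)
        remaining.remove(best)
    return targets

rng = np.random.default_rng(0)
step_scores = rng.random((4, 6))  # stand-in for 4 LSTM steps over 6 labels
print(order_targets(step_scores, {1, 3, 5}))

The resulting sequence would then serve as the target of the usual sequence cross-entropy loss, so the model is never penalized merely for emitting its correct labels in an order other than a fixed frequency-based one.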
2302.00762
Chaitanya Malaviya
Yuewei Yuan, Chaitanya Malaviya, Mark Yatskar
AmbiCoref: Evaluating Human and Model Sensitivity to Ambiguous Coreference
EACL 2023 Findings
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Given a sentence "Abby told Brittney that she upset Courtney", one would struggle to understand who "she" refers to, and would ask for clarification. However, if the word "upset" were replaced with "hugged", "she" unambiguously refers to Abby. We study whether modern coreference resolution models are sensitive to such pronominal ambiguity. To this end, we construct AmbiCoref, a diagnostic corpus of minimal sentence pairs with ambiguous and unambiguous referents. Our examples generalize psycholinguistic studies of human perception of ambiguity around particular arrangements of verbs and their arguments. Our analysis shows that (1) humans are less sure of referents in ambiguous AmbiCoref examples than in unambiguous ones, and (2) most coreference models show little difference in output between ambiguous and unambiguous pairs. We release AmbiCoref as a diagnostic corpus for testing whether models treat ambiguity similarly to humans.
[ { "created": "Wed, 1 Feb 2023 21:25:34 GMT", "version": "v1" }, { "created": "Fri, 3 Feb 2023 16:07:53 GMT", "version": "v2" } ]
2023-02-06
[ [ "Yuan", "Yuewei", "" ], [ "Malaviya", "Chaitanya", "" ], [ "Yatskar", "Mark", "" ] ]
Given a sentence "Abby told Brittney that she upset Courtney", one would struggle to understand who "she" refers to, and would ask for clarification. However, if the word "upset" were replaced with "hugged", "she" unambiguously refers to Abby. We study whether modern coreference resolution models are sensitive to such pronominal ambiguity. To this end, we construct AmbiCoref, a diagnostic corpus of minimal sentence pairs with ambiguous and unambiguous referents. Our examples generalize psycholinguistic studies of human perception of ambiguity around particular arrangements of verbs and their arguments. Our analysis shows that (1) humans are less sure of referents in ambiguous AmbiCoref examples than in unambiguous ones, and (2) most coreference models show little difference in output between ambiguous and unambiguous pairs. We release AmbiCoref as a diagnostic corpus for testing whether models treat ambiguity similarly to humans.
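A minimal sketch of how such minimal pairs can be produced from a template: the same frame is filled once with a verb whose embedded pronoun is ambiguous and once with a verb whose reading is forced. The verb pair comes from the example in the abstract itself; it is not the actual AmbiCoref verb inventory, and extending the lists would require verbs with the relevant psycholinguistic properties.

# Illustrative template-based generation of (ambiguous, unambiguous) pairs.
TEMPLATE = "{a} told {b} that she {verb} {c}."

VERB_PAIRS = [
    # (verb making "she" ambiguous, verb making "she" refer to the subject)
    ("upset", "hugged"),
]

def minimal_pairs(a, b, c):
    """Return (ambiguous, unambiguous) sentence pairs for the given names."""
    return [
        (TEMPLATE.format(a=a, b=b, verb=amb, c=c),
         TEMPLATE.format(a=a, b=b, verb=unamb, c=c))
        for amb, unamb in VERB_PAIRS
    ]

for ambiguous, unambiguous in minimal_pairs("Abby", "Brittney", "Courtney"):
    print("ambiguous:  ", ambiguous)
    print("unambiguous:", unambiguous)

Because the two sentences differ only in the verb, any difference in a model's coreference output on the pair can be attributed to the ambiguity itself.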