Dataset schema (field, type, min .. max):

id              stringlengths   9 .. 10
submitter       stringlengths   1 .. 64
authors         stringlengths   4 .. 20.7k
title           stringlengths   4 .. 246
comments        stringlengths   1 .. 523
journal-ref     stringlengths   4 .. 404
doi             stringlengths   11 .. 153
report-no       stringlengths   2 .. 254
categories      stringlengths   5 .. 98
license         stringclasses   9 values
orig_abstract   stringlengths   14 .. 3.35k
versions        listlengths     1 .. 60
update_date     stringlengths   10 .. 10
authors_parsed  listlengths     1 .. 1.35k
abstract        stringlengths   11 .. 3.34k

Each record below lists these fields in this order; "null" marks an empty field.
1607.08659
Helge Rhodin
Helge Rhodin, Nadia Robertini, Dan Casas, Christian Richardt, Hans-Peter Seidel, Christian Theobalt
General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues
Accepted to ECCV 2016; additional references added
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Markerless motion capture algorithms require a 3D body model with properly personalized skeleton dimensions and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a separate problem and use manual or semi-automatic model initialization, which greatly reduces their applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation - skeleton, volumetric shape, appearance, and optionally a body surface - and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and an analytically differentiable alignment energy. For reconstruction, the 3D body shape is approximated as a Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way.
[ { "created": "Thu, 28 Jul 2016 22:59:55 GMT", "version": "v1" }, { "created": "Fri, 21 Oct 2016 11:23:31 GMT", "version": "v2" } ]
2016-10-24
[ [ "Rhodin", "Helge", "" ], [ "Robertini", "Nadia", "" ], [ "Casas", "Dan", "" ], [ "Richardt", "Christian", "" ], [ "Seidel", "Hans-Peter", "" ], [ "Theobalt", "Christian", "" ] ]
Markerless motion capture algorithms require a 3D body model with properly personalized skeleton dimensions and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a separate problem and use manual or semi-automatic model initialization, which greatly reduces their applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation - skeleton, volumetric shape, appearance, and optionally a body surface - and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and an analytically differentiable alignment energy. For reconstruction, the 3D body shape is approximated as a Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way.
1712.08263
Arnold Wiliem
Siqi Yang, Arnold Wiliem, Shaokang Chen, Brian C. Lovell
Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks
to appear in ECCV 2018 (accepted version)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work shows that it is possible to fool/attack recent state-of-the-art face detectors based on single-stage networks. Successfully attacking face detectors could create a serious security vulnerability when deploying a smart surveillance system that utilizes face detectors. We show that existing adversarial perturbation methods are not effective for performing such an attack, especially when there are multiple faces in the input image. This is because the adversarial perturbation generated specifically for one face may disrupt the adversarial perturbation for another face. In this paper, we call this problem the Instance Perturbation Interference (IPI) problem. We address the IPI problem by studying the relationship between the deep neural network's receptive field and the adversarial perturbation. Accordingly, we propose the Localized Instance Perturbation (LIP) approach, which constrains the adversarial perturbation to the Effective Receptive Field (ERF) of a target to perform the attack. Experimental results show that the LIP method massively outperforms existing adversarial perturbation generation methods -- often by a factor of 2 to 10.
[ { "created": "Fri, 22 Dec 2017 00:42:42 GMT", "version": "v1" }, { "created": "Thu, 5 Jul 2018 01:23:11 GMT", "version": "v2" } ]
2018-07-06
[ [ "Yang", "Siqi", "" ], [ "Wiliem", "Arnold", "" ], [ "Chen", "Shaokang", "" ], [ "Lovell", "Brian C.", "" ] ]
This work shows that it is possible to fool/attack recent state-of-the-art face detectors based on single-stage networks. Successfully attacking face detectors could create a serious security vulnerability when deploying a smart surveillance system that utilizes face detectors. We show that existing adversarial perturbation methods are not effective for performing such an attack, especially when there are multiple faces in the input image. This is because the adversarial perturbation generated specifically for one face may disrupt the adversarial perturbation for another face. In this paper, we call this problem the Instance Perturbation Interference (IPI) problem. We address the IPI problem by studying the relationship between the deep neural network's receptive field and the adversarial perturbation. Accordingly, we propose the Localized Instance Perturbation (LIP) approach, which constrains the adversarial perturbation to the Effective Receptive Field (ERF) of a target to perform the attack. Experimental results show that the LIP method massively outperforms existing adversarial perturbation generation methods -- often by a factor of 2 to 10.
1012.0557
Andrey Rumyantsev
Andrey Rumyantsev
Infinite computable version of Lovasz Local Lemma
null
null
null
null
cs.DS cs.DM math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Lov\'asz Local Lemma (LLL) is a probabilistic tool that allows us to prove the existence of combinatorial objects in cases where a standard probabilistic argument does not work (there are many partly independent conditions). LLL can also be used to prove the consistency of an infinite set of conditions, using a standard compactness argument (if an infinite set of conditions is inconsistent, then some finite part of it is inconsistent, too, which contradicts LLL). In this way we show that objects satisfying all the conditions do exist (though the probability of this event equals~$0$). However, if we are interested in finding a computable solution that satisfies all the constraints, compactness arguments no longer work. Moser and Tardos recently gave a nice constructive proof of LLL. Lance Fortnow asked whether the Moser--Tardos technique can be applied to prove the existence of a computable solution. We show that this is indeed possible (under almost the same conditions as in the non-constructive version).
[ { "created": "Thu, 2 Dec 2010 20:11:02 GMT", "version": "v1" } ]
2010-12-03
[ [ "Rumyantsev", "Andrey", "" ] ]
The Lov\'asz Local Lemma (LLL) is a probabilistic tool that allows us to prove the existence of combinatorial objects in cases where a standard probabilistic argument does not work (there are many partly independent conditions). LLL can also be used to prove the consistency of an infinite set of conditions, using a standard compactness argument (if an infinite set of conditions is inconsistent, then some finite part of it is inconsistent, too, which contradicts LLL). In this way we show that objects satisfying all the conditions do exist (though the probability of this event equals~$0$). However, if we are interested in finding a computable solution that satisfies all the constraints, compactness arguments no longer work. Moser and Tardos recently gave a nice constructive proof of LLL. Lance Fortnow asked whether the Moser--Tardos technique can be applied to prove the existence of a computable solution. We show that this is indeed possible (under almost the same conditions as in the non-constructive version).
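As context for the constructive proof mentioned in this abstract, here is a minimal sketch (our illustration, not the paper's infinite computable construction) of the Moser--Tardos resampling scheme for finite CNF formulas: while some clause is violated, resample the variables it contains.

```python
import random

def moser_tardos_sat(n_vars, clauses, seed=0):
    """Moser-Tardos resampling for a finite CNF formula.
    `clauses` is a list of clauses; a literal is +/-(i+1) for variable i.
    Under LLL-type conditions this terminates quickly in expectation."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated(clause):
        # A clause is violated iff every literal in it evaluates to false.
        return all(assign[abs(lit) - 1] != (lit > 0) for lit in clause)

    while True:
        bad = next((c for c in clauses if violated(c)), None)
        if bad is None:
            return assign
        for lit in bad:  # resample all variables of the violated clause
            assign[abs(lit) - 1] = rng.random() < 0.5
```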
2211.13818
Mushu Li
Mushu Li, Jie Gao, Conghao Zhou, Xuemin (Sherman) Shen and Weihua Zhuang
Digital Twin-Driven Computing Resource Management for Vehicular Networks
6 pages, 4 figures, accepted by 2022 IEEE GLOBECOM
null
null
null
cs.NI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel approach for computing resource management of edge servers in vehicular networks based on digital twins and artificial intelligence (AI). Specifically, we construct two-tier digital twins tailored for vehicular networks to capture networking-related features of vehicles and edge servers. By exploiting such features, we propose a two-stage computing resource allocation scheme. First, the central controller periodically generates reference policies for real-time computing resource allocation according to the network dynamics and service demands captured by digital twins of edge servers. Second, computing resources of the edge servers are allocated in real time to individual vehicles via low-complexity matching-based allocation that complies with the reference policies. By leveraging digital twins, the proposed scheme can adapt to dynamic service demands and vehicle mobility in a scalable manner. Simulation results demonstrate that the proposed digital twin-driven scheme enables the vehicular network to support more computing tasks than benchmark schemes.
[ { "created": "Thu, 24 Nov 2022 23:06:52 GMT", "version": "v1" } ]
2022-11-28
[ [ "Li", "Mushu", "", "Sherman" ], [ "Gao", "Jie", "", "Sherman" ], [ "Zhou", "Conghao", "", "Sherman" ], [ "Xuemin", "", "", "Sherman" ], [ "Shen", "", "" ], [ "Zhuang", "Weihua", "" ] ]
This paper presents a novel approach for computing resource management of edge servers in vehicular networks based on digital twins and artificial intelligence (AI). Specifically, we construct two-tier digital twins tailored for vehicular networks to capture networking-related features of vehicles and edge servers. By exploiting such features, we propose a two-stage computing resource allocation scheme. First, the central controller periodically generates reference policies for real-time computing resource allocation according to the network dynamics and service demands captured by digital twins of edge servers. Second, computing resources of the edge servers are allocated in real time to individual vehicles via low-complexity matching-based allocation that complies with the reference policies. By leveraging digital twins, the proposed scheme can adapt to dynamic service demands and vehicle mobility in a scalable manner. Simulation results demonstrate that the proposed digital twin-driven scheme enables the vehicular network to support more computing tasks than benchmark schemes.
2101.09818
Ali Rasteh
Ali Rasteh, Florian Delpech, Carlos Aguilar-Melchor, Romain Zimmer, Saeed Bagheri Shouraki and Timoth\'ee Masquelier
Encrypted Internet traffic classification using a supervised Spiking Neural Network
22 pages, 8 figures. Neurocomputing (2022)
Neurocomputing (2022)
10.1016/j.neucom.2022.06.055
null
cs.LG cs.NI
http://creativecommons.org/licenses/by/4.0/
Internet traffic recognition is an essential tool for access providers, since recognizing the traffic categories of the data packets transmitted on a network helps them define adapted priorities: for instance, high priority for an audio conference and low priority for a file transfer, to enhance the user experience. As internet traffic becomes increasingly encrypted, the mainstream classic traffic recognition technique, payload inspection, is rendered ineffective. This paper uses machine learning techniques for encrypted traffic classification, looking only at packet size and time of arrival. Spiking neural networks (SNNs), largely inspired by how biological neurons operate, were used for two reasons. Firstly, they are able to recognize time-related data packet features. Secondly, they can be implemented efficiently on neuromorphic hardware with a low energy footprint. Here we used a very simple feedforward SNN, with only one fully connected hidden layer, trained in a supervised manner using the newly introduced method known as Surrogate Gradient Learning. Surprisingly, such a simple SNN reached an accuracy of 95.9% on ISCX datasets, outperforming previous approaches. Besides better accuracy, there is also a very significant improvement in simplicity: the input size, the number of neurons, and the number of trainable parameters are all reduced by one to four orders of magnitude. Next, we analyzed the reasons for this good accuracy. It turns out that, beyond spatial (i.e. packet size) features, the SNN also exploits temporal ones, mostly the nearly synchronous (within a 200 ms range) arrival times of packets of certain sizes. Taken together, these results show that SNNs are an excellent fit for encrypted internet traffic classification: they can be more accurate than conventional artificial neural networks (ANNs), and they could be implemented efficiently on low-power embedded systems.
[ { "created": "Sun, 24 Jan 2021 22:46:08 GMT", "version": "v1" }, { "created": "Thu, 21 Jul 2022 13:06:28 GMT", "version": "v2" } ]
2022-07-25
[ [ "Rasteh", "Ali", "" ], [ "Delpech", "Florian", "" ], [ "Aguilar-Melchor", "Carlos", "" ], [ "Zimmer", "Romain", "" ], [ "Shouraki", "Saeed Bagheri", "" ], [ "Masquelier", "Timothée", "" ] ]
Internet traffic recognition is an essential tool for access providers, since recognizing the traffic categories of the data packets transmitted on a network helps them define adapted priorities: for instance, high priority for an audio conference and low priority for a file transfer, to enhance the user experience. As internet traffic becomes increasingly encrypted, the mainstream classic traffic recognition technique, payload inspection, is rendered ineffective. This paper uses machine learning techniques for encrypted traffic classification, looking only at packet size and time of arrival. Spiking neural networks (SNNs), largely inspired by how biological neurons operate, were used for two reasons. Firstly, they are able to recognize time-related data packet features. Secondly, they can be implemented efficiently on neuromorphic hardware with a low energy footprint. Here we used a very simple feedforward SNN, with only one fully connected hidden layer, trained in a supervised manner using the newly introduced method known as Surrogate Gradient Learning. Surprisingly, such a simple SNN reached an accuracy of 95.9% on ISCX datasets, outperforming previous approaches. Besides better accuracy, there is also a very significant improvement in simplicity: the input size, the number of neurons, and the number of trainable parameters are all reduced by one to four orders of magnitude. Next, we analyzed the reasons for this good accuracy. It turns out that, beyond spatial (i.e. packet size) features, the SNN also exploits temporal ones, mostly the nearly synchronous (within a 200 ms range) arrival times of packets of certain sizes. Taken together, these results show that SNNs are an excellent fit for encrypted internet traffic classification: they can be more accurate than conventional artificial neural networks (ANNs), and they could be implemented efficiently on low-power embedded systems.
1709.09250
Omar Al-Harbi Mohammad
Omar Al-Harbi, Shaidah Jusoh, Norita Md Norwawi
Lexical Disambiguation in Natural Language Questions (NLQs)
8 pages, 4 figures
IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 4, No 2, July 2011 (143-150)
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Question processing is a fundamental step in a question answering (QA) application, and its quality impacts the performance of the QA application. The major challenge in processing a question is how to extract the semantics of natural language questions (NLQs). Human language is ambiguous, and ambiguity may occur at two levels: lexical and syntactic. In this paper, we propose a new approach for resolving the lexical ambiguity problem by integrating context knowledge and the concept knowledge of a domain into shallow natural language processing (SNLP) techniques. Concept knowledge is modeled using an ontology, while context knowledge is obtained from WordNet and is determined based on the neighboring words in a question. The approach will be applied to a university QA system.
[ { "created": "Tue, 26 Sep 2017 20:24:10 GMT", "version": "v1" } ]
2017-09-28
[ [ "Al-Harbi", "Omar", "" ], [ "Jusoh", "Shaidah", "" ], [ "Norwawi", "Norita Md", "" ] ]
Question processing is a fundamental step in a question answering (QA) application, and its quality impacts the performance of the QA application. The major challenge in processing a question is how to extract the semantics of natural language questions (NLQs). Human language is ambiguous, and ambiguity may occur at two levels: lexical and syntactic. In this paper, we propose a new approach for resolving the lexical ambiguity problem by integrating context knowledge and the concept knowledge of a domain into shallow natural language processing (SNLP) techniques. Concept knowledge is modeled using an ontology, while context knowledge is obtained from WordNet and is determined based on the neighboring words in a question. The approach will be applied to a university QA system.
2404.09473
Dwaipayan Roy
Aman Sinha, Priyanshu Raj Mall, and Dwaipayan Roy
Exploring the Nexus Between Retrievability and Query Generation Strategies
Accepted at ECIR 2024
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Quantifying bias in retrieval functions through document retrievability scores is vital for assessing recall-oriented retrieval systems. However, many studies investigating retrieval model bias lack validation of their query generation methods as accurate representations of retrievability for real users and their queries. This limitation results from the absence of established criteria for query generation in retrievability assessments. Typically, researchers resort to using frequent collocations from document corpora when no query log is available. In this study, we address the issue of reproducibility and seek to validate query generation methods by comparing retrievability scores generated from artificially generated queries to those derived from query logs. Our findings demonstrate a minimal or negligible correlation between retrievability scores from artificial queries and those from query logs. This suggests that artificially generated queries may not accurately reflect retrievability scores as derived from query logs. We further explore alternative query generation techniques, uncovering a variation that exhibits the highest correlation. This alternative approach holds promise for improving reproducibility when query logs are unavailable.
[ { "created": "Mon, 15 Apr 2024 05:56:13 GMT", "version": "v1" } ]
2024-04-16
[ [ "Sinha", "Aman", "" ], [ "Mall", "Priyanshu Raj", "" ], [ "Roy", "Dwaipayan", "" ] ]
Quantifying bias in retrieval functions through document retrievability scores is vital for assessing recall-oriented retrieval systems. However, many studies investigating retrieval model bias lack validation of their query generation methods as accurate representations of retrievability for real users and their queries. This limitation results from the absence of established criteria for query generation in retrievability assessments. Typically, researchers resort to using frequent collocations from document corpora when no query log is available. In this study, we address the issue of reproducibility and seek to validate query generation methods by comparing retrievability scores generated from artificially generated queries to those derived from query logs. Our findings demonstrate a minimal or negligible correlation between retrievability scores from artificial queries and those from query logs. This suggests that artificially generated queries may not accurately reflect retrievability scores as derived from query logs. We further explore alternative query generation techniques, uncovering a variation that exhibits the highest correlation. This alternative approach holds promise for improving reproducibility when query logs are unavailable.
1911.04866
Alexandros Milolidakis
Alexandros Milolidakis, Romain Fontugne, Xenofontas Dimitropoulos
Detecting Network Disruptions At Colocation Facilities
10 pages, IEEE INFOCOM 2019-IEEE Conference on Computer Communications
In IEEE INFOCOM 2019-IEEE Conference on Computer Communications (pp. 2161-2169). IEEE (2019)
10.1109/INFOCOM.2019.8737615
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Colocation facilities and Internet eXchange Points (IXPs) provide neutral places for concurrent networks to exchange terabytes of data traffic daily. Although very reliable, these facilities are not immune to failure and may experience difficulties that can have significant impacts on the exchanged traffic. In this paper we devise a methodology to identify colocation facilities in traceroute data and to monitor delay and routing patterns between facilities. We also present an anomaly detection technique to report abnormal traffic changes, usually due to facility outages. We evaluate this method with eight months of traceroute data from the RIPE Atlas measurement platform and manually inspect the most prominent events, which are: an IXP outage, a DDoS attack, and a power failure in a facility. These case studies validate the benefits of the proposed system for detecting real-world outages from traceroute data. We also investigate the impact of anomalies at the metropolitan level and identify outages that span up to eight facilities.
[ { "created": "Tue, 12 Nov 2019 14:07:25 GMT", "version": "v1" } ]
2019-11-13
[ [ "Milolidakis", "Alexandros", "" ], [ "Fontugne", "Romain", "" ], [ "Dimitropoulos", "Xenofontas", "" ] ]
Colocation facilities and Internet eXchange Points (IXPs) provide neutral places for concurrent networks to exchange terabytes of data traffic daily. Although very reliable, these facilities are not immune to failure and may experience difficulties that can have significant impacts on the exchanged traffic. In this paper we devise a methodology to identify colocation facilities in traceroute data and to monitor delay and routing patterns between facilities. We also present an anomaly detection technique to report abnormal traffic changes, usually due to facility outages. We evaluate this method with eight months of traceroute data from the RIPE Atlas measurement platform and manually inspect the most prominent events, which are: an IXP outage, a DDoS attack, and a power failure in a facility. These case studies validate the benefits of the proposed system for detecting real-world outages from traceroute data. We also investigate the impact of anomalies at the metropolitan level and identify outages that span up to eight facilities.
1502.07591
Cristopher Moore
Cristopher Moore
The phase transition in random regular exact cover
Added sentence pointing out that the threshold is never an integer
null
null
null
cs.CC cond-mat.stat-mech math.CO math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A $k$-uniform, $d$-regular instance of Exact Cover is a family of $m$ sets $F_{n,d,k} = \{ S_j \subseteq \{1,...,n\} \}$, where each subset has size $k$ and each $1 \le i \le n$ is contained in $d$ of the $S_j$. It is satisfiable if there is a subset $T \subseteq \{1,...,n\}$ such that $|T \cap S_j|=1$ for all $j$. Alternatively, we can consider it a $d$-regular instance of Positive 1-in-$k$ SAT, i.e., a Boolean formula with $m$ clauses and $n$ variables where each clause contains $k$ variables and demands that exactly one of them is true. We determine the satisfiability threshold for random instances of this type with $k > 2$. Letting $d^\star = \frac{\ln k}{(k-1)(- \ln (1-1/k))} + 1$, we show that $F_{n,d,k}$ is satisfiable with high probability if $d < d^\star$ and unsatisfiable with high probability if $d > d^\star$. We do this with a simple application of the first and second moment methods, boosting the probability of satisfiability below $d^\star$ to $1-o(1)$ using the small subgraph conditioning method.
[ { "created": "Thu, 26 Feb 2015 15:22:02 GMT", "version": "v1" }, { "created": "Fri, 27 Feb 2015 01:45:31 GMT", "version": "v2" }, { "created": "Wed, 4 Mar 2015 17:49:19 GMT", "version": "v3" } ]
2015-03-05
[ [ "Moore", "Cristopher", "" ] ]
A $k$-uniform, $d$-regular instance of Exact Cover is a family of $m$ sets $F_{n,d,k} = \{ S_j \subseteq \{1,...,n\} \}$, where each subset has size $k$ and each $1 \le i \le n$ is contained in $d$ of the $S_j$. It is satisfiable if there is a subset $T \subseteq \{1,...,n\}$ such that $|T \cap S_j|=1$ for all $j$. Alternatively, we can consider it a $d$-regular instance of Positive 1-in-$k$ SAT, i.e., a Boolean formula with $m$ clauses and $n$ variables where each clause contains $k$ variables and demands that exactly one of them is true. We determine the satisfiability threshold for random instances of this type with $k > 2$. Letting $d^\star = \frac{\ln k}{(k-1)(- \ln (1-1/k))} + 1$, we show that $F_{n,d,k}$ is satisfiable with high probability if $d < d^\star$ and unsatisfiable with high probability if $d > d^\star$. We do this with a simple application of the first and second moment methods, boosting the probability of satisfiability below $d^\star$ to $1-o(1)$ using the small subgraph conditioning method.
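To make the threshold concrete, here is a short numeric check (ours, not from the paper) of the formula $d^\star = \frac{\ln k}{(k-1)(-\ln(1-1/k))} + 1$; consistent with the comment field of this record, the computed thresholds are never integers.

```python
import math

def d_star(k: int) -> float:
    """Satisfiability threshold for k-uniform, d-regular Exact Cover, k > 2."""
    return math.log(k) / ((k - 1) * (-math.log(1 - 1 / k))) + 1

for k in range(3, 7):
    print(k, round(d_star(k), 3))
# k=3 -> ~2.355, k=4 -> ~2.606, k=5 -> ~2.803, k=6 -> ~2.965 (none integral)
```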
2308.08956
Emma Nilsson
Emma Nilsson, Jonas Lukasczyk, Talha Bin Masood, Christoph Garth, Ingrid Hotz
Probabilistic Gradient-Based Extrema Tracking
null
null
null
null
cs.GR
http://creativecommons.org/licenses/by/4.0/
Feature tracking is a common task in visualization applications, where methods based on topological data analysis (TDA) have successfully been applied in the past for feature definition as well as tracking. In this work, we focus on tracking extrema of temporal scalar fields. A family of TDA approaches addresses this task by establishing one-to-one correspondences between extrema based on discrete gradient vector fields. More specifically, two extrema of subsequent time steps are matched if they fall into their respective ascending and descending manifolds. However, due to this one-to-one assignment, these approaches are prone to failure where, e.g., extrema are located in regions of low gradient magnitude or close to the boundaries of the manifolds. Therefore, we propose a probabilistic matching that captures a larger set of possible correspondences via neighborhood sampling, or by computing the overlap of the manifolds. We illustrate the usefulness of the approach with two application cases.
[ { "created": "Thu, 17 Aug 2023 12:55:38 GMT", "version": "v1" } ]
2023-08-21
[ [ "Nilsson", "Emma", "" ], [ "Lukasczyk", "Jonas", "" ], [ "Masood", "Talha Bin", "" ], [ "Garth", "Christoph", "" ], [ "Hotz", "Ingrid", "" ] ]
Feature tracking is a common task in visualization applications, where methods based on topological data analysis (TDA) have successfully been applied in the past for feature definition as well as tracking. In this work, we focus on tracking extrema of temporal scalar fields. A family of TDA approaches addresses this task by establishing one-to-one correspondences between extrema based on discrete gradient vector fields. More specifically, two extrema of subsequent time steps are matched if they fall into their respective ascending and descending manifolds. However, due to this one-to-one assignment, these approaches are prone to failure where, e.g., extrema are located in regions of low gradient magnitude or close to the boundaries of the manifolds. Therefore, we propose a probabilistic matching that captures a larger set of possible correspondences via neighborhood sampling, or by computing the overlap of the manifolds. We illustrate the usefulness of the approach with two application cases.
1504.04339
Tamara Bonaci
Tamara Bonaci, Jeffrey Herron, Tariq Yusuf, Junjie Yan, Tadayoshi Kohno and Howard Jay Chizeck
To Make a Robot Secure: An Experimental Analysis of Cyber Security Threats Against Teleoperated Surgical Robots
null
null
null
null
cs.RO cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Teleoperated robots are playing an increasingly important role in military actions and medical services. In the future, remotely operated surgical robots will likely be used in more scenarios, such as battlefields and emergency response. But the rapidly growing applications of teleoperated surgery raise the question: what if the computer systems for these robots are attacked, taken over, and even turned into weapons? Our work seeks to answer this question by systematically analyzing possible cyber security attacks against Raven II, an advanced teleoperated robotic surgery system. We identify a slew of possible cyber security threats, and experimentally evaluate their scopes and impacts. We demonstrate the ability to maliciously control a wide range of the robot's functions, and even to completely ignore or override command inputs from the surgeon. We further find that it is possible to abuse the robot's existing emergency stop (E-stop) mechanism to execute efficient (single-packet) attacks. We then consider steps to mitigate these identified attacks, and experimentally evaluate the feasibility of applying existing security solutions against these threats. The broader goal of our paper, however, is to raise awareness and increase understanding of these emerging threats. We anticipate that the majority of attacks against telerobotic surgery will also be relevant to other teleoperated robotic and co-robotic systems.
[ { "created": "Thu, 16 Apr 2015 19:01:28 GMT", "version": "v1" }, { "created": "Tue, 12 May 2015 17:55:38 GMT", "version": "v2" } ]
2015-05-13
[ [ "Bonaci", "Tamara", "" ], [ "Herron", "Jeffrey", "" ], [ "Yusuf", "Tariq", "" ], [ "Yan", "Junjie", "" ], [ "Kohno", "Tadayoshi", "" ], [ "Chizeck", "Howard Jay", "" ] ]
Teleoperated robots are playing an increasingly important role in military actions and medical services. In the future, remotely operated surgical robots will likely be used in more scenarios, such as battlefields and emergency response. But the rapidly growing applications of teleoperated surgery raise the question: what if the computer systems for these robots are attacked, taken over, and even turned into weapons? Our work seeks to answer this question by systematically analyzing possible cyber security attacks against Raven II, an advanced teleoperated robotic surgery system. We identify a slew of possible cyber security threats, and experimentally evaluate their scopes and impacts. We demonstrate the ability to maliciously control a wide range of the robot's functions, and even to completely ignore or override command inputs from the surgeon. We further find that it is possible to abuse the robot's existing emergency stop (E-stop) mechanism to execute efficient (single-packet) attacks. We then consider steps to mitigate these identified attacks, and experimentally evaluate the feasibility of applying existing security solutions against these threats. The broader goal of our paper, however, is to raise awareness and increase understanding of these emerging threats. We anticipate that the majority of attacks against telerobotic surgery will also be relevant to other teleoperated robotic and co-robotic systems.
2203.04698
Dwaraknath Gnaneshwar Mr
Dwaraknath Gnaneshwar, Bharath Ramsundar, Dhairya Gandhi, Rachel Kurchin, Venkatasubramanian Viswanathan
Score-Based Generative Models for Molecule Generation
null
null
null
null
cs.LG q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Recent advances in generative models have made exploring design spaces easier for de novo molecule generation. However, popular generative models like GANs and normalizing flows face challenges such as training instabilities due to adversarial training and architectural constraints, respectively. Score-based generative models sidestep these challenges by modelling the gradient of the log probability density using a score function approximation, as opposed to modelling the density function directly, and sampling from it using annealed Langevin Dynamics. We believe that score-based generative models could open up new opportunities in molecule generation due to their architectural flexibility, such as replacing the score function with an SE(3) equivariant model. In this work, we lay the foundations by testing the efficacy of score-based models for molecule generation. We train a Transformer-based score function on Self-Referencing Embedded Strings (SELFIES) representations of 1.5 million samples from the ZINC dataset and use the Moses benchmarking framework to evaluate the generated samples on a suite of metrics.
[ { "created": "Mon, 7 Mar 2022 13:46:02 GMT", "version": "v1" } ]
2022-03-10
[ [ "Gnaneshwar", "Dwaraknath", "" ], [ "Ramsundar", "Bharath", "" ], [ "Gandhi", "Dhairya", "" ], [ "Kurchin", "Rachel", "" ], [ "Viswanathan", "Venkatasubramanian", "" ] ]
Recent advances in generative models have made exploring design spaces easier for de novo molecule generation. However, popular generative models like GANs and normalizing flows face challenges such as training instabilities due to adversarial training and architectural constraints, respectively. Score-based generative models sidestep these challenges by modelling the gradient of the log probability density using a score function approximation, as opposed to modelling the density function directly, and sampling from it using annealed Langevin Dynamics. We believe that score-based generative models could open up new opportunities in molecule generation due to their architectural flexibility, such as replacing the score function with an SE(3) equivariant model. In this work, we lay the foundations by testing the efficacy of score-based models for molecule generation. We train a Transformer-based score function on Self-Referencing Embedded Strings (SELFIES) representations of 1.5 million samples from the ZINC dataset and use the Moses benchmarking framework to evaluate the generated samples on a suite of metrics.
1504.01380
Maitham Alhubail
Maitham Makki Alhubail and Qiqi Wang
The swept rule for breaking the latency barrier in time advancing PDEs
30 pages
Journal of Computational Physics (2016), pp. 110-121
10.1016/j.jcp.2015.11.026
null
cs.CE cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article investigates the swept rule of space-time domain decomposition, an idea for breaking the latency barrier by communicating less often when explicitly solving time-dependent PDEs. The swept rule decomposes space and time among computing nodes in ways that exploit the domains of influence and dependence, making it possible to communicate once per many timesteps without redundant computation. The article presents a simple theoretical analysis of the performance of the swept rule, which is then shown to be accurate by numerical experiments.
[ { "created": "Mon, 6 Apr 2015 16:00:32 GMT", "version": "v1" }, { "created": "Sat, 14 Nov 2015 16:26:05 GMT", "version": "v2" } ]
2015-12-10
[ [ "Alhubail", "Maitham Makki", "" ], [ "Wang", "Qiqi", "" ] ]
This article investigates the swept rule of space-time domain decomposition, an idea for breaking the latency barrier by communicating less often when explicitly solving time-dependent PDEs. The swept rule decomposes space and time among computing nodes in ways that exploit the domains of influence and dependence, making it possible to communicate once per many timesteps without redundant computation. The article presents a simple theoretical analysis of the performance of the swept rule, which is then shown to be accurate by numerical experiments.
cs/0008001
Randal E. Bryant
Randal E. Bryant, Miroslav N. Velev
Boolean Satisfiability with Transitivity Constraints
Submitted to ACM Transactions on Computational Logic
null
null
null
cs.LO
null
We consider a variant of the Boolean satisfiability problem where a subset E of the propositional variables appearing in formula Fsat encodes a symmetric, transitive, binary relation over N elements. Each of these relational variables, e[i,j], for 1 <= i < j <= N, expresses whether or not the relation holds between elements i and j. The task is to either find a satisfying assignment to Fsat that also satisfies all transitivity constraints over the relational variables (e.g., e[1,2] & e[2,3] ==> e[1,3]), or to prove that no such assignment exists. Solving this satisfiability problem is the final and most difficult step in our decision procedure for a logic of equality with uninterpreted functions. This procedure forms the core of our tool for verifying pipelined microprocessors. To use a conventional Boolean satisfiability checker, we augment the set of clauses expressing Fsat with clauses expressing the transitivity constraints. We consider methods to reduce the number of such clauses based on the sparse structure of the relational variables. To use Ordered Binary Decision Diagrams (OBDDs), we show that for some sets E, the OBDD representation of the transitivity constraints has exponential size for all possible variable orderings. By considering only those relational variables that occur in the OBDD representation of Fsat, our experiments show that we can readily construct an OBDD representation of the relevant transitivity constraints and thus solve the constrained satisfiability problem.
[ { "created": "Tue, 1 Aug 2000 13:51:56 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bryant", "Randal E.", "" ], [ "Velev", "Miroslav N.", "" ] ]
We consider a variant of the Boolean satisfiability problem where a subset E of the propositional variables appearing in formula Fsat encodes a symmetric, transitive, binary relation over N elements. Each of these relational variables, e[i,j], for 1 <= i < j <= N, expresses whether or not the relation holds between elements i and j. The task is to either find a satisfying assignment to Fsat that also satisfies all transitivity constraints over the relational variables (e.g., e[1,2] & e[2,3] ==> e[1,3]), or to prove that no such assignment exists. Solving this satisfiability problem is the final and most difficult step in our decision procedure for a logic of equality with uninterpreted functions. This procedure forms the core of our tool for verifying pipelined microprocessors. To use a conventional Boolean satisfiability checker, we augment the set of clauses expressing Fsat with clauses expressing the transitivity constraints. We consider methods to reduce the number of such clauses based on the sparse structure of the relational variables. To use Ordered Binary Decision Diagrams (OBDDs), we show that for some sets E, the OBDD representation of the transitivity constraints has exponential size for all possible variable orderings. By considering only those relational variables that occur in the OBDD representation of Fsat, our experiments show that we can readily construct an OBDD representation of the relevant transitivity constraints and thus solve the constrained satisfiability problem.
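To illustrate the clause-augmentation step described in this abstract, here is a small sketch (ours; the literal encoding is a hypothetical stand-in for the tool's) that enumerates the transitivity constraints over the relational variables e[i,j] as CNF clauses:

```python
from itertools import combinations

def transitivity_clauses(n):
    """CNF clauses enforcing transitivity of the relation encoded by
    e[i,j], 1 <= i < j <= n. Each clause is a list of (negated?, (i, j))
    literals; e.g. e[1,2] & e[2,3] ==> e[1,3] becomes the clause
    [~e[1,2], ~e[2,3], e[1,3]]."""
    clauses = []
    for i, j, k in combinations(range(1, n + 1), 3):
        for a, b, c in (((i, j), (j, k), (i, k)),
                        ((i, j), (i, k), (j, k)),
                        ((i, k), (j, k), (i, j))):
            clauses.append([(True, a), (True, b), (False, c)])
    return clauses

# There are 3 * C(n,3) such clauses; the sparsity-based reductions
# discussed in the abstract aim to avoid emitting most of them.
print(len(transitivity_clauses(5)))  # 3 * C(5,3) = 30
```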
1307.4264
Rong Zheng
Huy Nguyen and Rong Zheng
A Data-driven Study of Influences in Twitter Communities
11 pages
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a quantitative study of Twitter, one of the most popular micro-blogging services, from the perspective of user influence. We crawl several datasets from the most active communities on Twitter and obtain 20.5 million user profiles, along with 420.2 million directed relations and 105 million tweets among the users. User influence scores are obtained from the influence measurement services Klout and PeerIndex. Our analysis reveals interesting findings, including a non-power-law influence distribution, strong reciprocity among users in a community, and the existence of homophily and hierarchical relationships in social influence. Most importantly, we observe that whether a user retweets a message is strongly influenced by the first of their followees who posted that message. To capture this effect, we propose the first influencer (FI) information diffusion model and show through extensive evaluation that, compared to the widely adopted independent cascade model, the FI model is more stable and more accurate in predicting influence spreads in Twitter communities.
[ { "created": "Tue, 16 Jul 2013 13:07:24 GMT", "version": "v1" } ]
2013-07-17
[ [ "Nguyen", "Huy", "" ], [ "Zheng", "Rong", "" ] ]
This paper presents a quantitative study of Twitter, one of the most popular micro-blogging services, from the perspective of user influence. We crawl several datasets from the most active communities on Twitter and obtain 20.5 million user profiles, along with 420.2 million directed relations and 105 million tweets among the users. User influence scores are obtained from the influence measurement services Klout and PeerIndex. Our analysis reveals interesting findings, including a non-power-law influence distribution, strong reciprocity among users in a community, and the existence of homophily and hierarchical relationships in social influence. Most importantly, we observe that whether a user retweets a message is strongly influenced by the first of their followees who posted that message. To capture this effect, we propose the first influencer (FI) information diffusion model and show through extensive evaluation that, compared to the widely adopted independent cascade model, the FI model is more stable and more accurate in predicting influence spreads in Twitter communities.
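The first-influencer attribution rule described in this abstract is easy to state in code; the sketch below (ours, with a hypothetical data layout) attributes a user's retweet to the earliest-posting followee:

```python
def first_influencer(user, followees, post_time):
    """Return the followee of `user` who posted the message first,
    i.e. the 'first influencer' in the FI diffusion model, or None
    if no followee posted it. `post_time` maps user -> timestamp."""
    posters = [(post_time[f], f) for f in followees.get(user, ())
               if f in post_time]
    return min(posters)[1] if posters else None

# Example: u follows a, b, c; b posted first, so b is the first influencer.
print(first_influencer("u", {"u": ["a", "b", "c"]},
                       {"a": 12.0, "b": 5.0}))  # -> "b"
```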
1912.09816
Petr Chunaev
Petr Chunaev
Community detection in node-attributed social networks: a survey
This is an essentially revised version of the manuscript
null
10.1016/j.cosrev.2020.100286
null
cs.SI cs.LG cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Community detection is a fundamental problem in social network analysis that consists of dividing social actors (nodes in a social graph) with certain social connections (edges in a social graph), in an unsupervised way, into densely knit and highly related groups, with each group well separated from the others. Classical approaches for community detection usually deal only with the network structure and ignore the features of its nodes (called node attributes), although many real-world social networks provide additional information about the actors, such as their interests. It is believed that the attributes may clarify and enrich the knowledge about the actors and give meaning to the communities. This belief has motivated progress in developing community detection methods that use both the structure and the attributes of the network (i.e. deal with a node-attributed graph) to yield more informative and higher-quality results. During the last decade, many such methods based on different ideas have appeared. Although partial overviews of them exist, a recent survey is a necessity, as the growing number of methods may cause repetitions in methodology and uncertainty in practice. In this paper we aim to describe and clarify the overall situation in the field of community detection in node-attributed social networks. Namely, we perform an exhaustive search of known methods and propose a classification of them based on when and how structure and attributes are fused. We not only give a description of each class but also provide the general technical ideas behind each method in the class. Furthermore, we pay attention to the available information on which methods outperform others and on which datasets and quality measures are used for their evaluation. Based on the information collected, we draw conclusions on the current state of the field and identify several problems that seem important to resolve in the future.
[ { "created": "Fri, 20 Dec 2019 13:35:32 GMT", "version": "v1" }, { "created": "Mon, 15 Jun 2020 11:01:39 GMT", "version": "v2" } ]
2022-01-14
[ [ "Chunaev", "Petr", "" ] ]
Community detection is a fundamental problem in social network analysis that consists of dividing social actors (nodes in a social graph) with certain social connections (edges in a social graph), in an unsupervised way, into densely knit and highly related groups, with each group well separated from the others. Classical approaches for community detection usually deal only with the network structure and ignore the features of its nodes (called node attributes), although many real-world social networks provide additional information about the actors, such as their interests. It is believed that the attributes may clarify and enrich the knowledge about the actors and give meaning to the communities. This belief has motivated progress in developing community detection methods that use both the structure and the attributes of the network (i.e. deal with a node-attributed graph) to yield more informative and higher-quality results. During the last decade, many such methods based on different ideas have appeared. Although partial overviews of them exist, a recent survey is a necessity, as the growing number of methods may cause repetitions in methodology and uncertainty in practice. In this paper we aim to describe and clarify the overall situation in the field of community detection in node-attributed social networks. Namely, we perform an exhaustive search of known methods and propose a classification of them based on when and how structure and attributes are fused. We not only give a description of each class but also provide the general technical ideas behind each method in the class. Furthermore, we pay attention to the available information on which methods outperform others and on which datasets and quality measures are used for their evaluation. Based on the information collected, we draw conclusions on the current state of the field and identify several problems that seem important to resolve in the future.
1207.3208
Brian Huffman
Brian Huffman
Formal Verification of Monad Transformers
ICFP 2012: The 17th ACM SIGPLAN International Conference on Functional Programming, 12 pages
null
null
null
cs.LO
http://creativecommons.org/licenses/publicdomain/
We present techniques for reasoning about constructor classes that (like the monad class) fix polymorphic operations and assert polymorphic axioms. We do not require a logic with first-class type constructors, first-class polymorphism, or type quantification; instead, we rely on a domain-theoretic model of the type system in a universal domain to provide these features. These ideas are implemented in the Tycon library for the Isabelle theorem prover, which builds on the HOLCF library of domain theory. The Tycon library provides various axiomatic type constructor classes, including functors and monads. It also provides automation for instantiating those classes, and for defining further subclasses. We use the Tycon library to formalize three Haskell monad transformers: the error transformer, the writer transformer, and the resumption transformer. The error and writer transformers do not universally preserve the monad laws; however, we establish datatype invariants for each, showing that they are valid monads when viewed as abstract datatypes.
[ { "created": "Fri, 13 Jul 2012 11:53:44 GMT", "version": "v1" } ]
2012-07-16
[ [ "Huffman", "Brian", "" ] ]
We present techniques for reasoning about constructor classes that (like the monad class) fix polymorphic operations and assert polymorphic axioms. We do not require a logic with first-class type constructors, first-class polymorphism, or type quantification; instead, we rely on a domain-theoretic model of the type system in a universal domain to provide these features. These ideas are implemented in the Tycon library for the Isabelle theorem prover, which builds on the HOLCF library of domain theory. The Tycon library provides various axiomatic type constructor classes, including functors and monads. It also provides automation for instantiating those classes, and for defining further subclasses. We use the Tycon library to formalize three Haskell monad transformers: the error transformer, the writer transformer, and the resumption transformer. The error and writer transformers do not universally preserve the monad laws; however, we establish datatype invariants for each, showing that they are valid monads when viewed as abstract datatypes.
1503.04377
EPTCS
Jakob Rehof (TU-Dortmund)
Proceedings Seventh Workshop on Intersection Types and Related Systems
null
EPTCS 177, 2015
10.4204/EPTCS.177
null
cs.LO cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This volume contains a final and revised selection of papers presented at the Seventh Workshop on Intersection Types and Related Systems (ITRS 2014), held in Vienna (Austria) on July 18th, affiliated with TLCA 2014, Typed Lambda Calculi and Applications (held jointly with RTA, Rewriting Techniques and Applications) as part of FLoC and the Vienna Summer of Logic (VSL) 2014. Intersection types have been introduced in the late 1970s as a language for describing properties of lambda calculus which were not captured by all previous type systems. They provided the first characterisation of strongly normalising lambda terms and have become a powerful syntactic and semantic tool for analysing various normalisation properties as well as lambda models. Over the years the scope of research on intersection types has broadened. Recently, there have been a number of breakthroughs in the use of intersection types and similar technology for practical purposes such as program analysis, verification and concurrency, and program synthesis. The aim of the ITRS workshop series is to bring together researchers working on both the theory and practical applications of systems based on intersection types and related approaches (e.g., union types, refinement types, behavioral types).
[ { "created": "Sun, 15 Mar 2015 02:58:54 GMT", "version": "v1" } ]
2015-03-17
[ [ "Rehof", "Jakob", "", "TU-Dortmund" ] ]
This volume contains a final and revised selection of papers presented at the Seventh Workshop on Intersection Types and Related Systems (ITRS 2014), held in Vienna (Austria) on July 18th, affiliated with TLCA 2014, Typed Lambda Calculi and Applications (held jointly with RTA, Rewriting Techniques and Applications) as part of FLoC and the Vienna Summer of Logic (VSL) 2014. Intersection types have been introduced in the late 1970s as a language for describing properties of lambda calculus which were not captured by all previous type systems. They provided the first characterisation of strongly normalising lambda terms and have become a powerful syntactic and semantic tool for analysing various normalisation properties as well as lambda models. Over the years the scope of research on intersection types has broadened. Recently, there have been a number of breakthroughs in the use of intersection types and similar technology for practical purposes such as program analysis, verification and concurrency, and program synthesis. The aim of the ITRS workshop series is to bring together researchers working on both the theory and practical applications of systems based on intersection types and related approaches (e.g., union types, refinement types, behavioral types).
2406.10842
Zhuoxu Duan
Zhuoxu Duan, Zhengye Yang, Samuel Westby, Christoph Riedl, Brooke Foucault Welles, Richard J. Radke
Large Language Models for Automatic Milestone Detection in Group Discussions
null
null
null
null
cs.CL cs.AI cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Large language models like GPT have proven widely successful on natural language understanding tasks based on written text documents. In this paper, we investigate an LLM's performance on recordings of a group oral communication task in which utterances are often truncated or not well-formed. We propose a new group task experiment involving a puzzle with several milestones that can be achieved in any order. We investigate methods for processing transcripts to detect if, when, and by whom a milestone has been completed. We demonstrate that iteratively prompting GPT with transcription chunks outperforms semantic similarity search methods using text embeddings, and further discuss the quality and randomness of GPT responses under different context window sizes.
[ { "created": "Sun, 16 Jun 2024 08:32:22 GMT", "version": "v1" } ]
2024-06-18
[ [ "Duan", "Zhuoxu", "" ], [ "Yang", "Zhengye", "" ], [ "Westby", "Samuel", "" ], [ "Riedl", "Christoph", "" ], [ "Welles", "Brooke Foucault", "" ], [ "Radke", "Richard J.", "" ] ]
Large language models like GPT have proven widely successful on natural language understanding tasks based on written text documents. In this paper, we investigate an LLM's performance on recordings of a group oral communication task in which utterances are often truncated or not well-formed. We propose a new group task experiment involving a puzzle with several milestones that can be achieved in any order. We investigate methods for processing transcripts to detect if, when, and by whom a milestone has been completed. We demonstrate that iteratively prompting GPT with transcription chunks outperforms semantic similarity search methods using text embeddings, and further discuss the quality and randomness of GPT responses under different context window sizes.
2309.14788
Gabriel Bathie
Gabriel Bathie, Tomasz Kociumaka and Tatiana Starikovskaya
Small-Space Algorithms for the Online Language Distance Problem for Palindromes and Squares
Accepted to ISAAC'23
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
We study the online variant of the language distance problem for two classical formal languages, the language of palindromes and the language of squares, and for the two most fundamental distances, the Hamming distance and the edit (Levenshtein) distance. In this problem, defined for a fixed formal language $L$, we are given a string $T$ of length $n$, and the task is to compute the minimal distance to $L$ from every prefix of $T$. We focus on the low-distance regime, where one must compute only the distances smaller than a given threshold $k$. In this work, our contribution is twofold: - First, we show streaming algorithms, which access the input string $T$ only through a single left-to-right scan. Both for palindromes and squares, our algorithms use $O(k \cdot\mathrm{poly}~\log n)$ space and time per character in the Hamming-distance case and $O(k^2 \cdot\mathrm{poly}~\log n)$ space and time per character in the edit-distance case. These algorithms are randomised by necessity, and they err with probability inverse-polynomial in $n$. - Second, we show deterministic read-only online algorithms, which are also provided with read-only random access to the already processed characters of $T$. Both for palindromes and squares, our algorithms use $O(k \cdot\mathrm{poly}~\log n)$ space and time per character in the Hamming-distance case and $O(k^4 \cdot\mathrm{poly}~\log n)$ space and amortised time per character in the edit-distance case.
[ { "created": "Tue, 26 Sep 2023 09:36:24 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2024 13:18:33 GMT", "version": "v2" } ]
2024-05-01
[ [ "Bathie", "Gabriel", "" ], [ "Kociumaka", "Tomasz", "" ], [ "Starikovskaya", "Tatiana", "" ] ]
We study the online variant of the language distance problem for two classical formal languages, the language of palindromes and the language of squares, and for the two most fundamental distances, the Hamming distance and the edit (Levenshtein) distance. In this problem, defined for a fixed formal language $L$, we are given a string $T$ of length $n$, and the task is to compute the minimal distance to $L$ from every prefix of $T$. We focus on the low-distance regime, where one must compute only the distances smaller than a given threshold $k$. In this work, our contribution is twofold: - First, we show streaming algorithms, which access the input string $T$ only through a single left-to-right scan. Both for palindromes and squares, our algorithms use $O(k \cdot\mathrm{poly}~\log n)$ space and time per character in the Hamming-distance case and $O(k^2 \cdot\mathrm{poly}~\log n)$ space and time per character in the edit-distance case. These algorithms are randomised by necessity, and they err with probability inverse-polynomial in $n$. - Second, we show deterministic read-only online algorithms, which are also provided with read-only random access to the already processed characters of $T$. Both for palindromes and squares, our algorithms use $O(k \cdot\mathrm{poly}~\log n)$ space and time per character in the Hamming-distance case and $O(k^4 \cdot\mathrm{poly}~\log n)$ space and amortised time per character in the edit-distance case.
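For intuition about the problem statement (this is a naive quadratic baseline of ours, not the paper's small-space algorithms): under the Hamming distance, the distance from a length-$m$ prefix to the language of palindromes is the number of mismatched symmetric character pairs, and to the language of squares it is the number of mismatches between the two halves, with odd-length prefixes admitting no square.

```python
def palindrome_hamming_distances(t: str):
    """Naive baseline: Hamming distance from every prefix of t to the
    nearest palindrome of the same length (mismatched symmetric pairs)."""
    out = []
    for m in range(1, len(t) + 1):
        p = t[:m]
        out.append(sum(p[i] != p[m - 1 - i] for i in range(m // 2)))
    return out

def square_hamming_distances(t: str):
    """Hamming distance from every prefix to the nearest square of the
    same length; odd-length prefixes have no square, reported as None."""
    out = []
    for m in range(1, len(t) + 1):
        if m % 2:
            out.append(None)
        else:
            h = m // 2
            out.append(sum(t[i] != t[i + h] for i in range(h)))
    return out

print(palindrome_hamming_distances("abca"))  # [0, 1, 1, 1]
```

This baseline stores the whole prefix; the streaming algorithms described in the abstract instead use $O(k \cdot\mathrm{poly}~\log n)$ space in the Hamming-distance case.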
2309.02561
Jensen Gao
Jensen Gao, Bidipta Sarkar, Fei Xia, Ted Xiao, Jiajun Wu, Brian Ichter, Anirudha Majumdar, Dorsa Sadigh
Physically Grounded Vision-Language Models for Robotic Manipulation
Updated version for ICRA 2024
null
null
null
cs.RO cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in vision-language models (VLMs) have led to improved performance on tasks such as visual question answering and image captioning. Consequently, these models are now well-positioned to reason about the physical world, particularly within domains such as robotic manipulation. However, current VLMs are limited in their understanding of the physical concepts (e.g., material, fragility) of common objects, which restricts their usefulness for robotic manipulation tasks that involve interaction and physical reasoning about such objects. To address this limitation, we propose PhysObjects, an object-centric dataset of 39.6K crowd-sourced and 417K automated physical concept annotations of common household objects. We demonstrate that fine-tuning a VLM on PhysObjects improves its understanding of physical object concepts, including generalization to held-out concepts, by capturing human priors of these concepts from visual appearance. We incorporate this physically grounded VLM in an interactive framework with a large language model-based robotic planner, and show improved planning performance on tasks that require reasoning about physical object concepts, compared to baselines that do not leverage physically grounded VLMs. We additionally illustrate the benefits of our physically grounded VLM on a real robot, where it improves task success rates. We release our dataset and provide further details and visualizations of our results at https://iliad.stanford.edu/pg-vlm/.
[ { "created": "Tue, 5 Sep 2023 20:21:03 GMT", "version": "v1" }, { "created": "Wed, 13 Sep 2023 21:40:56 GMT", "version": "v2" }, { "created": "Thu, 29 Feb 2024 08:44:12 GMT", "version": "v3" }, { "created": "Sun, 3 Mar 2024 08:12:36 GMT", "version": "v4" } ]
2024-03-05
[ [ "Gao", "Jensen", "" ], [ "Sarkar", "Bidipta", "" ], [ "Xia", "Fei", "" ], [ "Xiao", "Ted", "" ], [ "Wu", "Jiajun", "" ], [ "Ichter", "Brian", "" ], [ "Majumdar", "Anirudha", "" ], [ "Sadigh", "Dorsa", "" ] ]
Recent advances in vision-language models (VLMs) have led to improved performance on tasks such as visual question answering and image captioning. Consequently, these models are now well-positioned to reason about the physical world, particularly within domains such as robotic manipulation. However, current VLMs are limited in their understanding of the physical concepts (e.g., material, fragility) of common objects, which restricts their usefulness for robotic manipulation tasks that involve interaction and physical reasoning about such objects. To address this limitation, we propose PhysObjects, an object-centric dataset of 39.6K crowd-sourced and 417K automated physical concept annotations of common household objects. We demonstrate that fine-tuning a VLM on PhysObjects improves its understanding of physical object concepts, including generalization to held-out concepts, by capturing human priors of these concepts from visual appearance. We incorporate this physically grounded VLM in an interactive framework with a large language model-based robotic planner, and show improved planning performance on tasks that require reasoning about physical object concepts, compared to baselines that do not leverage physically grounded VLMs. We additionally illustrate the benefits of our physically grounded VLM on a real robot, where it improves task success rates. We release our dataset and provide further details and visualizations of our results at https://iliad.stanford.edu/pg-vlm/.
1307.7790
Kester Quist-Aphetsi
Quist-Aphetsi Kester
Using SOA with Web Services for effective Integration of Hospital Information Systems via an Enterprise Service Bus
6 pages. International Journal of Research in Engineering & Advanced Technology (IJREAT), 2013. arXiv admin note: text overlap with arXiv:1204.0179 by other authors without attribution
International Journal of Research in Engineering & Advanced Technology (IJREAT).pp: 1-6.1.2.(2013)
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-sa/3.0/
Hospitals are distributed across geographical areas, and it is important for all hospitals to share information as well as integrate their systems for effective research and health delivery. Health personnel and institutions in need of information from hospitals in particular geographical areas can then easily conduct research on patients, treatments, disease outbreaks, and the effects of drugs. This research work is aimed at integrating the database systems of hospitals across geographical areas via a service bus. A centralized service bus was used to facilitate interoperability of applications across platforms and enhance communication within the hospital infrastructure, as well as to create an enabling environment for new layers of abstraction to be added without modification of the entire system. The concept of Service-Oriented Architecture with web services was used as a rapid integration solution to the challenges faced when integrating multiple incompatible applications.
[ { "created": "Tue, 30 Jul 2013 02:46:25 GMT", "version": "v1" } ]
2013-07-31
[ [ "Kester", "Quist-Aphetsi", "" ] ]
Hospitals are distributed across geographical areas, and it is important for all hospitals to share information as well as integrate their systems for effective research and health delivery. Health personnel and institutions in need of information from hospitals in particular geographical areas can then easily conduct research on patients, treatments, disease outbreaks, and the effects of drugs. This research work is aimed at integrating the database systems of hospitals across geographical areas via a service bus. A centralized service bus was used to facilitate interoperability of applications across platforms and enhance communication within the hospital infrastructure, as well as to create an enabling environment for new layers of abstraction to be added without modification of the entire system. The concept of Service-Oriented Architecture with web services was used as a rapid integration solution to the challenges faced when integrating multiple incompatible applications.
2202.11295
Jingxin Zhang
Jingxin Zhang, Donghua Zhou, Maoyin Chen, Xia Hong
Continual learning-based probabilistic slow feature analysis for multimode dynamic process monitoring
This paper has been submitted to IEEE Transactions on Automation Science and Engineering for potential publication
null
null
null
cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
In this paper, a novel multimode dynamic process monitoring approach is proposed by extending elastic weight consolidation (EWC) to probabilistic slow feature analysis (PSFA) in order to extract multimode slow features for online monitoring. EWC was originally introduced in the setting of machine learning of sequential multi-tasks with the aim of avoiding the catastrophic forgetting issue, which equally poses a major challenge in multimode dynamic process monitoring. When a new mode arrives, a set of data should be collected so that this mode can be identified by PSFA and prior knowledge. Then, a regularization term is introduced to prevent new data from significantly interfering with the learned knowledge, where the parameter importance measures are estimated. The proposed method is denoted as PSFA-EWC, which is updated continually and capable of achieving excellent performance for successive modes. Different from traditional multimode monitoring algorithms, PSFA-EWC furnishes backward and forward transfer ability. The significant features of previous modes are retained while consolidating new information, which may contribute to learning new relevant modes. Compared with several known methods, the effectiveness of the proposed method is demonstrated via a continuous stirred tank heater and a practical coal pulverizing system.
[ { "created": "Wed, 23 Feb 2022 03:57:59 GMT", "version": "v1" }, { "created": "Thu, 28 Apr 2022 14:44:55 GMT", "version": "v2" } ]
2022-04-29
[ [ "Zhang", "Jingxin", "" ], [ "Zhou", "Donghua", "" ], [ "Chen", "Maoyin", "" ], [ "Hong", "Xia", "" ] ]
In this paper, a novel multimode dynamic process monitoring approach is proposed by extending elastic weight consolidation (EWC) to probabilistic slow feature analysis (PSFA) in order to extract multimode slow features for online monitoring. EWC was originally introduced in the setting of machine learning of sequential multi-tasks with the aim of avoiding the catastrophic forgetting issue, which equally poses a major challenge in multimode dynamic process monitoring. When a new mode arrives, a set of data should be collected so that this mode can be identified by PSFA and prior knowledge. Then, a regularization term is introduced to prevent new data from significantly interfering with the learned knowledge, where the parameter importance measures are estimated. The proposed method is denoted as PSFA-EWC, which is updated continually and capable of achieving excellent performance for successive modes. Different from traditional multimode monitoring algorithms, PSFA-EWC furnishes backward and forward transfer ability. The significant features of previous modes are retained while consolidating new information, which may contribute to learning new relevant modes. Compared with several known methods, the effectiveness of the proposed method is demonstrated via a continuous stirred tank heater and a practical coal pulverizing system.
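For readers unfamiliar with EWC, the regularization term the abstract refers to penalizes drift of parameters away from the values consolidated on previous modes, weighted by estimated parameter importance (e.g. a Fisher-information estimate). A minimal sketch of that standard penalty, not the paper's PSFA-specific formulation:

    import numpy as np

    def ewc_penalty(theta, theta_star, importance, lam=1.0):
        # theta:      current parameters (flattened)
        # theta_star: parameters consolidated from previous modes
        # importance: per-parameter importance estimates (e.g. Fisher)
        # The total loss would be: new-mode loss + this penalty.
        return 0.5 * lam * np.sum(importance * (theta - theta_star) ** 2)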
1212.0892
Vasiliy Tereshkov
Vasiliy M. Tereshkov
An Intuitive Approach to Inertial Sensor Bias Estimation
6 pages, 7 figures
null
null
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simple approach to gyro and accelerometer bias estimation is proposed. It does not involve Kalman filtering or similar formal techniques. Instead, it is based on physical intuition and exploits a duality between gimbaled and strapdown inertial systems. The estimation problem is decoupled into two separate stages. At the first stage, inertial system attitude errors are corrected by means of a feedback from an external aid. In the presence of uncompensated biases, the steady-state feedback rebalances those biases and can be used to estimate them. At the second stage, the desired bias estimates are expressed in a closed form in terms of the feedback signal. The estimator has only three tunable parameters and is easy to implement and use. The tests proved the feasibility of the proposed approach for the estimation of low-cost MEMS inertial sensor biases on a moving land vehicle.
[ { "created": "Tue, 4 Dec 2012 22:11:10 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2013 09:12:46 GMT", "version": "v2" }, { "created": "Mon, 1 Jul 2013 13:04:41 GMT", "version": "v3" } ]
2013-07-02
[ [ "Tereshkov", "Vasiliy M.", "" ] ]
A simple approach to gyro and accelerometer bias estimation is proposed. It does not involve Kalman filtering or similar formal techniques. Instead, it is based on physical intuition and exploits a duality between gimbaled and strapdown inertial systems. The estimation problem is decoupled into two separate stages. At the first stage, inertial system attitude errors are corrected by means of a feedback from an external aid. In the presence of uncompensated biases, the steady-state feedback rebalances those biases and can be used to estimate them. At the second stage, the desired bias estimates are expressed in a closed form in terms of the feedback signal. The estimator has only three tunable parameters and is easy to implement and use. The tests proved the feasibility of the proposed approach for the estimation of low-cost MEMS inertial sensor biases on a moving land vehicle.
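A toy one-dimensional rendering of the steady-state idea (our own illustration, not the paper's two-stage estimator): with a constant unmodelled bias b and proportional feedback u = K * error from an external aid, the error settles at b/K, so the feedback signal itself converges to the bias:

    # Toy 1-D simulation: the steady-state feedback rebalances the bias.
    b = 0.01              # true sensor bias (unknown to the estimator)
    K = 0.5               # feedback gain
    dt, err, u = 0.01, 0.0, 0.0
    for _ in range(5000):
        err += (b - u) * dt   # error grows with bias, shrinks with feedback
        u = K * err           # correction fed back from the external aid
    print(f"steady-state feedback = {u:.4f}  (true bias = {b})")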
1903.02508
Oded Lachish Dr
Nikola K. Blanchard and Eldar Fischer and Oded Lachish and Felix Reidl
Longest paths in 2-edge-connected cubic graphs
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove almost tight bounds on the length of paths in $2$-edge-connected cubic graphs. Concretely, we show that (i) every $2$-edge-connected cubic graph of size $n$ has a path of length $\Omega\left(\frac{\log^2{n}}{\log{\log{n}}}\right)$, and (ii) there exists a $2$-edge-connected cubic graph such that every path in the graph has length $O(\log^2{n})$.
[ { "created": "Wed, 6 Mar 2019 17:32:56 GMT", "version": "v1" } ]
2019-03-07
[ [ "Blanchard", "Nikola K.", "" ], [ "Fischer", "Eldar", "" ], [ "Lachish", "Oded", "" ], [ "Reidl", "Felix", "" ] ]
We prove almost tight bounds on the length of paths in $2$-edge-connected cubic graphs. Concretely, we show that (i) every $2$-edge-connected cubic graph of size $n$ has a path of length $\Omega\left(\frac{\log^2{n}}{\log{\log{n}}}\right)$, and (ii) there exists a $2$-edge-connected cubic graph such that every path in the graph has length $O(\log^2{n})$.
1512.04150
Bolei Zhou
Bolei Zhou and Aditya Khosla and Agata Lapedriza and Aude Oliva and Antonio Torralba
Learning Deep Features for Discriminative Localization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them.
[ { "created": "Mon, 14 Dec 2015 01:32:33 GMT", "version": "v1" } ]
2015-12-15
[ [ "Zhou", "Bolei", "" ], [ "Khosla", "Aditya", "" ], [ "Lapedriza", "Agata", "" ], [ "Oliva", "Aude", "" ], [ "Torralba", "Antonio", "" ] ]
In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them.
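The localization mechanism sketched in this abstract is the class activation mapping (CAM) computation: weight the pre-pooling feature maps by the classification weights of a chosen class. A minimal numpy rendering of that core step (variable names are ours):

    import numpy as np

    def class_activation_map(features, fc_weights, class_idx):
        # features:   (C, H, W) conv feature maps before global average pooling
        # fc_weights: (num_classes, C) weights of the final linear layer
        # The CAM is the per-class weighted sum of feature maps; upsampling
        # it to image size highlights the discriminative regions.
        w = fc_weights[class_idx]                  # (C,)
        return np.tensordot(w, features, axes=1)   # (H, W)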
2306.02413
Sam Powers
Sam Powers, Abhinav Gupta, Chris Paxton
Evaluating Continual Learning on a Home Robot
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robots in home environments need to be able to learn new skills continuously as data becomes available, becoming ever more capable over time while using as little real-world data as possible. However, traditional robot learning approaches typically assume large amounts of iid data, which is inconsistent with this goal. In contrast, continual learning methods like CLEAR and SANE allow autonomous agents to learn from a stream of non-iid samples; they, however, have not previously been demonstrated on real robotics platforms. In this work, we show how continual learning methods can be adapted for use on a real, low-cost home robot, and in particular look at the case where we have extremely small numbers of examples, in a task-id-free setting. Specifically, we propose SANER, a method for continuously learning a library of skills, and ABIP (Attention-Based Interaction Policies) as the backbone to support it. We learn four sequential kitchen tasks on a low-cost home robot, using only a handful of demonstrations per task.
[ { "created": "Sun, 4 Jun 2023 17:14:49 GMT", "version": "v1" } ]
2023-06-06
[ [ "Powers", "Sam", "" ], [ "Gupta", "Abhinav", "" ], [ "Paxton", "Chris", "" ] ]
Robots in home environments need to be able to learn new skills continuously as data becomes available, becoming ever more capable over time while using as little real-world data as possible. However, traditional robot learning approaches typically assume large amounts of iid data, which is inconsistent with this goal. In contrast, continual learning methods like CLEAR and SANE allow autonomous agents to learn from a stream of non-iid samples; they, however, have not previously been demonstrated on real robotics platforms. In this work, we show how continual learning methods can be adapted for use on a real, low-cost home robot, and in particular look at the case where we have extremely small numbers of examples, in a task-id-free setting. Specifically, we propose SANER, a method for continuously learning a library of skills, and ABIP (Attention-Based Interaction Policies) as the backbone to support it. We learn four sequential kitchen tasks on a low-cost home robot, using only a handful of demonstrations per task.
2201.09574
Jian-Wei Liu
Ze-yu Liu, Jian-wei Liu, Xin Zuo, Ming-fei Hu
Multi-Scale Iterative Refinement Network for RGB-D Salient Object Detection
40 pages
Engineering Applications of Artificial Intelligence(2021)
10.1016/j.engappai.2021.104473
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Extensive research has exploited RGB-D information in salient object detection. However, salient visual cues appear at various scales and resolutions of RGB images due to semantic gaps at different feature levels. Meanwhile, similar salient patterns are available in cross-modal depth images as well as in their multi-scale versions. Cross-modal fusion and multi-scale refinement remain open problems in the RGB-D salient object detection task. In this paper, we begin by introducing a top-down and bottom-up iterative refinement architecture to leverage multi-scale features, and then devise an attention-based fusion module (ABF) to address cross-modal correlation. We conduct extensive experiments on seven public datasets. The experimental results show the effectiveness of our devised method.
[ { "created": "Mon, 24 Jan 2022 10:33:00 GMT", "version": "v1" } ]
2022-01-25
[ [ "Liu", "Ze-yu", "" ], [ "Liu", "Jian-wei", "" ], [ "Zuo", "Xin", "" ], [ "Hu", "Ming-fei", "" ] ]
Extensive research has exploited RGB-D information in salient object detection. However, salient visual cues appear at various scales and resolutions of RGB images due to semantic gaps at different feature levels. Meanwhile, similar salient patterns are available in cross-modal depth images as well as in their multi-scale versions. Cross-modal fusion and multi-scale refinement remain open problems in the RGB-D salient object detection task. In this paper, we begin by introducing a top-down and bottom-up iterative refinement architecture to leverage multi-scale features, and then devise an attention-based fusion module (ABF) to address cross-modal correlation. We conduct extensive experiments on seven public datasets. The experimental results show the effectiveness of our devised method.
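The abstract does not spell out the ABF module's internals, so the following is only a hedged illustration of what channel-wise attention-based cross-modal fusion can look like, not the paper's design:

    import numpy as np

    def attention_fuse(rgb, depth):
        # rgb, depth: (C, H, W) feature maps from the two modalities.
        # A per-channel gate computed from global descriptors decides how
        # much each modality contributes on each channel.
        a = rgb.mean(axis=(1, 2))                  # (C,) RGB descriptor
        b = depth.mean(axis=(1, 2))                # (C,) depth descriptor
        gate = 1.0 / (1.0 + np.exp(b - a))         # sigmoid(a - b), in (0, 1)
        return gate[:, None, None] * rgb + (1 - gate)[:, None, None] * depth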
2212.12070
Miquel Ferriol-Galm\'es
Miquel Ferriol-Galm\'es, Jordi Paillisse, Jos\'e Su\'arez-Varela, Krzysztof Rusek, Shihan Xiao, Xiang Shi, Xiangle Cheng, Pere Barlet-Ros, Albert Cabellos-Aparicio
RouteNet-Fermi: Network Modeling with Graph Neural Networks
This paper has been accepted for publication at IEEE/ACM Transactions on Networking 2023 (DOI: 10.1109/TNET.2023.3269983). \copyright 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses
null
10.1109/TNET.2023.3269983
null
cs.NI cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Network models are an essential building block of modern networks. For example, they are widely used in network planning and optimization. However, as networks increase in scale and complexity, some models present limitations, such as the assumption of Markovian traffic in queuing theory models, or the high computational cost of network simulators. Recent advances in machine learning, such as Graph Neural Networks (GNN), are enabling a new generation of network models that are data-driven and can learn complex non-linear behaviors. In this paper, we present RouteNet-Fermi, a custom GNN model that shares the same goals as queuing theory, while being considerably more accurate in the presence of realistic traffic models. The proposed model accurately predicts the delay, jitter, and packet loss of a network. We have tested RouteNet-Fermi in networks of increasing size (up to 300 nodes), including samples with mixed traffic profiles -- e.g., with complex non-Markovian models -- and arbitrary routing and queue scheduling configurations. Our experimental results show that RouteNet-Fermi achieves similar accuracy as computationally-expensive packet-level simulators and scales accurately to larger networks. Our model produces delay estimates with a mean relative error of 6.24% when applied to a test dataset of 1,000 samples, including network topologies one order of magnitude larger than those seen during training. Finally, we have also evaluated RouteNet-Fermi with measurements from a physical testbed and packet traces from a real-life network.
[ { "created": "Thu, 22 Dec 2022 23:02:40 GMT", "version": "v1" }, { "created": "Tue, 19 Sep 2023 16:47:16 GMT", "version": "v2" }, { "created": "Wed, 20 Sep 2023 07:42:10 GMT", "version": "v3" } ]
2023-09-21
[ [ "Ferriol-Galmés", "Miquel", "" ], [ "Paillisse", "Jordi", "" ], [ "Suárez-Varela", "José", "" ], [ "Rusek", "Krzysztof", "" ], [ "Xiao", "Shihan", "" ], [ "Shi", "Xiang", "" ], [ "Cheng", "Xiangle", "" ], [ "Barlet-Ros", "Pere", "" ], [ "Cabellos-Aparicio", "Albert", "" ] ]
Network models are an essential building block of modern networks. For example, they are widely used in network planning and optimization. However, as networks increase in scale and complexity, some models present limitations, such as the assumption of Markovian traffic in queuing theory models, or the high computational cost of network simulators. Recent advances in machine learning, such as Graph Neural Networks (GNN), are enabling a new generation of network models that are data-driven and can learn complex non-linear behaviors. In this paper, we present RouteNet-Fermi, a custom GNN model that shares the same goals as queuing theory, while being considerably more accurate in the presence of realistic traffic models. The proposed model accurately predicts the delay, jitter, and packet loss of a network. We have tested RouteNet-Fermi in networks of increasing size (up to 300 nodes), including samples with mixed traffic profiles -- e.g., with complex non-Markovian models -- and arbitrary routing and queue scheduling configurations. Our experimental results show that RouteNet-Fermi achieves similar accuracy as computationally-expensive packet-level simulators and scales accurately to larger networks. Our model produces delay estimates with a mean relative error of 6.24% when applied to a test dataset of 1,000 samples, including network topologies one order of magnitude larger than those seen during training. Finally, we have also evaluated RouteNet-Fermi with measurements from a physical testbed and packet traces from a real-life network.
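For context, the machinery underneath such models is message passing over a graph; one textbook step (a generic sketch, not RouteNet-Fermi's specific link/path/queue message scheme) looks like:

    import numpy as np

    def message_passing_step(h, edges, W_msg, W_upd):
        # h:     (n, d) node states; edges: list of directed (u, v) pairs
        # W_msg, W_upd: (d, d) learned matrices (placeholders in this sketch)
        agg = np.zeros_like(h)
        for u, v in edges:
            agg[v] += h[u] @ W_msg        # aggregate neighbour messages
        return np.tanh(h @ W_upd + agg)   # update every node state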
2201.03115
Rohitash Chandra
Rohitash Chandra, Venkatesh Kulkarni
Semantic and sentiment analysis of selected Bhagavad Gita translations using BERT-based language framework
null
IEEE Access, 2022
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
It is well known that translations of songs and poems not only break rhythm and rhyming patterns, but can also result in loss of semantic information. The Bhagavad Gita is an ancient Hindu philosophical text originally written in Sanskrit that features a conversation between Lord Krishna and Arjuna prior to the Mahabharata war. The Bhagavad Gita is also one of the key sacred texts in Hinduism and is known as the forefront of the Vedic corpus of Hinduism. In the last two centuries, there has been a lot of interest in Hindu philosophy from western scholars; hence, the Bhagavad Gita has been translated into a number of languages. However, there is not much work that validates the quality of the English translations. Recent progress in language models powered by deep learning has enabled not only translation but also a better understanding of language and texts through semantic and sentiment analysis, and our work is motivated by this progress. In this paper, we present a framework that compares selected translations (from Sanskrit to English) of the Bhagavad Gita using semantic and sentiment analyses. We use a hand-labelled sentiment dataset for tuning a state-of-the-art deep learning-based language model known as bidirectional encoder representations from transformers (BERT). We provide sentiment and semantic analysis for selected chapters and verses across translations. Our results show that although the style and vocabulary in the respective translations vary widely, the sentiment analysis and semantic similarity show that the messages conveyed are mostly similar.
[ { "created": "Sun, 9 Jan 2022 23:59:11 GMT", "version": "v1" }, { "created": "Tue, 15 Feb 2022 10:22:32 GMT", "version": "v2" } ]
2022-02-16
[ [ "Chandra", "Rohitash", "" ], [ "Kulkarni", "Venkatesh", "" ] ]
It is well known that translations of songs and poems not only break rhythm and rhyming patterns, but can also result in loss of semantic information. The Bhagavad Gita is an ancient Hindu philosophical text originally written in Sanskrit that features a conversation between Lord Krishna and Arjuna prior to the Mahabharata war. The Bhagavad Gita is also one of the key sacred texts in Hinduism and is known as the forefront of the Vedic corpus of Hinduism. In the last two centuries, there has been a lot of interest in Hindu philosophy from western scholars; hence, the Bhagavad Gita has been translated into a number of languages. However, there is not much work that validates the quality of the English translations. Recent progress in language models powered by deep learning has enabled not only translation but also a better understanding of language and texts through semantic and sentiment analysis, and our work is motivated by this progress. In this paper, we present a framework that compares selected translations (from Sanskrit to English) of the Bhagavad Gita using semantic and sentiment analyses. We use a hand-labelled sentiment dataset for tuning a state-of-the-art deep learning-based language model known as bidirectional encoder representations from transformers (BERT). We provide sentiment and semantic analysis for selected chapters and verses across translations. Our results show that although the style and vocabulary in the respective translations vary widely, the sentiment analysis and semantic similarity show that the messages conveyed are mostly similar.
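The semantic-similarity side of such a framework typically reduces to comparing encoder embeddings of corresponding verses; a minimal helper (assuming the BERT embeddings are already computed) is:

    import numpy as np

    def cosine_similarity(u, v):
        # Semantic similarity of two verse embeddings: cosine of the
        # angle between the vectors, with 1.0 meaning identical direction.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))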
2009.10569
Ozan Unal
Ozan Unal, Luc Van Gool, Dengxin Dai
Improving Point Cloud Semantic Segmentation by Learning 3D Object Detection
Accepted at IEEE Winter Conference on Applications of Computer Vision 2021 (WACV'21)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point cloud semantic segmentation plays an essential role in autonomous driving, providing vital information about drivable surfaces and nearby objects that can aid higher-level tasks such as path planning and collision avoidance. While current 3D semantic segmentation networks focus on convolutional architectures that perform well for well-represented classes, they show a significant drop in performance for underrepresented classes that share similar geometric features. We propose a novel Detection Aware 3D Semantic Segmentation (DASS) framework that explicitly leverages localization features from an auxiliary 3D object detection task. By utilizing multitask training, the shared feature representation of the network is guided to be aware of per-class detection features that aid in differentiating geometrically similar classes. We additionally provide a pipeline that uses DASS to generate high-recall proposals for existing 2-stage detectors and demonstrate that the added supervisory signal can be used to improve 3D orientation estimation capabilities. Extensive experiments on both the SemanticKITTI and KITTI object datasets show that DASS can improve 3D semantic segmentation results of geometrically similar classes by up to 37.8% IoU in image FOV while maintaining high-precision bird's-eye view (BEV) detection results.
[ { "created": "Tue, 22 Sep 2020 14:17:40 GMT", "version": "v1" }, { "created": "Wed, 23 Sep 2020 08:18:00 GMT", "version": "v2" }, { "created": "Sat, 7 Nov 2020 15:58:19 GMT", "version": "v3" } ]
2020-11-10
[ [ "Unal", "Ozan", "" ], [ "Van Gool", "Luc", "" ], [ "Dai", "Dengxin", "" ] ]
Point cloud semantic segmentation plays an essential role in autonomous driving, providing vital information about drivable surfaces and nearby objects that can aid higher-level tasks such as path planning and collision avoidance. While current 3D semantic segmentation networks focus on convolutional architectures that perform well for well-represented classes, they show a significant drop in performance for underrepresented classes that share similar geometric features. We propose a novel Detection Aware 3D Semantic Segmentation (DASS) framework that explicitly leverages localization features from an auxiliary 3D object detection task. By utilizing multitask training, the shared feature representation of the network is guided to be aware of per-class detection features that aid in differentiating geometrically similar classes. We additionally provide a pipeline that uses DASS to generate high-recall proposals for existing 2-stage detectors and demonstrate that the added supervisory signal can be used to improve 3D orientation estimation capabilities. Extensive experiments on both the SemanticKITTI and KITTI object datasets show that DASS can improve 3D semantic segmentation results of geometrically similar classes by up to 37.8% IoU in image FOV while maintaining high-precision bird's-eye view (BEV) detection results.
1903.10152
Xiaowei Hu
Xiaowei Hu, Chi-Wing Fu, Lei Zhu, Tianyu Wang, Pheng-Ann Heng
SAC-Net: Spatial Attenuation Context for Salient Object Detection
null
null
10.1109/TCSVT.2020.2995220
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new deep neural network design for salient object detection by maximizing the integration of local and global image context within, around, and beyond the salient objects. Our key idea is to adaptively propagate and aggregate the image context features with variable attenuation over the entire feature maps. To achieve this, we design the spatial attenuation context (SAC) module to recurrently translate and aggregate the context features independently with different attenuation factors and then to attentively learn the weights to adaptively integrate the aggregated context features. By further embedding the module to process individual layers in a deep network, namely SAC-Net, we can train the network end-to-end and optimize the context features for detecting salient objects. Compared with 29 state-of-the-art methods, experimental results show that our method performs favorably over all the others on six common benchmark datasets, both quantitatively and visually.
[ { "created": "Mon, 25 Mar 2019 06:56:15 GMT", "version": "v1" }, { "created": "Tue, 9 Jul 2019 01:34:49 GMT", "version": "v2" }, { "created": "Tue, 12 May 2020 12:45:17 GMT", "version": "v3" } ]
2020-05-21
[ [ "Hu", "Xiaowei", "" ], [ "Fu", "Chi-Wing", "" ], [ "Zhu", "Lei", "" ], [ "Wang", "Tianyu", "" ], [ "Heng", "Pheng-Ann", "" ] ]
This paper presents a new deep neural network design for salient object detection by maximizing the integration of local and global image context within, around, and beyond the salient objects. Our key idea is to adaptively propagate and aggregate the image context features with variable attenuation over the entire feature maps. To achieve this, we design the spatial attenuation context (SAC) module to recurrently translate and aggregate the context features independently with different attenuation factors and then to attentively learn the weights to adaptively integrate the aggregated context features. By further embedding the module to process individual layers in a deep network, namely SAC-Net, we can train the network end-to-end and optimize the context features for detecting salient objects. Compared with 29 state-of-the-art methods, experimental results show that our method performs favorably over all the others on six common benchmark datasets, both quantitatively and visually.
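The "variable attenuation" idea can be pictured in one dimension (our own toy rendering, not the SAC module itself): each position aggregates context from all others, decayed geometrically with distance by an attenuation factor:

    import numpy as np

    def attenuated_context_1d(f, alpha):
        # f: (n,) features along one spatial axis; alpha in (0, 1).
        # Position i receives sum_j alpha**|i-j| * f[j]; smaller alpha
        # means faster attenuation, i.e. more local context.
        n = len(f)
        idx = np.arange(n)
        W = alpha ** np.abs(idx[:, None] - idx[None, :])
        return W @ f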
2103.07854
Jianhua Sun
Jianhua Sun, Yuxuan Li, Hao-Shu Fang, Cewu Lu
Three Steps to Multimodal Trajectory Prediction: Modality Clustering, Classification and Synthesis
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal prediction results are essential for the trajectory prediction task, as there is no single correct answer for the future. Previous frameworks can be divided into three categories: regression, generation, and classification frameworks. However, these frameworks have weaknesses in different aspects, so they cannot model the multimodal prediction task comprehensively. In this paper, we present a novel insight along with a brand-new prediction framework by formulating multimodal prediction into three steps: modality clustering, classification and synthesis, and address the shortcomings of earlier frameworks. Exhaustive experiments on popular benchmarks have demonstrated that our proposed method surpasses state-of-the-art works even without introducing social and map information. Specifically, we achieve 19.2% and 20.8% improvement on ADE and FDE respectively on the ETH/UCY dataset. Our code will be made publicly available.
[ { "created": "Sun, 14 Mar 2021 06:21:03 GMT", "version": "v1" }, { "created": "Mon, 22 Mar 2021 15:22:04 GMT", "version": "v2" } ]
2021-03-23
[ [ "Sun", "Jianhua", "" ], [ "Li", "Yuxuan", "" ], [ "Fang", "Hao-Shu", "" ], [ "Lu", "Cewu", "" ] ]
Multimodal prediction results are essential for the trajectory prediction task, as there is no single correct answer for the future. Previous frameworks can be divided into three categories: regression, generation, and classification frameworks. However, these frameworks have weaknesses in different aspects, so they cannot model the multimodal prediction task comprehensively. In this paper, we present a novel insight along with a brand-new prediction framework by formulating multimodal prediction into three steps: modality clustering, classification and synthesis, and address the shortcomings of earlier frameworks. Exhaustive experiments on popular benchmarks have demonstrated that our proposed method surpasses state-of-the-art works even without introducing social and map information. Specifically, we achieve 19.2% and 20.8% improvement on ADE and FDE respectively on the ETH/UCY dataset. Our code will be made publicly available.
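The reported metrics are the standard ones: ADE averages the displacement between predicted and ground-truth positions over all timesteps, and FDE takes the displacement at the final timestep. A minimal helper for a single trajectory:

    import numpy as np

    def ade_fde(pred, gt):
        # pred, gt: (T, 2) predicted and ground-truth trajectories.
        d = np.linalg.norm(pred - gt, axis=1)   # per-timestep displacement
        return d.mean(), d[-1]                  # (ADE, FDE)

In the multimodal setting these are usually reported as the minimum over the predicted modes.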
2202.07137
Wanming Hao
Wanming Hao, Fuhui Zhou, Ming Zeng, Octavia A. Dobre, Naofal Al-Dhahir
Ultra Wide Band THz IRS Communications: Applications, Challenges, Key Techniques, and Research Opportunities
null
IEEE Network,2022
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Terahertz (THz) communication is a promising technology for future wireless networks due to its ultra-wide bandwidth. However, THz signals suffer from severe attenuation and poor diffraction capability, making them vulnerable to blockage by obstacles. To compensate for these two shortcomings and improve the system performance, an intelligent reflecting surface (IRS) can be exploited to change the propagation direction and enhance the signal strength. In this article, we investigate this promising ultra wide band (UWB) THz IRS communication paradigm. We start by motivating our research and describing several potential application scenarios. Then, we identify major challenges faced by UWB THz IRS communications. To overcome these challenges, several effective key techniques are developed, i.e., the time delayer-based sparse radio frequency antenna structure, delay hybrid precoding and IRS deployment. Simulation results are also presented to compare the system performance for these proposed techniques, thus demonstrating their effectiveness. Finally, we highlight several open issues and research opportunities for UWB THz IRS communications.
[ { "created": "Tue, 15 Feb 2022 02:15:50 GMT", "version": "v1" } ]
2022-02-16
[ [ "Hao", "Wanming", "" ], [ "Zhou", "Fuhui", "" ], [ "Zeng", "Ming", "" ], [ "Dobre", "Octavia A.", "" ], [ "Al-Dhahir", "Naofal", "" ] ]
Terahertz (THz) communication is a promising technology for future wireless networks due to its ultra-wide bandwidth. However, THz signals suffer from severe attenuation and poor diffraction capability, making them vulnerable to blockage by obstacles. To compensate for these two shortcomings and improve the system performance, an intelligent reflecting surface (IRS) can be exploited to change the propagation direction and enhance the signal strength. In this article, we investigate this promising ultra wide band (UWB) THz IRS communication paradigm. We start by motivating our research and describing several potential application scenarios. Then, we identify major challenges faced by UWB THz IRS communications. To overcome these challenges, several effective key techniques are developed, i.e., the time delayer-based sparse radio frequency antenna structure, delay hybrid precoding and IRS deployment. Simulation results are also presented to compare the system performance for these proposed techniques, thus demonstrating their effectiveness. Finally, we highlight several open issues and research opportunities for UWB THz IRS communications.
1909.00384
Hyunjung Kwak
Gloria Hyunjung Kwak and Pan Hui
DeepHealth: Review and challenges of artificial intelligence in health informatics
42 pages, 19 figures, under review
null
null
null
cs.LG cs.CV eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence has opened up a whole new research era. As more data and better computational power become available, the approach is being implemented in various fields. The demand for it in health informatics is also increasing, and we can expect to see the potential benefits of its applications in healthcare. It can help clinicians diagnose disease, identify drug effects for each patient, understand the relationship between genotypes and phenotypes, explore new phenotypes or treatment recommendations, and predict infectious disease outbreaks with high accuracy. In contrast to traditional models, recent artificial intelligence approaches do not require domain-specific data pre-processing, and it is expected that they will ultimately change life in the future. Despite their notable advantages, there are some key challenges regarding data (high dimensionality, heterogeneity, time dependency, sparsity, irregularity, lack of labels, bias) and models (reliability, interpretability, feasibility, security, scalability) for practical use. This article presents a comprehensive review of research applying artificial intelligence in health informatics, focusing on the last seven years in the fields of medical imaging, electronic health records, genomics, sensing, and online communication health, as well as challenges and promising directions for future research. We highlight research on ongoing popular approaches and identify several challenges in building models.
[ { "created": "Sun, 1 Sep 2019 11:54:38 GMT", "version": "v1" }, { "created": "Sat, 8 Aug 2020 05:54:41 GMT", "version": "v2" } ]
2020-08-11
[ [ "Kwak", "Gloria Hyunjung", "" ], [ "Hui", "Pan", "" ] ]
Artificial intelligence has opened up a whole new research era. As more data and better computational power become available, the approach is being implemented in various fields. The demand for it in health informatics is also increasing, and we can expect to see the potential benefits of its applications in healthcare. It can help clinicians diagnose disease, identify drug effects for each patient, understand the relationship between genotypes and phenotypes, explore new phenotypes or treatment recommendations, and predict infectious disease outbreaks with high accuracy. In contrast to traditional models, recent artificial intelligence approaches do not require domain-specific data pre-processing, and it is expected that they will ultimately change life in the future. Despite their notable advantages, there are some key challenges regarding data (high dimensionality, heterogeneity, time dependency, sparsity, irregularity, lack of labels, bias) and models (reliability, interpretability, feasibility, security, scalability) for practical use. This article presents a comprehensive review of research applying artificial intelligence in health informatics, focusing on the last seven years in the fields of medical imaging, electronic health records, genomics, sensing, and online communication health, as well as challenges and promising directions for future research. We highlight research on ongoing popular approaches and identify several challenges in building models.
1909.05363
Armins Stepanjans
Armins Stepanjans and Andr\'e Freitas
Identifying and Explaining Discriminative Attributes
EMNLP-IJCNLP 2019, source code available at https://github.com/ab-10/hawk
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying what is at the center of the meaning of a word and what discriminates it from other words is a fundamental natural language inference task. This paper describes an explicit word vector representation model (WVM) to support the identification of discriminative attributes. A core contribution of the paper is a quantitative and qualitative comparative analysis of different types of data sources and Knowledge Bases in the construction of explainable and explicit WVMs: (i) knowledge graphs built from dictionary definitions, (ii) entity-attribute-relationships graphs derived from images and (iii) commonsense knowledge graphs. Using a detailed quantitative and qualitative analysis, we demonstrate that these data sources have complementary semantic aspects, supporting the creation of explicit semantic vector spaces. The explicit vector spaces are evaluated using the task of discriminative attribute identification, showing comparable performance to the state-of-the-art systems in the task (F1-score = 0.69), while delivering full model transparency and explainability.
[ { "created": "Thu, 5 Sep 2019 01:13:41 GMT", "version": "v1" } ]
2019-09-13
[ [ "Stepanjans", "Armins", "" ], [ "Freitas", "André", "" ] ]
Identifying what is at the center of the meaning of a word and what discriminates it from other words is a fundamental natural language inference task. This paper describes an explicit word vector representation model (WVM) to support the identification of discriminative attributes. A core contribution of the paper is a quantitative and qualitative comparative analysis of different types of data sources and Knowledge Bases in the construction of explainable and explicit WVMs: (i) knowledge graphs built from dictionary definitions, (ii) entity-attribute-relationships graphs derived from images and (iii) commonsense knowledge graphs. Using a detailed quantitative and qualitative analysis, we demonstrate that these data sources have complementary semantic aspects, supporting the creation of explicit semantic vector spaces. The explicit vector spaces are evaluated using the task of discriminative attribute identification, showing comparable performance to the state-of-the-art systems in the task (F1-score = 0.69), while delivering full model transparency and explainability.
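For reference, the reported F1-score is the harmonic mean of precision and recall over the task's binary decisions:

    def f1_score(tp, fp, fn):
        # tp/fp/fn: true-positive, false-positive, false-negative counts.
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)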
1204.5952
Sergei Kozyrev
S. Albeverio, S.V. Kozyrev
Clustering by hypergraphs and dimensionality of cluster systems
15 pages
p-Adic Numbers, Ultrametric Analysis and Applications, 4 (2012) no. 3, 167--178
null
null
cs.DS q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present paper we discuss the clustering procedure in the case where instead of a single metric we have a family of metrics. In this case we can obtain a partially ordered graph of clusters which is not necessarily a tree. We discuss a structure of a hypergraph above this graph. We propose two definitions of dimension for hyperedges of this hypergraph and show that for the multidimensional p-adic case both dimensions are reduced to the number of p-adic parameters. We discuss the application of the hypergraph clustering procedure to the construction of phylogenetic graphs in biology. In this case the dimension of a hyperedge will describe the number of sources of genetic diversity.
[ { "created": "Thu, 26 Apr 2012 14:57:59 GMT", "version": "v1" } ]
2012-08-01
[ [ "Albeverio", "S.", "" ], [ "Kozyrev", "S. V.", "" ] ]
In the present paper we discuss the clustering procedure in the case where instead of a single metric we have a family of metrics. In this case we can obtain a partially ordered graph of clusters which is not necessarily a tree. We discuss a structure of a hypergraph above this graph. We propose two definitions of dimension for hyperedges of this hypergraph and show that for the multidimensional p-adic case both dimensions are reduced to the number of p-adic parameters. We discuss the application of the hypergraph clustering procedure to the construction of phylogenetic graphs in biology. In this case the dimension of a hyperedge will describe the number of sources of genetic diversity.
1608.05444
Alan Jeffrey
Connor G. Brewster and Alan Jeffrey
A Model of Navigation History
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Navigation has been a core component of the web since its inception: users and scripts can follow hyperlinks, and can go back or forwards through the navigation history. In this paper, we present a formal model aligned with the WHATWG specification of navigation history, and investigate its properties. The fundamental property of navigation history is that traversing the history by delta then by delta' should be the same as traversing by delta+delta'. In particular, traversing by +1 (forward) then by -1 (back) is the same as traversing by 0 (doing nothing). We show that the specification-aligned model does not satisfy this property, by exhibiting a series of counter-examples, which motivate four patches to the model. We present a series of experiments, showing that browsers are inconsistent in their implementation of navigation history, but that their behaviour is closer to the patched model than to the specification-aligned model. We propose patches to the specification to align it with the patched model.
[ { "created": "Thu, 18 Aug 2016 22:35:40 GMT", "version": "v1" } ]
2016-08-22
[ [ "Brewster", "Connor G.", "" ], [ "Jeffrey", "Alan", "" ] ]
Navigation has been a core component of the web since its inception: users and scripts can follow hyperlinks, and can go back or forwards through the navigation history. In this paper, we present a formal model aligned with the WHATWG specification of navigation history, and investigate its properties. The fundamental property of navigation history is that traversing the history by delta then by delta' should be the same as traversing by delta+delta'. In particular, traversing by +1 (forward) then by -1 (back) is the same as traversing by 0 (doing nothing). We show that the specification-aligned model does not satisfy this property, by exhibiting a series of counter-examples, which motivate four patches to the model. We present a series of experiments, showing that browsers are inconsistent in their implementation of navigation history, but that their behaviour is closer to the patched model than to the specification-aligned model. We propose patches to the specification to align it with the patched model.
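A toy rendering of such a model (our own minimal sketch, not the paper's formalism) makes the composition property concrete, and shows how clamping at the ends of the history, as real browsers do, already breaks it:

    class History:
        # A session history: a list of entries plus a cursor.
        def __init__(self, entries, pos=0):
            self.entries, self.pos = list(entries), pos

        def traverse_by(self, delta):
            # Clamp at the ends, as browsers do when asked to go too far.
            self.pos = max(0, min(len(self.entries) - 1, self.pos + delta))

    h = History(["a", "b", "c"], pos=2)
    h.traverse_by(+1)    # clamped: still at "c"
    h.traverse_by(-1)    # now at "b", so (+1 then -1) != traverse_by(0)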
2405.03971
Zhang Bozhen
Zhiwei Li, Bozhen Zhang, Lei Yang, Tianyu Shen, Nuo Xu, Ruosen Hao, Weiting Li, Tao Yan, Huaping Liu
Unified End-to-End V2X Cooperative Autonomous Driving
null
null
null
null
cs.CV cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
V2X cooperation, through the integration of sensor data from both vehicles and infrastructure, is considered a pivotal approach to advancing autonomous driving technology. Current research primarily focuses on enhancing perception accuracy, often overlooking the systematic improvement of accident prediction accuracy through end-to-end learning, leading to insufficient attention to the safety issues of autonomous driving. To address this challenge, this paper introduces the UniE2EV2X framework, a V2X-integrated end-to-end autonomous driving system that consolidates key driving modules within a unified network. The framework employs a deformable attention-based data fusion strategy, effectively facilitating cooperation between vehicles and infrastructure. The main advantages include: 1) significantly enhancing agents' perception and motion prediction capabilities, thereby improving the accuracy of accident predictions; 2) ensuring high reliability in the data fusion process; 3) superior end-to-end perception compared to modular approaches. Furthermore, we implement the UniE2EV2X framework on the challenging DeepAccident, a simulation dataset designed for V2X cooperative driving.
[ { "created": "Tue, 7 May 2024 03:01:40 GMT", "version": "v1" } ]
2024-05-08
[ [ "Li", "Zhiwei", "" ], [ "Zhang", "Bozhen", "" ], [ "Yang", "Lei", "" ], [ "Shen", "Tianyu", "" ], [ "Xu", "Nuo", "" ], [ "Hao", "Ruosen", "" ], [ "Li", "Weiting", "" ], [ "Yan", "Tao", "" ], [ "Liu", "Huaping", "" ] ]
V2X cooperation, through the integration of sensor data from both vehicles and infrastructure, is considered a pivotal approach to advancing autonomous driving technology. Current research primarily focuses on enhancing perception accuracy, often overlooking the systematic improvement of accident prediction accuracy through end-to-end learning, leading to insufficient attention to the safety issues of autonomous driving. To address this challenge, this paper introduces the UniE2EV2X framework, a V2X-integrated end-to-end autonomous driving system that consolidates key driving modules within a unified network. The framework employs a deformable attention-based data fusion strategy, effectively facilitating cooperation between vehicles and infrastructure. The main advantages include: 1) significantly enhancing agents' perception and motion prediction capabilities, thereby improving the accuracy of accident predictions; 2) ensuring high reliability in the data fusion process; 3) superior end-to-end perception compared to modular approaches. Furthermore, we implement the UniE2EV2X framework on the challenging DeepAccident, a simulation dataset designed for V2X cooperative driving.
2407.10026
Shubhransh Singhvi
Shubhransh Singhvi, Omer Sabary, Daniella Bar-Lev and Eitan Yaakobi
Conditional Entropies of k-Deletion/Insertion Channels
arXiv admin note: substantial text overlap with arXiv:2202.03024
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The channel output entropy of a transmitted sequence is the entropy of the possible channel outputs, and similarly the channel input entropy of a received sequence is the entropy of all possible transmitted sequences. The goal of this work is to study these entropy values for the k-deletion and k-insertion channels, in which exactly k symbols are deleted from, or inserted into, the transmitted sequence, respectively. If all possible sequences are transmitted with the same probability, then studying the input and output entropies is equivalent. For both the 1-deletion and 1-insertion channels, it is proved that among all sequences with a fixed number of runs, the input entropy is minimized for sequences with a skewed distribution of their run lengths and maximized for sequences with a balanced distribution of their run lengths. Among our results, we establish a conjecture by Atashpendar et al. which claims that for the 1-deletion channel, the input entropy is maximized by the alternating sequences over all binary sequences. This conjecture is also verified for the 2-deletion channel, where it is proved that constant sequences with a single run minimize the input entropy.
[ { "created": "Sat, 13 Jul 2024 23:04:56 GMT", "version": "v1" } ]
2024-07-16
[ [ "Singhvi", "Shubhransh", "" ], [ "Sabary", "Omer", "" ], [ "Bar-Lev", "Daniella", "" ], [ "Yaakobi", "Eitan", "" ] ]
The channel output entropy of a transmitted sequence is the entropy of the possible channel outputs, and similarly the channel input entropy of a received sequence is the entropy of all possible transmitted sequences. The goal of this work is to study these entropy values for the k-deletion and k-insertion channels, in which exactly k symbols are deleted from, or inserted into, the transmitted sequence, respectively. If all possible sequences are transmitted with the same probability, then studying the input and output entropies is equivalent. For both the 1-deletion and 1-insertion channels, it is proved that among all sequences with a fixed number of runs, the input entropy is minimized for sequences with a skewed distribution of their run lengths and maximized for sequences with a balanced distribution of their run lengths. Among our results, we establish a conjecture by Atashpendar et al. which claims that for the 1-deletion channel, the input entropy is maximized by the alternating sequences over all binary sequences. This conjecture is also verified for the 2-deletion channel, where it is proved that constant sequences with a single run minimize the input entropy.
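Under the simplest channel model, one deletion position chosen uniformly at random, the output distribution of the 1-deletion channel depends only on the run-length profile, since deleting any symbol of a run yields the same output. A small helper under that assumption (not necessarily the paper's exact channel convention):

    from itertools import groupby
    from math import log2

    def one_deletion_output_entropy(x: str) -> float:
        # Each run of length l produces the same output with prob. l/n,
        # so the output entropy is that of the run-length distribution.
        n = len(x)
        runs = [len(list(g)) for _, g in groupby(x)]
        return -sum((l / n) * log2(l / n) for l in runs)

    print(one_deletion_output_entropy("0101"))   # 2.0: balanced runs, larger
    print(one_deletion_output_entropy("0001"))   # ~0.81: skewed runs, smaller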
1412.0348
Arturs Backurs
Arturs Backurs, Piotr Indyk
Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)
STOC'15
null
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The edit distance (a.k.a. the Levenshtein distance) between two strings is defined as the minimum number of insertions, deletions or substitutions of symbols needed to transform one string into another. The problem of computing the edit distance between two strings is a classical computational task, with a well-known algorithm based on dynamic programming. Unfortunately, all known algorithms for this problem run in nearly quadratic time. In this paper we provide evidence that the near-quadratic running time bounds known for the problem of computing edit distance might be tight. Specifically, we show that, if the edit distance can be computed in time $O(n^{2-\delta})$ for some constant $\delta>0$, then the satisfiability of conjunctive normal form formulas with $N$ variables and $M$ clauses can be solved in time $M^{O(1)} 2^{(1-\epsilon)N}$ for a constant $\epsilon>0$. The latter result would violate the Strong Exponential Time Hypothesis, which postulates that such algorithms do not exist.
[ { "created": "Mon, 1 Dec 2014 04:57:06 GMT", "version": "v1" }, { "created": "Mon, 13 Apr 2015 21:13:21 GMT", "version": "v2" }, { "created": "Mon, 3 Apr 2017 17:11:08 GMT", "version": "v3" }, { "created": "Tue, 15 Aug 2017 18:01:17 GMT", "version": "v4" } ]
2017-08-17
[ [ "Backurs", "Arturs", "" ], [ "Indyk", "Piotr", "" ] ]
The edit distance (a.k.a. the Levenshtein distance) between two strings is defined as the minimum number of insertions, deletions or substitutions of symbols needed to transform one string into another. The problem of computing the edit distance between two strings is a classical computational task, with a well-known algorithm based on dynamic programming. Unfortunately, all known algorithms for this problem run in nearly quadratic time. In this paper we provide evidence that the near-quadratic running time bounds known for the problem of computing edit distance might be tight. Specifically, we show that, if the edit distance can be computed in time $O(n^{2-\delta})$ for some constant $\delta>0$, then the satisfiability of conjunctive normal form formulas with $N$ variables and $M$ clauses can be solved in time $M^{O(1)} 2^{(1-\epsilon)N}$ for a constant $\epsilon>0$. The latter result would violate the Strong Exponential Time Hypothesis, which postulates that such algorithms do not exist.
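The "well-known algorithm based on dynamic programming" mentioned here is the classical quadratic recurrence; a compact single-row implementation for reference:

    def edit_distance(a: str, b: str) -> int:
        # Classical O(|a|*|b|) dynamic program: dp[j] holds the distance
        # between the current prefix of a and b[:j].
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                         dp[j - 1] + 1,      # insert cb
                                         prev + (ca != cb))  # substitute
        return dp[len(b)]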
2103.02015
Nati Daniel
Nati Daniel, Ariel Larey, Eliel Aknin, Garrett A. Osswald, Julie M. Caldwell, Mark Rochman, Margaret H. Collins, Guang-Yu Yang, Nicoleta C. Arva, Kelley E. Capocelli, Marc E. Rothenberg, Yonatan Savir
PECNet: A Deep Multi-Label Segmentation Network for Eosinophilic Esophagitis Biopsy Diagnostics
null
null
null
null
cs.CV cs.LG eess.IV q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background. Eosinophilic esophagitis (EoE) is an allergic inflammatory condition of the esophagus associated with elevated numbers of eosinophils. Disease diagnosis and monitoring requires determining the concentration of eosinophils in esophageal biopsies, a time-consuming, tedious and somewhat subjective task currently performed by pathologists. Methods. Herein, we aimed to use machine learning to identify, quantitate and diagnose EoE. We labeled more than 100M pixels of 4345 images obtained by scanning whole slides of H&E-stained sections of esophageal biopsies derived from 23 EoE patients. We used this dataset to train a multi-label segmentation deep network. To validate the network, we examined a replication cohort of 1089 whole slide images from 419 patients derived from multiple institutions. Findings. PECNet segmented both intact and not-intact eosinophils with a mean intersection over union (mIoU) of 0.93. This segmentation was able to quantitate intact eosinophils with a mean absolute error of 0.611 eosinophils and classify EoE disease activity with an accuracy of 98.5%. Using whole slide images from the validation cohort, PECNet achieved an accuracy of 94.8%, sensitivity of 94.3%, and specificity of 95.14% in reporting EoE disease activity. Interpretation. We have developed a deep learning multi-label semantic segmentation network that successfully addresses two of the main challenges in EoE diagnostics and digital pathology, the need to detect several types of small features simultaneously and the ability to analyze whole slides efficiently. Our results pave the way for an automated diagnosis of EoE and can be utilized for other conditions with similar challenges.
[ { "created": "Tue, 2 Mar 2021 20:37:57 GMT", "version": "v1" } ]
2021-03-04
[ [ "Daniel", "Nati", "" ], [ "Larey", "Ariel", "" ], [ "Aknin", "Eliel", "" ], [ "Osswald", "Garrett A.", "" ], [ "Caldwell", "Julie M.", "" ], [ "Rochman", "Mark", "" ], [ "Collins", "Margaret H.", "" ], [ "Yang", "Guang-Yu", "" ], [ "Arva", "Nicoleta C.", "" ], [ "Capocelli", "Kelley E.", "" ], [ "Rothenberg", "Marc E.", "" ], [ "Savir", "Yonatan", "" ] ]
Background. Eosinophilic esophagitis (EoE) is an allergic inflammatory condition of the esophagus associated with elevated numbers of eosinophils. Disease diagnosis and monitoring requires determining the concentration of eosinophils in esophageal biopsies, a time-consuming, tedious and somewhat subjective task currently performed by pathologists. Methods. Herein, we aimed to use machine learning to identify, quantitate and diagnose EoE. We labeled more than 100M pixels of 4345 images obtained by scanning whole slides of H&E-stained sections of esophageal biopsies derived from 23 EoE patients. We used this dataset to train a multi-label segmentation deep network. To validate the network, we examined a replication cohort of 1089 whole slide images from 419 patients derived from multiple institutions. Findings. PECNet segmented both intact and not-intact eosinophils with a mean intersection over union (mIoU) of 0.93. This segmentation was able to quantitate intact eosinophils with a mean absolute error of 0.611 eosinophils and classify EoE disease activity with an accuracy of 98.5%. Using whole slide images from the validation cohort, PECNet achieved an accuracy of 94.8%, sensitivity of 94.3%, and specificity of 95.14% in reporting EoE disease activity. Interpretation. We have developed a deep learning multi-label semantic segmentation network that successfully addresses two of the main challenges in EoE diagnostics and digital pathology, the need to detect several types of small features simultaneously and the ability to analyze whole slides efficiently. Our results pave the way for an automated diagnosis of EoE and can be utilized for other conditions with similar challenges.
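The record above reports a mean intersection over union (mIoU). As a generic illustration of that metric (not the PECNet code), per-class IoU averaged over the classes present in either mask can be computed as follows; the toy masks are made up.

    import numpy as np

    def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
        """Mean intersection-over-union across classes present in prediction or target."""
        ious = []
        for c in range(num_classes):
            p, t = (pred == c), (target == c)
            union = np.logical_or(p, t).sum()
            if union == 0:
                continue                      # class absent from both masks; skip it
            ious.append(np.logical_and(p, t).sum() / union)
        return float(np.mean(ious))

    pred = np.array([[0, 1], [1, 2]])
    target = np.array([[0, 1], [2, 2]])
    print(mean_iou(pred, target, num_classes=3))  # (1.0 + 0.5 + 0.5) / 3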
2404.02053
Enmin Zhu
Enmin Zhu, Jerome Yen
BERTopic-Driven Stock Market Predictions: Unraveling Sentiment Insights
null
null
null
null
cs.CL cs.CE q-fin.ST
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the intersection of Natural Language Processing (NLP) and financial analysis, focusing on the impact of sentiment analysis in stock price prediction. We employ BERTopic, an advanced NLP technique, to analyze the sentiment of topics derived from stock market comments. Our methodology integrates this sentiment analysis with various deep learning models, renowned for their effectiveness in time series and stock prediction tasks. Through comprehensive experiments, we demonstrate that incorporating topic sentiment notably enhances the performance of these models. The results indicate that topics in stock market comments provide implicit, valuable insights into stock market volatility and price trends. This study contributes to the field by showcasing the potential of NLP in enriching financial analysis and opens up avenues for further research into real-time sentiment analysis and the exploration of emotional and contextual aspects of market sentiment. The integration of advanced NLP techniques like BERTopic with traditional financial analysis methods marks a step forward in developing more sophisticated tools for understanding and predicting market behaviors.
[ { "created": "Tue, 2 Apr 2024 15:50:10 GMT", "version": "v1" }, { "created": "Thu, 4 Apr 2024 08:05:37 GMT", "version": "v2" } ]
2024-04-05
[ [ "Zhu", "Enmin", "" ], [ "Yen", "Jerome", "" ] ]
This paper explores the intersection of Natural Language Processing (NLP) and financial analysis, focusing on the impact of sentiment analysis in stock price prediction. We employ BERTopic, an advanced NLP technique, to analyze the sentiment of topics derived from stock market comments. Our methodology integrates this sentiment analysis with various deep learning models, renowned for their effectiveness in time series and stock prediction tasks. Through comprehensive experiments, we demonstrate that incorporating topic sentiment notably enhances the performance of these models. The results indicate that topics in stock market comments provide implicit, valuable insights into stock market volatility and price trends. This study contributes to the field by showcasing the potential of NLP in enriching financial analysis and opens up avenues for further research into real-time sentiment analysis and the exploration of emotional and contextual aspects of market sentiment. The integration of advanced NLP techniques like BERTopic with traditional financial analysis methods marks a step forward in developing more sophisticated tools for understanding and predicting market behaviors.
2111.04261
Fei Cheng
Fei Cheng, Shuntaro Yada, Ribeka Tanaka, Eiji Aramaki, Sadao Kurohashi
JaMIE: A Pipeline Japanese Medical Information Extraction System
8 pages
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We present an open-access natural language processing toolkit for Japanese medical information extraction. We first propose a novel relation annotation schema for investigating the medical and temporal relations between medical entities in Japanese medical reports. We experiment with practical annotation scenarios by separately annotating two different types of reports. We design a pipeline system with three components for recognizing medical entities, classifying entity modalities, and extracting relations. The empirical results show accurate analysis performance and suggest satisfactory annotation quality, an effective annotation strategy for the targeted report types, and the superiority of the latest contextual embedding models.
[ { "created": "Mon, 8 Nov 2021 03:54:09 GMT", "version": "v1" } ]
2021-11-09
[ [ "Cheng", "Fei", "" ], [ "Yada", "Shuntaro", "" ], [ "Tanaka", "Ribeka", "" ], [ "Aramaki", "Eiji", "" ], [ "Kurohashi", "Sadao", "" ] ]
We present an open-access natural language processing toolkit for Japanese medical information extraction. We first propose a novel relation annotation schema for investigating the medical and temporal relations between medical entities in Japanese medical reports. We experiment with practical annotation scenarios by separately annotating two different types of reports. We design a pipeline system with three components for recognizing medical entities, classifying entity modalities, and extracting relations. The empirical results show accurate analysis performance and suggest satisfactory annotation quality, an effective annotation strategy for the targeted report types, and the superiority of the latest contextual embedding models.
2102.02608
Mira Gonen
Mira Gonen, Michael Langberg, Alex Sprintson
Minimizing the alphabet size in codes with restricted error sets
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
This paper focuses on error-correcting codes that can handle a predefined set of specific error patterns. The need for such codes arises in many settings of practical interest, including wireless communication and flash memory systems. In many such settings, a smaller field size is achievable than that offered by MDS and other standard codes. We establish a connection between the minimum alphabet size for this generalized setting and the combinatorial properties of a hypergraph that represents the prespecified collection of error patterns. We also show a connection between error and erasure correcting codes in this specialized setting. This allows us to establish bounds on the minimum alphabet size and show an advantage of non-linear codes over linear codes in a generalized setting. We also consider a variation of the problem which allows a small probability of decoding error and relate it to an approximate version of hypergraph coloring.
[ { "created": "Thu, 4 Feb 2021 13:41:24 GMT", "version": "v1" } ]
2021-02-05
[ [ "Gonen", "Mira", "" ], [ "Langberg", "Michael", "" ], [ "Sprintson", "Alex", "" ] ]
This paper focuses on error-correcting codes that can handle a predefined set of specific error patterns. The need for such codes arises in many settings of practical interest, including wireless communication and flash memory systems. In many such settings, a smaller field size is achievable than that offered by MDS and other standard codes. We establish a connection between the minimum alphabet size for this generalized setting and the combinatorial properties of a hypergraph that represents the prespecified collection of error patterns. We also show a connection between error and erasure correcting codes in this specialized setting. This allows us to establish bounds on the minimum alphabet size and show an advantage of non-linear codes over linear codes in a generalized setting. We also consider a variation of the problem which allows a small probability of decoding error and relate it to an approximate version of hypergraph coloring.
1712.07863
Bernhard C. Geiger
Bernhard C. Geiger and Tobias Koch
On the Information Dimension of Multivariate Gaussian Processes
This work will be presented in part at the 2018 International Zurich Seminar on Information and Communication
IEEE Trans. on Information Theory 65(10):6496-6518. (C) IEEE 2019
10.1109/TIT.2019.2922186
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The authors have recently defined the R\'enyi information dimension rate $d(\{X_t\})$ of a stationary stochastic process $\{X_t,\,t\in\mathbb{Z}\}$ as the entropy rate of the uniformly-quantized process divided by minus the logarithm of the quantizer step size $1/m$ in the limit as $m\to\infty$ (B. Geiger and T. Koch, "On the information dimension rate of stochastic processes," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany, June 2017). For Gaussian processes with a given spectral distribution function $F_X$, they showed that the information dimension rate equals the Lebesgue measure of the set of harmonics where the derivative of $F_X$ is positive. This paper extends this result to multivariate Gaussian processes with a given matrix-valued spectral distribution function $F_{\mathbf{X}}$. It is demonstrated that the information dimension rate equals the average rank of the derivative of $F_{\mathbf{X}}$. As a side result, it is shown that the scale and translation invariance of information dimension carries over from random variables to stochastic processes.
[ { "created": "Thu, 21 Dec 2017 10:36:44 GMT", "version": "v1" } ]
2019-10-11
[ [ "Geiger", "Bernhard C.", "" ], [ "Koch", "Tobias", "" ] ]
The authors have recently defined the R\'enyi information dimension rate $d(\{X_t\})$ of a stationary stochastic process $\{X_t,\,t\in\mathbb{Z}\}$ as the entropy rate of the uniformly-quantized process divided by minus the logarithm of the quantizer step size $1/m$ in the limit as $m\to\infty$ (B. Geiger and T. Koch, "On the information dimension rate of stochastic processes," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany, June 2017). For Gaussian processes with a given spectral distribution function $F_X$, they showed that the information dimension rate equals the Lebesgue measure of the set of harmonics where the derivative of $F_X$ is positive. This paper extends this result to multivariate Gaussian processes with a given matrix-valued spectral distribution function $F_{\mathbf{X}}$. It is demonstrated that the information dimension rate equals the average rank of the derivative of $F_{\mathbf{X}}$. As a side result, it is shown that the scale and translation invariance of information dimension carries over from random variables to stochastic processes.
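Restated in display form, the definition quoted in the abstract above and the paper's main statement read roughly as follows; the frequency normalization of the "average rank" is left schematic here rather than copied from the paper.

    % Information dimension rate: entropy rate of the process quantized with
    % step 1/m, normalized by -log(1/m) = log m, in the limit m -> infinity.
    \[
      d(\{X_t\}) \;=\; \lim_{m\to\infty} \frac{1}{\log m}\,
      \lim_{T\to\infty} \frac{1}{T}\,
      H\!\bigl(\lfloor m X_1 \rfloor, \ldots, \lfloor m X_T \rfloor\bigr),
    \]
    % and, for a multivariate Gaussian process with spectral distribution F_X,
    % d equals the average rank of the derivative of F_X over the harmonics:
    \[
      d(\{\mathbf{X}_t\}) \;=\; \operatorname{avg}_{\theta}\,
      \operatorname{rank}\!\bigl(F_{\mathbf{X}}'(\theta)\bigr).
    \]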
2401.09678
Simon Chu
Simon Chu, Justin Koe, David Garlan, and Eunsuk Kang
Integrating Graceful Degradation and Recovery through Requirement-driven Adaptation
Pre-print for the SEAMS '24 conference (Software Engineering for Adaptive and Self-Managing Systems Conference)
null
null
null
cs.SE cs.FL cs.LO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Cyber-physical systems (CPS) are subject to environmental uncertainties such as adverse operating conditions, malicious attacks, and hardware degradation. These uncertainties may lead to failures that put the system in a sub-optimal or unsafe state. Systems that are resilient to such uncertainties rely on two types of operations: (1) graceful degradation, to ensure that the system maintains an acceptable level of safety during unexpected environmental conditions and (2) recovery, to facilitate the resumption of normal system functions. Typically, mechanisms for degradation and recovery are developed independently from each other, and later integrated into a system, requiring the designer to develop an additional, ad-hoc logic for activating and coordinating between the two operations. In this paper, we propose a self-adaptation approach for improving system resiliency through automated triggering and coordination of graceful degradation and recovery. The key idea behind our approach is to treat degradation and recovery as requirement-driven adaptation tasks: Degradation can be thought of as temporarily weakening original (i.e., ideal) system requirements to be achieved by the system, and recovery as strengthening the weakened requirements when the environment returns within an expected operating boundary. Furthermore, by treating weakening and strengthening as dual operations, we argue that a single requirement-based adaptation method is sufficient to enable coordination between degradation and recovery. Given system requirements specified in signal temporal logic (STL), we propose a run-time adaptation framework that performs degradation and recovery in response to environmental changes. We describe a prototype implementation of our framework and demonstrate the feasibility of the proposed approach using a case study in unmanned underwater vehicles.
[ { "created": "Thu, 18 Jan 2024 02:04:37 GMT", "version": "v1" }, { "created": "Mon, 8 Apr 2024 16:44:50 GMT", "version": "v2" } ]
2024-04-09
[ [ "Chu", "Simon", "" ], [ "Koe", "Justin", "" ], [ "Garlan", "David", "" ], [ "Kang", "Eunsuk", "" ] ]
Cyber-physical systems (CPS) are subject to environmental uncertainties such as adverse operating conditions, malicious attacks, and hardware degradation. These uncertainties may lead to failures that put the system in a sub-optimal or unsafe state. Systems that are resilient to such uncertainties rely on two types of operations: (1) graceful degradation, to ensure that the system maintains an acceptable level of safety during unexpected environmental conditions and (2) recovery, to facilitate the resumption of normal system functions. Typically, mechanisms for degradation and recovery are developed independently from each other, and later integrated into a system, requiring the designer to develop an additional, ad-hoc logic for activating and coordinating between the two operations. In this paper, we propose a self-adaptation approach for improving system resiliency through automated triggering and coordination of graceful degradation and recovery. The key idea behind our approach is to treat degradation and recovery as requirement-driven adaptation tasks: Degradation can be thought of as temporarily weakening original (i.e., ideal) system requirements to be achieved by the system, and recovery as strengthening the weakened requirements when the environment returns within an expected operating boundary. Furthermore, by treating weakening and strengthening as dual operations, we argue that a single requirement-based adaptation method is sufficient to enable coordination between degradation and recovery. Given system requirements specified in signal temporal logic (STL), we propose a run-time adaptation framework that performs degradation and recovery in response to environmental changes. We describe a prototype implementation of our framework and demonstrate the feasibility of the proposed approach using a case study in unmanned underwater vehicles.
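As a toy illustration of the "weakening" idea described above (not the paper's framework; the predicate, thresholds, and trace are invented), graceful degradation can be read as satisfying a relaxed version of an always-requirement when the ideal one fails:

    def always_at_least(signal, threshold):
        """Margin of G(signal >= threshold): positive means satisfied, negative means violated."""
        return min(s - threshold for s in signal)

    ideal_threshold = 5.0      # original ("ideal") requirement, hypothetical units
    weakened_threshold = 3.0   # degraded requirement used under adverse conditions

    clearance = [6.1, 5.4, 4.2, 4.8]   # made-up clearance readings from a vehicle

    if always_at_least(clearance, ideal_threshold) >= 0:
        mode = "nominal"
    elif always_at_least(clearance, weakened_threshold) >= 0:
        mode = "degraded"      # gracefully degrade to the weakened requirement
    else:
        mode = "recovery_needed"
    print(mode)                # -> "degraded" for this trace; recovery would re-strengthen the threshold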
2002.09571
Nicholas Cheney
Shawn Beaulieu, Lapo Frati, Thomas Miconi, Joel Lehman, Kenneth O. Stanley, Jeff Clune, Nick Cheney
Learning to Continually Learn
null
null
null
null
cs.LG cs.CV cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine learning models to catastrophically forget, yet virtually all such work involves manually-designed solutions to the problem. We instead advocate meta-learning a solution to catastrophic forgetting, allowing AI to learn to continually learn. Inspired by neuromodulatory processes in the brain, we propose A Neuromodulated Meta-Learning Algorithm (ANML). It differentiates through a sequential learning process to meta-learn an activation-gating function that enables context-dependent selective activation within a deep neural network. Specifically, a neuromodulatory (NM) neural network gates the forward pass of another (otherwise normal) neural network called the prediction learning network (PLN). The NM network also thus indirectly controls selective plasticity (i.e. the backward pass of) the PLN. ANML enables continual learning without catastrophic forgetting at scale: it produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates).
[ { "created": "Fri, 21 Feb 2020 22:52:00 GMT", "version": "v1" }, { "created": "Wed, 4 Mar 2020 03:22:48 GMT", "version": "v2" } ]
2020-03-05
[ [ "Beaulieu", "Shawn", "" ], [ "Frati", "Lapo", "" ], [ "Miconi", "Thomas", "" ], [ "Lehman", "Joel", "" ], [ "Stanley", "Kenneth O.", "" ], [ "Clune", "Jeff", "" ], [ "Cheney", "Nick", "" ] ]
Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine learning models to catastrophically forget, yet virtually all such work involves manually-designed solutions to the problem. We instead advocate meta-learning a solution to catastrophic forgetting, allowing AI to learn to continually learn. Inspired by neuromodulatory processes in the brain, we propose A Neuromodulated Meta-Learning Algorithm (ANML). It differentiates through a sequential learning process to meta-learn an activation-gating function that enables context-dependent selective activation within a deep neural network. Specifically, a neuromodulatory (NM) neural network gates the forward pass of another (otherwise normal) neural network called the prediction learning network (PLN). The NM network also thus indirectly controls selective plasticity (i.e. the backward pass of) the PLN. ANML enables continual learning without catastrophic forgetting at scale: it produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates).
2309.08622
Yijia Dai
Yijia Dai, Wen Sun
Representation Learning in Low-rank Slate-based Recommender Systems
in MFPL, ICML 2023
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Reinforcement learning (RL) in recommendation systems offers the potential to optimize recommendations for long-term user engagement. However, the environment often involves large state and action spaces, which makes it hard to efficiently learn and explore. In this work, we propose a sample-efficient representation learning algorithm, using the standard slate recommendation setup, to treat this as an online RL problem with low-rank Markov decision processes (MDPs). We also construct the recommender simulation environment with the proposed setup and sampling method.
[ { "created": "Sun, 10 Sep 2023 21:40:51 GMT", "version": "v1" }, { "created": "Tue, 19 Sep 2023 03:05:32 GMT", "version": "v2" } ]
2023-09-20
[ [ "Dai", "Yijia", "" ], [ "Sun", "Wen", "" ] ]
Reinforcement learning (RL) in recommendation systems offers the potential to optimize recommendations for long-term user engagement. However, the environment often involves large state and action spaces, which makes it hard to efficiently learn and explore. In this work, we propose a sample-efficient representation learning algorithm, using the standard slate recommendation setup, to treat this as an online RL problem with low-rank Markov decision processes (MDPs). We also construct the recommender simulation environment with the proposed setup and sampling method.
2312.14544
Hongliu Cao
Hongliu Cao, Minh Nhat Do, Alexis Ravanel, Eoin Thomas
Inclusive normalization of face images to passport format
null
null
10.1109/IJCNN54540.2023.10191995
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Face recognition has been used more and more in real world applications in recent years. However, when the skin color bias is coupled with intra-personal variations like harsh illumination, the face recognition task is more likely to fail, even during human inspection. Face normalization methods try to deal with such challenges by removing intra-personal variations from an input image while keeping the identity the same. However, most face normalization methods can only remove one or two variations and ignore dataset biases such as skin color bias. The outputs of many face normalization methods are also not realistic to human observers. In this work, a style based face normalization model (StyleFNM) is proposed to remove most intra-personal variations including large changes in pose, bad or harsh illumination, low resolution, blur, facial expressions, and accessories like sunglasses among others. The dataset bias is also dealt with in this paper by controlling a pretrained GAN to generate a balanced dataset of passport-like images. The experimental results show that StyleFNM can generate more realistic outputs and can improve significantly the accuracy and fairness of face recognition systems.
[ { "created": "Fri, 22 Dec 2023 09:15:33 GMT", "version": "v1" } ]
2023-12-25
[ [ "Cao", "Hongliu", "" ], [ "Do", "Minh Nhat", "" ], [ "Ravanel", "Alexis", "" ], [ "Thomas", "Eoin", "" ] ]
Face recognition has been used more and more in real world applications in recent years. However, when the skin color bias is coupled with intra-personal variations like harsh illumination, the face recognition task is more likely to fail, even during human inspection. Face normalization methods try to deal with such challenges by removing intra-personal variations from an input image while keeping the identity the same. However, most face normalization methods can only remove one or two variations and ignore dataset biases such as skin color bias. The outputs of many face normalization methods are also not realistic to human observers. In this work, a style based face normalization model (StyleFNM) is proposed to remove most intra-personal variations including large changes in pose, bad or harsh illumination, low resolution, blur, facial expressions, and accessories like sunglasses among others. The dataset bias is also dealt with in this paper by controlling a pretrained GAN to generate a balanced dataset of passport-like images. The experimental results show that StyleFNM can generate more realistic outputs and can improve significantly the accuracy and fairness of face recognition systems.
2109.07196
Jiajun Wang
Jiajun Wang, Gang Han, Xiaozhu Ju and Mingguo Zhao
Whole-Body Control with Motion/Force Transmissibility for Parallel-Legged Robot
6 pages, 7 figures, submitted to IROS 2022
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An efficient way to achieve kinematically suitable configurations and highly dynamic task execution is to consider robot performance indices in the whole-body control (WBC) of robots. However, current WBC methods have not considered the intrinsic features of parallel robots, especially motion/force transmissibility (MFT). This paper proposes an MFT-enhanced WBC scheme for parallel-legged robots. Introducing the performance indices of MFT into a WBC is challenging due to the nonlinear relationship between MFT indices and the robot configuration. To overcome this challenge, we establish the MFT preferable space of the robot offline and formulate it as a polyhedron in the joint space at the acceleration level. Then, the WBC employs the polyhedron as a soft constraint. As a result, the robot possesses high-speed and high-acceleration capabilities by satisfying this constraint. The offline preprocessing relieves the online computation burden and helps the WBC achieve a 1kHz servo rate. Finally, we validate the performance and robustness of the proposed method via simulations and experiments on a parallel-legged bipedal robot.
[ { "created": "Wed, 15 Sep 2021 10:27:57 GMT", "version": "v1" }, { "created": "Tue, 1 Mar 2022 11:07:11 GMT", "version": "v2" } ]
2022-03-02
[ [ "Wang", "Jiajun", "" ], [ "Han", "Gang", "" ], [ "Ju", "Xiaozhu", "" ], [ "Zhao", "Mingguo", "" ] ]
An efficient way to achieve kinematically suitable configurations and highly dynamic task execution is to consider robot performance indices in the whole-body control (WBC) of robots. However, current WBC methods have not considered the intrinsic features of parallel robots, especially motion/force transmissibility (MFT). This paper proposes an MFT-enhanced WBC scheme for parallel-legged robots. Introducing the performance indices of MFT into a WBC is challenging due to the nonlinear relationship between MFT indices and the robot configuration. To overcome this challenge, we establish the MFT preferable space of the robot offline and formulate it as a polyhedron in the joint space at the acceleration level. Then, the WBC employs the polyhedron as a soft constraint. As a result, the robot possesses high-speed and high-acceleration capabilities by satisfying this constraint. The offline preprocessing relieves the online computation burden and helps the WBC achieve a 1kHz servo rate. Finally, we validate the performance and robustness of the proposed method via simulations and experiments on a parallel-legged bipedal robot.
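The abstract above describes using a precomputed polyhedron as a soft constraint in whole-body control. A minimal sketch of that general pattern, with placeholder matrices and an arbitrary penalty weight rather than the paper's formulation, could look like this (using cvxpy):

    import cvxpy as cp
    import numpy as np

    n = 6                                             # joint-acceleration dimension (placeholder)
    A, b = np.random.randn(4, n), np.random.randn(4)  # task objective A*qdd ~ b (placeholder)
    G, h = np.random.randn(8, n), np.ones(8)          # preferable region {qdd : G*qdd <= h} (placeholder)

    qdd = cp.Variable(n)
    task_error = cp.sum_squares(A @ qdd - b)
    violation = cp.sum(cp.pos(G @ qdd - h))           # soft constraint: penalize leaving the polyhedron
    cp.Problem(cp.Minimize(task_error + 10.0 * violation)).solve()
    print(qdd.value)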
1604.07973
Andr\'es Silva
Andr\'es Silva
Reference and Structure of Software Engineering Theories
Position paper, 4 pages
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper tries to contribute towards the solution of an important question raised in the SE literature: What is a Software Engineering (SE) specific theory? Which are the main features of a theory that is endemic to SE? In this paper we will use 'theory' as the term is used in traditional sciences. Other uses of the term 'theory' are discussed. Finally, we propose to focus on the reference class and on the structuring of SE theories as a basis for further progress.
[ { "created": "Wed, 27 Apr 2016 08:35:46 GMT", "version": "v1" } ]
2016-04-28
[ [ "Silva", "Andrés", "" ] ]
This paper tries to contribute towards the solution of an important question raised in the SE literature: What is a Software Engineering (SE) specific theory? Which are the main features of a theory that is endemic to SE? In this paper we will use 'theory' as the term is used in traditional sciences. Other uses of the term 'theory' are discussed. Finally, we propose to focus on the reference class and on the structuring of SE theories as a basis for further progress.
1901.08537
Guillaume Sartoretti
Guillaume Sartoretti and William Paivine and Yunfei Shi and Yue Wu and Howie Choset
Distributed Learning of Decentralized Control Policies for Articulated Mobile Robots
\c{opyright} 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works
null
10.1109/TRO.2019.2922493
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art distributed algorithms for reinforcement learning rely on multiple independent agents, which simultaneously learn in parallel environments while asynchronously updating a common, shared policy. Moreover, decentralized control architectures (e.g., CPGs) can coordinate spatially distributed portions of an articulated robot to achieve system-level objectives. In this work, we investigate the relationship between distributed learning and decentralized control by learning decentralized control policies for the locomotion of articulated robots in challenging environments. To this end, we present an approach that leverages the structure of the asynchronous advantage actor-critic (A3C) algorithm to provide a natural means of learning decentralized control policies on a single articulated robot. Our primary contribution shows individual agents in the A3C algorithm can be defined by independently controlled portions of the robot's body, thus enabling distributed learning on a single robot for efficient hardware implementation. We present results of closed-loop locomotion in unstructured terrains on a snake and a hexapod robot, using decentralized controllers learned offline and online respectively. Preprint of the paper submitted to the IEEE Transactions in Robotics (T-RO) journal in October 2018, and accepted for publication as a regular paper in May 2019.
[ { "created": "Thu, 24 Jan 2019 17:59:58 GMT", "version": "v1" }, { "created": "Sun, 9 Jun 2019 17:49:16 GMT", "version": "v2" } ]
2021-02-02
[ [ "Sartoretti", "Guillaume", "" ], [ "Paivine", "William", "" ], [ "Shi", "Yunfei", "" ], [ "Wu", "Yue", "" ], [ "Choset", "Howie", "" ] ]
State-of-the-art distributed algorithms for reinforcement learning rely on multiple independent agents, which simultaneously learn in parallel environments while asynchronously updating a common, shared policy. Moreover, decentralized control architectures (e.g., CPGs) can coordinate spatially distributed portions of an articulated robot to achieve system-level objectives. In this work, we investigate the relationship between distributed learning and decentralized control by learning decentralized control policies for the locomotion of articulated robots in challenging environments. To this end, we present an approach that leverages the structure of the asynchronous advantage actor-critic (A3C) algorithm to provide a natural means of learning decentralized control policies on a single articulated robot. Our primary contribution shows individual agents in the A3C algorithm can be defined by independently controlled portions of the robot's body, thus enabling distributed learning on a single robot for efficient hardware implementation. We present results of closed-loop locomotion in unstructured terrains on a snake and a hexapod robot, using decentralized controllers learned offline and online respectively. Preprint of the paper submitted to the IEEE Transactions in Robotics (T-RO) journal in October 2018, and accepted for publication as a regular paper in May 2019.
1912.07172
Shuyan Zhang
Yuming Li, Rong Zhang, Yuchen Li, Ke Shu, Shuyan Zhang, Aoying Zhou
Lauca: Generating Application-Oriented Synthetic Workloads
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The synthetic workload is essential and critical to the performance evaluation of database systems. When evaluating the database performance for a specific application, the similarity between synthetic workload and real application workload determines the credibility of evaluation results. However, it is difficult for the workloads currently used in performance evaluation to share the same workload characteristics as the target application, which leads to inaccurate evaluation results. To address this problem, we propose a workload duplicator (Lauca) that can generate synthetic workloads with highly similar performance metrics for specific applications. To the best of our knowledge, Lauca is the first application-oriented transactional workload generator. By carefully studying the application-oriented synthetic workload generation problem, we present the key workload characteristics (transaction logic and data access distribution) of online transaction processing (OLTP) applications, and propose novel workload characterization and generation algorithms, which guarantee the high fidelity of synthetic workloads. We conduct extensive experiments using workloads from TPC-C, SmallBank and micro benchmarks on both MySQL and PostgreSQL databases, and experimental results show that Lauca consistently generates high-quality synthetic workloads.
[ { "created": "Mon, 16 Dec 2019 03:13:17 GMT", "version": "v1" } ]
2019-12-17
[ [ "Li", "Yuming", "" ], [ "Zhang", "Rong", "" ], [ "Li", "Yuchen", "" ], [ "Shu", "Ke", "" ], [ "Zhang", "Shuyan", "" ], [ "Zhou", "Aoying", "" ] ]
The synthetic workload is essential and critical to the performance evaluation of database systems. When evaluating the database performance for a specific application, the similarity between synthetic workload and real application workload determines the credibility of evaluation results. However, it is difficult for the workloads currently used in performance evaluation to share the same workload characteristics as the target application, which leads to inaccurate evaluation results. To address this problem, we propose a workload duplicator (Lauca) that can generate synthetic workloads with highly similar performance metrics for specific applications. To the best of our knowledge, Lauca is the first application-oriented transactional workload generator. By carefully studying the application-oriented synthetic workload generation problem, we present the key workload characteristics (transaction logic and data access distribution) of online transaction processing (OLTP) applications, and propose novel workload characterization and generation algorithms, which guarantee the high fidelity of synthetic workloads. We conduct extensive experiments using workloads from TPC-C, SmallBank and micro benchmarks on both MySQL and PostgreSQL databases, and experimental results show that Lauca consistently generates high-quality synthetic workloads.
2003.11303
Sunghun Joung
Sunghun Joung, Seungryong Kim, Hanjae Kim, Minsu Kim, Ig-Jae Kim, Junghyun Cho, Kwanghoon Sohn
Cylindrical Convolutional Networks for Joint Object Detection and Viewpoint Estimation
CVPR 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing techniques to encode spatial invariance within deep convolutional neural networks only model 2D transformation fields. This does not account for the fact that objects in a 2D space are a projection of 3D ones, and thus they have limited ability to handle severe object viewpoint changes. To overcome this limitation, we introduce a learnable module, cylindrical convolutional networks (CCNs), that exploit a cylindrical representation of a convolutional kernel defined in the 3D space. CCNs extract a view-specific feature through a view-specific convolutional kernel to predict object category scores at each viewpoint. With the view-specific feature, we simultaneously determine object category and viewpoints using the proposed sinusoidal soft-argmax module. Our experiments demonstrate the effectiveness of the cylindrical convolutional networks on joint object detection and viewpoint estimation.
[ { "created": "Wed, 25 Mar 2020 10:24:58 GMT", "version": "v1" } ]
2020-03-26
[ [ "Joung", "Sunghun", "" ], [ "Kim", "Seungryong", "" ], [ "Kim", "Hanjae", "" ], [ "Kim", "Minsu", "" ], [ "Kim", "Ig-Jae", "" ], [ "Cho", "Junghyun", "" ], [ "Sohn", "Kwanghoon", "" ] ]
Existing techniques to encode spatial invariance within deep convolutional neural networks only model 2D transformation fields. This does not account for the fact that objects in a 2D space are a projection of 3D ones, and thus they have limited ability to handle severe object viewpoint changes. To overcome this limitation, we introduce a learnable module, cylindrical convolutional networks (CCNs), that exploit a cylindrical representation of a convolutional kernel defined in the 3D space. CCNs extract a view-specific feature through a view-specific convolutional kernel to predict object category scores at each viewpoint. With the view-specific feature, we simultaneously determine object category and viewpoints using the proposed sinusoidal soft-argmax module. Our experiments demonstrate the effectiveness of the cylindrical convolutional networks on joint object detection and viewpoint estimation.
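The abstract above mentions a sinusoidal soft-argmax for viewpoint estimation. One plausible reading of such an operator, shown purely as a guess and not as the paper's module, is a softmax-weighted circular mean of discrete viewpoint bins:

    import numpy as np

    def sinusoidal_soft_argmax(scores, num_bins):
        """Differentiable angle estimate: softmax-weight the bin centers on the
        unit circle, then recover the angle with atan2."""
        angles = 2 * np.pi * np.arange(num_bins) / num_bins   # bin centers in radians
        w = np.exp(scores - scores.max())
        w /= w.sum()                                          # softmax weights
        return np.arctan2((w * np.sin(angles)).sum(), (w * np.cos(angles)).sum())

    scores = np.array([0.1, 2.0, 0.3, 0.1])   # made-up scores over 4 viewpoint bins
    print(np.degrees(sinusoidal_soft_argmax(scores, num_bins=4)))   # close to 90 degrees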
2408.05829
Katherine Dearstyne
Katherine R. Dearstyne, Alberto D. Rodriguez, Jane Cleland-Huang
Supporting Software Maintenance with Dynamically Generated Document Hierarchies
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software documentation supports a broad set of software maintenance tasks; however, creating and maintaining high-quality, multi-level software documentation can be incredibly time-consuming and therefore many code bases suffer from a lack of adequate documentation. We address this problem through presenting HGEN, a fully automated pipeline that leverages LLMs to transform source code through a series of six stages into a well-organized hierarchy of formatted documents. We evaluate HGEN both quantitatively and qualitatively. First, we use it to generate documentation for three diverse projects, and engage key developers in comparing the quality of the generated documentation against their own previously produced manually-crafted documentation. We then pilot HGEN in nine different industrial projects using diverse datasets provided by each project. We collect feedback from project stakeholders, and analyze it using an inductive approach to identify recurring themes. Results show that HGEN produces artifact hierarchies similar in quality to manually constructed documentation, with much higher coverage of the core concepts than the baseline approach. Stakeholder feedback highlights HGEN's commercial impact potential as a tool for accelerating code comprehension and maintenance tasks. Results and associated supplemental materials can be found at https://zenodo.org/records/11403244
[ { "created": "Sun, 11 Aug 2024 17:11:14 GMT", "version": "v1" } ]
2024-08-13
[ [ "Dearstyne", "Katherine R.", "" ], [ "Rodriguez", "Alberto D.", "" ], [ "Cleland-Huang", "Jane", "" ] ]
Software documentation supports a broad set of software maintenance tasks; however, creating and maintaining high-quality, multi-level software documentation can be incredibly time-consuming and therefore many code bases suffer from a lack of adequate documentation. We address this problem through presenting HGEN, a fully automated pipeline that leverages LLMs to transform source code through a series of six stages into a well-organized hierarchy of formatted documents. We evaluate HGEN both quantitatively and qualitatively. First, we use it to generate documentation for three diverse projects, and engage key developers in comparing the quality of the generated documentation against their own previously produced manually-crafted documentation. We then pilot HGEN in nine different industrial projects using diverse datasets provided by each project. We collect feedback from project stakeholders, and analyze it using an inductive approach to identify recurring themes. Results show that HGEN produces artifact hierarchies similar in quality to manually constructed documentation, with much higher coverage of the core concepts than the baseline approach. Stakeholder feedback highlights HGEN's commercial impact potential as a tool for accelerating code comprehension and maintenance tasks. Results and associated supplemental materials can be found at https://zenodo.org/records/11403244
2012.07464
Alejandro Su\'arez Hern\'andez
Alejandro Su\'arez-Hern\'andez and Javier Segovia-Aguas and Carme Torras and Guillem Aleny\`a
Online Action Recognition
Accepted version in AAAI 21: https://ojs.aaai.org/index.php/AAAI/article/view/17423
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognition in planning seeks to find agent intentions, goals or activities given a set of observations and a knowledge library (e.g. goal states, plans or domain theories). In this work we introduce the problem of Online Action Recognition. It consists in recognizing, in an open world, the planning action that best explains a partially observable state transition from a knowledge library of first-order STRIPS actions, which is initially empty. We frame this as an optimization problem, and propose two algorithms to address it: Action Unification (AU) and Online Action Recognition through Unification (OARU). The former builds on logic unification and generalizes two input actions using weighted partial MaxSAT. The latter looks for an action within the library that explains an observed transition. If there is such action, it generalizes it making use of AU, building in this way an AU hierarchy. Otherwise, OARU inserts a Trivial Grounded Action (TGA) in the library that explains just that transition. We report results on benchmarks from the International Planning Competition and PDDLGym, where OARU recognizes actions accurately with respect to expert knowledge, and shows real-time performance.
[ { "created": "Mon, 14 Dec 2020 12:37:20 GMT", "version": "v1" }, { "created": "Tue, 3 Aug 2021 14:38:17 GMT", "version": "v2" } ]
2021-08-04
[ [ "Suárez-Hernández", "Alejandro", "" ], [ "Segovia-Aguas", "Javier", "" ], [ "Torras", "Carme", "" ], [ "Alenyà", "Guillem", "" ] ]
Recognition in planning seeks to find agent intentions, goals or activities given a set of observations and a knowledge library (e.g. goal states, plans or domain theories). In this work we introduce the problem of Online Action Recognition. It consists in recognizing, in an open world, the planning action that best explains a partially observable state transition from a knowledge library of first-order STRIPS actions, which is initially empty. We frame this as an optimization problem, and propose two algorithms to address it: Action Unification (AU) and Online Action Recognition through Unification (OARU). The former builds on logic unification and generalizes two input actions using weighted partial MaxSAT. The latter looks for an action within the library that explains an observed transition. If there is such action, it generalizes it making use of AU, building in this way an AU hierarchy. Otherwise, OARU inserts a Trivial Grounded Action (TGA) in the library that explains just that transition. We report results on benchmarks from the International Planning Competition and PDDLGym, where OARU recognizes actions accurately with respect to expert knowledge, and shows real-time performance.
2006.08767
Borja Gonzalez Le\'on
Borja G. Le\'on, Murray Shanahan, Francesco Belardinelli
Systematic Generalisation through Task Temporal Logic and Deep Reinforcement Learning
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work introduces a neuro-symbolic agent that combines deep reinforcement learning (DRL) with temporal logic (TL) to achieve systematic zero-shot, i.e., never-seen-before, generalisation of formally specified instructions. In particular, we present a neuro-symbolic framework where a symbolic module transforms TL specifications into a form that helps the training of a DRL agent targeting generalisation, while a neural module learns systematically to solve the given tasks. We study the emergence of systematic learning in different settings and find that the architecture of the convolutional layers is key when generalising to new instructions. We also provide evidence that systematic learning can emerge with abstract operators such as negation when learning from a few training examples, which previous research has struggled with.
[ { "created": "Fri, 12 Jun 2020 09:02:40 GMT", "version": "v1" }, { "created": "Wed, 30 Sep 2020 19:28:18 GMT", "version": "v2" }, { "created": "Mon, 13 Sep 2021 13:12:32 GMT", "version": "v3" } ]
2021-09-14
[ [ "León", "Borja G.", "" ], [ "Shanahan", "Murray", "" ], [ "Belardinelli", "Francesco", "" ] ]
This work introduces a neuro-symbolic agent that combines deep reinforcement learning (DRL) with temporal logic (TL) to achieve systematic zero-shot, i.e., never-seen-before, generalisation of formally specified instructions. In particular, we present a neuro-symbolic framework where a symbolic module transforms TL specifications into a form that helps the training of a DRL agent targeting generalisation, while a neural module learns systematically to solve the given tasks. We study the emergence of systematic learning in different settings and find that the architecture of the convolutional layers is key when generalising to new instructions. We also provide evidence that systematic learning can emerge with abstract operators such as negation when learning from a few training examples, which previous research has struggled with.
2201.06386
Alex B\"auerle
Alex B\"auerle, Aybuke Gul Turker, Ken Burke, Osman Aka, Timo Ropinski, Christina Greer, and Mani Varadarajan
Visual Identification of Problematic Bias in Large Label Spaces
null
null
null
null
cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
While the need for well-trained, fair ML systems is increasing ever more, measuring fairness for modern models and datasets is becoming increasingly difficult as they grow at an unprecedented pace. One key challenge in scaling common fairness metrics to such models and datasets is the requirement of exhaustive ground truth labeling, which cannot always be done. Indeed, this often rules out the application of traditional analysis metrics and systems. At the same time, ML-fairness assessments cannot be made algorithmically, as fairness is a highly subjective matter. Thus, domain experts need to be able to extract and reason about bias throughout models and datasets to make informed decisions. While visual analysis tools are of great help when investigating potential bias in DL models, none of the existing approaches have been designed for the specific tasks and challenges that arise in large label spaces. Addressing the lack of visualization work in this area, we propose guidelines for designing visualizations for such large label spaces, considering both technical and ethical issues. Our proposed visualization approach can be integrated into classical model and data pipelines, and we provide an implementation of our techniques open-sourced as a TensorBoard plug-in. With our approach, different models and datasets for large label spaces can be systematically and visually analyzed and compared to make informed fairness assessments tackling problematic bias.
[ { "created": "Mon, 17 Jan 2022 12:51:08 GMT", "version": "v1" } ]
2022-01-19
[ [ "Bäuerle", "Alex", "" ], [ "Turker", "Aybuke Gul", "" ], [ "Burke", "Ken", "" ], [ "Aka", "Osman", "" ], [ "Ropinski", "Timo", "" ], [ "Greer", "Christina", "" ], [ "Varadarajan", "Mani", "" ] ]
While the need for well-trained, fair ML systems is increasing ever more, measuring fairness for modern models and datasets is becoming increasingly difficult as they grow at an unprecedented pace. One key challenge in scaling common fairness metrics to such models and datasets is the requirement of exhaustive ground truth labeling, which cannot always be done. Indeed, this often rules out the application of traditional analysis metrics and systems. At the same time, ML-fairness assessments cannot be made algorithmically, as fairness is a highly subjective matter. Thus, domain experts need to be able to extract and reason about bias throughout models and datasets to make informed decisions. While visual analysis tools are of great help when investigating potential bias in DL models, none of the existing approaches have been designed for the specific tasks and challenges that arise in large label spaces. Addressing the lack of visualization work in this area, we propose guidelines for designing visualizations for such large label spaces, considering both technical and ethical issues. Our proposed visualization approach can be integrated into classical model and data pipelines, and we provide an implementation of our techniques open-sourced as a TensorBoard plug-in. With our approach, different models and datasets for large label spaces can be systematically and visually analyzed and compared to make informed fairness assessments tackling problematic bias.
2003.08288
Siu-Wing Cheng
Siu-Wing Cheng and Man-Kit Lau
Dynamic Distribution-Sensitive Point Location
To appear in Proceedings of the International Symposium of Computational Geometry, 2020
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a dynamic data structure for the distribution-sensitive point location problem. Suppose that there is a fixed query distribution in $\mathbb{R}^2$, and we are given an oracle that can return in $O(1)$ time the probability of a query point falling into a polygonal region of constant complexity. We can maintain a convex subdivision $\cal S$ with $n$ vertices such that each query is answered in $O(\mathrm{OPT})$ expected time, where OPT is the minimum expected time of the best linear decision tree for point location in $\cal S$. The space and construction time are $O(n\log^2 n)$. An update of $\cal S$ as a mixed sequence of $k$ edge insertions and deletions takes $O(k\log^5 n)$ amortized time. As a corollary, the randomized incremental construction of the Voronoi diagram of $n$ sites can be performed in $O(n\log^5 n)$ expected time so that, during the incremental construction, a nearest neighbor query at any time can be answered optimally with respect to the intermediate Voronoi diagram at that time.
[ { "created": "Wed, 18 Mar 2020 15:51:52 GMT", "version": "v1" }, { "created": "Sat, 28 Mar 2020 03:01:49 GMT", "version": "v2" }, { "created": "Wed, 1 Apr 2020 04:37:46 GMT", "version": "v3" }, { "created": "Sat, 25 Apr 2020 06:38:38 GMT", "version": "v4" } ]
2020-04-28
[ [ "Cheng", "Siu-Wing", "" ], [ "Lau", "Man-Kit", "" ] ]
We propose a dynamic data structure for the distribution-sensitive point location problem. Suppose that there is a fixed query distribution in $\mathbb{R}^2$, and we are given an oracle that can return in $O(1)$ time the probability of a query point falling into a polygonal region of constant complexity. We can maintain a convex subdivision $\cal S$ with $n$ vertices such that each query is answered in $O(\mathrm{OPT})$ expected time, where OPT is the minimum expected time of the best linear decision tree for point location in $\cal S$. The space and construction time are $O(n\log^2 n)$. An update of $\cal S$ as a mixed sequence of $k$ edge insertions and deletions takes $O(k\log^5 n)$ amortized time. As a corollary, the randomized incremental construction of the Voronoi diagram of $n$ sites can be performed in $O(n\log^5 n)$ expected time so that, during the incremental construction, a nearest neighbor query at any time can be answered optimally with respect to the intermediate Voronoi diagram at that time.
2104.02939
Shu Kong
Shu Kong, Deva Ramanan
OpenGAN: Open-Set Recognition via Open Data Generation
ICCV 2021 Best Paper Honorable Mention
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-world machine learning systems need to analyze test data that may differ from training data. In K-way classification, this is crisply formulated as open-set recognition, core to which is the ability to discriminate open-set data outside the K closed-set classes. Two conceptually elegant ideas for open-set discrimination are: 1) discriminatively learning an open-vs-closed binary discriminator by exploiting some outlier data as the open-set, and 2) unsupervised learning the closed-set data distribution with a GAN, using its discriminator as the open-set likelihood function. However, the former generalizes poorly to diverse open test data due to overfitting to the training outliers, which are unlikely to exhaustively span the open-world. The latter does not work well, presumably due to the unstable training of GANs. Motivated by the above, we propose OpenGAN, which addresses the limitation of each approach by combining them with several technical insights. First, we show that a carefully selected GAN-discriminator on some real outlier data already achieves the state-of-the-art. Second, we augment the available set of real open training examples with adversarially synthesized "fake" data. Third and most importantly, we build the discriminator over the features computed by the closed-world K-way networks. This allows OpenGAN to be implemented via a lightweight discriminator head built on top of an existing K-way network. Extensive experiments show that OpenGAN significantly outperforms prior open-set methods.
[ { "created": "Wed, 7 Apr 2021 06:19:24 GMT", "version": "v1" }, { "created": "Fri, 9 Apr 2021 02:55:27 GMT", "version": "v2" }, { "created": "Wed, 13 Oct 2021 05:23:31 GMT", "version": "v3" } ]
2021-10-14
[ [ "Kong", "Shu", "" ], [ "Ramanan", "Deva", "" ] ]
Real-world machine learning systems need to analyze test data that may differ from training data. In K-way classification, this is crisply formulated as open-set recognition, core to which is the ability to discriminate open-set data outside the K closed-set classes. Two conceptually elegant ideas for open-set discrimination are: 1) discriminatively learning an open-vs-closed binary discriminator by exploiting some outlier data as the open-set, and 2) unsupervised learning the closed-set data distribution with a GAN, using its discriminator as the open-set likelihood function. However, the former generalizes poorly to diverse open test data due to overfitting to the training outliers, which are unlikely to exhaustively span the open-world. The latter does not work well, presumably due to the unstable training of GANs. Motivated by the above, we propose OpenGAN, which addresses the limitation of each approach by combining them with several technical insights. First, we show that a carefully selected GAN-discriminator on some real outlier data already achieves the state-of-the-art. Second, we augment the available set of real open training examples with adversarially synthesized "fake" data. Third and most importantly, we build the discriminator over the features computed by the closed-world K-way networks. This allows OpenGAN to be implemented via a lightweight discriminator head built on top of an existing K-way network. Extensive experiments show that OpenGAN significantly outperforms prior open-set methods.
1607.04648
Subarna Tripathi
Subarna Tripathi and Zachary C. Lipton and Serge Belongie and Truong Nguyen
Context Matters: Refining Object Detection in Video with Recurrent Neural Networks
To appear in BMVC 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given the vast amounts of video available online, and recent breakthroughs in object detection with static images, object detection in video offers a promising new frontier. However, motion blur and compression artifacts cause substantial frame-level variability, even in videos that appear smooth to the eye. Additionally, video datasets tend to have sparsely annotated frames. We present a new framework for improving object detection in videos that captures temporal context and encourages consistency of predictions. First, we train a pseudo-labeler, that is, a domain-adapted convolutional neural network for object detection. The pseudo-labeler is first trained individually on the subset of labeled frames, and then subsequently applied to all frames. Then we train a recurrent neural network that takes as input sequences of pseudo-labeled frames and optimizes an objective that encourages both accuracy on the target frame and consistency across consecutive frames. The approach incorporates strong supervision of target frames, weak-supervision on context frames, and regularization via a smoothness penalty. Our approach achieves mean Average Precision (mAP) of 68.73, an improvement of 7.1 over the strongest image-based baselines for the Youtube-Video Objects dataset. Our experiments demonstrate that neighboring frames can provide valuable information, even absent labels.
[ { "created": "Fri, 15 Jul 2016 20:02:25 GMT", "version": "v1" }, { "created": "Tue, 19 Jul 2016 03:00:35 GMT", "version": "v2" } ]
2016-07-20
[ [ "Tripathi", "Subarna", "" ], [ "Lipton", "Zachary C.", "" ], [ "Belongie", "Serge", "" ], [ "Nguyen", "Truong", "" ] ]
Given the vast amounts of video available online, and recent breakthroughs in object detection with static images, object detection in video offers a promising new frontier. However, motion blur and compression artifacts cause substantial frame-level variability, even in videos that appear smooth to the eye. Additionally, video datasets tend to have sparsely annotated frames. We present a new framework for improving object detection in videos that captures temporal context and encourages consistency of predictions. First, we train a pseudo-labeler, that is, a domain-adapted convolutional neural network for object detection. The pseudo-labeler is first trained individually on the subset of labeled frames, and then subsequently applied to all frames. Then we train a recurrent neural network that takes as input sequences of pseudo-labeled frames and optimizes an objective that encourages both accuracy on the target frame and consistency across consecutive frames. The approach incorporates strong supervision of target frames, weak-supervision on context frames, and regularization via a smoothness penalty. Our approach achieves mean Average Precision (mAP) of 68.73, an improvement of 7.1 over the strongest image-based baselines for the Youtube-Video Objects dataset. Our experiments demonstrate that neighboring frames can provide valuable information, even absent labels.
2308.16122
Yuta Sato
Yuta Sato, Pak Hei Lam, Shruti Gupta, Fareesah Hussain
Spatial Graph Coarsening: Weather and Weekday Prediction with London's Bike-Sharing Service using GNN
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
This study introduces the use of Graph Neural Networks (GNNs) for predicting the weather and weekday of a given day in London from the Santander Cycles bike-sharing dataset, framed as a graph classification task. The proposed GNN models introduce (i) a concatenation operator that combines graph features with trained node embeddings and (ii) a graph coarsening operator based on geographical contiguity, termed "Spatial Graph Coarsening". Using node features describing land-use characteristics and the number of households around the bike stations, together with graph features describing temperatures in the city, the proposed models outperform the baseline model in cross-entropy loss and accuracy on the validation dataset.
[ { "created": "Wed, 30 Aug 2023 16:21:02 GMT", "version": "v1" } ]
2023-08-31
[ [ "Sato", "Yuta", "" ], [ "Lam", "Pak Hei", "" ], [ "Gupta", "Shruti", "" ], [ "Hussain", "Fareesah", "" ] ]
This study introduces the use of Graph Neural Networks (GNNs) for predicting the weather and weekday of a given day in London from the Santander Cycles bike-sharing dataset, framed as a graph classification task. The proposed GNN models introduce (i) a concatenation operator that combines graph features with trained node embeddings and (ii) a graph coarsening operator based on geographical contiguity, termed "Spatial Graph Coarsening". Using node features describing land-use characteristics and the number of households around the bike stations, together with graph features describing temperatures in the city, the proposed models outperform the baseline model in cross-entropy loss and accuracy on the validation dataset.
2010.02428
Tao Li
Tao Li, Tushar Khot, Daniel Khashabi, Ashish Sabharwal, Vivek Srikumar
UnQovering Stereotyping Biases via Underspecified Questions
Accepted at Findings of EMNLP 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While language embeddings have been shown to have stereotyping biases, how these biases affect downstream question answering (QA) models remains unexplored. We present UNQOVER, a general framework to probe and quantify biases through underspecified questions. We show that a naive use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors: positional dependence and question independence. We design a formalism that isolates the aforementioned errors. As case studies, we use this metric to analyze four important classes of stereotypes: gender, nationality, ethnicity, and religion. We probe five transformer-based QA models trained on two QA datasets, along with their underlying language models. Our broad study reveals that (1) all these models, with and without fine-tuning, have notable stereotyping biases in these classes; (2) larger models often have higher bias; and (3) the effect of fine-tuning on bias varies strongly with the dataset and the model size.
[ { "created": "Tue, 6 Oct 2020 01:49:52 GMT", "version": "v1" }, { "created": "Wed, 7 Oct 2020 04:51:22 GMT", "version": "v2" }, { "created": "Sat, 10 Oct 2020 01:48:31 GMT", "version": "v3" } ]
2020-10-13
[ [ "Li", "Tao", "" ], [ "Khot", "Tushar", "" ], [ "Khashabi", "Daniel", "" ], [ "Sabharwal", "Ashish", "" ], [ "Srikumar", "Vivek", "" ] ]
While language embeddings have been shown to have stereotyping biases, how these biases affect downstream question answering (QA) models remains unexplored. We present UNQOVER, a general framework to probe and quantify biases through underspecified questions. We show that a naive use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors: positional dependence and question independence. We design a formalism that isolates the aforementioned errors. As case studies, we use this metric to analyze four important classes of stereotypes: gender, nationality, ethnicity, and religion. We probe five transformer-based QA models trained on two QA datasets, along with their underlying language models. Our broad study reveals that (1) all these models, with and without fine-tuning, have notable stereotyping biases in these classes; (2) larger models often have higher bias; and (3) the effect of fine-tuning on bias varies strongly with the dataset and the model size.
1808.08268
Alexander Broad
Alexander Broad, Todd Murphey, Brenna Argall
Learning Models for Shared Control of Human-Machine Systems with Unknown Dynamics
Robotics: Science and Systems Proceedings, 2017
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel approach to shared control of human-machine systems. Our method assumes no a priori knowledge of the system dynamics. Instead, we learn both the dynamics and information about the user's interaction from observation through the use of the Koopman operator. Using the learned model, we define an optimization problem to compute the optimal policy for a given task, and compare the user input to the optimal input. We demonstrate the efficacy of our approach with a user study. We also analyze the individual nature of the learned models by comparing the effectiveness of our approach when the demonstration data comes from a user's own interactions, from the interactions of a group of users and from a domain expert. Positive results include statistically significant improvements on task metrics when comparing a user-only control paradigm with our shared control paradigm. Surprising results include findings that suggest that individualizing the model based on a user's own data does not affect the ability to learn a useful dynamic system. We explore this tension as it relates to developing human-in-the-loop systems further in the discussion.
[ { "created": "Fri, 24 Aug 2018 19:07:10 GMT", "version": "v1" } ]
2018-08-28
[ [ "Broad", "Alexander", "" ], [ "Murphey", "Todd", "" ], [ "Argall", "Brenna", "" ] ]
We present a novel approach to shared control of human-machine systems. Our method assumes no a priori knowledge of the system dynamics. Instead, we learn both the dynamics and information about the user's interaction from observation through the use of the Koopman operator. Using the learned model, we define an optimization problem to compute the optimal policy for a given task, and compare the user input to the optimal input. We demonstrate the efficacy of our approach with a user study. We also analyze the individual nature of the learned models by comparing the effectiveness of our approach when the demonstration data comes from a user's own interactions, from the interactions of a group of users and from a domain expert. Positive results include statistically significant improvements on task metrics when comparing a user-only control paradigm with our shared control paradigm. Surprising results include findings that suggest that individualizing the model based on a user's own data does not affect the ability to learn a useful dynamic system. We explore this tension as it relates to developing human-in-the-loop systems further in the discussion.
1812.10378
Polina Lemenkova
Polina Lemenkova
Urban-Rural Environmental Gradient in a Developing City: Testing ENVI GIS Functionality
5 pages, 2 figures, 1 table
Conference Proceedings 'Abishevskie Readings. Innovation in the Complex Processing of Mineral Raw Materials', 21-22 Jan 2016
10.6084/m9.figshare.7210286
null
cs.CY cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
This research performs an urban ecosystem analysis of Taipei city, supported by ENVI GIS, through integrated studies of land cover types and geospatial modeling. The paper examines the role of anthropogenic pressure on the structure of the landscape and changes in land cover types. The methods include assessing the impact of anthropogenic activities on natural ecosystems and evaluating the rate and scale of landscape dynamics using remote sensing data and GIS. The research aims to assist environmentalists and city planners in evaluating strategies for specific objectives of urban development in Taiwan, China.
[ { "created": "Thu, 6 Dec 2018 02:10:53 GMT", "version": "v1" } ]
2018-12-27
[ [ "Lemenkova", "Polina", "" ] ]
This research performs an urban ecosystem analysis of Taipei city, supported by ENVI GIS, through integrated studies of land cover types and geospatial modeling. The paper examines the role of anthropogenic pressure on the structure of the landscape and changes in land cover types. The methods include assessing the impact of anthropogenic activities on natural ecosystems and evaluating the rate and scale of landscape dynamics using remote sensing data and GIS. The research aims to assist environmentalists and city planners in evaluating strategies for specific objectives of urban development in Taiwan, China.
2102.12844
Walter Bennette
Walter Bennette, Sally Dufek, Karsten Maurer, Sean Sisti, Bunyod Tusmatov
Generalized Adversarial Distances to Efficiently Discover Classifier Errors
8 pages, 5 figures, International Conference of Machine Learning and Applications 2020
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Given a black-box classification model and an unlabeled evaluation dataset from some application domain, efficient strategies need to be developed to evaluate the model. Random sampling allows a user to estimate metrics like accuracy, precision, and recall, but may not provide insight to high-confidence errors. High-confidence errors are rare events for which the model is highly confident in its prediction, but is wrong. Such errors can represent costly mistakes and should be explicitly searched for. In this paper we propose a generalization to the Adversarial Distance search that leverages concepts from adversarial machine learning to identify predictions for which a classifier may be overly confident. These predictions are useful instances to sample when looking for high-confidence errors because they are prone to a higher rate of error than expected. Our generalization allows Adversarial Distance to be applied to any classifier or data domain. Experimental results show that the generalized method finds errors at rates greater than expected given the confidence of the sampled predictions, and outperforms competing methods.
[ { "created": "Thu, 25 Feb 2021 13:31:21 GMT", "version": "v1" } ]
2021-02-26
[ [ "Bennette", "Walter", "" ], [ "Dufek", "Sally", "" ], [ "Maurer", "Karsten", "" ], [ "Sisti", "Sean", "" ], [ "Tusmatov", "Bunyod", "" ] ]
Given a black-box classification model and an unlabeled evaluation dataset from some application domain, efficient strategies need to be developed to evaluate the model. Random sampling allows a user to estimate metrics like accuracy, precision, and recall, but may not provide insight to high-confidence errors. High-confidence errors are rare events for which the model is highly confident in its prediction, but is wrong. Such errors can represent costly mistakes and should be explicitly searched for. In this paper we propose a generalization to the Adversarial Distance search that leverages concepts from adversarial machine learning to identify predictions for which a classifier may be overly confident. These predictions are useful instances to sample when looking for high-confidence errors because they are prone to a higher rate of error than expected. Our generalization allows Adversarial Distance to be applied to any classifier or data domain. Experimental results show that the generalized method finds errors at rates greater than expected given the confidence of the sampled predictions, and outperforms competing methods.
2405.02791
Mengxian Hu
Mengxian Hu, Minghao Zhu, Xun Zhou, Qingqing Yan, Shu Li, Chengju Liu, Qijun Chen
Efficient Text-driven Motion Generation via Latent Consistency Training
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion diffusion models excel at text-driven motion generation but struggle with real-time inference since motion sequences are time-axis redundant and solving the reverse diffusion trajectory involves tens or hundreds of sequential iterations. In this paper, we propose a Motion Latent Consistency Training (MLCT) framework, which allows for large-scale skip sampling of compact motion latent representations by constraining the consistency of the outputs of adjacent perturbed states on the precomputed trajectory. In particular, we design a flexible motion autoencoder with quantization constraints to guarantee the low-dimensionality, succinctness, and boundedness of the motion embedding space. We further present a conditionally guided consistency training framework based on conditional trajectory simulation without an additional pre-trained diffusion model, which significantly improves the conditional generation performance with minimal training cost. Experiments on two benchmarks demonstrate our model's state-of-the-art performance with an 80\% inference cost saving and around 14 ms on a single RTX 4090 GPU.
[ { "created": "Sun, 5 May 2024 02:11:57 GMT", "version": "v1" }, { "created": "Sat, 25 May 2024 05:01:20 GMT", "version": "v2" } ]
2024-05-28
[ [ "Hu", "Mengxian", "" ], [ "Zhu", "Minghao", "" ], [ "Zhou", "Xun", "" ], [ "Yan", "Qingqing", "" ], [ "Li", "Shu", "" ], [ "Liu", "Chengju", "" ], [ "Chen", "Qijun", "" ] ]
Motion diffusion models excel at text-driven motion generation but struggle with real-time inference since motion sequences are time-axis redundant and solving the reverse diffusion trajectory involves tens or hundreds of sequential iterations. In this paper, we propose a Motion Latent Consistency Training (MLCT) framework, which allows for large-scale skip sampling of compact motion latent representations by constraining the consistency of the outputs of adjacent perturbed states on the precomputed trajectory. In particular, we design a flexible motion autoencoder with quantization constraints to guarantee the low-dimensionality, succinctness, and boundedness of the motion embedding space. We further present a conditionally guided consistency training framework based on conditional trajectory simulation without an additional pre-trained diffusion model, which significantly improves the conditional generation performance with minimal training cost. Experiments on two benchmarks demonstrate our model's state-of-the-art performance with an 80\% inference cost saving and around 14 ms on a single RTX 4090 GPU.
2003.00899
George Cevora
Kate Wilkinson, George Cevora
Demonstrating Rosa: the fairness solution for any Data Analytic pipeline
corrected typo in fig 8 caption
null
null
null
cs.LG stat.AP
http://creativecommons.org/licenses/by-sa/4.0/
Most datasets of interest to the analytics industry are impacted by various forms of human bias. The outcomes of Data Analytics [DA] or Machine Learning [ML] on such data are therefore prone to replicating the bias. As a result, a large number of biased decision-making systems based on DA/ML have recently attracted attention. In this paper we introduce Rosa, a free, web-based tool to easily de-bias datasets with respect to a chosen characteristic. Rosa is based on the principles of Fair Adversarial Networks, developed by illumr Ltd., and can therefore remove interactive, non-linear, and non-binary bias. Rosa is a stand-alone pre-processing step / API, meaning it can be used easily with any DA/ML pipeline. We test the efficacy of Rosa in removing bias from data-driven decision making systems by performing standard DA tasks on five real-world datasets, selected for their relevance to current DA problems, and also their high potential for bias. We use simple ML models to model a characteristic of analytical interest, and compare the level of bias in the model output both with and without Rosa as a pre-processing step. We find that in all cases there is a substantial decrease in bias of the data-driven decision making systems when the data is pre-processed with Rosa.
[ { "created": "Fri, 28 Feb 2020 10:02:58 GMT", "version": "v1" }, { "created": "Fri, 5 Mar 2021 15:59:13 GMT", "version": "v2" } ]
2021-03-08
[ [ "Wilkinson", "Kate", "" ], [ "Cevora", "George", "" ] ]
Most datasets of interest to the analytics industry are impacted by various forms of human bias. The outcomes of Data Analytics [DA] or Machine Learning [ML] on such data are therefore prone to replicating the bias. As a result, a large number of biased decision-making systems based on DA/ML have recently attracted attention. In this paper we introduce Rosa, a free, web-based tool to easily de-bias datasets with respect to a chosen characteristic. Rosa is based on the principles of Fair Adversarial Networks, developed by illumr Ltd., and can therefore remove interactive, non-linear, and non-binary bias. Rosa is a stand-alone pre-processing step / API, meaning it can be used easily with any DA/ML pipeline. We test the efficacy of Rosa in removing bias from data-driven decision making systems by performing standard DA tasks on five real-world datasets, selected for their relevance to current DA problems, and also their high potential for bias. We use simple ML models to model a characteristic of analytical interest, and compare the level of bias in the model output both with and without Rosa as a pre-processing step. We find that in all cases there is a substantial decrease in bias of the data-driven decision making systems when the data is pre-processed with Rosa.
1911.00493
Veit Elser
Veit Elser
Learning Without Loss
52 pages, 24 figures, 1 table
null
null
null
cs.LG math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore a new approach for training neural networks where all loss functions are replaced by hard constraints. The same approach is very successful in phase retrieval, where signals are reconstructed from magnitude constraints and general characteristics (sparsity, support, etc.). Instead of taking gradient steps, the optimizer in the constraint based approach, called relaxed-reflect-reflect (RRR), derives its steps from projections to local constraints. In neural networks one such projection makes the minimal modification to the inputs $x$, the associated weights $w$, and the pre-activation value $y$ at each neuron, to satisfy the equation $x\cdot w=y$. These projections, along with a host of other local projections (constraining pre- and post-activations, etc.) can be partitioned into two sets such that all the projections in each set can be applied concurrently, across the network and across all data in the training batch. This partitioning into two sets is analogous to the situation in phase retrieval and the setting for which the general purpose RRR optimizer was designed. Owing to the novelty of the method, this paper also serves as a self-contained tutorial. Starting with a single-layer network that performs non-negative matrix factorization, and concluding with a generative model comprising an autoencoder and classifier, all applications and their implementations by projections are described in complete detail. Although the new approach has the potential to extend the scope of neural networks (e.g. by defining activation not through functions but constraint sets), most of the featured models are standard to allow comparison with stochastic gradient descent.
[ { "created": "Tue, 29 Oct 2019 19:20:08 GMT", "version": "v1" } ]
2019-11-04
[ [ "Elser", "Veit", "" ] ]
We explore a new approach for training neural networks where all loss functions are replaced by hard constraints. The same approach is very successful in phase retrieval, where signals are reconstructed from magnitude constraints and general characteristics (sparsity, support, etc.). Instead of taking gradient steps, the optimizer in the constraint-based approach, called relaxed-reflect-reflect (RRR), derives its steps from projections to local constraints. In neural networks one such projection makes the minimal modification to the inputs $x$, the associated weights $w$, and the pre-activation value $y$ at each neuron, to satisfy the equation $x\cdot w=y$. These projections, along with a host of other local projections (constraining pre- and post-activations, etc.) can be partitioned into two sets such that all the projections in each set can be applied concurrently, across the network and across all data in the training batch. This partitioning into two sets is analogous to the situation in phase retrieval and the setting for which the general-purpose RRR optimizer was designed. Owing to the novelty of the method, this paper also serves as a self-contained tutorial. Starting with a single-layer network that performs non-negative matrix factorization, and concluding with a generative model comprising an autoencoder and classifier, all applications and their implementations by projections are described in complete detail. Although the new approach has the potential to extend the scope of neural networks (e.g. by defining activation not through functions but constraint sets), most of the featured models are standard to allow comparison with stochastic gradient descent.
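The per-neuron projection named in the abstract, the minimal modification of $x$, $w$ and $y$ that satisfies $x\cdot w=y$, can be illustrated numerically. The sketch below uses a generic constrained optimizer as a stand-in; the paper's actual projection formula and the RRR update are not reproduced here, and the solver choice is an assumption for illustration only.

import numpy as np
from scipy.optimize import minimize

def project_onto_bilinear_constraint(x0, w0, y0):
    # Nearest (x, w, y) in Euclidean distance to (x0, w0, y0) with x . w = y.
    n = len(x0)
    z0 = np.concatenate([x0, w0, [y0]])

    def distance_sq(z):
        return np.sum((z - z0) ** 2)

    def constraint(z):
        x, w, y = z[:n], z[n:2 * n], z[-1]
        return x @ w - y                     # zero exactly on the constraint set

    res = minimize(distance_sq, z0, method="SLSQP",
                   constraints={"type": "eq", "fun": constraint})
    z = res.x
    return z[:n], z[n:2 * n], z[-1]

x, w, y = project_onto_bilinear_constraint(np.array([1.0, -2.0]), np.array([0.5, 0.5]), 2.0)
print(x @ w - y)   # approximately 0 after projection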
2302.05960
Paul Gutkovich
Paul Gutkovich, Zi Song Yeoh
Computing Truncated Metric Dimension of Trees
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $G=(V,E)$ be a simple, unweighted, connected graph. Let $d(u,v)$ denote the distance between vertices $u,v$. A resolving set of $G$ is a subset $S$ of $V$ such that knowing the distance from a vertex $v$ to every vertex in $S$ uniquely identifies $v$. The metric dimension of $G$ is defined as the size of the smallest resolving set of $G$. We define the $k$-truncated resolving set and $k$-truncated metric dimension of a graph similarly, but with the notion of distance replaced with $d_k(u,v) := \min(d(u,v),k+1)$. In this paper, we demonstrate that computing the $k$-truncated metric dimension of trees is NP-hard for general $k$. We then present a polynomial-time algorithm to compute the $k$-truncated metric dimension of trees when $k$ is a fixed constant.
[ { "created": "Sun, 12 Feb 2023 17:18:14 GMT", "version": "v1" } ]
2023-02-14
[ [ "Gutkovich", "Paul", "" ], [ "Yeoh", "Zi Song", "" ] ]
Let $G=(V,E)$ be a simple, unweighted, connected graph. Let $d(u,v)$ denote the distance between vertices $u,v$. A resolving set of $G$ is a subset $S$ of $V$ such that knowing the distance from a vertex $v$ to every vertex in $S$ uniquely identifies $v$. The metric dimension of $G$ is defined as the size of the smallest resolving set of $G$. We define the $k$-truncated resolving set and $k$-truncated metric dimension of a graph similarly, but with the notion of distance replaced with $d_k(u,v) := \min(d(u,v),k+1)$. In this paper, we demonstrate that computing the $k$-truncated metric dimension of trees is NP-hard for general $k$. We then present a polynomial-time algorithm to compute the $k$-truncated metric dimension of trees when $k$ is a fixed constant.
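The definition in the abstract can be checked by brute force on tiny examples. The sketch below enumerates vertex subsets of a small tree and tests whether the truncated distance vectors d_k(u,v) = min(d(u,v), k+1) distinguish all vertices; it runs in exponential time and is unrelated to the paper's polynomial-time algorithm, serving only to make the definition concrete.

from itertools import combinations
import networkx as nx

def truncated_metric_dimension(G, k):
    # Smallest k-truncated resolving set of G, found by exhaustive search.
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes)

    def d_k(u, v):
        return min(dist[u][v], k + 1)

    for size in range(1, len(nodes) + 1):
        for S in combinations(nodes, size):
            signatures = {tuple(d_k(v, s) for s in S) for v in nodes}
            if len(signatures) == len(nodes):   # all vertices distinguished
                return size, S
    return len(nodes), tuple(nodes)

# Example: a path on 6 vertices (a tree), with k = 1.
T = nx.path_graph(6)
print(truncated_metric_dimension(T, k=1))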
1102.2566
Morgan Barbier
Morgan Barbier (INRIA Saclay - Ile de France), Barreto S. L. M. Paulo (IME/USP)
Key Reduction of McEliece's Cryptosystem Using List Decoding
null
International Symposium of Information Theory (ISIT) (2011) 2657-2661
null
null
cs.CR cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different variants of the code-based McEliece cryptosystem were proposed to reduce the size of the public key. All these variants use very structured codes, which open the door to new attacks exploiting the underlying structure. In this paper, we show that the dyadic variant can be designed to resist all known attacks. In light of a new study on list decoding algorithms for binary Goppa codes, we explain how to increase the security level for given public keysizes. Using the state-of-the-art list decoding algorithm instead of unique decoding, we exhibit a keysize gain of about 4% for the standard McEliece cryptosystem and up to 21% for the adjusted dyadic variant.
[ { "created": "Sun, 13 Feb 2011 07:26:03 GMT", "version": "v1" }, { "created": "Tue, 15 Nov 2011 09:37:57 GMT", "version": "v2" } ]
2011-11-17
[ [ "Barbier", "Morgan", "", "INRIA Saclay - Ile de France" ], [ "Paulo", "Barreto S. L. M.", "", "IME/USP" ] ]
Different variants of the code-based McEliece cryptosystem were proposed to reduce the size of the public key. All these variants use very structured codes, which open the door to new attacks exploiting the underlying structure. In this paper, we show that the dyadic variant can be designed to resist all known attacks. In light of a new study on list decoding algorithms for binary Goppa codes, we explain how to increase the security level for given public keysizes. Using the state-of-the-art list decoding algorithm instead of unique decoding, we exhibit a keysize gain of about 4% for the standard McEliece cryptosystem and up to 21% for the adjusted dyadic variant.
1607.01993
Nikos Gorogiannis
James Brotherston, Nikos Gorogiannis and Max Kanovich
Biabduction (and Related Problems) in Array Separation Logic
null
null
null
null
cs.LO cs.DS cs.PL math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate array separation logic (ASL), a variant of symbolic-heap separation logic in which the data structures are either pointers or arrays, i.e., contiguous blocks of allocated memory. This logic provides a language for compositional memory safety proofs of imperative array programs. We focus on the biabduction problem for this logic, which has been established as the key to automatic specification inference at the industrial scale. We present an NP decision procedure for biabduction in ASL that produces solutions of reasonable quality, and we also show that the problem of finding a consistent solution is NP-hard. Along the way, we study satisfiability and entailment in our logic, giving decision procedures and complexity bounds for both problems. We show satisfiability to be NP-complete, and entailment to be decidable with high complexity. The somewhat surprising fact that biabduction is much simpler than entailment is explained by the fact that, as we show, the element of choice over biabduction solutions enables us to dramatically reduce the search space.
[ { "created": "Thu, 7 Jul 2016 12:49:04 GMT", "version": "v1" }, { "created": "Wed, 16 Nov 2016 21:44:27 GMT", "version": "v2" }, { "created": "Fri, 18 Nov 2016 11:20:20 GMT", "version": "v3" } ]
2016-11-21
[ [ "Brotherston", "James", "" ], [ "Gorogiannis", "Nikos", "" ], [ "Kanovich", "Max", "" ] ]
We investigate array separation logic (ASL), a variant of symbolic-heap separation logic in which the data structures are either pointers or arrays, i.e., contiguous blocks of allocated memory. This logic provides a language for compositional memory safety proofs of imperative array programs. We focus on the biabduction problem for this logic, which has been established as the key to automatic specification inference at the industrial scale. We present an NP decision procedure for biabduction in ASL that produces solutions of reasonable quality, and we also show that the problem of finding a consistent solution is NP-hard. Along the way, we study satisfiability and entailment in our logic, giving decision procedures and complexity bounds for both problems. We show satisfiability to be NP-complete, and entailment to be decidable with high complexity. The somewhat surprising fact that biabduction is much simpler than entailment is explained by the fact that, as we show, the element of choice over biabduction solutions enables us to dramatically reduce the search space.
2302.12813
Michel Galley
Baolin Peng and Michel Galley and Pengcheng He and Hao Cheng and Yujia Xie and Yu Hu and Qiuyuan Huang and Lars Liden and Zhou Yu and Weizhu Chen and Jianfeng Gao
Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback
15 pages
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs), such as ChatGPT, are able to generate human-like, fluent responses for many downstream tasks, e.g., task-oriented dialog and question answering. However, applying LLMs to real-world, mission-critical applications remains challenging mainly due to their tendency to generate hallucinations and their inability to use external knowledge. This paper proposes a LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules. Our system makes the LLM generate responses grounded in external knowledge, e.g., stored in task-specific databases. It also iteratively revises LLM prompts to improve model responses using feedback generated by utility functions, e.g., the factuality score of a LLM-generated response. The effectiveness of LLM-Augmenter is empirically validated on two types of scenarios, task-oriented dialog and open-domain question answering. LLM-Augmenter significantly reduces ChatGPT's hallucinations without sacrificing the fluency and informativeness of its responses. We make the source code and models publicly available.
[ { "created": "Fri, 24 Feb 2023 18:48:43 GMT", "version": "v1" }, { "created": "Wed, 1 Mar 2023 17:21:48 GMT", "version": "v2" }, { "created": "Wed, 8 Mar 2023 23:41:49 GMT", "version": "v3" } ]
2023-03-10
[ [ "Peng", "Baolin", "" ], [ "Galley", "Michel", "" ], [ "He", "Pengcheng", "" ], [ "Cheng", "Hao", "" ], [ "Xie", "Yujia", "" ], [ "Hu", "Yu", "" ], [ "Huang", "Qiuyuan", "" ], [ "Liden", "Lars", "" ], [ "Yu", "Zhou", "" ], [ "Chen", "Weizhu", "" ], [ "Gao", "Jianfeng", "" ] ]
Large language models (LLMs), such as ChatGPT, are able to generate human-like, fluent responses for many downstream tasks, e.g., task-oriented dialog and question answering. However, applying LLMs to real-world, mission-critical applications remains challenging mainly due to their tendency to generate hallucinations and their inability to use external knowledge. This paper proposes a LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules. Our system makes the LLM generate responses grounded in external knowledge, e.g., stored in task-specific databases. It also iteratively revises LLM prompts to improve model responses using feedback generated by utility functions, e.g., the factuality score of a LLM-generated response. The effectiveness of LLM-Augmenter is empirically validated on two types of scenarios, task-oriented dialog and open-domain question answering. LLM-Augmenter significantly reduces ChatGPT's hallucinations without sacrificing the fluency and informativeness of its responses. We make the source code and models publicly available.
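A schematic sketch of the loop the abstract describes follows: ground the response in retrieved knowledge, score it with a utility function such as a factuality score, and revise the prompt with that feedback until the score is acceptable. Every function below is a stub; the actual modules, prompts, and thresholds of LLM-Augmenter are not specified in the abstract.

def retrieve_knowledge(query):
    # Stub for retrieval from a task-specific database.
    return "evidence retrieved from a task-specific database"

def call_llm(prompt):
    # Stub for a black-box LLM; echoes the knowledge it was given.
    return "draft answer citing: " + prompt.split("Knowledge: ")[-1]

def utility_score(response, evidence):
    # Stub utility function, e.g. a factuality score: did the answer cite the evidence?
    return 1.0 if evidence in response else 0.0

def llm_augmenter(query, threshold=0.9, max_rounds=3):
    evidence = retrieve_knowledge(query)
    prompt = query + "\nKnowledge: " + evidence
    response = call_llm(prompt)
    for _ in range(max_rounds):
        if utility_score(response, evidence) >= threshold:
            break
        prompt += "\nFeedback: the previous answer was not grounded in the knowledge; revise it."
        response = call_llm(prompt)
    return response

print(llm_augmenter("Who founded the example institute?"))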
1408.5512
Manuel Kauers
Shaoshi Chen, Manuel Kauers, Michael F. Singer
Desingularization of Ore Operators
null
null
null
null
cs.SC math.AC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that Ore operators can be desingularized by calculating a least common left multiple with a random operator of appropriate order. Our result generalizes a classical result about apparent singularities of linear differential equations, and it gives rise to a surprisingly simple desingularization algorithm.
[ { "created": "Sat, 23 Aug 2014 16:52:18 GMT", "version": "v1" } ]
2014-08-26
[ [ "Chen", "Shaoshi", "" ], [ "Kauers", "Manuel", "" ], [ "Singer", "Michael F.", "" ] ]
We show that Ore operators can be desingularized by calculating a least common left multiple with a random operator of appropriate order. Our result generalizes a classical result about apparent singularities of linear differential equations, and it gives rise to a surprisingly simple desingularization algorithm.
2108.07955
Jiang Yu
Yu Jiang, Lei Hu, Yongmei Zhang, and Xin Yang
WRICNet: A Weighted Rich-scale Inception Coder Network for Multi-Resolution Remote Sensing Image Change Detection
null
null
10.1109/TGRS.2022.3145652
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Most models for remote sensing image change detection perform well only on data sets of a specific resolution. To improve change detection on multi-resolution data sets, this article proposes a weighted rich-scale inception coder network (WRICNet), which effectively fuses shallow and deep multi-scale features. The weighted rich-scale inception module of the proposed network extracts shallow multi-scale features, while the weighted rich-scale coder module extracts deep multi-scale features. The weighted scale block assigns appropriate weights to features of different scales, which strengthens the expressive ability at the edges of the changing area. Performance experiments on the multi-resolution data set demonstrate that, compared with the competing methods, the proposed network further reduces false alarms outside the change area and missed alarms within it, and delineates the edge of the change area more accurately. The ablation study shows that the training strategy and the improvements introduced in this article enhance the effectiveness of change detection.
[ { "created": "Wed, 18 Aug 2021 02:56:11 GMT", "version": "v1" } ]
2022-05-04
[ [ "Jiang", "Yu", "" ], [ "Hu", "Lei", "" ], [ "Zhang", "Yongmei", "" ], [ "Yang", "Xin", "" ] ]
Most models for remote sensing image change detection perform well only on data sets of a specific resolution. To improve change detection on multi-resolution data sets, this article proposes a weighted rich-scale inception coder network (WRICNet), which effectively fuses shallow and deep multi-scale features. The weighted rich-scale inception module of the proposed network extracts shallow multi-scale features, while the weighted rich-scale coder module extracts deep multi-scale features. The weighted scale block assigns appropriate weights to features of different scales, which strengthens the expressive ability at the edges of the changing area. Performance experiments on the multi-resolution data set demonstrate that, compared with the competing methods, the proposed network further reduces false alarms outside the change area and missed alarms within it, and delineates the edge of the change area more accurately. The ablation study shows that the training strategy and the improvements introduced in this article enhance the effectiveness of change detection.
2210.09628
Fachrina Dewi Puspitasari
Fachrina Dewi Puspitasari and Lik-Hang Lee
Review of Persuasive User Interface as Strategy for Technology Addiction in Virtual Environments
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
In the era of virtuality, increasingly ubiquitous technology brings the challenge of excessive user dependency, also known as user addiction. Augmented reality (AR) and virtual reality (VR) have become increasingly integrated into daily life. Although discussions about the drawbacks of these technologies are abundant, the exploration of solutions is still rare. Thus, using the PRISMA methodology, this paper reviewed the literature on technology addiction and persuasive technology. After describing the key research trends, the paper summed up nine persuasive elements of user interfaces (UIs) that AR and VR developers could add to their apps to make them less addictive. Furthermore, this review paper encourages more research into a persuasive strategy for controlling user dependency in virtual-physical blended cyberspace.
[ { "created": "Tue, 18 Oct 2022 06:54:06 GMT", "version": "v1" } ]
2022-10-19
[ [ "Puspitasari", "Fachrina Dewi", "" ], [ "Lee", "Lik-Hang", "" ] ]
In the era of virtuality, increasingly ubiquitous technology brings the challenge of excessive user dependency, also known as user addiction. Augmented reality (AR) and virtual reality (VR) have become increasingly integrated into daily life. Although discussions about the drawbacks of these technologies are abundant, the exploration of solutions is still rare. Thus, using the PRISMA methodology, this paper reviewed the literature on technology addiction and persuasive technology. After describing the key research trends, the paper summed up nine persuasive elements of user interfaces (UIs) that AR and VR developers could add to their apps to make them less addictive. Furthermore, this review paper encourages more research into a persuasive strategy for controlling user dependency in virtual-physical blended cyberspace.
1208.2205
Mohammad Havaei
Sanaz Moshirian, Soheil Ghadami, Mohammad Havaei
Blind Channel Equalization
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Future services demand high data rates and quality. Thus, it is necessary to define new and robust algorithms to equalize channels and reduce noise in communications. Nowadays, new equalization algorithms are being developed to optimize the channel bandwidth and reduce noise, namely, Blind Channel Equalization. Conventional equalization methods minimizing the mean-square error generally require a training sequence accompanying the data sequence. In this study, the result of the Least Mean Square (LMS) algorithm applied to two given communication channels is analyzed. Considering the fact that blind equalizers do not require pilot signals to recover the transmitted data, implementations of four types of the Constant Modulus Algorithm (CMA) for blind equalization of the channels are shown. Finally, a comparison of the simulation results of LMS and CMA for the test channels is provided.
[ { "created": "Fri, 10 Aug 2012 15:35:01 GMT", "version": "v1" } ]
2012-08-13
[ [ "Moshirian", "Sanaz", "" ], [ "Ghadami", "Soheil", "" ], [ "Havaei", "Mohammad", "" ] ]
Future services demand high data rates and quality. Thus, it is necessary to define new and robust algorithms to equalize channels and reduce noise in communications. Nowadays, new equalization algorithms are being developed to optimize the channel bandwidth and reduce noise, namely, Blind Channel Equalization. Conventional equalization methods minimizing the mean-square error generally require a training sequence accompanying the data sequence. In this study, the result of the Least Mean Square (LMS) algorithm applied to two given communication channels is analyzed. Considering the fact that blind equalizers do not require pilot signals to recover the transmitted data, implementations of four types of the Constant Modulus Algorithm (CMA) for blind equalization of the channels are shown. Finally, a comparison of the simulation results of LMS and CMA for the test channels is provided.
2401.10480
Yiwei Li
Yiwei Li, Peiwen Yuan, Shaoxiong Feng, Boyuan Pan, Xinglin Wang, Bin Sun, Heda Wang, Kan Li
Escape Sky-high Cost: Early-stopping Self-Consistency for Multi-step Reasoning
ICLR 2024
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-consistency (SC) has been a widely used decoding strategy for chain-of-thought reasoning. Despite bringing significant performance improvements across a variety of multi-step reasoning tasks, it is a high-cost method that requires multiple sampling with the preset size. In this paper, we propose a simple and scalable sampling process, \textbf{E}arly-Stopping \textbf{S}elf-\textbf{C}onsistency (ESC), to greatly reduce the cost of SC without sacrificing performance. On this basis, one control scheme for ESC is further derived to dynamically choose the performance-cost balance for different tasks and models. To demonstrate ESC's effectiveness, we conducted extensive experiments on three popular categories of reasoning tasks: arithmetic, commonsense and symbolic reasoning over language models with varying scales. The empirical results show that ESC reduces the average number of sampling of chain-of-thought reasoning by a significant margin on six benchmarks, including MATH (-33.8%), GSM8K (-80.1%), StrategyQA (-76.8%), CommonsenseQA (-78.5%), Coin Flip (-84.2%) and Last Letters (-67.4%), while attaining comparable performances.
[ { "created": "Fri, 19 Jan 2024 04:03:59 GMT", "version": "v1" } ]
2024-01-22
[ [ "Li", "Yiwei", "" ], [ "Yuan", "Peiwen", "" ], [ "Feng", "Shaoxiong", "" ], [ "Pan", "Boyuan", "" ], [ "Wang", "Xinglin", "" ], [ "Sun", "Bin", "" ], [ "Wang", "Heda", "" ], [ "Li", "Kan", "" ] ]
Self-consistency (SC) has been a widely used decoding strategy for chain-of-thought reasoning. Despite bringing significant performance improvements across a variety of multi-step reasoning tasks, it is a high-cost method that requires multiple sampling with the preset size. In this paper, we propose a simple and scalable sampling process, \textbf{E}arly-Stopping \textbf{S}elf-\textbf{C}onsistency (ESC), to greatly reduce the cost of SC without sacrificing performance. On this basis, one control scheme for ESC is further derived to dynamically choose the performance-cost balance for different tasks and models. To demonstrate ESC's effectiveness, we conducted extensive experiments on three popular categories of reasoning tasks: arithmetic, commonsense and symbolic reasoning over language models with varying scales. The empirical results show that ESC reduces the average number of sampling of chain-of-thought reasoning by a significant margin on six benchmarks, including MATH (-33.8%), GSM8K (-80.1%), StrategyQA (-76.8%), CommonsenseQA (-78.5%), Coin Flip (-84.2%) and Last Letters (-67.4%), while attaining comparable performances.
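One plausible way to realize early stopping on top of self-consistency is to draw chain-of-thought samples in small windows and stop as soon as a window's answers agree, as sketched below. The window size, the unanimity rule, and the sample_answer stub are assumptions for illustration; the abstract does not spell out the stopping criterion.

from collections import Counter
import random

def sample_answer(question):
    # Stand-in for one stochastic chain-of-thought sample; replace with a model call.
    return random.choice(["42", "42", "42", "41"])

def early_stopping_self_consistency(question, window=4, max_samples=40):
    answers = []
    while len(answers) < max_samples:
        batch = [sample_answer(question) for _ in range(window)]
        answers.extend(batch)
        if len(set(batch)) == 1:        # a unanimous window triggers early stopping
            break
    majority, _ = Counter(answers).most_common(1)[0]
    return majority, len(answers)       # majority answer and number of samples used

print(early_stopping_self_consistency("What is 6 * 7?"))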
1106.5928
Gabriel Cristobal
Salvador Gabarda and Gabriel Cristobal
Image denoising assessment using anisotropic stack filtering
13 pages, 8 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a measure of anisotropy as a quality parameter to estimate the amount of noise in noisy images. The anisotropy of an image can be determined through a directional measure, using an appropriate statistical distribution of the information contained in the image. This new measure is achieved through a stack filtering paradigm. First, we define a local directional entropy, based on the distribution of 0's and 1's in the neighborhood of every pixel location of each stack level. Then the entropy variation of this directional entropy is used to define an anisotropic measure. The empirical results have shown that this measure can be regarded as an excellent image noise indicator, which is particularly relevant for quality assessment of denoising algorithms. The method has been evaluated with artificial and real-world degraded images.
[ { "created": "Wed, 29 Jun 2011 13:12:56 GMT", "version": "v1" } ]
2011-06-30
[ [ "Gabarda", "Salvador", "" ], [ "Cristobal", "Gabriel", "" ] ]
In this paper we propose a measure of anisotropy as a quality parameter to estimate the amount of noise in noisy images. The anisotropy of an image can be determined through a directional measure, using an appropriate statistical distribution of the information contained in the image. This new measure is achieved through a stack filtering paradigm. First, we define a local directional entropy, based on the distribution of 0's and 1's in the neighborhood of every pixel location of each stack level. Then the entropy variation of this directional entropy is used to define an anisotropic measure. The empirical results have shown that this measure can be regarded as an excellent image noise indicator, which is particularly relevant for quality assessment of denoising algorithms. The method has been evaluated with artificial and real-world degraded images.
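A rough numerical sketch of the directional-entropy idea follows: threshold the image into binary stack levels, estimate the entropy of the 0/1 distribution in small directional neighborhoods, and take the variation of that entropy across directions as an anisotropy score. The neighborhood shapes, the number of stack levels, and the final variance are illustrative assumptions, not the paper's exact estimator.

import numpy as np

def binary_entropy(p):
    # Shannon entropy of a Bernoulli(p) variable, in bits.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def directional_entropy(level, axis):
    # Fraction of 1's in a 3-pixel window along the given axis, then its mean entropy.
    windows = [np.roll(level, s, axis=axis) for s in (-1, 0, 1)]
    p_one = np.mean(windows, axis=0)
    return binary_entropy(p_one).mean()

def anisotropy_score(image, n_levels=8):
    # Threshold decomposition into binary stack levels, entropy per direction,
    # and the variance across directions as the anisotropy score.
    thresholds = np.linspace(image.min(), image.max(), n_levels + 2)[1:-1]
    per_direction = []
    for axis in (0, 1):                       # vertical and horizontal neighborhoods
        levels = [(image > t).astype(float) for t in thresholds]
        per_direction.append(np.mean([directional_entropy(lv, axis) for lv in levels]))
    return np.var(per_direction)

rng = np.random.default_rng(0)
ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # strongly oriented test image
print(anisotropy_score(ramp), anisotropy_score(ramp + 0.3 * rng.standard_normal(ramp.shape)))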
2402.00334
Zhongxia Yan
Zhongxia Yan, Han Zheng, Cathy Wu
Multi-agent Path Finding for Cooperative Autonomous Driving
7 pages, 3 figures, IEEE International Conference on Robotics and Automation (ICRA), 2024
null
null
null
cs.MA cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anticipating possible future deployment of connected and automated vehicles (CAVs), cooperative autonomous driving at intersections has been studied by many works in control theory and intelligent transportation across decades. Simultaneously, recent parallel works in robotics have devised efficient algorithms for multi-agent path finding (MAPF), though often in environments with simplified kinematics. In this work, we hybridize insights and algorithms from MAPF with the structure and heuristics of optimizing the crossing order of CAVs at signal-free intersections. We devise an optimal and complete algorithm, Order-based Search with Kinematics Arrival Time Scheduling (OBS-KATS), which significantly outperforms existing algorithms, fixed heuristics, and prioritized planning with KATS. The performance is maintained under different vehicle arrival rates, lane lengths, crossing speeds, and control horizon. Through ablations and dissections, we offer insight on the contributing factors to OBS-KATS's performance. Our work is directly applicable to many similarly scaled traffic and multi-robot scenarios with directed lanes.
[ { "created": "Thu, 1 Feb 2024 04:39:15 GMT", "version": "v1" } ]
2024-02-02
[ [ "Yan", "Zhongxia", "" ], [ "Zheng", "Han", "" ], [ "Wu", "Cathy", "" ] ]
Anticipating possible future deployment of connected and automated vehicles (CAVs), cooperative autonomous driving at intersections has been studied by many works in control theory and intelligent transportation across decades. Simultaneously, recent parallel works in robotics have devised efficient algorithms for multi-agent path finding (MAPF), though often in environments with simplified kinematics. In this work, we hybridize insights and algorithms from MAPF with the structure and heuristics of optimizing the crossing order of CAVs at signal-free intersections. We devise an optimal and complete algorithm, Order-based Search with Kinematics Arrival Time Scheduling (OBS-KATS), which significantly outperforms existing algorithms, fixed heuristics, and prioritized planning with KATS. The performance is maintained under different vehicle arrival rates, lane lengths, crossing speeds, and control horizon. Through ablations and dissections, we offer insight on the contributing factors to OBS-KATS's performance. Our work is directly applicable to many similarly scaled traffic and multi-robot scenarios with directed lanes.
2402.02554
Alon Zolfi
Oryan Yehezkel, Alon Zolfi, Amit Baras, Yuval Elovici, Asaf Shabtai
DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms in Vision Transformers
12 pages, 5 figures
null
null
null
cs.CV cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision transformers have contributed greatly to advancements in the computer vision domain, demonstrating state-of-the-art performance in diverse tasks (e.g., image classification, object detection). However, their high computational requirements grow quadratically with the number of tokens used. Token sparsification techniques have been proposed to address this issue. These techniques employ an input-dependent strategy, in which uninformative tokens are discarded from the computation pipeline, improving the model's efficiency. However, their dynamism and average-case assumption make them vulnerable to a new threat vector - carefully crafted adversarial examples capable of fooling the sparsification mechanism, resulting in worst-case performance. In this paper, we present DeSparsify, an attack targeting the availability of vision transformers that use token sparsification mechanisms. The attack aims to exhaust the operating system's resources, while maintaining its stealthiness. Our evaluation demonstrates the attack's effectiveness on three token sparsification techniques and examines the attack's transferability between them and its effect on the GPU resources. To mitigate the impact of the attack, we propose various countermeasures.
[ { "created": "Sun, 4 Feb 2024 15:59:35 GMT", "version": "v1" } ]
2024-02-07
[ [ "Yehezkel", "Oryan", "" ], [ "Zolfi", "Alon", "" ], [ "Baras", "Amit", "" ], [ "Elovici", "Yuval", "" ], [ "Shabtai", "Asaf", "" ] ]
Vision transformers have contributed greatly to advancements in the computer vision domain, demonstrating state-of-the-art performance in diverse tasks (e.g., image classification, object detection). However, their high computational requirements grow quadratically with the number of tokens used. Token sparsification techniques have been proposed to address this issue. These techniques employ an input-dependent strategy, in which uninformative tokens are discarded from the computation pipeline, improving the model's efficiency. However, their dynamism and average-case assumption make them vulnerable to a new threat vector - carefully crafted adversarial examples capable of fooling the sparsification mechanism, resulting in worst-case performance. In this paper, we present DeSparsify, an attack targeting the availability of vision transformers that use token sparsification mechanisms. The attack aims to exhaust the operating system's resources, while maintaining its stealthiness. Our evaluation demonstrates the attack's effectiveness on three token sparsification techniques and examines the attack's transferability between them and its effect on the GPU resources. To mitigate the impact of the attack, we propose various countermeasures.
1705.07834
Debadeepta Dey
Sanjiban Choudhury, Ashish Kapoor, Gireeja Ranade, Sebastian Scherer, Debadeepta Dey
Adaptive Information Gathering via Imitation Learning
Robotics Science and Systems, 2017
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the adaptive information gathering problem, a policy is required to select an informative sensing location using the history of measurements acquired thus far. While there is an extensive amount of prior work investigating effective practical approximations using variants of Shannon's entropy, the efficacy of such policies heavily depends on the geometric distribution of objects in the world. On the other hand, the principled approach of employing online POMDP solvers is rendered impractical by the need to explicitly sample online from a posterior distribution of world maps. We present a novel data-driven imitation learning framework to efficiently train information gathering policies. The policy imitates a clairvoyant oracle - an oracle that at train time has full knowledge about the world map and can compute maximally informative sensing locations. We analyze the learnt policy by showing that offline imitation of a clairvoyant oracle is implicitly equivalent to online oracle execution in conjunction with posterior sampling. This observation allows us to obtain powerful near-optimality guarantees for information gathering problems possessing an adaptive sub-modularity property. As demonstrated on a spectrum of 2D and 3D exploration problems, the trained policies enjoy the best of both worlds - they adapt to different world map distributions while being computationally inexpensive to evaluate.
[ { "created": "Mon, 22 May 2017 16:28:55 GMT", "version": "v1" } ]
2017-05-23
[ [ "Choudhury", "Sanjiban", "" ], [ "Kapoor", "Ashish", "" ], [ "Ranade", "Gireeja", "" ], [ "Scherer", "Sebastian", "" ], [ "Dey", "Debadeepta", "" ] ]
In the adaptive information gathering problem, a policy is required to select an informative sensing location using the history of measurements acquired thus far. While there is an extensive amount of prior work investigating effective practical approximations using variants of Shannon's entropy, the efficacy of such policies heavily depends on the geometric distribution of objects in the world. On the other hand, the principled approach of employing online POMDP solvers is rendered impractical by the need to explicitly sample online from a posterior distribution of world maps. We present a novel data-driven imitation learning framework to efficiently train information gathering policies. The policy imitates a clairvoyant oracle - an oracle that at train time has full knowledge about the world map and can compute maximally informative sensing locations. We analyze the learnt policy by showing that offline imitation of a clairvoyant oracle is implicitly equivalent to online oracle execution in conjunction with posterior sampling. This observation allows us to obtain powerful near-optimality guarantees for information gathering problems possessing an adaptive sub-modularity property. As demonstrated on a spectrum of 2D and 3D exploration problems, the trained policies enjoy the best of both worlds - they adapt to different world map distributions while being computationally inexpensive to evaluate.
1311.3336
Alexander Sprintson
C. Jasson Casey, Andrew Sutton, Gabriel Dos Reis, and Alex Sprintson
Eliminating Network Protocol Vulnerabilities Through Abstraction and Systems Language Design
null
null
null
null
cs.NI cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incorrect implementations of network protocol message specifications affect the stability, security, and cost of network system development. Most implementation defects fall into one of three categories of well defined message constraints. However, the general process of constructing network protocol stacks and systems does not capture these categorical constraints. We introduce a systems programming language with new abstractions that capture these constraints. Safe and efficient implementations of standard message handling operations are synthesized by our compiler, and whole-program analysis is used to ensure constraints are never violated. We present language examples using the OpenFlow protocol.
[ { "created": "Wed, 13 Nov 2013 23:08:12 GMT", "version": "v1" } ]
2013-11-15
[ [ "Casey", "C. Jasson", "" ], [ "Sutton", "Andrew", "" ], [ "Reis", "Gabriel Dos", "" ], [ "Sprintson", "Alex", "" ] ]
Incorrect implementations of network protocol message specifications affect the stability, security, and cost of network system development. Most implementation defects fall into one of three categories of well-defined message constraints. However, the general process of constructing network protocol stacks and systems does not capture these categorical constraints. We introduce a systems programming language with new abstractions that capture these constraints. Safe and efficient implementations of standard message handling operations are synthesized by our compiler, and whole-program analysis is used to ensure constraints are never violated. We present language examples using the OpenFlow protocol.
2007.09202
Gopinath Mishra
Arijit Bishnu, Arijit Ghosh, Gopinath Mishra and Manaswi Paraashar
Query Complexity of Global Minimum Cut
15 pages
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we resolve the query complexity of the global minimum cut problem for a graph by designing a randomized algorithm for approximating the size of the minimum cut in a graph, where the graph can be accessed through local queries like {\sc Degree}, {\sc Neighbor}, and {\sc Adjacency} queries. Given $\epsilon \in (0,1)$, the algorithm with high probability outputs an estimate $\hat{t}$ satisfying $(1-\epsilon) t \leq \hat{t} \leq (1+\epsilon) t$, where $t$ is the size of the minimum cut and $m$ is the number of edges in the graph. The expected number of local queries used by our algorithm is $\min\left\{m+n,\frac{m}{t}\right\}\mbox{poly}\left(\log n,\frac{1}{\epsilon}\right)$, where $n$ is the number of vertices in the graph. Eden and Rosenbaum showed that $\Omega(m/t)$ local queries are required for approximating the size of the minimum cut in graphs. These two results together resolve the query complexity of estimating the size of the minimum cut in graphs using local queries. Building on the lower bound of Eden and Rosenbaum, we show that, for all $t \in \mathbb{N}$, $\Omega(m)$ local queries are required to decide whether the size of the minimum cut in the graph is $t$ or $t-2$. Also, we show that, for any $t \in \mathbb{N}$, $\Omega(m)$ local queries are required to find all the minimum cut edges even if it is promised that the input graph has a minimum cut of size $t$. Both of our lower bounds hold for randomized algorithms, and even if we can make {\sc Random Edge} queries in addition to local queries.
[ { "created": "Fri, 17 Jul 2020 19:37:28 GMT", "version": "v1" }, { "created": "Tue, 11 Aug 2020 09:59:49 GMT", "version": "v2" } ]
2020-08-12
[ [ "Bishnu", "Arijit", "" ], [ "Ghosh", "Arijit", "" ], [ "Mishra", "Gopinath", "" ], [ "Paraashar", "Manaswi", "" ] ]
In this work, we resolve the query complexity of the global minimum cut problem for a graph by designing a randomized algorithm for approximating the size of the minimum cut in a graph, where the graph can be accessed through local queries like {\sc Degree}, {\sc Neighbor}, and {\sc Adjacency} queries. Given $\epsilon \in (0,1)$, the algorithm with high probability outputs an estimate $\hat{t}$ satisfying $(1-\epsilon) t \leq \hat{t} \leq (1+\epsilon) t$, where $t$ is the size of the minimum cut and $m$ is the number of edges in the graph. The expected number of local queries used by our algorithm is $\min\left\{m+n,\frac{m}{t}\right\}\mbox{poly}\left(\log n,\frac{1}{\epsilon}\right)$, where $n$ is the number of vertices in the graph. Eden and Rosenbaum showed that $\Omega(m/t)$ local queries are required for approximating the size of the minimum cut in graphs. These two results together resolve the query complexity of estimating the size of the minimum cut in graphs using local queries. Building on the lower bound of Eden and Rosenbaum, we show that, for all $t \in \mathbb{N}$, $\Omega(m)$ local queries are required to decide whether the size of the minimum cut in the graph is $t$ or $t-2$. Also, we show that, for any $t \in \mathbb{N}$, $\Omega(m)$ local queries are required to find all the minimum cut edges even if it is promised that the input graph has a minimum cut of size $t$. Both of our lower bounds hold for randomized algorithms, and even if we can make {\sc Random Edge} queries in addition to local queries.
2008.12709
Roman Shapovalov
David Novotny, Roman Shapovalov, Andrea Vedaldi
Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction
Published at NeurIPS 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose the Canonical 3D Deformer Map, a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects. Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings, combining their individual advantages. In particular, it learns to associate each image pixel with a deformation model of the corresponding 3D object point which is canonical, i.e. intrinsic to the identity of the point and shared across objects of the category. The result is a method that, given only sparse 2D supervision at training time, can, at test time, reconstruct the 3D shape and texture of objects from single views, while establishing meaningful dense correspondences between object instances. It also achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
[ { "created": "Fri, 28 Aug 2020 15:44:05 GMT", "version": "v1" }, { "created": "Sun, 6 Dec 2020 11:59:06 GMT", "version": "v2" } ]
2020-12-08
[ [ "Novotny", "David", "" ], [ "Shapovalov", "Roman", "" ], [ "Vedaldi", "Andrea", "" ] ]
We propose the Canonical 3D Deformer Map, a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects. Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings, combining their individual advantages. In particular, it learns to associate each image pixel with a deformation model of the corresponding 3D object point which is canonical, i.e. intrinsic to the identity of the point and shared across objects of the category. The result is a method that, given only sparse 2D supervision at training time, can, at test time, reconstruct the 3D shape and texture of objects from single views, while establishing meaningful dense correspondences between object instances. It also achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
2104.11706
Max Mowbray Mr
Max Mowbray, Panagiotis Petsagkourakis, Ehecatl Antonio del R\'io Chanona, Dongda Zhang
Safe Chance Constrained Reinforcement Learning for Batch Process Control
null
null
null
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Reinforcement Learning (RL) controllers have generated excitement within the control community. The primary advantage of RL controllers relative to existing methods is their ability to optimize uncertain systems without explicit assumptions about the process uncertainty. Recent focus on engineering applications has been directed towards the development of safe RL controllers. Previous works have proposed approaches to account for constraint satisfaction through constraint tightening from the domain of stochastic model predictive control. Here, we extend these approaches to account for plant-model mismatch. Specifically, we propose a data-driven approach that utilizes Gaussian processes for the offline simulation model and uses the associated posterior uncertainty prediction to account for joint chance constraints and plant-model mismatch. The method is benchmarked against nonlinear model predictive control via case studies. The results demonstrate the ability of the methodology to account for process uncertainty, enabling satisfaction of joint chance constraints even in the presence of plant-model mismatch.
[ { "created": "Fri, 23 Apr 2021 16:48:46 GMT", "version": "v1" }, { "created": "Mon, 6 Dec 2021 11:29:43 GMT", "version": "v2" } ]
2021-12-07
[ [ "Mowbray", "Max", "" ], [ "Petsagkourakis", "Panagiotis", "" ], [ "Chanona", "Ehecatl Antonio del Río", "" ], [ "Zhang", "Dongda", "" ] ]
Reinforcement Learning (RL) controllers have generated excitement within the control community. The primary advantage of RL controllers relative to existing methods is their ability to optimize uncertain systems without explicit assumptions about the process uncertainty. Recent focus on engineering applications has been directed towards the development of safe RL controllers. Previous works have proposed approaches to account for constraint satisfaction through constraint tightening from the domain of stochastic model predictive control. Here, we extend these approaches to account for plant-model mismatch. Specifically, we propose a data-driven approach that utilizes Gaussian processes for the offline simulation model and uses the associated posterior uncertainty prediction to account for joint chance constraints and plant-model mismatch. The method is benchmarked against nonlinear model predictive control via case studies. The results demonstrate the ability of the methodology to account for process uncertainty, enabling satisfaction of joint chance constraints even in the presence of plant-model mismatch.
1708.05448
Philip Thomas
Philip S. Thomas, Bruno Castro da Silva, Andrew G. Barto, and Emma Brunskill
On Ensuring that Intelligent Machines Are Well-Behaved
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning algorithms are everywhere, ranging from simple data analysis and pattern recognition tools used across the sciences to complex systems that achieve super-human performance on various tasks. Ensuring that they are well-behaved---that they do not, for example, cause harm to humans or act in a racist or sexist way---is therefore not a hypothetical problem to be dealt with in the future, but a pressing one that we address here. We propose a new framework for designing machine learning algorithms that simplifies the problem of specifying and regulating undesirable behaviors. To show the viability of this new framework, we use it to create new machine learning algorithms that preclude the sexist and harmful behaviors exhibited by standard machine learning algorithms in our experiments. Our framework for designing machine learning algorithms simplifies the safe and responsible application of machine learning.
[ { "created": "Thu, 17 Aug 2017 21:53:47 GMT", "version": "v1" } ]
2017-08-21
[ [ "Thomas", "Philip S.", "" ], [ "da Silva", "Bruno Castro", "" ], [ "Barto", "Andrew G.", "" ], [ "Brunskill", "Emma", "" ] ]
Machine learning algorithms are everywhere, ranging from simple data analysis and pattern recognition tools used across the sciences to complex systems that achieve super-human performance on various tasks. Ensuring that they are well-behaved---that they do not, for example, cause harm to humans or act in a racist or sexist way---is therefore not a hypothetical problem to be dealt with in the future, but a pressing one that we address here. We propose a new framework for designing machine learning algorithms that simplifies the problem of specifying and regulating undesirable behaviors. To show the viability of this new framework, we use it to create new machine learning algorithms that preclude the sexist and harmful behaviors exhibited by standard machine learning algorithms in our experiments. Our framework for designing machine learning algorithms simplifies the safe and responsible application of machine learning.
2112.07130
Jean Belo Klamti
Jean Belo Klamti and M. Anwar Hasan
A code-based hybrid signcryption scheme
We made some improvements in the paper
null
null
null
cs.CR cs.IT math.IT math.NT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key encapsulation mechanism (KEM) that takes as input an arbitrary string, i.e., a tag, is known as a tag-KEM, while a scheme that combines signature and encryption is called signcryption. In this paper, we present a code-based signcryption tag-KEM scheme. We utilize a code-based signature and an IND-CCA2 (adaptive chosen ciphertext attack) secure version of McEliece's encryption scheme. The proposed scheme uses an equivalent subcode as a public code for the receiver, making the NP-completeness of the subcode equivalence problem one of our main security assumptions. We then build on the signcryption tag-KEM to design a code-based hybrid signcryption scheme. A hybrid scheme deploys asymmetric- as well as symmetric-key encryption. We give security analyses of both our schemes in the standard model and prove that they are secure against IND-CCA2 (indistinguishability under adaptive chosen ciphertext attack) and SUF-CMA (strong existential unforgeability under chosen message attack).
[ { "created": "Tue, 14 Dec 2021 03:02:24 GMT", "version": "v1" }, { "created": "Tue, 21 Mar 2023 14:05:21 GMT", "version": "v2" } ]
2023-03-22
[ [ "Klamti", "Jean Belo", "" ], [ "Hasan", "M. Anwar", "" ] ]
A key encapsulation mechanism (KEM) that takes as input an arbitrary string, i.e., a tag, is known as a tag-KEM, while a scheme that combines signature and encryption is called signcryption. In this paper, we present a code-based signcryption tag-KEM scheme. We utilize a code-based signature and an IND-CCA2 (adaptive chosen ciphertext attack) secure version of McEliece's encryption scheme. The proposed scheme uses an equivalent subcode as a public code for the receiver, making the NP-completeness of the subcode equivalence problem one of our main security assumptions. We then build on the signcryption tag-KEM to design a code-based hybrid signcryption scheme. A hybrid scheme deploys asymmetric- as well as symmetric-key encryption. We give security analyses of both our schemes in the standard model and prove that they are secure against IND-CCA2 (indistinguishability under adaptive chosen ciphertext attack) and SUF-CMA (strong existential unforgeability under chosen message attack).
1601.05748
Wentao Wu
Wentao Wu, Jeffrey F. Naughton, Harneet Singh
Sampling-Based Query Re-Optimization
This is the extended version of a paper with the same title and authors that appears in the Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD 2016)
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite decades of work, query optimizers still make mistakes on "difficult" queries because of bad cardinality estimates, often due to the interaction of multiple predicates and correlations in the data. In this paper, we propose a low-cost post-processing step that can take a plan produced by the optimizer, detect when it is likely to have made such a mistake, and take steps to fix it. Specifically, our solution is a sampling-based iterative procedure that requires almost no changes to the original query optimizer or query evaluation mechanism of the system. We show that this indeed imposes low overhead and catches cases where three widely used optimizers (PostgreSQL and two commercial systems) make large errors.
[ { "created": "Thu, 21 Jan 2016 18:46:18 GMT", "version": "v1" } ]
2016-01-22
[ [ "Wu", "Wentao", "" ], [ "Naughton", "Jeffrey F.", "" ], [ "Singh", "Harneet", "" ] ]
Despite decades of work, query optimizers still make mistakes on "difficult" queries because of bad cardinality estimates, often due to the interaction of multiple predicates and correlations in the data. In this paper, we propose a low-cost post-processing step that can take a plan produced by the optimizer, detect when it is likely to have made such a mistake, and take steps to fix it. Specifically, our solution is a sampling-based iterative procedure that requires almost no changes to the original query optimizer or query evaluation mechanism of the system. We show that this indeed imposes low overhead and catches cases where three widely used optimizers (PostgreSQL and two commercial systems) make large errors.
2403.01738
Qihe Huang
Zhengyang Zhou, Qihe Huang, Binwu Wang, Jianpeng Hou, Kuo Yang, Yuxuan Liang, Yang Wang
ComS2T: A complementary spatiotemporal learning system for data-adaptive model evolution
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatiotemporal (ST) learning has become a crucial technique for enabling smart cities and sustainable urban development. Current ST learning models capture heterogeneity via various spatial convolution and temporal evolution blocks. However, rapid urbanization leads to fluctuating distributions in urban data and city structures over short periods, so existing methods suffer from generalization and data adaptation issues. Despite existing efforts, current methods fail to deal with newly arriving observations, and those with generalization capacity are limited by the need for repeated training. Motivated by complementary learning in neuroscience, we introduce a prompt-based complementary spatiotemporal learning framework, termed ComS2T, to empower the evolution of models for data adaptation. ComS2T partitions the neural architecture into a stable neocortex for consolidating historical memory and a dynamic hippocampus for new knowledge updates. We first disentangle the two disjoint structures into stable and dynamic weights, and then train spatial and temporal prompts by characterizing the distribution of main observations so that the prompts adapt to new data. This data-adaptive prompt mechanism, combined with a two-stage training process, facilitates fine-tuning of the neural architecture conditioned on prompts, thereby enabling efficient adaptation during testing. Extensive experiments validate the efficacy of ComS2T in adapting to various spatiotemporal out-of-distribution scenarios while maintaining efficient inference capabilities.
[ { "created": "Mon, 4 Mar 2024 05:31:29 GMT", "version": "v1" } ]
2024-03-05
[ [ "Zhou", "Zhengyang", "" ], [ "Huang", "Qihe", "" ], [ "Wang", "Binwu", "" ], [ "Hou", "Jianpeng", "" ], [ "Yang", "Kuo", "" ], [ "Liang", "Yuxuan", "" ], [ "Wang", "Yang", "" ] ]
Spatiotemporal (ST) learning has become a crucial technique for enabling smart cities and sustainable urban development. Current ST learning models capture heterogeneity via various spatial convolution and temporal evolution blocks. However, rapid urbanization leads to fluctuating distributions in urban data and city structures over short periods, so existing methods suffer from generalization and data adaptation issues. Despite existing efforts, current methods fail to deal with newly arriving observations, and those with generalization capacity are limited by the need for repeated training. Motivated by complementary learning in neuroscience, we introduce a prompt-based complementary spatiotemporal learning framework, termed ComS2T, to empower the evolution of models for data adaptation. ComS2T partitions the neural architecture into a stable neocortex for consolidating historical memory and a dynamic hippocampus for new knowledge updates. We first disentangle the two disjoint structures into stable and dynamic weights, and then train spatial and temporal prompts by characterizing the distribution of main observations so that the prompts adapt to new data. This data-adaptive prompt mechanism, combined with a two-stage training process, facilitates fine-tuning of the neural architecture conditioned on prompts, thereby enabling efficient adaptation during testing. Extensive experiments validate the efficacy of ComS2T in adapting to various spatiotemporal out-of-distribution scenarios while maintaining efficient inference capabilities.
1801.03003
Lise Verlaet
Lise Verlaet (LERASS), Sidonie Gallot (LERASS)
Between collective intelligence and semantic web: hypermediating sites. Contribution to technologies of intelligence
null
EJDE - Electronic Journal of Digital Enterprise (ISSN: 1776-2960), Academic e-Journal eJ.D.E. (www.scientifics.fr/ejde), 2013, http://www.scientifics.fr/ejde/html/1776-2960%20R374.htm
null
null
cs.AI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a new form of access to knowledge through what we call "hypermediator websites". These hypermediator sites sit between information devices that merely digitize book culture and a "real" hypertext writing format.
[ { "created": "Mon, 8 Jan 2018 13:53:41 GMT", "version": "v1" } ]
2018-01-10
[ [ "Verlaet", "Lise", "", "LERASS" ], [ "Gallot", "Sidonie", "", "LERASS" ] ]
In this paper, we present a new form of access to knowledge through what we call "hypermediator websites". These hypermediator sites sit between information devices that merely digitize book culture and a "real" hypertext writing format.
1403.1896
Tao Qin Dr.
Weidong Ma, Bo Zheng, Tao Qin, Pingzhong Tang, Tie-Yan Liu
Online Mechanism Design for Cloud Computing
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we study the problem of online mechanism design for resource allocation and pricing in cloud computing (RAPCC). We show that in general the allocation problems in RAPCC are NP-hard, and therefore we focus on designing dominant-strategy incentive compatible (DSIC) mechanisms with good competitive ratios compared to the offline optimal allocation (with prior knowledge about the future jobs). We propose two kinds of DSIC online mechanisms. The first mechanism, which is based on a greedy allocation rule and leverages a priority function for allocation, is very fast and has a tight competitive bound. We discuss several priority functions, including exponential and linear priority functions, and show that the former has a better competitive ratio. The second mechanism, which is based on a dynamic program for allocation, also has a tight competitive ratio and performs better than the first one when the maximum demand of cloud customers is close to the capacity of the cloud provider.
[ { "created": "Fri, 7 Mar 2014 23:17:15 GMT", "version": "v1" } ]
2014-03-11
[ [ "Ma", "Weidong", "" ], [ "Zheng", "Bo", "" ], [ "Qin", "Tao", "" ], [ "Tang", "Pingzhong", "" ], [ "Liu", "Tie-Yan", "" ] ]
In this work, we study the problem of online mechanism design for resource allocation and pricing in cloud computing (RAPCC). We show that in general the allocation problems in RAPCC are NP-hard, and therefore we focus on designing dominant-strategy incentive compatible (DSIC) mechanisms with good competitive ratios compared to the offline optimal allocation (with prior knowledge about the future jobs). We propose two kinds of DSIC online mechanisms. The first mechanism, which is based on a greedy allocation rule and leverages a priority function for allocation, is very fast and has a tight competitive bound. We discuss several priority functions, including exponential and linear priority functions, and show that the former has a better competitive ratio. The second mechanism, which is based on a dynamic program for allocation, also has a tight competitive ratio and performs better than the first one when the maximum demand of cloud customers is close to the capacity of the cloud provider.
2408.00673
Shailendra Bhandari
Shailendra Bhandari, Pedro Lincastre and Pedro Lind
Modeling stochastic eye tracking data: A comparison of quantum generative adversarial networks and Markov models
8 pages
null
10.1145/3638530.3664134
null
cs.NE quant-ph
http://creativecommons.org/licenses/by/4.0/
We explore the use of quantum generative adversarial networks (QGANs) for modeling eye movement velocity data. We assess whether the advanced computational capabilities of QGANs can enhance the modeling of complex stochastic distributions beyond traditional mathematical models, particularly the Markov model. The findings indicate that while QGANs demonstrate potential in approximating complex distributions, the Markov model consistently outperforms them in accurately replicating the real data distribution. This comparison underlines the challenges and avenues for refinement in time series data generation using quantum computing techniques. It emphasizes the need for further optimization of quantum models to better align with real-world data characteristics.
[ { "created": "Thu, 1 Aug 2024 16:15:07 GMT", "version": "v1" } ]
2024-08-02
[ [ "Bhandari", "Shailendra", "" ], [ "Lincastre", "Pedro", "" ], [ "Lind", "Pedro", "" ] ]
We explore the use of quantum generative adversarial networks (QGANs) for modeling eye movement velocity data. We assess whether the advanced computational capabilities of QGANs can enhance the modeling of complex stochastic distributions beyond traditional mathematical models, particularly the Markov model. The findings indicate that while QGANs demonstrate potential in approximating complex distributions, the Markov model consistently outperforms them in accurately replicating the real data distribution. This comparison underlines the challenges and avenues for refinement in time series data generation using quantum computing techniques. It emphasizes the need for further optimization of quantum models to better align with real-world data characteristics.
2210.17161
Thokozile Manaka Ms
Thokozile Manaka, Terence van Zyl, Deepak Kar
Improving Cause-of-Death Classification from Verbal Autopsy Reports
Southern African Conference for Artificial Intelligence Research
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
In many lower- and middle-income countries, including South Africa, data access in health facilities is restricted due to patient privacy and confidentiality policies. Further, since clinical data is unique to individual institutions and laboratories, there are insufficient data annotation standards and conventions. As a result of this scarcity of textual data, natural language processing (NLP) techniques have fared poorly in the health sector. A cause of death (COD) is often determined from a verbal autopsy (VA) report in places without reliable death registration systems. A non-clinician field worker completes a VA report using a set of standardized questions as a guide to uncover symptoms of a COD. This analysis focuses on the textual part of the VA report as a case study to address the challenge of adapting NLP techniques to the health domain. We present a system that relies on two transfer learning paradigms, monolingual learning and multi-source domain adaptation, to improve VA narratives for the target task of COD classification. We use Bidirectional Encoder Representations from Transformers (BERT) and Embeddings from Language Models (ELMo) models pre-trained on the general English and health domains to extract features from the VA narratives. Our findings suggest that this transfer learning system improves COD classification and that the narrative text contains valuable information for determining a COD. Our results further show that combining binary VA features with narrative text features learned via this framework boosts COD classification.
[ { "created": "Mon, 31 Oct 2022 09:14:08 GMT", "version": "v1" } ]
2022-11-01
[ [ "Manaka", "Thokozile", "" ], [ "van Zyl", "Terence", "" ], [ "Kar", "Deepak", "" ] ]
In many lower- and middle-income countries, including South Africa, data access in health facilities is restricted due to patient privacy and confidentiality policies. Further, since clinical data is unique to individual institutions and laboratories, there are insufficient data annotation standards and conventions. As a result of this scarcity of textual data, natural language processing (NLP) techniques have fared poorly in the health sector. A cause of death (COD) is often determined from a verbal autopsy (VA) report in places without reliable death registration systems. A non-clinician field worker completes a VA report using a set of standardized questions as a guide to uncover symptoms of a COD. This analysis focuses on the textual part of the VA report as a case study to address the challenge of adapting NLP techniques to the health domain. We present a system that relies on two transfer learning paradigms, monolingual learning and multi-source domain adaptation, to improve VA narratives for the target task of COD classification. We use Bidirectional Encoder Representations from Transformers (BERT) and Embeddings from Language Models (ELMo) models pre-trained on the general English and health domains to extract features from the VA narratives. Our findings suggest that this transfer learning system improves COD classification and that the narrative text contains valuable information for determining a COD. Our results further show that combining binary VA features with narrative text features learned via this framework boosts COD classification.
2008.09020
Md. Khaledur Rahman
Md. Khaledur Rahman
Training Sensitivity in Graph Isomorphism Network
Accepted for publication in CIKM 2020
CIKM 2020
null
null
cs.LG cs.SI stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
A graph neural network (GNN) is a popular tool for learning lower-dimensional representations of a graph. It facilitates the applicability of machine learning tasks on graphs by incorporating domain-specific features. There are various options for the underlying procedures (such as optimization functions, activation functions, etc.) that can be considered in the implementation of a GNN. However, most of the existing tools are confined to one approach without any analysis. Thus, this emerging field lacks a robust implementation that accounts for the highly irregular structure of real-world graphs. In this paper, we attempt to fill this gap by studying various alternative functions for each respective module using a diverse set of benchmark datasets. Our empirical results suggest that the commonly used underlying techniques do not always perform well in capturing the overall structure of a set of graphs.
[ { "created": "Wed, 19 Aug 2020 03:50:28 GMT", "version": "v1" } ]
2020-08-21
[ [ "Rahman", "Md. Khaledur", "" ] ]
A graph neural network (GNN) is a popular tool for learning lower-dimensional representations of a graph. It facilitates the applicability of machine learning tasks on graphs by incorporating domain-specific features. There are various options for the underlying procedures (such as optimization functions, activation functions, etc.) that can be considered in the implementation of a GNN. However, most of the existing tools are confined to one approach without any analysis. Thus, this emerging field lacks a robust implementation that accounts for the highly irregular structure of real-world graphs. In this paper, we attempt to fill this gap by studying various alternative functions for each respective module using a diverse set of benchmark datasets. Our empirical results suggest that the commonly used underlying techniques do not always perform well in capturing the overall structure of a set of graphs.
1809.10841
Jun Zhao Dr
Jun Zhao and Ulrik Lyngs and Nigel Shadbolt
What privacy concerns do parents have about children's mobile apps, and how can they stay SHARP?
13 pages, 12 figures, report
null
null
null
cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
Tablet computers are widely used by young children. A report in 2016 shows that children aged 5 to 15 years are spending more time online than watching TV. A 2017 update of the same report shows that parents are becoming more concerned about their children's online risks compared to the previous year. Parents are working hard to protect their children's online safety. An increasing number of parents are setting up content filtering at home or having regular discussions with their children regarding online risks. However, although risks related to Social Media platforms or social video sharing sites (like YouTube) are widely known, risks posed by mobile applications or games (i.e. `apps') are less known. Behind the cute characters, apps used by children may not only expose them to age-inappropriate content or excessive in-app promotions, but may also make a large amount of their personal information accessible to the third-party online marketing and advertising industry. Such practices are not unique to children's apps, but young children are probably less capable of resisting the resulting personalised advertisements and game promotions. In this report, we present findings from our online survey of 220 parents with children aged 6-10, mainly from the U.K. and other western countries, regarding their privacy concerns and expectations of their children's use of mobile apps. Parents play a key role in children's use of digital technology, especially for children under 10 years old. Recent reports have highlighted parents' lack of sufficient support for choosing appropriate digital content for their children. Our report sheds some initial light on parents' key struggles and points to immediate steps and possible areas of future development.
[ { "created": "Fri, 28 Sep 2018 03:30:22 GMT", "version": "v1" } ]
2018-10-01
[ [ "Zhao", "Jun", "" ], [ "Lyngs", "Ulrik", "" ], [ "Shadbolt", "Nigel", "" ] ]
Tablet computers are widely used by young children. A report in 2016 shows that children aged 5 to 15 years are spending more time online than watching TV. A 2017 update of the same report shows that parents are becoming more concerned about their children's online risks compared to the previous year. Parents are working hard to protect their children's online safety. An increasing number of parents are setting up content filtering at home or having regular discussions with their children regarding online risks. However, although risks related to Social Media platforms or social video sharing sites (like YouTube) are widely known, risks posed by mobile applications or games (i.e. `apps') are less known. Behind the cute characters, apps used by children may not only expose them to age-inappropriate content or excessive in-app promotions, but may also make a large amount of their personal information accessible to the third-party online marketing and advertising industry. Such practices are not unique to children's apps, but young children are probably less capable of resisting the resulting personalised advertisements and game promotions. In this report, we present findings from our online survey of 220 parents with children aged 6-10, mainly from the U.K. and other western countries, regarding their privacy concerns and expectations of their children's use of mobile apps. Parents play a key role in children's use of digital technology, especially for children under 10 years old. Recent reports have highlighted parents' lack of sufficient support for choosing appropriate digital content for their children. Our report sheds some initial light on parents' key struggles and points to immediate steps and possible areas of future development.
2104.14928
Joris Gu\'erin
Joris Guerin, Kevin Delmas and J\'er\'emie Guiochet
Certifying Emergency Landing for Safe Urban UAV
8 pages, 4 figures, 4 tables. To appear in the proceedings of the 7th International Workshop on Safety and Security of Intelligent Vehicles (SSIV 2021) at DSN 2021
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned Aerial Vehicles (UAVs) have the potential to be used for many applications in urban environments. However, allowing UAVs to fly above densely populated areas raises concerns regarding safety. One of the main safety issues is the possibility for a failure to cause the loss of navigation capabilities, which can result in the UAV falling/landing in hazardous areas such as busy roads, where it can cause fatal accidents. Current standards, such as the SORA published in 2019, do not consider applicable mitigation techniques to handle this kind of hazardous situation. Consequently, certifying UAV urban operations implies demonstrating very high levels of integrity, which results in prohibitive development costs. To address this issue, this paper explores the concept of Emergency Landing (EL). A safety analysis is conducted on an urban UAV case study, and requirements are proposed to enable the integration of EL as an acceptable mitigation means in the SORA. Based on these requirements, an EL implementation was developed, together with a runtime monitoring architecture to enhance confidence in the system. Preliminary qualitative results are presented, and the monitor seems to be able to detect errors of the EL system effectively.
[ { "created": "Fri, 30 Apr 2021 11:47:46 GMT", "version": "v1" } ]
2021-05-03
[ [ "Guerin", "Joris", "" ], [ "Delmas", "Kevin", "" ], [ "Guiochet", "Jérémie", "" ] ]
Unmanned Aerial Vehicles (UAVs) have the potential to be used for many applications in urban environments. However, allowing UAVs to fly above densely populated areas raises concerns regarding safety. One of the main safety issues is the possibility for a failure to cause the loss of navigation capabilities, which can result in the UAV falling/landing in hazardous areas such as busy roads, where it can cause fatal accidents. Current standards, such as the SORA published in 2019, do not consider applicable mitigation techniques to handle this kind of hazardous situation. Consequently, certifying UAV urban operations implies demonstrating very high levels of integrity, which results in prohibitive development costs. To address this issue, this paper explores the concept of Emergency Landing (EL). A safety analysis is conducted on an urban UAV case study, and requirements are proposed to enable the integration of EL as an acceptable mitigation means in the SORA. Based on these requirements, an EL implementation was developed, together with a runtime monitoring architecture to enhance confidence in the system. Preliminary qualitative results are presented, and the monitor seems to be able to detect errors of the EL system effectively.
2110.09470
Meera Hahn
Meera Hahn, Devendra Chaplot, Shubham Tulsiani, Mustafa Mukadam, James M. Rehg, Abhinav Gupta
No RL, No Simulation: Learning to Navigate without Navigating
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Most prior methods for learning navigation policies require access to simulation environments, as they need online policy interaction and rely on ground-truth maps for rewards. However, building simulators is expensive (requires manual effort for each and every scene) and creates challenges in transferring learned policies to robotic platforms in the real-world, due to the sim-to-real domain gap. In this paper, we pose a simple question: Do we really need active interaction, ground-truth maps or even reinforcement-learning (RL) in order to solve the image-goal navigation task? We propose a self-supervised approach to learn to navigate from only passive videos of roaming. Our approach, No RL, No Simulator (NRNS), is simple and scalable, yet highly effective. NRNS outperforms RL-based formulations by a significant margin. We present NRNS as a strong baseline for any future image-based navigation tasks that use RL or Simulation.
[ { "created": "Mon, 18 Oct 2021 17:04:06 GMT", "version": "v1" }, { "created": "Fri, 22 Oct 2021 15:35:03 GMT", "version": "v2" } ]
2021-10-25
[ [ "Hahn", "Meera", "" ], [ "Chaplot", "Devendra", "" ], [ "Tulsiani", "Shubham", "" ], [ "Mukadam", "Mustafa", "" ], [ "Rehg", "James M.", "" ], [ "Gupta", "Abhinav", "" ] ]
Most prior methods for learning navigation policies require access to simulation environments, as they need online policy interaction and rely on ground-truth maps for rewards. However, building simulators is expensive (requires manual effort for each and every scene) and creates challenges in transferring learned policies to robotic platforms in the real-world, due to the sim-to-real domain gap. In this paper, we pose a simple question: Do we really need active interaction, ground-truth maps or even reinforcement-learning (RL) in order to solve the image-goal navigation task? We propose a self-supervised approach to learn to navigate from only passive videos of roaming. Our approach, No RL, No Simulator (NRNS), is simple and scalable, yet highly effective. NRNS outperforms RL-based formulations by a significant margin. We present NRNS as a strong baseline for any future image-based navigation tasks that use RL or Simulation.
2305.11667
Jochen Hoenicke
Elisabeth Henkel, Jochen Hoenicke, Tanja Schindler
Choose your Colour: Tree Interpolation for Quantified Formulas in SMT
This is the preprint for the submission published in CADE-29 and also includes the proofs in the appendix. It has not undergone peer review or any post-submission improvements or corrections. The Version of Record of this contribution will be published in CADE-29
null
null
null
cs.LO
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present a generic tree-interpolation algorithm in the SMT context with quantifiers. The algorithm takes a proof of unsatisfiability using resolution and quantifier instantiation and computes interpolants (which may contain quantifiers). Arbitrary SMT theories are supported, as long as each theory itself supports tree interpolation for its lemmas. In particular, we show this for the theory combination of equality with uninterpreted functions and linear arithmetic. The interpolants can be tweaked by virtually assigning each literal in the proof to interpolation partitions (colouring the literals) in arbitrary ways. The algorithm is implemented in SMTInterpol.
[ { "created": "Fri, 19 May 2023 13:32:13 GMT", "version": "v1" } ]
2023-05-22
[ [ "Henkel", "Elisabeth", "" ], [ "Hoenicke", "Jochen", "" ], [ "Schindler", "Tanja", "" ] ]
We present a generic tree-interpolation algorithm in the SMT context with quantifiers. The algorithm takes a proof of unsatisfiability using resolution and quantifier instantiation and computes interpolants (which may contain quantifiers). Arbitrary SMT theories are supported, as long as each theory itself supports tree interpolation for its lemmas. In particular, we show this for the theory combination of equality with uninterpreted functions and linear arithmetic. The interpolants can be tweaked by virtually assigning each literal in the proof to interpolation partitions (colouring the literals) in arbitrary ways. The algorithm is implemented in SMTInterpol.