Dataset schema (field: type, min-max as reported by the dataset viewer):
id: stringlengths, 9-10
submitter: stringlengths, 1-64
authors: stringlengths, 4-20.7k
title: stringlengths, 4-246
comments: stringlengths, 1-523
journal-ref: stringlengths, 4-404
doi: stringlengths, 11-153
report-no: stringlengths, 2-254
categories: stringlengths, 5-98
license: stringclasses, 9 values
orig_abstract: stringlengths, 14-3.35k
versions: listlengths, 1-60
update_date: stringlengths, 10-10
authors_parsed: listlengths, 1-1.35k
abstract: stringlengths, 11-3.34k
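The schema above can be sanity-checked programmatically. A minimal sketch, assuming the field names exactly as listed and reading the viewer's "3.35k" as 3350 characters; the sample record is abridged from the first entry below, and only a subset of the string fields is checked:

```python
# Length bounds copied from the schema header (a subset of the string fields).
bounds = {
    "id": (9, 10),
    "submitter": (1, 64),
    "authors": (4, 20700),
    "title": (4, 246),
    "categories": (5, 98),
    "orig_abstract": (14, 3350),
}

# Abridged sample record (first entry in this dump).
record = {
    "id": "1801.01383",
    "submitter": "Sheng Zhang",
    "authors": "Sheng Zhang, Bo Liao, and Fei Liao",
    "title": "Computation of Optimal Control Problems with Terminal "
             "Constraint via Variation Evolution",
    "categories": "cs.SY",
    "orig_abstract": "Enlightened from the inverse consideration...",
}

def in_bounds(rec, bounds):
    """Return the fields whose string lengths fall outside the schema range."""
    return [f for f, (lo, hi) in bounds.items()
            if not (lo <= len(rec.get(f, "")) <= hi)]

print(in_bounds(record, bounds))  # [] when every checked field is in range
```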
1801.01383
Sheng Zhang
Sheng Zhang, Bo Liao, and Fei Liao
Computation of Optimal Control Problems with Terminal Constraint via Variation Evolution
arXiv admin note: substantial text overlap with arXiv:1709.02242, arXiv:1712.09702, arXiv:1711.02998, arXiv:1703.10263
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by an inverse consideration of stable continuous-time dynamics evolution, the Variation Evolving Method (VEM) analogizes the optimal solution to the equilibrium point of an infinite-dimensional dynamic system and solves for it in an asymptotically evolving way. In this paper, the compact version of the VEM is further developed for the computation of Optimal Control Problems (OCPs) with terminal constraint. The corresponding Evolution Partial Differential Equation (EPDE), which describes the variation motion towards the optimal solution, is derived, and the costate-free optimality conditions are established. Explicit analytic expressions for the costates and the Lagrange multipliers adjoining the terminal constraint, expressed in terms of the states and the control variables, are presented. Using the semi-discrete method from the field of PDE numerical calculation, the EPDE is discretized into finite-dimensional Initial-value Problems (IVPs) that can be solved with common Ordinary Differential Equation (ODE) numerical integration methods.
[ { "created": "Tue, 2 Jan 2018 23:01:42 GMT", "version": "v1" } ]
2018-01-08
[ [ "Zhang", "Sheng", "" ], [ "Liao", "Bo", "" ], [ "Liao", "Fei", "" ] ]
Inspired by an inverse consideration of stable continuous-time dynamics evolution, the Variation Evolving Method (VEM) analogizes the optimal solution to the equilibrium point of an infinite-dimensional dynamic system and solves for it in an asymptotically evolving way. In this paper, the compact version of the VEM is further developed for the computation of Optimal Control Problems (OCPs) with terminal constraint. The corresponding Evolution Partial Differential Equation (EPDE), which describes the variation motion towards the optimal solution, is derived, and the costate-free optimality conditions are established. Explicit analytic expressions for the costates and the Lagrange multipliers adjoining the terminal constraint, expressed in terms of the states and the control variables, are presented. Using the semi-discrete method from the field of PDE numerical calculation, the EPDE is discretized into finite-dimensional Initial-value Problems (IVPs) that can be solved with common Ordinary Differential Equation (ODE) numerical integration methods.
2407.12710
Mohammad-Amin Charusaie
Mohammad-Amin Charusaie, Samira Samadi
A Unifying Post-Processing Framework for Multi-Objective Learn-to-Defer Problems
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Learn-to-Defer is a paradigm that enables learning algorithms to work not in isolation but as a team with human experts. In this paradigm, we permit the system to defer a subset of its tasks to the expert. Although there are currently systems that follow this paradigm and are designed to optimize the accuracy of the final human-AI team, the general methodology for developing such systems under a set of constraints (e.g., algorithmic fairness, expert intervention budget, deferral of anomalies, etc.) remains largely unexplored. In this paper, using a $d$-dimensional generalization of the fundamental lemma of Neyman and Pearson (d-GNP), we obtain the Bayes optimal solution for learn-to-defer systems under various constraints. Furthermore, we design a generalizable algorithm to estimate that solution and apply this algorithm to the COMPAS and ACSIncome datasets. Our algorithm shows improvements in terms of constraint violation over a set of baselines.
[ { "created": "Wed, 17 Jul 2024 16:32:30 GMT", "version": "v1" } ]
2024-07-18
[ [ "Charusaie", "Mohammad-Amin", "" ], [ "Samadi", "Samira", "" ] ]
Learn-to-Defer is a paradigm that enables learning algorithms to work not in isolation but as a team with human experts. In this paradigm, we permit the system to defer a subset of its tasks to the expert. Although there are currently systems that follow this paradigm and are designed to optimize the accuracy of the final human-AI team, the general methodology for developing such systems under a set of constraints (e.g., algorithmic fairness, expert intervention budget, deferral of anomalies, etc.) remains largely unexplored. In this paper, using a $d$-dimensional generalization of the fundamental lemma of Neyman and Pearson (d-GNP), we obtain the Bayes optimal solution for learn-to-defer systems under various constraints. Furthermore, we design a generalizable algorithm to estimate that solution and apply this algorithm to the COMPAS and ACSIncome datasets. Our algorithm shows improvements in terms of constraint violation over a set of baselines.
2006.11607
Marco Molinaro
Thomas Kesselheim and Marco Molinaro
Knapsack Secretary with Bursty Adversary
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The random-order or secretary model is one of the most popular beyond-worst-case models for online algorithms. While it avoids the pessimism of the traditional adversarial model, in practice we cannot expect the input to be presented in perfectly random order. This has motivated research on ``best of both worlds'' algorithms (those with good performance on both purely stochastic and purely adversarial inputs), or, even better, on inputs that are a mix of both stochastic and adversarial parts. Unfortunately, the latter seems much harder to achieve, and very few results of this type are known. Towards advancing our understanding of designing such robust algorithms, we propose a random-order model with bursts of adversarial time steps. The assumption of burstiness of unexpected patterns is reasonable in many contexts, since changes (e.g., a spike in demand for a good) are often triggered by a common external event. We then consider the Knapsack Secretary problem in this model: there is a knapsack of size $k$ (e.g., the available quantity of a good), and in each of the $n$ time steps an item arrives with its value and size in $[0,1]$, and the algorithm needs to make an irrevocable decision whether to accept or reject the item. We design an algorithm that gives an approximation of $1 - \tilde{O}(\Gamma/k)$ when the adversarial time steps can be covered by $\Gamma \ge \sqrt{k}$ intervals of size $\tilde{O}(\frac{n}{k})$. In particular, setting $\Gamma = \sqrt{k}$ gives a $(1 - O(\frac{\ln^2 k}{\sqrt{k}}))$-approximation that is resistant to up to a $\frac{\ln^2 k}{\sqrt{k}}$-fraction of the items being adversarial, which is almost optimal even in the absence of adversarial items. Also, setting $\Gamma = \tilde{\Omega}(k)$ gives a constant approximation that is resistant to up to a constant fraction of items being adversarial.
[ { "created": "Sat, 20 Jun 2020 16:24:22 GMT", "version": "v1" } ]
2020-06-23
[ [ "Kesselheim", "Thomas", "" ], [ "Molinaro", "Marco", "" ] ]
The random-order or secretary model is one of the most popular beyond-worst-case models for online algorithms. While it avoids the pessimism of the traditional adversarial model, in practice we cannot expect the input to be presented in perfectly random order. This has motivated research on ``best of both worlds'' algorithms (those with good performance on both purely stochastic and purely adversarial inputs), or, even better, on inputs that are a mix of both stochastic and adversarial parts. Unfortunately, the latter seems much harder to achieve, and very few results of this type are known. Towards advancing our understanding of designing such robust algorithms, we propose a random-order model with bursts of adversarial time steps. The assumption of burstiness of unexpected patterns is reasonable in many contexts, since changes (e.g., a spike in demand for a good) are often triggered by a common external event. We then consider the Knapsack Secretary problem in this model: there is a knapsack of size $k$ (e.g., the available quantity of a good), and in each of the $n$ time steps an item arrives with its value and size in $[0,1]$, and the algorithm needs to make an irrevocable decision whether to accept or reject the item. We design an algorithm that gives an approximation of $1 - \tilde{O}(\Gamma/k)$ when the adversarial time steps can be covered by $\Gamma \ge \sqrt{k}$ intervals of size $\tilde{O}(\frac{n}{k})$. In particular, setting $\Gamma = \sqrt{k}$ gives a $(1 - O(\frac{\ln^2 k}{\sqrt{k}}))$-approximation that is resistant to up to a $\frac{\ln^2 k}{\sqrt{k}}$-fraction of the items being adversarial, which is almost optimal even in the absence of adversarial items. Also, setting $\Gamma = \tilde{\Omega}(k)$ gives a constant approximation that is resistant to up to a constant fraction of items being adversarial.
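The robustness guarantee in the abstract above can be made concrete. A hedged sketch: treating the hidden constant in the $O(\frac{\ln^2 k}{\sqrt{k}})$ bound as 1 (an assumption; the abstract only gives the asymptotic form), the fraction of adversarial items tolerated by the $\Gamma = \sqrt{k}$ setting shrinks as the knapsack size $k$ grows:

```python
import math

def adversarial_fraction(k: int) -> float:
    """(ln^2 k) / sqrt(k): the adversarial-item fraction tolerated by the
    Gamma = sqrt(k) setting, with the big-O constant taken as 1."""
    return math.log(k) ** 2 / math.sqrt(k)

# The tolerated fraction vanishes as k grows:
for k in (10**4, 10**6, 10**8):
    print(k, round(adversarial_fraction(k), 4))
```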
2111.09917
Davood Rafiei
Arif Hasnat, Davood Rafiei
Interactive Set Discovery
To appear in the Proceedings of the EDBT 2023 Conference
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
We study the problem of set discovery, in which, given a few example tuples of a desired set, we want to find that set in a collection of sets. A challenge is that the example tuples may not uniquely identify a set, and a large number of candidate sets may be returned. Our focus is on interactive exploration for set discovery, where additional example tuples from the candidate sets are shown and the user either accepts or rejects them as members of the target set. The goal is to find the target set with the least number of user interactions. The problem can be cast as an optimization problem where we want to find a decision tree that can guide the search to the target set with the least number of questions to be answered by the user. We propose a general algorithm that is capable of reaching an optimal solution and two variations of it that strike a balance between the quality of a solution and the running time. We also propose a novel pruning strategy that safely reduces the search space without introducing false negatives. We evaluate the efficiency and the effectiveness of our algorithms through an extensive experimental study using both real and synthetic datasets, comparing them to previous approaches in the literature. We show that our pruning strategy reduces the running time of the search algorithms by 2-5 orders of magnitude.
[ { "created": "Thu, 18 Nov 2021 19:21:23 GMT", "version": "v1" }, { "created": "Mon, 3 Oct 2022 23:43:07 GMT", "version": "v2" } ]
2022-10-05
[ [ "Hasnat", "Arif", "" ], [ "Rafiei", "Davood", "" ] ]
We study the problem of set discovery, in which, given a few example tuples of a desired set, we want to find that set in a collection of sets. A challenge is that the example tuples may not uniquely identify a set, and a large number of candidate sets may be returned. Our focus is on interactive exploration for set discovery, where additional example tuples from the candidate sets are shown and the user either accepts or rejects them as members of the target set. The goal is to find the target set with the least number of user interactions. The problem can be cast as an optimization problem where we want to find a decision tree that can guide the search to the target set with the least number of questions to be answered by the user. We propose a general algorithm that is capable of reaching an optimal solution and two variations of it that strike a balance between the quality of a solution and the running time. We also propose a novel pruning strategy that safely reduces the search space without introducing false negatives. We evaluate the efficiency and the effectiveness of our algorithms through an extensive experimental study using both real and synthetic datasets, comparing them to previous approaches in the literature. We show that our pruning strategy reduces the running time of the search algorithms by 2-5 orders of magnitude.
2205.00033
Jakob Schoeffer
Jakob Schoeffer
A Human-Centric Perspective on Fairness and Transparency in Algorithmic Decision-Making
CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '22 Extended Abstracts), April 29--May 5, 2022, New Orleans, LA, USA
null
10.1145/3491101.3503811
null
cs.AI cs.HC cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for understanding how a given decision was arrived at. This is not only problematic from a legal perspective, but non-transparent systems are also prone to yield unfair outcomes because their sanity is challenging to assess and calibrate in the first place -- which is particularly worrisome for human decision-subjects. Based on this observation and building upon existing work, I aim to make the following three main contributions through my doctoral thesis: (a) understand how (potential) decision-subjects perceive algorithmic decisions (with varying degrees of transparency of the underlying ADS), as compared to similar decisions made by humans; (b) evaluate different tools for transparent decision-making with respect to their effectiveness in enabling people to appropriately assess the quality and fairness of ADS; and (c) develop human-understandable technical artifacts for fair automated decision-making. Over the course of the first half of my PhD program, I have already addressed substantial pieces of (a) and (c), whereas (b) will be the major focus of the second half.
[ { "created": "Fri, 29 Apr 2022 18:31:04 GMT", "version": "v1" } ]
2022-05-03
[ [ "Schoeffer", "Jakob", "" ] ]
Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for understanding how a given decision was arrived at. This is not only problematic from a legal perspective, but non-transparent systems are also prone to yield unfair outcomes because their sanity is challenging to assess and calibrate in the first place -- which is particularly worrisome for human decision-subjects. Based on this observation and building upon existing work, I aim to make the following three main contributions through my doctoral thesis: (a) understand how (potential) decision-subjects perceive algorithmic decisions (with varying degrees of transparency of the underlying ADS), as compared to similar decisions made by humans; (b) evaluate different tools for transparent decision-making with respect to their effectiveness in enabling people to appropriately assess the quality and fairness of ADS; and (c) develop human-understandable technical artifacts for fair automated decision-making. Over the course of the first half of my PhD program, I have already addressed substantial pieces of (a) and (c), whereas (b) will be the major focus of the second half.
2011.02574
Andrei Cramariuc
Le Chen, Yunke Ao, Florian Tschopp, Andrei Cramariuc, Michel Breyer, Jen Jen Chung, Roland Siegwart, Cesar Cadena
Learning Trajectories for Visual-Inertial System Calibration via Model-based Heuristic Deep Reinforcement Learning
null
Proceedings of the 4th Conference on Robot Learning (CoRL) 2020
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual-inertial systems rely on precise calibrations of both camera intrinsics and inter-sensor extrinsics, which typically require manually performing complex motions in front of a calibration target. In this work we present a novel approach to obtain favorable trajectories for visual-inertial system calibration, using model-based deep reinforcement learning. Our key contribution is to model the calibration process as a Markov decision process and then use model-based deep reinforcement learning with particle swarm optimization to establish a sequence of calibration trajectories to be performed by a robot arm. Our experiments show that while maintaining similar or shorter path lengths, the trajectories generated by our learned policy result in lower calibration errors compared to random or handcrafted trajectories.
[ { "created": "Wed, 4 Nov 2020 23:20:15 GMT", "version": "v1" } ]
2021-02-17
[ [ "Chen", "Le", "" ], [ "Ao", "Yunke", "" ], [ "Tschopp", "Florian", "" ], [ "Cramariuc", "Andrei", "" ], [ "Breyer", "Michel", "" ], [ "Chung", "Jen Jen", "" ], [ "Siegwart", "Roland", "" ], [ "Cadena", "Cesar", "" ] ]
Visual-inertial systems rely on precise calibrations of both camera intrinsics and inter-sensor extrinsics, which typically require manually performing complex motions in front of a calibration target. In this work we present a novel approach to obtain favorable trajectories for visual-inertial system calibration, using model-based deep reinforcement learning. Our key contribution is to model the calibration process as a Markov decision process and then use model-based deep reinforcement learning with particle swarm optimization to establish a sequence of calibration trajectories to be performed by a robot arm. Our experiments show that while maintaining similar or shorter path lengths, the trajectories generated by our learned policy result in lower calibration errors compared to random or handcrafted trajectories.
2402.18807
Chenglei Shen
Chenglei Shen and Guofu Xie and Xiao Zhang and Jun Xu
On the Decision-Making Abilities in Role-Playing using Large Language Models
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) are now increasingly utilized for role-playing tasks, especially in impersonating domain-specific experts, primarily through role-playing prompts. When interacting in real-world scenarios, the decision-making abilities of a role significantly shape its behavioral patterns. In this paper, we concentrate on evaluating the decision-making abilities of LLMs post role-playing, thereby validating the efficacy of role-playing. Our goal is to provide metrics and guidance for enhancing the decision-making abilities of LLMs in role-playing tasks. Specifically, we first use LLMs to generate virtual role descriptions corresponding to the 16 personality types of the Myers-Briggs Type Indicator (MBTI), representing a segmentation of the population. Then we design specific quantitative operations to evaluate the decision-making abilities of LLMs post role-playing from four aspects: adaptability, exploration$\&$exploitation trade-off ability, reasoning ability, and safety. Finally, we analyze the association between the performance of decision-making and the corresponding MBTI types through GPT-4. Extensive experiments demonstrate stable differences in the four aspects of decision-making abilities across distinct roles, signifying a robust correlation between decision-making abilities and the roles emulated by LLMs. These results underscore that LLMs can effectively impersonate varied roles while embodying their genuine sociological characteristics.
[ { "created": "Thu, 29 Feb 2024 02:22:23 GMT", "version": "v1" } ]
2024-03-01
[ [ "Shen", "Chenglei", "" ], [ "Xie", "Guofu", "" ], [ "Zhang", "Xiao", "" ], [ "Xu", "Jun", "" ] ]
Large language models (LLMs) are now increasingly utilized for role-playing tasks, especially in impersonating domain-specific experts, primarily through role-playing prompts. When interacting in real-world scenarios, the decision-making abilities of a role significantly shape its behavioral patterns. In this paper, we concentrate on evaluating the decision-making abilities of LLMs post role-playing, thereby validating the efficacy of role-playing. Our goal is to provide metrics and guidance for enhancing the decision-making abilities of LLMs in role-playing tasks. Specifically, we first use LLMs to generate virtual role descriptions corresponding to the 16 personality types of the Myers-Briggs Type Indicator (MBTI), representing a segmentation of the population. Then we design specific quantitative operations to evaluate the decision-making abilities of LLMs post role-playing from four aspects: adaptability, exploration$\&$exploitation trade-off ability, reasoning ability, and safety. Finally, we analyze the association between the performance of decision-making and the corresponding MBTI types through GPT-4. Extensive experiments demonstrate stable differences in the four aspects of decision-making abilities across distinct roles, signifying a robust correlation between decision-making abilities and the roles emulated by LLMs. These results underscore that LLMs can effectively impersonate varied roles while embodying their genuine sociological characteristics.
2208.07304
Di Wu
D. Wu
Vehicle-road Cooperative Simulation and 3D Visualization System
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The safety of single-vehicle autonomous driving technology is limited by the perception capability of on-board sensors. In contrast, vehicle-road collaboration technology can overcome those limits and improve traffic safety and efficiency by expanding the sensing range, improving the perception accuracy, and reducing the response time. However, such a technology is still under development; it requires rigorous testing and verification methods to ensure the reliability and trustworthiness of the technology. In this thesis, we focus on three major tasks: (1) analyze the functional characteristics related to the scenarios of vehicle-road cooperation, highlighting the differences between vehicle-road cooperative systems and traditional single-vehicle autonomous driving systems; (2) refine and classify the functional characteristics of vehicle-road cooperative systems; (3) design and develop a simulation system, and provide a visual interface to facilitate development and analysis. The efficiency and effectiveness of the proposed method are verified by experiments.
[ { "created": "Thu, 14 Jul 2022 04:53:54 GMT", "version": "v1" } ]
2022-08-16
[ [ "Wu", "D.", "" ] ]
The safety of single-vehicle autonomous driving technology is limited by the perception capability of on-board sensors. In contrast, vehicle-road collaboration technology can overcome those limits and improve traffic safety and efficiency by expanding the sensing range, improving the perception accuracy, and reducing the response time. However, such a technology is still under development; it requires rigorous testing and verification methods to ensure the reliability and trustworthiness of the technology. In this thesis, we focus on three major tasks: (1) analyze the functional characteristics related to the scenarios of vehicle-road cooperation, highlighting the differences between vehicle-road cooperative systems and traditional single-vehicle autonomous driving systems; (2) refine and classify the functional characteristics of vehicle-road cooperative systems; (3) design and develop a simulation system, and provide a visual interface to facilitate development and analysis. The efficiency and effectiveness of the proposed method are verified by experiments.
2210.02821
Igor Korkin
Denis Pogonin, Igor Korkin
Microsoft Defender Will Be Defended: MemoryRanger Prevents Blinding Windows AV
29 pages, 17 figures, 1 table, In Proceedings of the ADFSL 2022, USA
null
null
null
cs.CR cs.OS
http://creativecommons.org/licenses/by/4.0/
Windows OS is facing a huge rise in kernel attacks. An overview of popular techniques that result in loading kernel drivers will be presented. One of the key targets of modern threats is disabling and blinding Microsoft Defender, the default Windows AV. An analysis of recent driver-based attacks will be given; the challenge is to block them. A survey of user- and kernel-level attacks on Microsoft Defender will be given. One of the recently published attacker techniques abuses Mandatory Integrity Control (MIC) and the Security Reference Monitor (SRM) by modifying the Integrity Level and Debug Privileges of Microsoft Defender via syscalls. However, this user-mode attack can be blocked via the Windows 'trust labels' mechanism. This paper examines the internals of MIC and SRM, including an analysis of Microsoft Defender during malware detection. We show how attackers can attack Microsoft Defender using a kernel-mode driver. This driver modifies the fields of the Token structure allocated for the Microsoft Defender application. The presented attack resulted in disabling Microsoft Defender without terminating any of its processes and without triggering any Windows security features, such as PatchGuard. The customized hypervisor-based solution named MemoryRanger was used to protect the Windows Defender kernel structures. The experiments show that MemoryRanger successfully restricts illegal access attempts to the sensitive kernel data with affordable performance degradation.
[ { "created": "Thu, 6 Oct 2022 11:25:05 GMT", "version": "v1" } ]
2022-10-07
[ [ "Pogonin", "Denis", "" ], [ "Korkin", "Igor", "" ] ]
Windows OS is facing a huge rise in kernel attacks. An overview of popular techniques that result in loading kernel drivers will be presented. One of the key targets of modern threats is disabling and blinding Microsoft Defender, the default Windows AV. An analysis of recent driver-based attacks will be given; the challenge is to block them. A survey of user- and kernel-level attacks on Microsoft Defender will be given. One of the recently published attacker techniques abuses Mandatory Integrity Control (MIC) and the Security Reference Monitor (SRM) by modifying the Integrity Level and Debug Privileges of Microsoft Defender via syscalls. However, this user-mode attack can be blocked via the Windows 'trust labels' mechanism. This paper examines the internals of MIC and SRM, including an analysis of Microsoft Defender during malware detection. We show how attackers can attack Microsoft Defender using a kernel-mode driver. This driver modifies the fields of the Token structure allocated for the Microsoft Defender application. The presented attack resulted in disabling Microsoft Defender without terminating any of its processes and without triggering any Windows security features, such as PatchGuard. The customized hypervisor-based solution named MemoryRanger was used to protect the Windows Defender kernel structures. The experiments show that MemoryRanger successfully restricts illegal access attempts to the sensitive kernel data with affordable performance degradation.
1304.3179
Seok-Hwan Park
Seok-Hwan Park, Osvaldo Simeone, Onur Sahin and Shlomo Shamai (Shitz)
Joint Precoding and Multivariate Backhaul Compression for the Downlink of Cloud Radio Access Networks
Submitted to IEEE Transactions on Signal Processing
null
10.1109/TSP.2013.2280111
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work studies the joint design of precoding and backhaul compression strategies for the downlink of cloud radio access networks. In these systems, a central encoder is connected to multiple multi-antenna base stations (BSs) via finite-capacity backhaul links. At the central encoder, precoding is followed by compression in order to produce the rate-limited bit streams delivered to each BS over the corresponding backhaul link. In current state-of-the-art approaches, the signals intended for different BSs are compressed independently. In contrast, this work proposes to leverage joint compression, also referred to as multivariate compression, of the signals of different BSs in order to better control the effect of the additive quantization noises at the mobile stations (MSs). The problem of maximizing the weighted sum-rate with respect to both the precoding matrix and the joint correlation matrix of the quantization noises is formulated subject to power and backhaul capacity constraints. An iterative algorithm is proposed that achieves a stationary point of the problem. Moreover, in order to enable the practical implementation of multivariate compression across BSs, a novel architecture is proposed based on successive steps of minimum mean-squared error (MMSE) estimation and per-BS compression. Robust design with respect to imperfect channel state information is also discussed. From numerical results, it is confirmed that the proposed joint precoding and compression strategy outperforms conventional approaches based on the separate design of precoding and compression or independent compression across the BSs.
[ { "created": "Thu, 11 Apr 2013 02:15:18 GMT", "version": "v1" } ]
2015-06-15
[ [ "Park", "Seok-Hwan", "", "Shitz" ], [ "Simeone", "Osvaldo", "", "Shitz" ], [ "Sahin", "Onur", "", "Shitz" ], [ "Shamai", "Shlomo", "", "Shitz" ] ]
This work studies the joint design of precoding and backhaul compression strategies for the downlink of cloud radio access networks. In these systems, a central encoder is connected to multiple multi-antenna base stations (BSs) via finite-capacity backhaul links. At the central encoder, precoding is followed by compression in order to produce the rate-limited bit streams delivered to each BS over the corresponding backhaul link. In current state-of-the-art approaches, the signals intended for different BSs are compressed independently. In contrast, this work proposes to leverage joint compression, also referred to as multivariate compression, of the signals of different BSs in order to better control the effect of the additive quantization noises at the mobile stations (MSs). The problem of maximizing the weighted sum-rate with respect to both the precoding matrix and the joint correlation matrix of the quantization noises is formulated subject to power and backhaul capacity constraints. An iterative algorithm is proposed that achieves a stationary point of the problem. Moreover, in order to enable the practical implementation of multivariate compression across BSs, a novel architecture is proposed based on successive steps of minimum mean-squared error (MMSE) estimation and per-BS compression. Robust design with respect to imperfect channel state information is also discussed. From numerical results, it is confirmed that the proposed joint precoding and compression strategy outperforms conventional approaches based on the separate design of precoding and compression or independent compression across the BSs.
2310.12727
Johann-Mattis List
Johann-Mattis List, Nathan W. Hill, Robert Forkel, Frederic Blum
Representing and Computing Uncertainty in Phonological Reconstruction
To appear in: Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Despite the inherently fuzzy nature of reconstructions in historical linguistics, most scholars do not represent their uncertainty when proposing proto-forms. With the increasing success of recently proposed approaches to automating certain aspects of the traditional comparative method, the formal representation of proto-forms has also improved. This formalization makes it possible to address both the representation and the computation of uncertainty. Building on recent advances in supervised phonological reconstruction, during which an algorithm learns how to reconstruct words in a given proto-language relying on previously annotated data, and inspired by improved methods for automated word prediction from cognate sets, we present a new framework that allows for the representation of uncertainty in linguistic reconstruction and also includes a workflow for the computation of fuzzy reconstructions from linguistic data.
[ { "created": "Thu, 19 Oct 2023 13:27:42 GMT", "version": "v1" } ]
2023-10-20
[ [ "List", "Johann-Mattis", "" ], [ "Hill", "Nathan W.", "" ], [ "Forkel", "Robert", "" ], [ "Blum", "Frederic", "" ] ]
Despite the inherently fuzzy nature of reconstructions in historical linguistics, most scholars do not represent their uncertainty when proposing proto-forms. With the increasing success of recently proposed approaches to automating certain aspects of the traditional comparative method, the formal representation of proto-forms has also improved. This formalization makes it possible to address both the representation and the computation of uncertainty. Building on recent advances in supervised phonological reconstruction, in which an algorithm learns how to reconstruct words in a given proto-language relying on previously annotated data, and inspired by improved methods for automated word prediction from cognate sets, we present a new framework that allows for the representation of uncertainty in linguistic reconstruction and also includes a workflow for the computation of fuzzy reconstructions from linguistic data.
1801.05768
Zhen Chen
Zhen Chen, Zhiying Wang and Syed Jafar
The Asymptotic Capacity of Private Search
null
null
null
null
cs.IT cs.CR cs.DS math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The private search problem is introduced, where a dataset comprised of $L$ i.i.d. records is replicated across $N$ non-colluding servers, each record takes values uniformly from an alphabet of size $K$, and a user wishes to search for all records that match a privately chosen value, without revealing any information about the chosen value to any individual server. The capacity of private search is the maximum number of bits of desired information that can be retrieved per bit of download. The asymptotic (large $K$) capacity of private search is shown to be $1-1/N$, even as the scope of private search is further generalized to allow approximate (OR) search over a number of realizations that grows with $K$. The results are based on the asymptotic behavior of a new converse bound for private information retrieval with arbitrarily dependent messages.
[ { "created": "Fri, 12 Jan 2018 05:29:38 GMT", "version": "v1" }, { "created": "Thu, 18 Jan 2018 18:17:36 GMT", "version": "v2" } ]
2018-01-19
[ [ "Chen", "Zhen", "" ], [ "Wang", "Zhiying", "" ], [ "Jafar", "Syed", "" ] ]
The private search problem is introduced, where a dataset comprised of $L$ i.i.d. records is replicated across $N$ non-colluding servers, each record takes values uniformly from an alphabet of size $K$, and a user wishes to search for all records that match a privately chosen value, without revealing any information about the chosen value to any individual server. The capacity of private search is the maximum number of bits of desired information that can be retrieved per bit of download. The asymptotic (large $K$) capacity of private search is shown to be $1-1/N$, even as the scope of private search is further generalized to allow approximate (OR) search over a number of realizations that grows with $K$. The results are based on the asymptotic behavior of a new converse bound for private information retrieval with arbitrarily dependent messages.
2403.01157
Lorenz Graf-Vlachy
Lorenz Graf-Vlachy, Stefan Wagner
Different Debt: An Addition to the Technical Debt Dataset and a Demonstration Using Developer Personality
null
7th International Conference on Technical Debt (TechDebt) 2024
10.1145/3644384.3644475
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Background: The "Technical Debt Dataset" (TDD) is a comprehensive dataset on technical debt (TD) in the main branches of more than 30 Java projects. However, some TD items produced by SonarQube are not included for many commits, for instance because the commits failed to compile. This has limited previous studies using the dataset. Aims and Method: In this paper, we provide an addition to the dataset that includes an analysis of 278,320 commits of all branches in a superset of 37 projects using Teamscale. We then demonstrate the utility of the dataset by exploring the relationship between developer personality by replicating a prior study. Results: The new dataset allows us to use a larger sample than prior work could, and we analyze the personality of 111 developers and 5,497 of their commits. The relationships we find between developer personality and the introduction and removal of TD differ from those found in prior work. Conclusions: We offer a dataset that may enable future studies into the topic of TD and we provide additional insights on how developer personality relates to TD.
[ { "created": "Sat, 2 Mar 2024 10:11:07 GMT", "version": "v1" } ]
2024-03-05
[ [ "Graf-Vlachy", "Lorenz", "" ], [ "Wagner", "Stefan", "" ] ]
Background: The "Technical Debt Dataset" (TDD) is a comprehensive dataset on technical debt (TD) in the main branches of more than 30 Java projects. However, some TD items produced by SonarQube are not included for many commits, for instance because the commits failed to compile. This has limited previous studies using the dataset. Aims and Method: In this paper, we provide an addition to the dataset that includes an analysis of 278,320 commits of all branches in a superset of 37 projects using Teamscale. We then demonstrate the utility of the dataset by exploring the relationship between developer personality and TD by replicating a prior study. Results: The new dataset allows us to use a larger sample than prior work could, and we analyze the personality of 111 developers and 5,497 of their commits. The relationships we find between developer personality and the introduction and removal of TD differ from those found in prior work. Conclusions: We offer a dataset that may enable future studies into the topic of TD and we provide additional insights on how developer personality relates to TD.
1411.7191
S{\o}ren Dahlgaard
S{\o}ren Dahlgaard, Mathias B{\ae}k Tejs Knudsen, Eva Rotenberg, Mikkel Thorup
Hashing for statistics over k-partitions
Appear at FOCS'15
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we analyze a hash function for $k$-partitioning a set into bins, obtaining strong concentration bounds for standard algorithms combining statistics from each bin. This generic method was originally introduced by Flajolet and Martin~[FOCS'83] in order to save a factor $\Omega(k)$ of time per element over $k$ independent samples when estimating the number of distinct elements in a data stream. It was also used in the widely used HyperLogLog algorithm of Flajolet et al.~[AOFA'97] and in large-scale machine learning by Li et al.~[NIPS'12] for minwise estimation of set similarity. The main issue of $k$-partition, is that the contents of different bins may be highly correlated when using popular hash functions. This means that methods of analyzing the marginal distribution for a single bin do not apply. Here we show that a tabulation based hash function, mixed tabulation, does yield strong concentration bounds on the most popular applications of $k$-partitioning similar to those we would get using a truly random hash function. The analysis is very involved and implies several new results of independent interest for both simple and double tabulation, e.g. a simple and efficient construction for invertible bloom filters and uniform hashing on a given set.
[ { "created": "Wed, 26 Nov 2014 11:36:15 GMT", "version": "v1" }, { "created": "Sun, 26 Apr 2015 14:27:46 GMT", "version": "v2" }, { "created": "Mon, 15 Feb 2016 16:06:53 GMT", "version": "v3" } ]
2016-02-16
[ [ "Dahlgaard", "Søren", "" ], [ "Knudsen", "Mathias Bæk Tejs", "" ], [ "Rotenberg", "Eva", "" ], [ "Thorup", "Mikkel", "" ] ]
In this paper we analyze a hash function for $k$-partitioning a set into bins, obtaining strong concentration bounds for standard algorithms combining statistics from each bin. This generic method was originally introduced by Flajolet and Martin~[FOCS'83] in order to save a factor $\Omega(k)$ of time per element over $k$ independent samples when estimating the number of distinct elements in a data stream. It was also used in the widely used HyperLogLog algorithm of Flajolet et al.~[AOFA'97] and in large-scale machine learning by Li et al.~[NIPS'12] for minwise estimation of set similarity. The main issue with $k$-partitioning is that the contents of different bins may be highly correlated when using popular hash functions. This means that methods of analyzing the marginal distribution for a single bin do not apply. Here we show that a tabulation-based hash function, mixed tabulation, does yield strong concentration bounds on the most popular applications of $k$-partitioning similar to those we would get using a truly random hash function. The analysis is very involved and implies several new results of independent interest for both simple and double tabulation, e.g. a simple and efficient construction for invertible bloom filters and uniform hashing on a given set.
2201.12489
Zhijian Duan
Zhijian Duan, Jingwu Tang, Yutong Yin, Zhe Feng, Xiang Yan, Manzil Zaheer, Xiaotie Deng
A Context-Integrated Transformer-Based Neural Network for Auction Design
Accepted by ICML 2022. Code is available at https://github.com/zjduan/CITransNet
null
null
null
cs.GT cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the central problems in auction design is developing an incentive-compatible mechanism that maximizes the auctioneer's expected revenue. While theoretical approaches have encountered bottlenecks in multi-item auctions, recently, there has been much progress on finding the optimal mechanism through deep learning. However, these works either focus on a fixed set of bidders and items, or restrict the auction to be symmetric. In this work, we overcome such limitations by factoring \emph{public} contextual information of bidders and items into the auction learning framework. We propose $\mathtt{CITransNet}$, a context-integrated transformer-based neural network for optimal auction design, which maintains permutation-equivariance over bids and contexts while being able to find asymmetric solutions. We show by extensive experiments that $\mathtt{CITransNet}$ can recover the known optimal solutions in single-item settings, outperform strong baselines in multi-item auctions, and generalize well to cases other than those in training.
[ { "created": "Sat, 29 Jan 2022 03:47:00 GMT", "version": "v1" }, { "created": "Wed, 22 Jun 2022 07:34:51 GMT", "version": "v2" }, { "created": "Sun, 22 Jan 2023 07:26:18 GMT", "version": "v3" } ]
2023-01-24
[ [ "Duan", "Zhijian", "" ], [ "Tang", "Jingwu", "" ], [ "Yin", "Yutong", "" ], [ "Feng", "Zhe", "" ], [ "Yan", "Xiang", "" ], [ "Zaheer", "Manzil", "" ], [ "Deng", "Xiaotie", "" ] ]
One of the central problems in auction design is developing an incentive-compatible mechanism that maximizes the auctioneer's expected revenue. While theoretical approaches have encountered bottlenecks in multi-item auctions, recently, there has been much progress on finding the optimal mechanism through deep learning. However, these works either focus on a fixed set of bidders and items, or restrict the auction to be symmetric. In this work, we overcome such limitations by factoring \emph{public} contextual information of bidders and items into the auction learning framework. We propose $\mathtt{CITransNet}$, a context-integrated transformer-based neural network for optimal auction design, which maintains permutation-equivariance over bids and contexts while being able to find asymmetric solutions. We show by extensive experiments that $\mathtt{CITransNet}$ can recover the known optimal solutions in single-item settings, outperform strong baselines in multi-item auctions, and generalize well to cases other than those in training.
1906.09962
Richard Olaniyan
Muthucumaru Maheswaran, Robert Wenger, Richard Olaniyan, Salman Memon, Olamilekan Fadahunsi and Richboy Echomgbe
A Language for Programming Edge Clouds for Next Generation IoT Applications
22 pages, 9 figures, journal
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For effective use of edge computing in an IoT application, we need to partition the application into tasks and map them into the cloud, fog (edge server), device levels such that the resources at the different levels are optimally used to meet the overall quality of service requirements. In this paper, we consider four concerns about application-to-fog mapping: task placement at different levels, data filtering to limit network loading, fog fail-over, and data consistency, and reacting to hotspots at the edge. We describe a programming language and middleware we created for edge computing that addresses the above four concerns. The language has a distributed-node programming model that allows programs to be written for a collection of nodes organized into a cloud, fog, device hierarchy. The paper describes the major design elements of the language and explains the prototype implementation. The unique distributed-node programming model embodied in the language enables new edge-oriented programming patterns that are highly suitable for cognitive or data-intensive edge computing workloads. The paper presents result from an initial evaluation of the language prototype and also a distributed shell and a smart parking app that were developed using the programming language.
[ { "created": "Fri, 21 Jun 2019 01:33:55 GMT", "version": "v1" } ]
2019-06-25
[ [ "Maheswaran", "Muthucumaru", "" ], [ "Wenger", "Robert", "" ], [ "Olaniyan", "Richard", "" ], [ "Memon", "Salman", "" ], [ "Fadahunsi", "Olamilekan", "" ], [ "Echomgbe", "Richboy", "" ] ]
For effective use of edge computing in an IoT application, we need to partition the application into tasks and map them onto the cloud, fog (edge server), and device levels such that the resources at the different levels are optimally used to meet the overall quality of service requirements. In this paper, we consider four concerns about application-to-fog mapping: task placement at different levels, data filtering to limit network loading, fog fail-over and data consistency, and reacting to hotspots at the edge. We describe a programming language and middleware we created for edge computing that addresses the above four concerns. The language has a distributed-node programming model that allows programs to be written for a collection of nodes organized into a cloud, fog, device hierarchy. The paper describes the major design elements of the language and explains the prototype implementation. The unique distributed-node programming model embodied in the language enables new edge-oriented programming patterns that are highly suitable for cognitive or data-intensive edge computing workloads. The paper presents results from an initial evaluation of the language prototype, as well as a distributed shell and a smart parking app that were developed using the programming language.
1711.08534
William Wang
William Wang, Angelina Wang, Aviv Tamar, Xi Chen, Pieter Abbeel
Safer Classification by Synthesis
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The discriminative approach to classification using deep neural networks has become the de-facto standard in various fields. Complementing recent reservations about safety against adversarial examples, we show that conventional discriminative methods can easily be fooled to provide incorrect labels with very high confidence to out of distribution examples. We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models. At training time, we learn a generative model for each class, while at test time, given an example to classify, we query each generator for its most similar generation, and select the class corresponding to the most similar one. Our approach is general and can be used with expressive models such as GANs and VAEs. At test time, our method accurately "knows when it does not know," and provides resilience to out of distribution examples while maintaining competitive performance for standard examples.
[ { "created": "Wed, 22 Nov 2017 23:32:20 GMT", "version": "v1" }, { "created": "Mon, 23 Jul 2018 23:47:59 GMT", "version": "v2" } ]
2018-07-25
[ [ "Wang", "William", "" ], [ "Wang", "Angelina", "" ], [ "Tamar", "Aviv", "" ], [ "Chen", "Xi", "" ], [ "Abbeel", "Pieter", "" ] ]
The discriminative approach to classification using deep neural networks has become the de facto standard in various fields. Complementing recent reservations about safety against adversarial examples, we show that conventional discriminative methods can easily be fooled to provide incorrect labels with very high confidence to out-of-distribution examples. We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models. At training time, we learn a generative model for each class, while at test time, given an example to classify, we query each generator for its most similar generation, and select the class corresponding to the most similar one. Our approach is general and can be used with expressive models such as GANs and VAEs. At test time, our method accurately "knows when it does not know," and provides resilience to out-of-distribution examples while maintaining competitive performance for standard examples.
2404.00962
Haokai Hong
Haokai Hong, Wanyu Lin, and Kay Chen Tan
Diffusion-Driven Domain Adaptation for Generating 3D Molecules
11 pages, 3 figures, and 3 tables
null
null
null
cs.LG physics.chem-ph q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Can we train a molecule generator that can generate 3D molecules from a new domain, circumventing the need to collect data? This problem can be cast as the problem of domain adaptive molecule generation. This work presents a novel and principled diffusion-based approach, called GADM, that allows shifting a generative model to desired new domains without the need to collect even a single molecule. As the domain shift is typically caused by the structure variations of molecules, e.g., scaffold variations, we leverage a designated equivariant masked autoencoder (MAE) along with various masking strategies to capture the structural-grained representations of the in-domain varieties. In particular, with an asymmetric encoder-decoder module, the MAE can generalize to unseen structure variations from the target domains. These structure variations are encoded with an equivariant encoder and treated as domain supervisors to control denoising. We show that, with these encoded structural-grained domain supervisors, GADM can generate effective molecules within the desired new domains. We conduct extensive experiments across various domain adaptation tasks over benchmarking datasets. We show that our approach can improve up to 65.6% in terms of success rate defined based on molecular validity, uniqueness, and novelty compared to alternative baselines.
[ { "created": "Mon, 1 Apr 2024 07:12:27 GMT", "version": "v1" } ]
2024-04-02
[ [ "Hong", "Haokai", "" ], [ "Lin", "Wanyu", "" ], [ "Tan", "Kay Chen", "" ] ]
Can we train a molecule generator that can generate 3D molecules from a new domain, circumventing the need to collect data? This problem can be cast as the problem of domain adaptive molecule generation. This work presents a novel and principled diffusion-based approach, called GADM, that allows shifting a generative model to desired new domains without the need to collect even a single molecule. As the domain shift is typically caused by the structure variations of molecules, e.g., scaffold variations, we leverage a designated equivariant masked autoencoder (MAE) along with various masking strategies to capture the structural-grained representations of the in-domain varieties. In particular, with an asymmetric encoder-decoder module, the MAE can generalize to unseen structure variations from the target domains. These structure variations are encoded with an equivariant encoder and treated as domain supervisors to control denoising. We show that, with these encoded structural-grained domain supervisors, GADM can generate effective molecules within the desired new domains. We conduct extensive experiments across various domain adaptation tasks over benchmarking datasets. We show that our approach improves the success rate, defined in terms of molecular validity, uniqueness, and novelty, by up to 65.6% compared to alternative baselines.
2107.14551
Thilanka Munasinghe
Thilanka Munasinghe, HR Pasindu
Sensing and Mapping for Better Roads: Initial Plan for Using Federated Learning and Implementing a Digital Twin to Identify the Road Conditions in a Developing Country -- Sri Lanka
4 pages, KDD Workshop on Data-driven Humanitarian Mapping held with the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 14, 2021
null
null
null
cs.CY cs.DC cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose how a developing country like Sri Lanka can benefit from privacy-enabled machine learning techniques such as Federated Learning to detect road conditions using crowd-sourced data collection and proposed the idea of implementing a Digital Twin for the national road system in Sri Lanka. Developing countries such as Sri Lanka are far behind in implementing smart road systems and smart cities compared to the developed countries. The proposed work discussed in this paper matches the UN Sustainable Development Goal (SDG) 9: "Build Resilient Infrastructure, Promote Inclusive and Sustainable Industrialization and Foster Innovation". Our proposed work discusses how the government and private sector vehicles that conduct routine trips to collect crowd-sourced data using smartphone devices to identify the road conditions and detect where the potholes, surface unevenness (roughness), and other major distresses are located on the roads. We explore Mobile Edge Computing (MEC) techniques that can bring machine learning intelligence closer to the edge devices where produced data is stored and show how the applications of Federated Learning can be made to detect and improve road conditions. During the second phase of this study, we plan to implement a Digital Twin for the road system in Sri Lanka. We intend to use data provided by both Dedicated and Non-Dedicated systems in the proposed Digital Twin for the road system. As of writing this paper, and best to our knowledge, there is no Digital Twin system implemented for roads and other infrastructure systems in Sri Lanka. The proposed Digital Twin will be one of the first implementations of such systems in Sri Lanka. Lessons learned from this pilot project will benefit other developing countries who wish to follow the same path and make data-driven decisions.
[ { "created": "Fri, 30 Jul 2021 11:06:32 GMT", "version": "v1" } ]
2021-08-02
[ [ "Munasinghe", "Thilanka", "" ], [ "Pasindu", "HR", "" ] ]
We propose how a developing country like Sri Lanka can benefit from privacy-enabled machine learning techniques such as Federated Learning to detect road conditions using crowd-sourced data collection, and propose the idea of implementing a Digital Twin for the national road system in Sri Lanka. Developing countries such as Sri Lanka are far behind in implementing smart road systems and smart cities compared to the developed countries. The proposed work discussed in this paper matches the UN Sustainable Development Goal (SDG) 9: "Build Resilient Infrastructure, Promote Inclusive and Sustainable Industrialization and Foster Innovation". Our proposed work discusses how the government and private sector vehicles that conduct routine trips can collect crowd-sourced data using smartphone devices to identify the road conditions and detect where the potholes, surface unevenness (roughness), and other major distresses are located on the roads. We explore Mobile Edge Computing (MEC) techniques that can bring machine learning intelligence closer to the edge devices where produced data is stored and show how the applications of Federated Learning can be made to detect and improve road conditions. During the second phase of this study, we plan to implement a Digital Twin for the road system in Sri Lanka. We intend to use data provided by both Dedicated and Non-Dedicated systems in the proposed Digital Twin for the road system. As of the writing of this paper, and to the best of our knowledge, there is no Digital Twin system implemented for roads and other infrastructure systems in Sri Lanka. The proposed Digital Twin will be one of the first implementations of such systems in Sri Lanka. Lessons learned from this pilot project will benefit other developing countries who wish to follow the same path and make data-driven decisions.
2005.12662
Zihao Wang
Zihao Wang, Clair Vandersteen, Thomas Demarcy, Dan Gnansia, Charles Raffaelli, Nicolas Guevara, Herv\'e Delingette
A Deep Learning based Fast Signed Distance Map Generation
null
null
null
MIDL/2020/ExtendedAbstract/b2N5ZuEouu
cs.GR cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Signed distance map (SDM) is a common representation of surfaces in medical image analysis and machine learning. The computational complexity of SDM for 3D parametric shapes is often a bottleneck in many applications, thus limiting their interest. In this paper, we propose a learning based SDM generation neural network which is demonstrated on a tridimensional cochlea shape model parameterized by 4 shape parameters. The proposed SDM Neural Network generates a cochlea signed distance map depending on four input parameters and we show that the deep learning approach leads to a 60 fold improvement in the time of computation compared to more classical SDM generation methods. Therefore, the proposed approach achieves a good trade-off between accuracy and efficiency.
[ { "created": "Tue, 26 May 2020 12:36:19 GMT", "version": "v1" } ]
2022-12-01
[ [ "Wang", "Zihao", "" ], [ "Vandersteen", "Clair", "" ], [ "Demarcy", "Thomas", "" ], [ "Gnansia", "Dan", "" ], [ "Raffaelli", "Charles", "" ], [ "Guevara", "Nicolas", "" ], [ "Delingette", "Hervé", "" ] ]
Signed distance map (SDM) is a common representation of surfaces in medical image analysis and machine learning. The computational complexity of SDM for 3D parametric shapes is often a bottleneck in many applications, thus limiting their interest. In this paper, we propose a learning-based SDM generation neural network which is demonstrated on a tridimensional cochlea shape model parameterized by 4 shape parameters. The proposed SDM Neural Network generates a cochlea signed distance map depending on four input parameters and we show that the deep learning approach leads to a 60-fold improvement in the time of computation compared to more classical SDM generation methods. Therefore, the proposed approach achieves a good trade-off between accuracy and efficiency.
2308.04798
Qiushi Guo
Qiushi Guo
Enhancing Mobile Privacy and Security: A Face Skin Patch-Based Anti-Spoofing Approach
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As Facial Recognition System(FRS) is widely applied in areas such as access control and mobile payments due to its convenience and high accuracy. The security of facial recognition is also highly regarded. The Face anti-spoofing system(FAS) for face recognition is an important component used to enhance the security of face recognition systems. Traditional FAS used images containing identity information to detect spoofing traces, however there is a risk of privacy leakage during the transmission and storage of these images. Besides, the encryption and decryption of these privacy-sensitive data takes too long compared to inference time by FAS model. To address the above issues, we propose a face anti-spoofing algorithm based on facial skin patches leveraging pure facial skin patch images as input, which contain no privacy information, no encryption or decryption is needed for these images. We conduct experiments on several public datasets, the results prove that our algorithm has demonstrated superiority in both accuracy and speed.
[ { "created": "Wed, 9 Aug 2023 08:36:13 GMT", "version": "v1" } ]
2023-08-10
[ [ "Guo", "Qiushi", "" ] ]
Facial Recognition Systems (FRS) are widely applied in areas such as access control and mobile payments due to their convenience and high accuracy, so the security of facial recognition is highly regarded. The face anti-spoofing system (FAS) is an important component used to enhance the security of face recognition systems. Traditional FAS uses images containing identity information to detect spoofing traces; however, there is a risk of privacy leakage during the transmission and storage of these images. Besides, the encryption and decryption of such privacy-sensitive data take far longer than inference by the FAS model. To address the above issues, we propose a face anti-spoofing algorithm based on facial skin patches, leveraging pure facial skin patch images as input; these images contain no privacy information, so no encryption or decryption is needed for them. We conduct experiments on several public datasets, and the results show that our algorithm is superior in both accuracy and speed.
1503.01416
Bulent Abali
Bulent Abali, Richard J. Eickemeyer, Hubertus Franke, Chung-Sheng Li, Marc A. Taubenblatt
Disaggregated and optically interconnected memory: when will it be cost effective?
9 pages, 7 figures
null
null
null
cs.DC cs.AR cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The "Disaggregated Server" concept has been proposed for datacenters where the same type server resources are aggregated in their respective pools, for example a compute pool, memory pool, network pool, and a storage pool. Each server is constructed dynamically by allocating the right amount of resources from these pools according to the workload's requirements. Modularity, higher packaging and cooling efficiencies, and higher resource utilization are among the suggested benefits. With the emergence of very large datacenters, "clouds" containing tens of thousands of servers, datacenter efficiency has become an important topic. Few computer chip and systems vendors are working on and making frequent announcements on silicon photonics and disaggregated memory systems. In this paper we study the trade-off between cost and performance of building a disaggregated memory system where DRAM modules in the datacenter are pooled, for example in memory-only chassis and racks. The compute pool and the memory pool are interconnected by an optical interconnect to overcome the distance and bandwidth issues of electrical fabrics. We construct a simple cost model that includes the cost of latency, cost of bandwidth and the savings expected from a disaggregated memory system. We then identify the level at which a disaggregated memory system becomes cost competitive with a traditional direct attached memory system. Our analysis shows that a rack-scale disaggregated memory system will have a non-trivial performance penalty, and at the datacenter scale the penalty is impractically high, and the optical interconnect costs are at least a factor of 10 more expensive than where they should be when compared to the traditional direct attached memory systems.
[ { "created": "Tue, 3 Mar 2015 18:38:33 GMT", "version": "v1" } ]
2015-03-09
[ [ "Abali", "Bulent", "" ], [ "Eickemeyer", "Richard J.", "" ], [ "Franke", "Hubertus", "" ], [ "Li", "Chung-Sheng", "" ], [ "Taubenblatt", "Marc A.", "" ] ]
The "Disaggregated Server" concept has been proposed for datacenters where server resources of the same type are aggregated in their respective pools, for example a compute pool, memory pool, network pool, and a storage pool. Each server is constructed dynamically by allocating the right amount of resources from these pools according to the workload's requirements. Modularity, higher packaging and cooling efficiencies, and higher resource utilization are among the suggested benefits. With the emergence of very large datacenters, "clouds" containing tens of thousands of servers, datacenter efficiency has become an important topic. A few computer chip and systems vendors are working on, and making frequent announcements about, silicon photonics and disaggregated memory systems. In this paper we study the trade-off between cost and performance of building a disaggregated memory system where DRAM modules in the datacenter are pooled, for example in memory-only chassis and racks. The compute pool and the memory pool are interconnected by an optical interconnect to overcome the distance and bandwidth issues of electrical fabrics. We construct a simple cost model that includes the cost of latency, cost of bandwidth and the savings expected from a disaggregated memory system. We then identify the level at which a disaggregated memory system becomes cost competitive with a traditional direct attached memory system. Our analysis shows that a rack-scale disaggregated memory system will have a non-trivial performance penalty, and at the datacenter scale the penalty is impractically high, and the optical interconnect costs are at least a factor of 10 more expensive than where they should be when compared to the traditional direct attached memory systems.
2011.07778
Ji Woong Kim
Ji Woong Kim, Peiyao Zhang, Peter Gehlbach, Iulian Iordachita, Marin Kobilarov
Towards Autonomous Eye Surgery by Combining Deep Imitation Learning with Optimal Control
Accepted to Conference on Robot Learning (CoRL) 2020
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
During retinal microsurgery, precise manipulation of the delicate retinal tissue is required for a positive surgical outcome. However, accurate manipulation and navigation of surgical tools remain difficult due to a constrained workspace and the top-down view during surgery, which limits the surgeon's ability to estimate depth. To alleviate this difficulty, we propose to automate the tool-navigation task by learning to predict the relative goal position on the retinal surface from the current tool-tip position. Given an estimated target on the retina, we generate an optimal trajectory leading to the predicted goal while imposing safety-related physical constraints aimed at minimizing tissue damage. As an extended task, we generate goal predictions at various points across the retina to localize the eye geometry and further generate safe trajectories within the estimated confines. Through experiments both in simulation and with several eye phantoms, we demonstrate that our framework can permit navigation to various points on the retina within 0.089mm and 0.118mm in xy error, which is less than a human surgeon's mean tool-tip tremor of 0.180mm. All safety constraints were fulfilled, and the algorithm was robust to previously unseen eyes as well as unseen objects in the scene. A live video demonstration is available here: https://youtu.be/n5j5jCCelXk
[ { "created": "Mon, 16 Nov 2020 08:20:16 GMT", "version": "v1" } ]
2020-11-17
[ [ "Kim", "Ji Woong", "" ], [ "Zhang", "Peiyao", "" ], [ "Gehlbach", "Peter", "" ], [ "Iordachita", "Iulian", "" ], [ "Kobilarov", "Marin", "" ] ]
During retinal microsurgery, precise manipulation of the delicate retinal tissue is required for a positive surgical outcome. However, accurate manipulation and navigation of surgical tools remain difficult due to a constrained workspace and the top-down view during surgery, which limits the surgeon's ability to estimate depth. To alleviate this difficulty, we propose to automate the tool-navigation task by learning to predict the relative goal position on the retinal surface from the current tool-tip position. Given an estimated target on the retina, we generate an optimal trajectory leading to the predicted goal while imposing safety-related physical constraints aimed at minimizing tissue damage. As an extended task, we generate goal predictions at various points across the retina to localize the eye geometry and further generate safe trajectories within the estimated confines. Through experiments both in simulation and with several eye phantoms, we demonstrate that our framework can permit navigation to various points on the retina within 0.089mm and 0.118mm in xy error, which is less than a human surgeon's mean tool-tip tremor of 0.180mm. All safety constraints were fulfilled, and the algorithm was robust to previously unseen eyes as well as unseen objects in the scene. A live video demonstration is available here: https://youtu.be/n5j5jCCelXk
2106.05825
Mohammad Samavatian
Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, Radu Teodorescu
HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks
null
Secure and Private Systems for Machine Learning Workshop 2021
null
null
cs.CR cs.AR cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep Neural Networks (DNNs) are employed in an increasing number of applications, some of which are safety critical. Unfortunately, DNNs are known to be vulnerable to so-called adversarial attacks that manipulate inputs to cause incorrect results that can be beneficial to an attacker or damaging to the victim. Multiple defenses have been proposed to increase the robustness of DNNs. In general, these defenses have high overhead; some require attack-specific re-training of the model or careful tuning to adapt to different attacks. This paper presents HASI, a hardware-accelerated defense that uses a process we call stochastic inference to detect adversarial inputs. We show that by carefully injecting noise into the model at inference time, we can differentiate adversarial inputs from benign ones. HASI uses the output distribution characteristics of noisy inference compared to a non-noisy reference to detect adversarial inputs. We show an adversarial detection rate of 86% when applied to VGG16 and 93% when applied to ResNet50, which exceeds the detection rate of state-of-the-art approaches, with a much lower overhead. We demonstrate two software/hardware-accelerated co-designs, which reduce the performance impact of stochastic inference to 1.58X-2X relative to the unprotected baseline, compared to 15X-20X overhead for a software-only GPU implementation.
[ { "created": "Wed, 9 Jun 2021 14:31:28 GMT", "version": "v1" }, { "created": "Thu, 15 Jul 2021 14:01:49 GMT", "version": "v2" }, { "created": "Fri, 6 Aug 2021 16:03:11 GMT", "version": "v3" } ]
2021-09-29
[ [ "Samavatian", "Mohammad Hossein", "" ], [ "Majumdar", "Saikat", "" ], [ "Barber", "Kristin", "" ], [ "Teodorescu", "Radu", "" ] ]
Deep Neural Networks (DNNs) are employed in an increasing number of applications, some of which are safety critical. Unfortunately, DNNs are known to be vulnerable to so-called adversarial attacks that manipulate inputs to cause incorrect results that can be beneficial to an attacker or damaging to the victim. Multiple defenses have been proposed to increase the robustness of DNNs. In general, these defenses have high overhead; some require attack-specific re-training of the model or careful tuning to adapt to different attacks. This paper presents HASI, a hardware-accelerated defense that uses a process we call stochastic inference to detect adversarial inputs. We show that by carefully injecting noise into the model at inference time, we can differentiate adversarial inputs from benign ones. HASI uses the output distribution characteristics of noisy inference compared to a non-noisy reference to detect adversarial inputs. We show an adversarial detection rate of 86% when applied to VGG16 and 93% when applied to ResNet50, which exceeds the detection rate of state-of-the-art approaches, with a much lower overhead. We demonstrate two software/hardware-accelerated co-designs, which reduce the performance impact of stochastic inference to 1.58X-2X relative to the unprotected baseline, compared to 15X-20X overhead for a software-only GPU implementation.
1303.5723
Daniel Hunter
Daniel Hunter
Non-monotonic Reasoning and the Reversibility of Belief Change
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-159-164
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional approaches to non-monotonic reasoning fail to satisfy a number of plausible axioms for belief revision and suffer from conceptual difficulties as well. Recent work on ranked preferential models (RPMs) promises to overcome some of these difficulties. Here we show that RPMs are not adequate to handle iterated belief change. Specifically, we show that RPMs do not always allow for the reversibility of belief change. This result indicates the need for numerical strengths of belief.
[ { "created": "Wed, 20 Mar 2013 15:31:06 GMT", "version": "v1" } ]
2013-03-26
[ [ "Hunter", "Daniel", "" ] ]
Traditional approaches to non-monotonic reasoning fail to satisfy a number of plausible axioms for belief revision and suffer from conceptual difficulties as well. Recent work on ranked preferential models (RPMs) promises to overcome some of these difficulties. Here we show that RPMs are not adequate to handle iterated belief change. Specifically, we show that RPMs do not always allow for the reversibility of belief change. This result indicates the need for numerical strengths of belief.
1709.05374
Frank Ong
Frank Ong, Joseph Cheng, and Michael Lustig
General Phase Regularized Reconstruction using Phase Cycling
null
null
null
null
cs.CV physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. Theory and Methods: The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Results: Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. Conclusion: The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications.
[ { "created": "Fri, 15 Sep 2017 19:17:13 GMT", "version": "v1" } ]
2017-09-26
[ [ "Ong", "Frank", "" ], [ "Cheng", "Joseph", "" ], [ "Lustig", "Michael", "" ] ]
Purpose: To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. Theory and Methods: The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Results: Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. Conclusion: The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications.
2012.07175
Chuqing Hu
Lang Su, Chuqing Hu, Guofa Li, Dongpu Cao
MSAF: Multimodal Split Attention Fusion
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal learning mimics the reasoning process of the human multi-sensory system, which is used to perceive the surrounding world. While making a prediction, the human brain tends to relate crucial cues from multiple sources of information. In this work, we propose a novel multimodal fusion module that learns to emphasize more contributive features across all modalities. Specifically, the proposed Multimodal Split Attention Fusion (MSAF) module splits each modality into channel-wise equal feature blocks and creates a joint representation that is used to generate soft attention for each channel across the feature blocks. Further, the MSAF module is designed to be compatible with features of various spatial dimensions and sequence lengths, suitable for both CNNs and RNNs. Thus, MSAF can be easily added to fuse features of any unimodal networks and utilize existing pretrained unimodal model weights. To demonstrate the effectiveness of our fusion module, we design three multimodal networks with MSAF for emotion recognition, sentiment analysis, and action recognition tasks. Our approach achieves competitive results in each task and outperforms other application-specific networks and multimodal fusion benchmarks.
[ { "created": "Sun, 13 Dec 2020 22:42:41 GMT", "version": "v1" }, { "created": "Sat, 26 Jun 2021 14:24:23 GMT", "version": "v2" } ]
2021-06-29
[ [ "Su", "Lang", "" ], [ "Hu", "Chuqing", "" ], [ "Li", "Guofa", "" ], [ "Cao", "Dongpu", "" ] ]
Multimodal learning mimics the reasoning process of the human multi-sensory system, which is used to perceive the surrounding world. While making a prediction, the human brain tends to relate crucial cues from multiple sources of information. In this work, we propose a novel multimodal fusion module that learns to emphasize more contributive features across all modalities. Specifically, the proposed Multimodal Split Attention Fusion (MSAF) module splits each modality into channel-wise equal feature blocks and creates a joint representation that is used to generate soft attention for each channel across the feature blocks. Further, the MSAF module is designed to be compatible with features of various spatial dimensions and sequence lengths, suitable for both CNNs and RNNs. Thus, MSAF can be easily added to fuse features of any unimodal networks and utilize existing pretrained unimodal model weights. To demonstrate the effectiveness of our fusion module, we design three multimodal networks with MSAF for emotion recognition, sentiment analysis, and action recognition tasks. Our approach achieves competitive results in each task and outperforms other application-specific networks and multimodal fusion benchmarks.
1301.3299
Wanwei Liu
Wanwei Liu and Rui Wang and Xianjin Fu and Ji Wang and Wei Dong and Xiaoguang Mao
Counterexample-Preserving Reduction for Symbolic Model Checking
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cost of LTL model checking is highly sensitive to the length of the formula under verification. We observe that, under some specific conditions, the input LTL formula can be reduced to an easier-to-handle one before model checking. In our reduction, these two formulae need not be logically equivalent, but they share the same counterexample set w.r.t. the model. In the case that the model is symbolically represented, the condition enabling such reduction can be detected with a lightweight effort (e.g., with SAT-solving). In this paper, we tentatively name such a technique "Counterexample-Preserving Reduction" (CePRe for short), and finally the proposed technique is experimentally evaluated by adapting NuSMV.
[ { "created": "Tue, 15 Jan 2013 10:53:51 GMT", "version": "v1" } ]
2013-01-16
[ [ "Liu", "Wanwei", "" ], [ "Wang", "Rui", "" ], [ "Fu", "Xianjin", "" ], [ "Wang", "Ji", "" ], [ "Dong", "Wei", "" ], [ "Mao", "Xiaoguang", "" ] ]
The cost of LTL model checking is highly sensitive to the length of the formula under verification. We observe that, under some specific conditions, the input LTL formula can be reduced to an easier-to-handle one before model checking. In our reduction, these two formulae need not be logically equivalent, but they share the same counterexample set w.r.t. the model. In the case that the model is symbolically represented, the condition enabling such reduction can be detected with a lightweight effort (e.g., with SAT-solving). In this paper, we tentatively name such a technique "Counterexample-Preserving Reduction" (CePRe for short), and finally the proposed technique is experimentally evaluated by adapting NuSMV.
1701.01717
Joshua Grochow
Joshua A. Grochow and Mrinal Kumar and Michael Saks and Shubhangi Saraf
Towards an algebraic natural proofs barrier via polynomial identity testing
null
null
null
null
cs.CC math.AG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We observe that a certain kind of algebraic proof - which covers essentially all known algebraic circuit lower bounds to date - cannot be used to prove lower bounds against VP if and only if what we call succinct hitting sets exist for VP. This is analogous to the Razborov-Rudich natural proofs barrier in Boolean circuit complexity, in that we rule out a large class of lower bound techniques under a derandomization assumption. We also discuss connections between this algebraic natural proofs barrier, geometric complexity theory, and (algebraic) proof complexity.
[ { "created": "Fri, 6 Jan 2017 18:27:48 GMT", "version": "v1" } ]
2017-01-09
[ [ "Grochow", "Joshua A.", "" ], [ "Kumar", "Mrinal", "" ], [ "Saks", "Michael", "" ], [ "Saraf", "Shubhangi", "" ] ]
We observe that a certain kind of algebraic proof - which covers essentially all known algebraic circuit lower bounds to date - cannot be used to prove lower bounds against VP if and only if what we call succinct hitting sets exist for VP. This is analogous to the Razborov-Rudich natural proofs barrier in Boolean circuit complexity, in that we rule out a large class of lower bound techniques under a derandomization assumption. We also discuss connections between this algebraic natural proofs barrier, geometric complexity theory, and (algebraic) proof complexity.
2406.19589
Luke Dzwonczyk
Luke Dzwonczyk and Carmine Emanuele Cella and David Ban
Network Bending of Diffusion Models for Audio-Visual Generation
8 pages, 5 figures, to be published in the proceedings of the 27th International Conference on Digital Audio Effects (DAFx24), for additional image and video examples see https://dzluke.github.io/DAFX2024/
null
null
null
cs.SD cs.LG cs.MM eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper we present the first steps towards the creation of a tool which enables artists to create music visualizations using pre-trained, generative, machine learning models. First, we investigate the application of network bending, the process of applying transforms within the layers of a generative network, to image generation diffusion models by utilizing a range of point-wise, tensor-wise, and morphological operators. We identify a number of visual effects that result from various operators, including some that are not easily recreated with standard image editing tools. We find that this process allows for continuous, fine-grain control of image generation which can be helpful for creative applications. Next, we generate music-reactive videos using Stable Diffusion by passing audio features as parameters to network bending operators. Finally, we comment on certain transforms which radically shift the image and the possibilities of learning more about the latent space of Stable Diffusion based on these transforms.
[ { "created": "Fri, 28 Jun 2024 00:39:17 GMT", "version": "v1" } ]
2024-07-01
[ [ "Dzwonczyk", "Luke", "" ], [ "Cella", "Carmine Emanuele", "" ], [ "Ban", "David", "" ] ]
In this paper we present the first steps towards the creation of a tool which enables artists to create music visualizations using pre-trained, generative, machine learning models. First, we investigate the application of network bending, the process of applying transforms within the layers of a generative network, to image generation diffusion models by utilizing a range of point-wise, tensor-wise, and morphological operators. We identify a number of visual effects that result from various operators, including some that are not easily recreated with standard image editing tools. We find that this process allows for continuous, fine-grain control of image generation which can be helpful for creative applications. Next, we generate music-reactive videos using Stable Diffusion by passing audio features as parameters to network bending operators. Finally, we comment on certain transforms which radically shift the image and the possibilities of learning more about the latent space of Stable Diffusion based on these transforms.
2108.09897
Khoi Nguyen
Khoi Nguyen, Sinisa Todorovic
A Weakly Supervised Amodal Segmenter with Boundary Uncertainty Estimation
Accepted to ICCV 2021
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper addresses weakly supervised amodal instance segmentation, where the goal is to segment both visible and occluded (amodal) object parts, while training provides only ground-truth visible (modal) segmentations. Following prior work, we use data manipulation to generate occlusions in training images and thus train a segmenter to predict amodal segmentations of the manipulated data. The resulting predictions on training images are taken as the pseudo-ground truth for the standard training of Mask-RCNN, which we use for amodal instance segmentation of test images. For generating the pseudo-ground truth, we specify a new Amodal Segmenter based on Boundary Uncertainty estimation (ASBU) and make two contributions. First, while prior work uses the occluder's mask, our ASBU uses the occlusion boundary as input. Second, ASBU estimates an uncertainty map of the prediction. The estimated uncertainty regularizes learning such that lower segmentation loss is incurred on regions with high uncertainty. ASBU achieves significant performance improvement relative to the state of the art on the COCOA and KINS datasets in three tasks: amodal instance segmentation, amodal completion, and ordering recovery.
[ { "created": "Mon, 23 Aug 2021 02:27:29 GMT", "version": "v1" }, { "created": "Mon, 30 Aug 2021 02:17:00 GMT", "version": "v2" } ]
2021-09-01
[ [ "Nguyen", "Khoi", "" ], [ "Todorovic", "Sinisa", "" ] ]
This paper addresses weakly supervised amodal instance segmentation, where the goal is to segment both visible and occluded (amodal) object parts, while training provides only ground-truth visible (modal) segmentations. Following prior work, we use data manipulation to generate occlusions in training images and thus train a segmenter to predict amodal segmentations of the manipulated data. The resulting predictions on training images are taken as the pseudo-ground truth for the standard training of Mask-RCNN, which we use for amodal instance segmentation of test images. For generating the pseudo-ground truth, we specify a new Amodal Segmenter based on Boundary Uncertainty estimation (ASBU) and make two contributions. First, while prior work uses the occluder's mask, our ASBU uses the occlusion boundary as input. Second, ASBU estimates an uncertainty map of the prediction. The estimated uncertainty regularizes learning such that lower segmentation loss is incurred on regions with high uncertainty. ASBU achieves significant performance improvement relative to the state of the art on the COCOA and KINS datasets in three tasks: amodal instance segmentation, amodal completion, and ordering recovery.
2211.05627
Alexander K\"uchler
Alexander K\"uchler and Christian Banse
Representing LLVM-IR in a Code Property Graph
null
Information Security (ISC) 2022
10.1007/978-3-031-22390-7_21
null
cs.SE cs.CR cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, a number of static application security testing tools have been proposed which make use of so-called code property graphs, a graph model which keeps rich information about the source code while enabling its user to write language-agnostic analyses. However, they suffer from several shortcomings. They work mostly on source code and exclude the analysis of third-party dependencies if these are only available as compiled binaries. Furthermore, their analyses are limited by whether an individual programming language is supported or not. While support for well-established languages such as C/C++ or Java is often included, languages that are still heavily evolving, such as Rust, are not considered because of the constant changes in the language design. To overcome these limitations, we extend an open source implementation of a code property graph to support LLVM-IR, which can be emitted by many compilers and binary lifters. In this paper, we discuss how we address the challenges that arise when mapping concepts of an intermediate representation to a CPG. At the same time, we optimize the resulting graph to be minimal and close to the representation of equivalent source code. Our evaluation indicates that existing analyses can be reused without modifications and that the performance requirements are comparable to operating on source code. This makes the approach suitable for the analysis of large-scale projects.
[ { "created": "Wed, 9 Nov 2022 09:37:30 GMT", "version": "v1" }, { "created": "Fri, 9 Dec 2022 07:00:31 GMT", "version": "v2" } ]
2022-12-12
[ [ "Küchler", "Alexander", "" ], [ "Banse", "Christian", "" ] ]
In recent years, a number of static application security testing tools have been proposed which make use of so-called code property graphs, a graph model which keeps rich information about the source code while enabling its user to write language-agnostic analyses. However, they suffer from several shortcomings. They work mostly on source code and exclude the analysis of third-party dependencies if these are only available as compiled binaries. Furthermore, their analyses are limited by whether an individual programming language is supported or not. While support for well-established languages such as C/C++ or Java is often included, languages that are still heavily evolving, such as Rust, are not considered because of the constant changes in the language design. To overcome these limitations, we extend an open source implementation of a code property graph to support LLVM-IR, which can be emitted by many compilers and binary lifters. In this paper, we discuss how we address the challenges that arise when mapping concepts of an intermediate representation to a CPG. At the same time, we optimize the resulting graph to be minimal and close to the representation of equivalent source code. Our evaluation indicates that existing analyses can be reused without modifications and that the performance requirements are comparable to operating on source code. This makes the approach suitable for the analysis of large-scale projects.
2302.10257
Md Ibrahim
Moloy Kumar Ghosh, Milton Kumar Kundu, Md Ibrahim, A. S. M. Badrudduza, Md. Shamim Anower, Imran Shafique Ansari, Ali A. Shaikhi, Mohammed A. Mohandes
Secrecy Outage Analysis of Energy Harvesting Relay-based Mixed UOWC-RF Network with Multiple Eavesdroppers
No
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
This work deals with the physical layer security performance of a dual-hop underwater optical wireless communication (UOWC)-radio frequency (RF) network under the intruding attempts of multiple eavesdroppers via RF links. The intermediate decode-and-forward relay node between the underwater source and the destination transforms the optical signal into electrical form and re-transmits it to the destination node with the help of energy harvested by the relay from an integrated power beacon within the system. The source-to-relay link (UOWC) follows mixture exponential generalized Gamma turbulence with pointing error impairments, whereas all the remaining links (RF) undergo $\kappa-\mu$ shadowed fading. With regard to the types of intruders, herein two scenarios are considered, i.e., colluding (\textit{Scenario-I}) and non-colluding (\textit{Scenario-II}) eavesdroppers, and the analytical expressions of secure outage probability, probability of strictly positive secrecy capacity, and effective secrecy throughput are derived in closed form for each scenario. Furthermore, the impacts of UOWC and RF channel parameters as well as detection techniques on secrecy capacity are demonstrated, and following this a comparison between the two considered scenarios reveals that collusion between the eavesdroppers imposes the most harmful threat on secrecy throughput, but a better secrecy level can be attained by adopting diversity at the destination and power beacon nodes along with heterodyne detection rather than the intensity modulation and direct detection technique. Finally, all the derived expressions are corroborated via Monte Carlo simulations.
[ { "created": "Mon, 20 Feb 2023 19:40:40 GMT", "version": "v1" } ]
2023-02-22
[ [ "Ghosh", "Moloy Kumar", "" ], [ "Kundu", "Milton Kumar", "" ], [ "Ibrahim", "Md", "" ], [ "Badrudduza", "A. S. M.", "" ], [ "Anower", "Md. Shamim", "" ], [ "Ansari", "Imran Shafique", "" ], [ "Shaikhi", "Ali A.", "" ], [ "Mohandes", "Mohammed A.", "" ] ]
This work deals with the physical layer security performance of a dual-hop underwater optical wireless communication (UOWC)-radio frequency (RF) network under the intruding attempts of multiple eavesdroppers via RF links. The intermediate decode-and-forward relay node between the underwater source and the destination transforms the optical signal into electrical form and re-transmits it to the destination node with the help of energy harvested by the relay from an integrated power beacon within the system. The source-to-relay link (UOWC) follows mixture exponential generalized Gamma turbulence with pointing error impairments, whereas all the remaining links (RF) undergo $\kappa-\mu$ shadowed fading. With regard to the types of intruders, herein two scenarios are considered, i.e., colluding (\textit{Scenario-I}) and non-colluding (\textit{Scenario-II}) eavesdroppers, and the analytical expressions of secure outage probability, probability of strictly positive secrecy capacity, and effective secrecy throughput are derived in closed form for each scenario. Furthermore, the impacts of UOWC and RF channel parameters as well as detection techniques on secrecy capacity are demonstrated, and following this a comparison between the two considered scenarios reveals that collusion between the eavesdroppers imposes the most harmful threat on secrecy throughput, but a better secrecy level can be attained by adopting diversity at the destination and power beacon nodes along with heterodyne detection rather than the intensity modulation and direct detection technique. Finally, all the derived expressions are corroborated via Monte Carlo simulations.
2202.12855
Krishnasuri Narayanam
Krishnasuri Narayanam, Venkatraman Ramakrishna, Dhinakaran Vinayagamurthy and Sandeep Nishad
Atomic cross-chain exchanges of shared assets
null
null
10.1145/3558535.3559786
null
cs.CR cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
A core enabler for blockchain or DLT interoperability is the ability to atomically exchange assets held by mutually untrusting owners on different ledgers. This atomic swap problem has been well-studied, with the Hash Time Locked Contract (HTLC) emerging as a canonical solution. HTLC ensures atomicity of exchange, albeit with caveats for node failure and timeliness of claims. But a bigger limitation of HTLC is that it only applies to a model consisting of two adversarial parties having sole ownership of a single asset in each ledger. Realistic extensions of the model in which assets may be jointly owned by multiple parties, all of whose consents are required for exchanges, or where multiple assets must be exchanged for one, are susceptible to collusion attacks and hence cannot be handled by HTLC. In this paper, we generalize the model of asset exchanges across DLT networks and present a taxonomy of use cases, describe the threat model, and propose MPHTLC, an augmented HTLC protocol for atomic multi-owner-and-asset exchanges. We analyze the correctness, safety, and application scope of MPHTLC. As proof-of-concept, we show how MPHTLC primitives can be implemented in networks built on Hyperledger Fabric and Corda, and how MPHTLC can be implemented in the Hyperledger Labs Weaver framework by augmenting its existing HTLC protocol.
[ { "created": "Fri, 25 Feb 2022 18:04:30 GMT", "version": "v1" }, { "created": "Tue, 31 May 2022 12:33:04 GMT", "version": "v2" }, { "created": "Sat, 10 Sep 2022 19:50:03 GMT", "version": "v3" } ]
2022-09-13
[ [ "Narayanam", "Krishnasuri", "" ], [ "Ramakrishna", "Venkatraman", "" ], [ "Vinayagamurthy", "Dhinakaran", "" ], [ "Nishad", "Sandeep", "" ] ]
A core enabler for blockchain or DLT interoperability is the ability to atomically exchange assets held by mutually untrusting owners on different ledgers. This atomic swap problem has been well-studied, with the Hash Time Locked Contract (HTLC) emerging as a canonical solution. HTLC ensures atomicity of exchange, albeit with caveats for node failure and timeliness of claims. But a bigger limitation of HTLC is that it only applies to a model consisting of two adversarial parties having sole ownership of a single asset in each ledger. Realistic extensions of the model in which assets may be jointly owned by multiple parties, all of whose consents are required for exchanges, or where multiple assets must be exchanged for one, are susceptible to collusion attacks and hence cannot be handled by HTLC. In this paper, we generalize the model of asset exchanges across DLT networks and present a taxonomy of use cases, describe the threat model, and propose MPHTLC, an augmented HTLC protocol for atomic multi-owner-and-asset exchanges. We analyze the correctness, safety, and application scope of MPHTLC. As proof-of-concept, we show how MPHTLC primitives can be implemented in networks built on Hyperledger Fabric and Corda, and how MPHTLC can be implemented in the Hyperledger Labs Weaver framework by augmenting its existing HTLC protocol.
1902.06914
Adil Rajput
Adil E. Rajput, Akila Sarirete and Tamer F. Desouky
Using Crowdsourcing to Identify a Proxy of Socio-Economic status
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Social media provides researchers with an unprecedented opportunity to gain insight into various facets of human life. Health practitioners put a great emphasis on pinpointing the socioeconomic status (SES) of individuals, as they can use it to predict certain diseases. Crowdsourcing is a term coined to describe gathering intelligence from a user community online. In order to group online users into communities, researchers have made use of hashtags that cull the interest of a community of users. In this paper, we propose a mechanism to group a certain set of users based on their geographic background and build a corpus for such users. Specifically, we have looked at discussion forums for some vehicles where the site has established communities for different areas to air their grievances or sing the praises of the vehicle. From such discussions, it was possible to glean the vocabulary that these groups of users adhere to. We compared the corpora of different communities and noted the differences in the choice of language. This provided us with the groundwork for predicting the socio-economic status of such communities, which can be particularly helpful to health practitioners and in turn used in smart cities to provide better services to community members. More work is underway to handle out-of-vocabulary (OOV) words and emojis and to assess the average score as special cases.
[ { "created": "Tue, 19 Feb 2019 06:25:23 GMT", "version": "v1" } ]
2019-02-20
[ [ "Rajput", "Adil E.", "" ], [ "Sarirete", "Akila", "" ], [ "Desouky", "Tamer F.", "" ] ]
Social media provides researchers with an unprecedented opportunity to gain insight into various facets of human life. Health practitioners put a great emphasis on pinpointing the socioeconomic status (SES) of individuals, as they can use it to predict certain diseases. Crowdsourcing is a term coined to describe gathering intelligence from a user community online. In order to group online users into communities, researchers have made use of hashtags that cull the interest of a community of users. In this paper, we propose a mechanism to group a certain set of users based on their geographic background and build a corpus for such users. Specifically, we have looked at discussion forums for some vehicles where the site has established communities for different areas to air their grievances or sing the praises of the vehicle. From such discussions, it was possible to glean the vocabulary that these groups of users adhere to. We compared the corpora of different communities and noted the differences in the choice of language. This provided us with the groundwork for predicting the socio-economic status of such communities, which can be particularly helpful to health practitioners and in turn used in smart cities to provide better services to community members. More work is underway to handle out-of-vocabulary (OOV) words and emojis and to assess the average score as special cases.
2407.05461
Mohamed Elmahallawy
Md Sazedur Rahman, Mohamed Elmahallawy, Sanjay Madria, Samuel Frimpong
CAV-AD: A Robust Framework for Detection of Anomalous Data and Malicious Sensors in CAV Networks
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
The adoption of connected and automated vehicles (CAVs) has sparked considerable interest across diverse industries, including the public transportation, underground mining, and agriculture sectors. However, CAVs' reliance on sensor readings makes them vulnerable to significant threats. Manipulating these readings can compromise CAV network security, posing serious risks for malicious activities. Although several anomaly detection (AD) approaches for CAV networks have been proposed, they often fail to: i) detect multiple anomalies in specific sensor(s) with high accuracy or F1 score, and ii) identify the specific sensor being attacked. In response, this paper proposes a novel framework tailored to CAV networks, called CAV-AD, for distinguishing abnormal readings amidst multiple anomaly data while identifying malicious sensors. Specifically, CAV-AD comprises two main components: i) a novel CNN model architecture called optimized omni-scale CNN (O-OS-CNN), which optimally selects the time scale by generating all possible kernel sizes for input time series data; ii) an amplification block to increase the values of anomaly readings, enhancing sensitivity for detecting anomalies. Moreover, CAV-AD integrates the proposed O-OS-CNN with a Kalman filter to instantly identify the malicious sensors. We extensively train CAV-AD using real-world datasets containing both instant and constant attacks, evaluating its performance in detecting intrusions from multiple anomalies, which presents a more challenging scenario. Our results demonstrate that CAV-AD outperforms state-of-the-art methods, achieving an average accuracy of 98% and an average F1 score of 89%, while accurately identifying the malicious sensors.
[ { "created": "Sun, 7 Jul 2024 18:19:03 GMT", "version": "v1" } ]
2024-07-09
[ [ "Rahman", "Md Sazedur", "" ], [ "Elmahallawy", "Mohamed", "" ], [ "Madria", "Sanjay", "" ], [ "Frimpong", "Samuel", "" ] ]
The adoption of connected and automated vehicles (CAVs) has sparked considerable interest across diverse industries, including the public transportation, underground mining, and agriculture sectors. However, CAVs' reliance on sensor readings makes them vulnerable to significant threats. Manipulating these readings can compromise CAV network security, posing serious risks for malicious activities. Although several anomaly detection (AD) approaches for CAV networks have been proposed, they often fail to: i) detect multiple anomalies in specific sensor(s) with high accuracy or F1 score, and ii) identify the specific sensor being attacked. In response, this paper proposes a novel framework tailored to CAV networks, called CAV-AD, for distinguishing abnormal readings amidst multiple anomaly data while identifying malicious sensors. Specifically, CAV-AD comprises two main components: i) a novel CNN model architecture called optimized omni-scale CNN (O-OS-CNN), which optimally selects the time scale by generating all possible kernel sizes for input time series data; ii) an amplification block to increase the values of anomaly readings, enhancing sensitivity for detecting anomalies. Moreover, CAV-AD integrates the proposed O-OS-CNN with a Kalman filter to instantly identify the malicious sensors. We extensively train CAV-AD using real-world datasets containing both instant and constant attacks, evaluating its performance in detecting intrusions from multiple anomalies, which presents a more challenging scenario. Our results demonstrate that CAV-AD outperforms state-of-the-art methods, achieving an average accuracy of 98% and an average F1 score of 89%, while accurately identifying the malicious sensors.
1809.08613
Namiko Saito
Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata and Shigeki Sugano
Detecting Features of Tools, Objects, and Actions from Effects in a Robot using Deep Learning
7 pages, 9 figures
null
null
null
cs.RO cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a tool-use model that can detect the features of tools, target objects, and actions from the provided effects of object manipulation. We construct a model that enables robots to manipulate objects with tools, using infant learning as a concept. To realize this, we train sensory-motor data recorded during a tool-use task performed by a robot with deep learning. Experiments include four factors: (1) tools, (2) objects, (3) actions, and (4) effects, which the model considers simultaneously. For evaluation, the robot generates predicted images and motions given information of the effects of using unknown tools and objects. We confirm that the robot is capable of detecting features of tools, objects, and actions by learning the effects and executing the task.
[ { "created": "Sun, 23 Sep 2018 15:24:21 GMT", "version": "v1" } ]
2018-09-25
[ [ "Saito", "Namiko", "" ], [ "Kim", "Kitae", "" ], [ "Murata", "Shingo", "" ], [ "Ogata", "Tetsuya", "" ], [ "Sugano", "Shigeki", "" ] ]
We propose a tool-use model that can detect the features of tools, target objects, and actions from the provided effects of object manipulation. We construct a model that enables robots to manipulate objects with tools, using infant learning as a concept. To realize this, we train sensory-motor data recorded during a tool-use task performed by a robot with deep learning. Experiments include four factors: (1) tools, (2) objects, (3) actions, and (4) effects, which the model considers simultaneously. For evaluation, the robot generates predicted images and motions given information of the effects of using unknown tools and objects. We confirm that the robot is capable of detecting features of tools, objects, and actions by learning the effects and executing the task.
2004.12480
Najma Mathema
Najma Mathema, Michael A. Goodrich, and Jacob W. Crandall
Predicting Plans and Actions in Two-Player Repeated Games
Accepted in The AAAI 2020 Workshop on Plan, Activity, and Intent Recognition
null
null
null
cs.AI cs.GT cs.HC cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence (AI) agents will need to interact with both other AI agents and humans. Creating models of associates helps to predict the modeled agents' actions, plans, and intentions. This work introduces and explores algorithms that predict actions, plans, and intentions in repeated-play games. We form a generative Bayesian approach to model S#, a robust algorithm designed to learn to cooperate with its associate in 2-by-2 matrix games. The actions, plans, and intentions associated with each S# expert are identified from the literature, the S# experts are grouped accordingly, and actions, plans, and intentions are thus predicted based on their state probabilities. Two prediction methods are explored for the Prisoner's Dilemma: Maximum A Posteriori (MAP) and an Aggregation approach. MAP (~89% accuracy) performed best for action prediction. Both methods predicted the plans of S# with ~88% accuracy. A paired t-test shows that MAP performs significantly better than Aggregation for predicting S#'s actions without cheap talk. Intention is explored based on the goals of the S# experts; results show that goals are predicted precisely when modeling S#. The obtained results show that the proposed Bayesian approach is well suited for modeling agents in two-player repeated games.
[ { "created": "Sun, 26 Apr 2020 21:03:28 GMT", "version": "v1" } ]
2020-04-28
[ [ "Mathema", "Najma", "" ], [ "Goodrich", "Michael A.", "" ], [ "Crandall", "Jacob W.", "" ] ]
Artificial intelligence (AI) agents will need to interact with both other AI agents and humans. Creating models of associates helps to predict the modeled agents' actions, plans, and intentions. This work introduces and explores algorithms that predict actions, plans, and intentions in repeated-play games. We form a generative Bayesian approach to model S#, a robust algorithm designed to learn to cooperate with its associate in 2-by-2 matrix games. The actions, plans, and intentions associated with each S# expert are identified from the literature, the S# experts are grouped accordingly, and actions, plans, and intentions are thus predicted based on their state probabilities. Two prediction methods are explored for the Prisoner's Dilemma: Maximum A Posteriori (MAP) and an Aggregation approach. MAP (~89% accuracy) performed best for action prediction. Both methods predicted the plans of S# with ~88% accuracy. A paired t-test shows that MAP performs significantly better than Aggregation for predicting S#'s actions without cheap talk. Intention is explored based on the goals of the S# experts; results show that goals are predicted precisely when modeling S#. The obtained results show that the proposed Bayesian approach is well suited for modeling agents in two-player repeated games.
2004.10596
Amit Saha
Arpita Sanyal (Bhaduri), Amit Saha, Debasri Saha, Banani Saha and Amlan Chakrabarti
Circuit Design for Clique Problem and Its Implementation on Quantum Computer
25 pages, 18 figures. arXiv admin note: text overlap with arXiv:1805.10224 by other authors
IET Quantum Communication, 2021
10.1049/qtc2.12029
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Finding cliques in a graph has several applications owing to its pattern matching ability. The $k$-clique problem, a special case of the clique problem that determines whether an arbitrary graph contains a clique of size $k$, has already been addressed in the quantum domain. A variant of the $k$-clique problem that lists all cliques of size $k$ also has popular modern-day applications. However, the implementation of this variant of the $k$-clique problem in the quantum setting remains untouched. In this paper, apart from a theoretical solution of this $k$-clique problem, a practical quantum gate-based implementation is addressed using Grover's algorithm. This approach is further extended to design a circuit for the maximum clique problem in a classical-quantum hybrid architecture. The algorithm automatically generates the circuit for any given undirected and unweighted graph and any given $k$, which makes our approach generalized in nature. The proposed approach to solving the $k$-clique problem exhibits a reduction in qubit cost and circuit depth compared to the state-of-the-art approach, for a small $k$ with respect to a large graph. A framework that can map the automatically generated circuit for the clique problem to quantum devices is also proposed. An analysis of the experimental results is demonstrated using IBM's Qiskit.
[ { "created": "Tue, 10 Mar 2020 04:29:35 GMT", "version": "v1" }, { "created": "Fri, 15 Jan 2021 11:03:36 GMT", "version": "v2" }, { "created": "Wed, 20 Jan 2021 18:20:17 GMT", "version": "v3" }, { "created": "Wed, 7 Jul 2021 18:59:30 GMT", "version": "v4" } ]
2022-02-23
[ [ "Sanyal", "Arpita", "", "Bhaduri" ], [ "Saha", "Amit", "" ], [ "Saha", "Debasri", "" ], [ "Saha", "Banani", "" ], [ "Chakrabarti", "Amlan", "" ] ]
Finding cliques in a graph has several applications owing to its pattern matching ability. The $k$-clique problem, a special case of the clique problem that determines whether an arbitrary graph contains a clique of size $k$, has already been addressed in the quantum domain. A variant of the $k$-clique problem that lists all cliques of size $k$ also has popular modern-day applications. However, the implementation of this variant of the $k$-clique problem in the quantum setting remains untouched. In this paper, apart from a theoretical solution of this $k$-clique problem, a practical quantum gate-based implementation is addressed using Grover's algorithm. This approach is further extended to design a circuit for the maximum clique problem in a classical-quantum hybrid architecture. The algorithm automatically generates the circuit for any given undirected and unweighted graph and any given $k$, which makes our approach generalized in nature. The proposed approach to solving the $k$-clique problem exhibits a reduction in qubit cost and circuit depth compared to the state-of-the-art approach, for a small $k$ with respect to a large graph. A framework that can map the automatically generated circuit for the clique problem to quantum devices is also proposed. An analysis of the experimental results is demonstrated using IBM's Qiskit.
2105.14835
Christoph Hertrich
Christoph Hertrich, Amitabh Basu, Marco Di Summa, Martin Skutella
Towards Lower Bounds on the Depth of ReLU Neural Networks
Authors' accepted manuscript for SIAM Journal on Discrete Mathematics. A preliminary conference version appeared at NeurIPS 2021
SIAM Journal on Discrete Mathematics 2023 37:2, 997-1029
10.1137/22M1489332
null
cs.LG cs.DM cs.NE math.CO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We contribute to a better understanding of the class of functions that can be represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning any function. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). As a by-product of our investigations, we settle an old conjecture about piecewise linear functions by Wang and Sun (2005) in the affirmative. We also present upper bounds on the sizes of neural networks required to represent functions with logarithmic depth.
[ { "created": "Mon, 31 May 2021 09:49:14 GMT", "version": "v1" }, { "created": "Tue, 26 Oct 2021 08:46:28 GMT", "version": "v2" }, { "created": "Fri, 7 Jan 2022 16:15:27 GMT", "version": "v3" }, { "created": "Thu, 16 Mar 2023 16:22:13 GMT", "version": "v4" }, { "created": "Wed, 17 Jul 2024 16:15:49 GMT", "version": "v5" } ]
2024-07-18
[ [ "Hertrich", "Christoph", "" ], [ "Basu", "Amitabh", "" ], [ "Di Summa", "Marco", "" ], [ "Skutella", "Martin", "" ] ]
We contribute to a better understanding of the class of functions that can be represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning any function. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). As a by-product of our investigations, we settle an old conjecture about piecewise linear functions by Wang and Sun (2005) in the affirmative. We also present upper bounds on the sizes of neural networks required to represent functions with logarithmic depth.
2302.02083
Michal Kosinski
Michal Kosinski
Evaluating Large Language Models in Theory of Mind Tasks
TRY RUNNING ToM EXPERIMENTS ON YOUR OWN: The code and tasks used in this study are available at Colab (https://colab.research.google.com/drive/1ZRtmw87CdA4xp24DNS_Ik_uA2ypaRnoU). Don't worry if you are not an expert coder, you should be able to run this code with no-to-minimum Python skills. Or copy-paste the tasks to ChatGPT's web interface
null
null
null
cs.CL cs.CY cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
Eleven Large Language Models (LLMs) were assessed using a custom-made battery of false-belief tasks, considered a gold standard in testing Theory of Mind (ToM) in humans. The battery included 640 prompts spread across 40 diverse tasks, each one including a false-belief scenario, three closely matched true-belief control scenarios, and the reversed versions of all four. To solve a single task, a model needed to correctly answer 16 prompts across all eight scenarios. Smaller and older models solved no tasks; GPT-3-davinci-003 (from November 2022) and ChatGPT-3.5-turbo (from March 2023) solved 20% of the tasks; ChatGPT-4 (from June 2023) solved 75% of the tasks, matching the performance of six-year-old children observed in past studies. We explore the potential interpretation of these findings, including the intriguing possibility that ToM, previously considered exclusive to humans, may have spontaneously emerged as a byproduct of LLMs' improving language skills.
[ { "created": "Sat, 4 Feb 2023 03:50:01 GMT", "version": "v1" }, { "created": "Fri, 10 Feb 2023 19:01:49 GMT", "version": "v2" }, { "created": "Tue, 14 Mar 2023 18:49:26 GMT", "version": "v3" }, { "created": "Tue, 29 Aug 2023 14:55:37 GMT", "version": "v4" }, { "created": "Sat, 11 Nov 2023 23:05:44 GMT", "version": "v5" }, { "created": "Sat, 17 Feb 2024 02:05:32 GMT", "version": "v6" } ]
2024-02-20
[ [ "Kosinski", "Michal", "" ] ]
Eleven Large Language Models (LLMs) were assessed using a custom-made battery of false-belief tasks, considered a gold standard in testing Theory of Mind (ToM) in humans. The battery included 640 prompts spread across 40 diverse tasks, each one including a false-belief scenario, three closely matched true-belief control scenarios, and the reversed versions of all four. To solve a single task, a model needed to correctly answer 16 prompts across all eight scenarios. Smaller and older models solved no tasks; GPT-3-davinci-003 (from November 2022) and ChatGPT-3.5-turbo (from March 2023) solved 20% of the tasks; ChatGPT-4 (from June 2023) solved 75% of the tasks, matching the performance of six-year-old children observed in past studies. We explore the potential interpretation of these findings, including the intriguing possibility that ToM, previously considered exclusive to humans, may have spontaneously emerged as a byproduct of LLMs' improving language skills.
2404.11209
Hongzhao Li
Hongzhao Li, Hongyu Wang, Xia Sun, Hua He, Jun Feng
Prompt-Guided Generation of Structured Chest X-Ray Report Using a Pre-trained LLM
Accepted by IEEE Conference on Multimedia Expo 2024
null
null
null
cs.AI cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical report generation automates radiology descriptions from images, easing the burden on physicians and minimizing errors. However, current methods lack structured outputs and physician interactivity for clear, clinically relevant reports. Our method introduces a prompt-guided approach to generate structured chest X-ray reports using a pre-trained large language model (LLM). First, we identify anatomical regions in chest X-rays to generate focused sentences that center on key visual elements, thereby establishing a structured report foundation with anatomy-based sentences. We also convert the detected anatomy into textual prompts conveying anatomical comprehension to the LLM. Additionally, the clinical context prompts guide the LLM to emphasize interactivity and clinical requirements. By integrating anatomy-focused sentences and anatomy/clinical prompts, the pre-trained LLM can generate structured chest X-ray reports tailored to prompted anatomical regions and clinical contexts. We evaluate using language generation and clinical effectiveness metrics, demonstrating strong performance.
[ { "created": "Wed, 17 Apr 2024 09:45:43 GMT", "version": "v1" } ]
2024-04-18
[ [ "Li", "Hongzhao", "" ], [ "Wang", "Hongyu", "" ], [ "Sun", "Xia", "" ], [ "He", "Hua", "" ], [ "Feng", "Jun", "" ] ]
Medical report generation automates radiology descriptions from images, easing the burden on physicians and minimizing errors. However, current methods lack structured outputs and physician interactivity for clear, clinically relevant reports. Our method introduces a prompt-guided approach to generate structured chest X-ray reports using a pre-trained large language model (LLM). First, we identify anatomical regions in chest X-rays to generate focused sentences that center on key visual elements, thereby establishing a structured report foundation with anatomy-based sentences. We also convert the detected anatomy into textual prompts conveying anatomical comprehension to the LLM. Additionally, the clinical context prompts guide the LLM to emphasize interactivity and clinical requirements. By integrating anatomy-focused sentences and anatomy/clinical prompts, the pre-trained LLM can generate structured chest X-ray reports tailored to prompted anatomical regions and clinical contexts. We evaluate using language generation and clinical effectiveness metrics, demonstrating strong performance.
2207.00477
Yang Xing
Karan Kheta, Claire Delgove, Ruolin Liu, Adeola Aderogba, Marc-Olivier Pokam, Muhammed Mehmet Unal, Yang Xing, Weisi Guo
Vision-based Conflict Detection within Crowds based on High-Resolution Human Pose Estimation for Smart and Safe Airport
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Future airports are becoming more complex and congested with the increasing number of travellers, and they are more likely to become hotspots for potential conflicts, which can cause serious delays to flights and several safety issues. An intelligent algorithm that renders security surveillance more effective in detecting conflicts would bring many benefits to passengers in terms of their safety, finances, and travelling efficiency. This paper details the development of a machine learning model to classify conflicting behaviour in a crowd. HRNet is used to segment the images, and then two approaches are taken to classify the poses of people in the frame via multiple classifiers. Among them, the support vector machine (SVM) was found to be the most performant, achieving a precision of 94.37%. Where the model falls short is against ambiguous behaviour, such as a hug, or when losing track of a subject in the frame. The resulting model has potential for deployment within an airport if improvements are made to cope with the vast number of potential passengers in view, as well as training against further ambiguous behaviours that will arise in an airport setting. This, in turn, will provide the capability to enhance security surveillance and improve airport safety.
[ { "created": "Fri, 1 Jul 2022 14:54:12 GMT", "version": "v1" } ]
2022-07-04
[ [ "Kheta", "Karan", "" ], [ "Delgove", "Claire", "" ], [ "Liu", "Ruolin", "" ], [ "Aderogba", "Adeola", "" ], [ "Pokam", "Marc-Olivier", "" ], [ "Unal", "Muhammed Mehmet", "" ], [ "Xing", "Yang", "" ], [ "Guo", "Weisi", "" ] ]
Future airports are becoming more complex and congested with the increasing number of travellers, and they are more likely to become hotspots for potential conflicts, which can cause serious delays to flights and several safety issues. An intelligent algorithm that renders security surveillance more effective in detecting conflicts would bring many benefits to passengers in terms of their safety, finances, and travelling efficiency. This paper details the development of a machine learning model to classify conflicting behaviour in a crowd. HRNet is used to segment the images, and then two approaches are taken to classify the poses of people in the frame via multiple classifiers. Among them, the support vector machine (SVM) was found to be the most performant, achieving a precision of 94.37%. Where the model falls short is against ambiguous behaviour, such as a hug, or when losing track of a subject in the frame. The resulting model has potential for deployment within an airport if improvements are made to cope with the vast number of potential passengers in view, as well as training against further ambiguous behaviours that will arise in an airport setting. This, in turn, will provide the capability to enhance security surveillance and improve airport safety.
1808.03387
Janardan Misra
Janardan Misra
Computational Complexity of Observing Evolution in Artificial-Life Forms
arXiv admin note: substantial text overlap with arXiv:0901.1610
null
null
null
cs.NE cs.AI cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Observations are an essential component of the simulation-based studies on artificial-evolutionary systems (AES) by which entities are identified and their behavior is observed to uncover higher-level "emergent" phenomena. Because of the heterogeneity of AES models and the implicit nature of observations, precise characterization of the observation process, independent of the underlying micro-level reaction semantics of the model, is a difficult problem. Building upon the multiset-based algebraic framework to characterize the state-space trajectory of AES model simulations, we estimate bounds on the computational resource requirements of the process of automatically discovering life-like evolutionary behavior in AES models during simulations. For illustration, we consider the case of Langton's Cellular Automata model and characterize the worst-case computational complexity bounds for identifying entity- and population-level reproduction.
[ { "created": "Sun, 24 Jun 2018 04:18:55 GMT", "version": "v1" } ]
2018-08-13
[ [ "Misra", "Janardan", "" ] ]
Observations are an essential component of the simulation-based studies on artificial-evolutionary systems (AES) by which entities are identified and their behavior is observed to uncover higher-level "emergent" phenomena. Because of the heterogeneity of AES models and the implicit nature of observations, precise characterization of the observation process, independent of the underlying micro-level reaction semantics of the model, is a difficult problem. Building upon the multiset-based algebraic framework to characterize the state-space trajectory of AES model simulations, we estimate bounds on the computational resource requirements of the process of automatically discovering life-like evolutionary behavior in AES models during simulations. For illustration, we consider the case of Langton's Cellular Automata model and characterize the worst-case computational complexity bounds for identifying entity- and population-level reproduction.
2404.03088
Bin Han
Zexin Fang, Bin Han, and Hans D. Schotten
Robust Federated Learning for Wireless Networks: A Demonstration with Channel Estimation
Submitted to IEEE GLOBECOM 2024
null
null
null
cs.LG cs.AI cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) offers a privacy-preserving collaborative approach for training models in wireless networks, with channel estimation emerging as a promising application. Despite extensive studies on FL-empowered channel estimation, the security concerns associated with FL require meticulous attention. In a scenario where small base stations (SBSs) serve as local models trained on cached data, and a macro base station (MBS) functions as the global model, an attacker can exploit the vulnerability of FL by launching various adversarial attacks or deployment tactics. In this paper, we analyze such vulnerabilities, bring forth corresponding solutions, and validate them through simulation.
[ { "created": "Wed, 3 Apr 2024 22:03:28 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2024 08:19:53 GMT", "version": "v2" } ]
2024-07-31
[ [ "Fang", "Zexin", "" ], [ "Han", "Bin", "" ], [ "Schotten", "Hans D.", "" ] ]
Federated learning (FL) offers a privacy-preserving collaborative approach for training models in wireless networks, with channel estimation emerging as a promising application. Despite extensive studies on FL-empowered channel estimation, the security concerns associated with FL require meticulous attention. In a scenario where small base stations (SBSs) serve as local models trained on cached data, and a macro base station (MBS) functions as the global model setting, an attacker can exploit the vulnerabilities of FL by launching various adversarial attacks or deployment tactics. In this paper, we analyze such vulnerabilities, bring forth corresponding solutions, and validate them through simulation.
cs/0402008
Richard McClatchey
Mohammed Odeh, Tamas Hauer, Richard McClatchey & Tony Solomonides
A Use-Case Driven Approach in Requirements Engineering : The Mammogrid Project
6 pages, 3 figures. Presented at the 7th IASTED Int Conf on Software Engineering Applications. Marina del Rey, USA November 2003
null
null
null
cs.DB cs.SE
null
We report on the application of the use-case modeling technique to identify and specify the user requirements of the MammoGrid project in an incremental and controlled iterative approach. Modeling has been carried out in close collaboration with clinicians and radiologists with no prior experience of use cases. The study reveals the advantages and limitations of applying this technique to requirements specification in the domains of breast cancer screening and mammography research, with implications for medical imaging more generally. In addition, this research has shown a return on investment in use-case modeling, reflected in shorter gaps between phases of the requirements engineering process. The qualitative result of this analysis leads us to propose that a use-case modeling approach may result in reducing the cycle of the requirements engineering process for medical imaging.
[ { "created": "Mon, 2 Feb 2004 20:18:23 GMT", "version": "v1" } ]
2009-09-29
[ [ "Odeh", "Mohammed", "" ], [ "Hauer", "Tamas", "" ], [ "McClatchey", "Richard", "" ], [ "Solomonides", "Tony", "" ] ]
We report on the application of the use-case modeling technique to identify and specify the user requirements of the MammoGrid project in an incremental and controlled iterative approach. Modeling has been carried out in close collaboration with clinicians and radiologists with no prior experience of use cases. The study reveals the advantages and limitations of applying this technique to requirements specification in the domains of breast cancer screening and mammography research, with implications for medical imaging more generally. In addition, this research has shown a return on investment in use-case modeling, reflected in shorter gaps between phases of the requirements engineering process. The qualitative result of this analysis leads us to propose that a use-case modeling approach may result in reducing the cycle of the requirements engineering process for medical imaging.
2404.04718
Prasun Tripathi
Prasun C Tripathi, Sina Tabakhi, Mohammod N I Suvon, Lawrence Sch\"ob, Samer Alabed, Andrew J Swift, Shuo Zhou, and Haiping Lu
Interpretable Multimodal Learning for Cardiovascular Hemodynamics Assessment
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Pulmonary Arterial Wedge Pressure (PAWP) is an essential cardiovascular hemodynamics marker to detect heart failure. In clinical practice, Right Heart Catheterization is considered a gold standard for assessing cardiac hemodynamics while non-invasive methods are often needed to screen high-risk patients from a large population. In this paper, we propose a multimodal learning pipeline to predict the PAWP marker. We utilize complementary information from Cardiac Magnetic Resonance Imaging (CMR) scans (short-axis and four-chamber) and Electronic Health Records (EHRs). We extract spatio-temporal features from CMR scans using tensor-based learning. We propose a graph attention network to select important EHR features for prediction, where we model subjects as graph nodes and feature relationships as graph edges using the attention mechanism. We design four feature fusion strategies: early, intermediate, late, and hybrid fusion. With a linear classifier and linear fusion strategies, our pipeline is interpretable. We validate our pipeline on a large dataset of $2,641$ subjects from our ASPIRE registry. The comparative study against state-of-the-art methods confirms the superiority of our pipeline. The decision curve analysis further validates that our pipeline can be applied to screen a large population. The code is available at https://github.com/prasunc/hemodynamics.
[ { "created": "Sat, 6 Apr 2024 19:42:25 GMT", "version": "v1" } ]
2024-04-09
[ [ "Tripathi", "Prasun C", "" ], [ "Tabakhi", "Sina", "" ], [ "Suvon", "Mohammod N I", "" ], [ "Schöb", "Lawrence", "" ], [ "Alabed", "Samer", "" ], [ "Swift", "Andrew J", "" ], [ "Zhou", "Shuo", "" ], [ "Lu", "Haiping", "" ] ]
Pulmonary Arterial Wedge Pressure (PAWP) is an essential cardiovascular hemodynamics marker to detect heart failure. In clinical practice, Right Heart Catheterization is considered a gold standard for assessing cardiac hemodynamics while non-invasive methods are often needed to screen high-risk patients from a large population. In this paper, we propose a multimodal learning pipeline to predict the PAWP marker. We utilize complementary information from Cardiac Magnetic Resonance Imaging (CMR) scans (short-axis and four-chamber) and Electronic Health Records (EHRs). We extract spatio-temporal features from CMR scans using tensor-based learning. We propose a graph attention network to select important EHR features for prediction, where we model subjects as graph nodes and feature relationships as graph edges using the attention mechanism. We design four feature fusion strategies: early, intermediate, late, and hybrid fusion. With a linear classifier and linear fusion strategies, our pipeline is interpretable. We validate our pipeline on a large dataset of $2,641$ subjects from our ASPIRE registry. The comparative study against state-of-the-art methods confirms the superiority of our pipeline. The decision curve analysis further validates that our pipeline can be applied to screen a large population. The code is available at https://github.com/prasunc/hemodynamics.
1703.06117
John Prpi\'c
J. Prpi\'c
Unpacking Blockchains
Collective Intelligence 2017. NYU Tandon School of Engineering. June 15-16, 2017
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Bitcoin digital currency appeared in 2009. Since this time, researchers and practitioners have looked under the hood of the open source Bitcoin currency, and discovered that Bitcoin's Blockchain software architecture is useful for non-monetary purposes too. By coalescing the research and practice on Blockchains, this work begins to unpack Blockchains as a general phenomenon, therein, arguing that all Blockchain phenomena can be conceived as being comprised of transaction platforms and digital ledgers, and illustrating where public key encryption plays a differential role in facilitating these features of Blockchains.
[ { "created": "Mon, 13 Mar 2017 22:03:09 GMT", "version": "v1" } ]
2017-03-20
[ [ "Prpić", "J.", "" ] ]
The Bitcoin digital currency appeared in 2009. Since this time, researchers and practitioners have looked under the hood of the open source Bitcoin currency, and discovered that Bitcoin's Blockchain software architecture is useful for non-monetary purposes too. By coalescing the research and practice on Blockchains, this work begins to unpack Blockchains as a general phenomenon, therein, arguing that all Blockchain phenomena can be conceived as being comprised of transaction platforms and digital ledgers, and illustrating where public key encryption plays a differential role in facilitating these features of Blockchains.
2205.13098
Yifei Wang
Yifei Wang, Peng Chen, Mert Pilanci, Wuchen Li
Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization
null
null
null
null
cs.LG math.OC stat.ML
http://creativecommons.org/licenses/by/4.0/
The computation of Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. The approximation of the Wasserstein gradient with finite samples requires solving a variational problem. We study the variational problem in the family of two-layer networks with squared-ReLU activations, towards which we derive a semi-definite programming (SDP) relaxation. This SDP can be viewed as an approximation of the Wasserstein gradient in a broader function family including two-layer networks. By solving the convex SDP, we obtain the optimal approximation of the Wasserstein gradient direction in this class of functions. Numerical experiments including PDE-constrained Bayesian inference and parameter estimation in COVID-19 modeling demonstrate the effectiveness of the proposed method.
[ { "created": "Thu, 26 May 2022 00:51:12 GMT", "version": "v1" } ]
2022-05-27
[ [ "Wang", "Yifei", "" ], [ "Chen", "Peng", "" ], [ "Pilanci", "Mert", "" ], [ "Li", "Wuchen", "" ] ]
The computation of Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. The approximation of the Wasserstein gradient with finite samples requires solving a variational problem. We study the variational problem in the family of two-layer networks with squared-ReLU activations, towards which we derive a semi-definite programming (SDP) relaxation. This SDP can be viewed as an approximation of the Wasserstein gradient in a broader function family including two-layer networks. By solving the convex SDP, we obtain the optimal approximation of the Wasserstein gradient direction in this class of functions. Numerical experiments including PDE-constrained Bayesian inference and parameter estimation in COVID-19 modeling demonstrate the effectiveness of the proposed method.
2304.13664
Luisa Coheur
Hugo Rodrigues, Eric Nyberg, Luisa Coheur
Using Implicit Feedback to Improve Question Generation
27 pages, 8 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Question Generation (QG) is a task of Natural Language Processing (NLP) that aims at automatically generating questions from text. Many applications can benefit from automatically generated questions, but often it is necessary to curate those questions, either by selecting or editing them. This task is informative on its own, but it is typically done post-generation, and, thus, the effort is wasted. In addition, most existing systems cannot incorporate this feedback back into them easily. In this work, we present a system, GEN, that learns from such (implicit) feedback. Following a pattern-based approach, it takes as input a small set of sentence/question pairs and creates patterns which are then applied to new unseen sentences. Each generated question, after being corrected by the user, is used as a new seed in the next iteration, so more patterns are created each time. We also take advantage of the corrections made by the user to score the patterns and therefore rank the generated questions. Results show that GEN is able to improve by learning from both levels of implicit feedback when compared to the version with no learning, considering the top 5, 10, and 20 questions. Improvements go up from 10%, depending on the metric and strategy used.
[ { "created": "Wed, 26 Apr 2023 16:37:47 GMT", "version": "v1" } ]
2023-04-27
[ [ "Rodrigues", "Hugo", "" ], [ "Nyberg", "Eric", "" ], [ "Coheur", "Luisa", "" ] ]
Question Generation (QG) is a task of Natural Language Processing (NLP) that aims at automatically generating questions from text. Many applications can benefit from automatically generated questions, but often it is necessary to curate those questions, either by selecting or editing them. This task is informative on its own, but it is typically done post-generation, and, thus, the effort is wasted. In addition, most existing systems cannot incorporate this feedback back into them easily. In this work, we present a system, GEN, that learns from such (implicit) feedback. Following a pattern-based approach, it takes as input a small set of sentence/question pairs and creates patterns which are then applied to new unseen sentences. Each generated question, after being corrected by the user, is used as a new seed in the next iteration, so more patterns are created each time. We also take advantage of the corrections made by the user to score the patterns and therefore rank the generated questions. Results show that GEN is able to improve by learning from both levels of implicit feedback when compared to the version with no learning, considering the top 5, 10, and 20 questions. Improvements go up from 10%, depending on the metric and strategy used.
1904.10504
Li Chen
Li Chen
Understanding the efficacy, reliability and resiliency of computer vision techniques for malware detection and future research directions
Report
null
null
null
cs.CR cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
My research lies in the intersection of security and machine learning. This overview summarizes one component of my research: combining computer vision with malware exploit detection for enhanced security solutions. I will present the perspectives of efficacy, reliability and resiliency to formulate threat detection as computer vision problems and develop state-of-the-art image-based malware classification. Representing malware binary as images provides a direct visualization of data samples, reduces the efforts for feature extraction, and consumes the whole binary for holistic structural analysis. Employing transfer learning of deep neural networks effective for large scale image classification to malware classification demonstrates superior classification efficacy compared with classical machine learning algorithms. To enhance reliability of these vision-based malware detectors, interpretation frameworks can be constructed on the malware visual representations and useful for extracting faithful explanation, so that security practitioners have confidence in the model before deployment. In cyber-security applications, we should always assume that a malware writer constantly modifies code to bypass detection. Addressing the resiliency of the malware detectors is equivalently important as efficacy and reliability. Via understanding the attack surfaces of machine learning models used for malware detection, we can greatly improve the robustness of the algorithms to combat malware adversaries in the wild. Finally I will discuss future research directions worth pursuing in this research community.
[ { "created": "Wed, 3 Apr 2019 18:34:20 GMT", "version": "v1" } ]
2019-04-25
[ [ "Chen", "Li", "" ] ]
My research lies in the intersection of security and machine learning. This overview summarizes one component of my research: combining computer vision with malware exploit detection for enhanced security solutions. I will present the perspectives of efficacy, reliability and resiliency to formulate threat detection as computer vision problems and develop state-of-the-art image-based malware classification. Representing malware binary as images provides a direct visualization of data samples, reduces the efforts for feature extraction, and consumes the whole binary for holistic structural analysis. Employing transfer learning of deep neural networks effective for large scale image classification to malware classification demonstrates superior classification efficacy compared with classical machine learning algorithms. To enhance reliability of these vision-based malware detectors, interpretation frameworks can be constructed on the malware visual representations and useful for extracting faithful explanation, so that security practitioners have confidence in the model before deployment. In cyber-security applications, we should always assume that a malware writer constantly modifies code to bypass detection. Addressing the resiliency of the malware detectors is equivalently important as efficacy and reliability. Via understanding the attack surfaces of machine learning models used for malware detection, we can greatly improve the robustness of the algorithms to combat malware adversaries in the wild. Finally I will discuss future research directions worth pursuing in this research community.
1810.13337
Pengcheng Yin
Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, Alexander L. Gaunt
Learning to Represent Edits
ICLR 2019
null
null
null
cs.LG cs.SE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the problem of learning distributed representations of edits. By combining a "neural editor" with an "edit encoder", our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem.
[ { "created": "Wed, 31 Oct 2018 15:29:30 GMT", "version": "v1" }, { "created": "Fri, 22 Feb 2019 05:16:03 GMT", "version": "v2" } ]
2019-02-25
[ [ "Yin", "Pengcheng", "" ], [ "Neubig", "Graham", "" ], [ "Allamanis", "Miltiadis", "" ], [ "Brockschmidt", "Marc", "" ], [ "Gaunt", "Alexander L.", "" ] ]
We introduce the problem of learning distributed representations of edits. By combining a "neural editor" with an "edit encoder", our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem.
1312.7645
Rob van Glabbeek
Ansgar Fehnker, Rob van Glabbeek, Peter H\"ofner, Annabelle McIver, Marius Portmann and Wee Lum Tan
A Process Algebra for Wireless Mesh Networks used for Modelling, Verifying and Analysing AODV
null
null
null
Technical Report 5513, NICTA, 2013
cs.NI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose AWN (Algebra for Wireless Networks), a process algebra tailored to the modelling of Mobile Ad hoc Network (MANET) and Wireless Mesh Network (WMN) protocols. It combines novel treatments of local broadcast, conditional unicast and data structures. In this framework we present a rigorous analysis of the Ad hoc On-Demand Distance Vector (AODV) protocol, a popular routing protocol designed for MANETs and WMNs, and one of the four protocols currently standardised by the IETF MANET working group. We give a complete and unambiguous specification of this protocol, thereby formalising the RFC of AODV, the de facto standard specification, given in English prose. In doing so, we had to make non-evident assumptions to resolve ambiguities occurring in that specification. Our formalisation models the exact details of the core functionality of AODV, such as route maintenance and error handling, and only omits timing aspects. The process algebra allows us to formalise and (dis)prove crucial properties of mesh network routing protocols such as loop freedom and packet delivery. We are the first to provide a detailed proof of loop freedom of AODV. In contrast to evaluations using simulation or model checking, our proof is generic and holds for any possible network scenario in terms of network topology, node mobility, etc. Due to ambiguities and contradictions the RFC specification allows several interpretations; we show for more than 5000 of them whether they are loop free or not, thereby demonstrating how the reasoning and proofs can relatively easily be adapted to protocol variants. Using our formal and unambiguous specification, we find shortcomings of AODV that affect performance, e.g. the establishment of non-optimal routes, and some routes not being found at all. We formalise improvements in the same process algebra; carrying over the proofs is again easy.
[ { "created": "Mon, 30 Dec 2013 07:18:04 GMT", "version": "v1" } ]
2013-12-31
[ [ "Fehnker", "Ansgar", "" ], [ "van Glabbeek", "Rob", "" ], [ "Höfner", "Peter", "" ], [ "McIver", "Annabelle", "" ], [ "Portmann", "Marius", "" ], [ "Tan", "Wee Lum", "" ] ]
We propose AWN (Algebra for Wireless Networks), a process algebra tailored to the modelling of Mobile Ad hoc Network (MANET) and Wireless Mesh Network (WMN) protocols. It combines novel treatments of local broadcast, conditional unicast and data structures. In this framework we present a rigorous analysis of the Ad hoc On-Demand Distance Vector (AODV) protocol, a popular routing protocol designed for MANETs and WMNs, and one of the four protocols currently standardised by the IETF MANET working group. We give a complete and unambiguous specification of this protocol, thereby formalising the RFC of AODV, the de facto standard specification, given in English prose. In doing so, we had to make non-evident assumptions to resolve ambiguities occurring in that specification. Our formalisation models the exact details of the core functionality of AODV, such as route maintenance and error handling, and only omits timing aspects. The process algebra allows us to formalise and (dis)prove crucial properties of mesh network routing protocols such as loop freedom and packet delivery. We are the first to provide a detailed proof of loop freedom of AODV. In contrast to evaluations using simulation or model checking, our proof is generic and holds for any possible network scenario in terms of network topology, node mobility, etc. Due to ambiguities and contradictions the RFC specification allows several interpretations; we show for more than 5000 of them whether they are loop free or not, thereby demonstrating how the reasoning and proofs can relatively easily be adapted to protocol variants. Using our formal and unambiguous specification, we find shortcomings of AODV that affect performance, e.g. the establishment of non-optimal routes, and some routes not being found at all. We formalise improvements in the same process algebra; carrying over the proofs is again easy.
2110.05422
Rose Wang
Rose E. Wang, Julia White, Jesse Mu, Noah D. Goodman
Calibrate your listeners! Robust communication-based training for pragmatic speakers
Findings of EMNLP 2021 Code: https://github.com/rosewang2008/calibrate_your_listeners
null
null
null
cs.CL cs.AI cs.LG cs.MA
http://creativecommons.org/licenses/by/4.0/
To be good conversational partners, natural language processing (NLP) systems should be trained to produce contextually useful utterances. Prior work has investigated training NLP systems with communication-based objectives, where a neural listener stands in as a communication partner. However, these systems commonly suffer from semantic drift where the learned language diverges radically from natural language. We propose a method that uses a population of neural listeners to regularize speaker training. We first show that language drift originates from the poor uncertainty calibration of a neural listener, which makes high-certainty predictions on novel sentences. We explore ensemble- and dropout-based populations of listeners and find that the former results in better uncertainty quantification. We evaluate both population-based objectives on reference games, and show that the ensemble method with better calibration enables the speaker to generate pragmatic utterances while scaling to a large vocabulary and generalizing to new games and listeners.
[ { "created": "Mon, 11 Oct 2021 17:07:38 GMT", "version": "v1" } ]
2021-10-12
[ [ "Wang", "Rose E.", "" ], [ "White", "Julia", "" ], [ "Mu", "Jesse", "" ], [ "Goodman", "Noah D.", "" ] ]
To be good conversational partners, natural language processing (NLP) systems should be trained to produce contextually useful utterances. Prior work has investigated training NLP systems with communication-based objectives, where a neural listener stands in as a communication partner. However, these systems commonly suffer from semantic drift where the learned language diverges radically from natural language. We propose a method that uses a population of neural listeners to regularize speaker training. We first show that language drift originates from the poor uncertainty calibration of a neural listener, which makes high-certainty predictions on novel sentences. We explore ensemble- and dropout-based populations of listeners and find that the former results in better uncertainty quantification. We evaluate both population-based objectives on reference games, and show that the ensemble method with better calibration enables the speaker to generate pragmatic utterances while scaling to a large vocabulary and generalizing to new games and listeners.
1306.0865
Jinkyu Kang
Jinkyu Kang and Osvaldo Simeone and Joonhyuk Kang and Shlomo Shamai (Shitz)
Joint Signal and Channel State Information Compression for the Backhaul of Uplink Network MIMO Systems
34 pages, 6 figures. Submitted to IEEE Transactions on Wireless Communication
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In network MIMO cellular systems, subsets of base stations (BSs), or remote radio heads, are connected via backhaul links to central units (CUs) that perform joint encoding in the downlink and joint decoding in the uplink. Focusing on the uplink, an effective solution for the communication between BSs and the corresponding CU on the backhaul links is based on compressing and forwarding the baseband received signal from each BS. In the presence of ergodic fading, communicating the channel state information (CSI) from the BSs to the CU may require a sizable part of the backhaul capacity. In a prior work, this aspect was studied by assuming a Compress-Forward-Estimate (CFE) approach, whereby the BSs compress the training signal and CSI estimation takes place at the CU. In this work, instead, an Estimate-Compress-Forward (ECF) approach is investigated, whereby the BSs perform CSI estimation and forward a compressed version of the CSI to the CU. This choice is motivated by the information theoretic optimality of separate estimation and compression. Various ECF strategies are proposed that perform either separate or joint compression of estimated CSI and received signal. Moreover, the proposed strategies are combined with distributed source coding when considering multiple BSs. "Semi-coherent" strategies are also proposed that do not convey any CSI or training information on the backhaul links. Via numerical results, it is shown that a proper design of ECF strategies based on joint received signal and estimated CSI compression or of semi-coherent schemes leads to substantial performance gains compared to more conventional approaches based on non-coherent transmission or the CFE approach.
[ { "created": "Tue, 4 Jun 2013 18:08:13 GMT", "version": "v1" }, { "created": "Wed, 23 Oct 2013 11:01:29 GMT", "version": "v2" } ]
2013-10-24
[ [ "Kang", "Jinkyu", "", "Shitz" ], [ "Simeone", "Osvaldo", "", "Shitz" ], [ "Kang", "Joonhyuk", "", "Shitz" ], [ "Shamai", "Shlomo", "", "Shitz" ] ]
In network MIMO cellular systems, subsets of base stations (BSs), or remote radio heads, are connected via backhaul links to central units (CUs) that perform joint encoding in the downlink and joint decoding in the uplink. Focusing on the uplink, an effective solution for the communication between BSs and the corresponding CU on the backhaul links is based on compressing and forwarding the baseband received signal from each BS. In the presence of ergodic fading, communicating the channel state information (CSI) from the BSs to the CU may require a sizable part of the backhaul capacity. In a prior work, this aspect was studied by assuming a Compress-Forward-Estimate (CFE) approach, whereby the BSs compress the training signal and CSI estimation takes place at the CU. In this work, instead, an Estimate-Compress-Forward (ECF) approach is investigated, whereby the BSs perform CSI estimation and forward a compressed version of the CSI to the CU. This choice is motivated by the information theoretic optimality of separate estimation and compression. Various ECF strategies are proposed that perform either separate or joint compression of estimated CSI and received signal. Moreover, the proposed strategies are combined with distributed source coding when considering multiple BSs. "Semi-coherent" strategies are also proposed that do not convey any CSI or training information on the backhaul links. Via numerical results, it is shown that a proper design of ECF strategies based on joint received signal and estimated CSI compression or of semi-coherent schemes leads to substantial performance gains compared to more conventional approaches based on non-coherent transmission or the CFE approach.
0711.0618
Wim Vanhoof
Jan Wielemaker, Anjo Anjewierden
PlDoc: Wiki style Literate Programming for Prolog
Paper presented at the 17th Workshop on Logic-based Methods in Programming Environments (WLPE2007)
null
null
null
cs.PL cs.SE
null
This document introduces PlDoc, a literate programming system for Prolog. The starting point for PlDoc was minimal distraction from the programming task and maximal immediate reward, attempting to seduce the programmer into using the system. Minimal distraction is achieved using structured comments that are as closely as possible related to common Prolog documentation practices. Immediate reward is provided by a web interface powered from the Prolog development environment that integrates searching and browsing application and system documentation. When accessed from localhost, it is possible to go from documentation shown in a browser to the source code displayed in the user's editor of choice.
[ { "created": "Mon, 5 Nov 2007 12:13:12 GMT", "version": "v1" } ]
2007-11-06
[ [ "Wielemaker", "Jan", "" ], [ "Anjewierden", "Anjo", "" ] ]
This document introduces PlDoc, a literate programming system for Prolog. The starting point for PlDoc was minimal distraction from the programming task and maximal immediate reward, attempting to seduce the programmer into using the system. Minimal distraction is achieved using structured comments that are as closely as possible related to common Prolog documentation practices. Immediate reward is provided by a web interface powered from the Prolog development environment that integrates searching and browsing application and system documentation. When accessed from localhost, it is possible to go from documentation shown in a browser to the source code displayed in the user's editor of choice.
2102.10196
Romain Cosson
Romain Cosson, Devavrat Shah
Quantifying Variational Approximation for the Log-Partition Function
null
null
null
null
cs.DS cs.LG math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variational approximations, such as mean-field (MF) and tree-reweighted (TRW), provide a computationally efficient approximation of the log-partition function for a generic graphical model. TRW provably provides an upper bound, but the approximation ratio is generally not quantified. As the primary contribution of this work, we provide an approach to quantify the approximation ratio through the property of the underlying graph structure. Specifically, we argue that (a variant of) TRW produces an estimate that is within factor $\frac{1}{\sqrt{\kappa(G)}}$ of the true log-partition function for any discrete pairwise graphical model over graph $G$, where $\kappa(G) \in (0,1]$ captures how far $G$ is from tree structure with $\kappa(G) = 1$ for trees and $2/N$ for the complete graph over $N$ vertices. As a consequence, the approximation ratio is $1$ for trees, $\sqrt{(d+1)/2}$ for any graph with maximum average degree $d$, and $\stackrel{\beta\to\infty}{\approx} 1+1/(2\beta)$ for graphs with girth (shortest cycle) at least $\beta \log N$. In general, $\kappa(G)$ is the solution of a max-min problem associated with $G$ that can be evaluated in polynomial time for any graph. Using samples from the uniform distribution over the spanning trees of $G$, we provide a near linear-time variant that achieves an approximation ratio equal to the inverse of square-root of minimal (across edges) effective resistance of the graph. We connect our results to the graph partition-based approximation method and thus provide a unified perspective. Keywords: variational inference, log-partition function, spanning tree polytope, minimum effective resistance, min-max spanning tree, local inference
[ { "created": "Fri, 19 Feb 2021 22:57:32 GMT", "version": "v1" }, { "created": "Thu, 19 Aug 2021 22:10:39 GMT", "version": "v2" } ]
2021-08-23
[ [ "Cosson", "Romain", "" ], [ "Shah", "Devavrat", "" ] ]
Variational approximations, such as mean-field (MF) and tree-reweighted (TRW), provide a computationally efficient approximation of the log-partition function for a generic graphical model. TRW provably provides an upper bound, but the approximation ratio is generally not quantified. As the primary contribution of this work, we provide an approach to quantify the approximation ratio through properties of the underlying graph structure. Specifically, we argue that (a variant of) TRW produces an estimate that is within factor $\frac{1}{\sqrt{\kappa(G)}}$ of the true log-partition function for any discrete pairwise graphical model over graph $G$, where $\kappa(G) \in (0,1]$ captures how far $G$ is from tree structure, with $\kappa(G) = 1$ for trees and $2/N$ for the complete graph over $N$ vertices. As a consequence, the approximation ratio is $1$ for trees, $\sqrt{(d+1)/2}$ for any graph with maximum average degree $d$, and $\stackrel{\beta\to\infty}{\approx} 1+1/(2\beta)$ for graphs with girth (shortest cycle) at least $\beta \log N$. In general, $\kappa(G)$ is the solution of a max-min problem associated with $G$ that can be evaluated in polynomial time for any graph. Using samples from the uniform distribution over the spanning trees of $G$, we provide a near linear-time variant that achieves an approximation ratio equal to the inverse of the square root of the minimal (across edges) effective resistance of the graph. We connect our results to the graph partition-based approximation method and thus provide a unified perspective. Keywords: variational inference, log-partition function, spanning tree polytope, minimum effective resistance, min-max spanning tree, local inference
2205.12701
Qinyuan Ye
Qinyuan Ye, Juan Zha, Xiang Ren
Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts
Accepted to EMNLP 2022 Findings. Camera-ready version
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent works suggest that transformer models are capable of multi-tasking on diverse NLP tasks and adapting to new tasks efficiently. However, the potential of these multi-task models may be limited because they use the same set of parameters for all tasks. In contrast, humans tackle tasks in a more flexible way, making proper presumptions about which skills and knowledge are relevant and executing only the necessary computations. Inspired by this, we propose to use task-level mixture-of-experts models, which have a collection of transformer layers (i.e., experts) and a router component that chooses from these experts dynamically and flexibly. We find that these models help improve the average performance gain (ARG) metric by 2.6% when adapting to unseen tasks in the few-shot setting and by 5.6% in the zero-shot generalization setting. Further, we show that the learned routing decisions partly rediscover the human categorization of NLP tasks -- certain experts are strongly associated with extractive tasks, some with classification tasks, and some with tasks requiring world knowledge.
[ { "created": "Wed, 25 May 2022 11:59:05 GMT", "version": "v1" }, { "created": "Tue, 22 Nov 2022 00:15:25 GMT", "version": "v2" } ]
2022-11-23
[ [ "Ye", "Qinyuan", "" ], [ "Zha", "Juan", "" ], [ "Ren", "Xiang", "" ] ]
Recent works suggest that transformer models are capable of multi-tasking on diverse NLP tasks and adapting to new tasks efficiently. However, the potential of these multi-task models may be limited because they use the same set of parameters for all tasks. In contrast, humans tackle tasks in a more flexible way, making proper presumptions about which skills and knowledge are relevant and executing only the necessary computations. Inspired by this, we propose to use task-level mixture-of-experts models, which have a collection of transformer layers (i.e., experts) and a router component that chooses from these experts dynamically and flexibly. We find that these models help improve the average performance gain (ARG) metric by 2.6% when adapting to unseen tasks in the few-shot setting and by 5.6% in the zero-shot generalization setting. Further, we show that the learned routing decisions partly rediscover the human categorization of NLP tasks -- certain experts are strongly associated with extractive tasks, some with classification tasks, and some with tasks requiring world knowledge.
1507.01443
Erik Ferragut
Erik M. Ferragut, Jason Laska
Nonparametric Bayesian Modeling for Automated Database Schema Matching
null
null
null
null
cs.IR cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
[ { "created": "Mon, 6 Jul 2015 13:26:02 GMT", "version": "v1" } ]
2015-07-07
[ [ "Ferragut", "Erik M.", "" ], [ "Laska", "Jason", "" ] ]
The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
1902.03683
Xiaojiang Du
Caidan Zhao, Mingxian Shi, MinMin Huang, Xiaojiang Du
Authentication Scheme Based on Hashchain for Space-Air-Ground Integrated Network
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of artificial intelligence and self-driving, the vehicular ad-hoc network (VANET) has become an irreplaceable part of Intelligent Transportation Systems (ITSs). However, the traditional ground network cannot meet the requirements of transmission, processing, and storage among vehicles. Under this circumstance, integrating space and air nodes into the whole network can provide comprehensive traffic information and reduce the transmission delay. The high mobility and low latency in the Space-Air-Ground Integrated Network (SAGIN) put forward higher requirements for security issues such as identity authentication, privacy protection, and data security. This paper simplifies the Blockchain and proposes an identity authentication and privacy protection scheme based on a Hashchain in the SAGIN. The scheme focuses on the characteristics of the wireless signal to identify and authenticate the nodes. The verification and backup of the records on the block are implemented with the distributed streaming platform Kafka instead of a consensus algorithm. Furthermore, this paper analyzes the security of the scheme. Finally, the experimental results reveal the delay introduced by the scheme, using simulations in SUMO, OMNeT++, and Veins.
[ { "created": "Sun, 10 Feb 2019 23:22:23 GMT", "version": "v1" } ]
2019-02-12
[ [ "Zhao", "Caidan", "" ], [ "Shi", "Mingxian", "" ], [ "Huang", "MinMin", "" ], [ "Du", "Xiaojiang", "" ] ]
With the development of artificial intelligence and self-driving, the vehicular ad-hoc network (VANET) has become an irreplaceable part of Intelligent Transportation Systems (ITSs). However, the traditional ground network cannot meet the requirements of transmission, processing, and storage among vehicles. Under this circumstance, integrating space and air nodes into the whole network can provide comprehensive traffic information and reduce the transmission delay. The high mobility and low latency in the Space-Air-Ground Integrated Network (SAGIN) put forward higher requirements for security issues such as identity authentication, privacy protection, and data security. This paper simplifies the Blockchain and proposes an identity authentication and privacy protection scheme based on a Hashchain in the SAGIN. The scheme focuses on the characteristics of the wireless signal to identify and authenticate the nodes. The verification and backup of the records on the block are implemented with the distributed streaming platform Kafka instead of a consensus algorithm. Furthermore, this paper analyzes the security of the scheme. Finally, the experimental results reveal the delay introduced by the scheme, using simulations in SUMO, OMNeT++, and Veins.
2204.01065
Roni Stern
A.A. Snarskii, I.V. Bezsudnov
Forward and backward mapping of image to 2D vector field using fiber bundle color space
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the concept of a fiber bundle color space, which follows the psychophysiological rules of human trichromatic color perception. The image resides in the fiber bundle base space, and the fiber color space contains color vectors. We further propose the decomposition of color vectors into spectral and achromatic parts. A homomorphism between a color image and a constructed two-dimensional vector field is demonstrated, which allows us to apply well-known advanced methods of vector analysis to a color image and thus ultimately obtain new numerical characteristics of the image. The corresponding image-to-vector-field forward mapping is constructed. The proposed backward-mapping algorithm converts a two-dimensional vector field into a color image. A type of image filter is described that uses sequential forward and backward mappings. An example is given of color image formation based on the two-dimensional magnetic vector field scattered by a typical pipeline defect.
[ { "created": "Sun, 3 Apr 2022 12:53:16 GMT", "version": "v1" } ]
2022-04-05
[ [ "Snarskii", "A. A.", "" ], [ "Bezsudnov", "I. V.", "" ] ]
We introduce the concept of a fiber bundle color space, which follows the psychophysiological rules of human trichromatic color perception. The image resides in the fiber bundle base space, and the fiber color space contains color vectors. We further propose the decomposition of color vectors into spectral and achromatic parts. A homomorphism between a color image and a constructed two-dimensional vector field is demonstrated, which allows us to apply well-known advanced methods of vector analysis to a color image and thus ultimately obtain new numerical characteristics of the image. The corresponding image-to-vector-field forward mapping is constructed. The proposed backward-mapping algorithm converts a two-dimensional vector field into a color image. A type of image filter is described that uses sequential forward and backward mappings. An example is given of color image formation based on the two-dimensional magnetic vector field scattered by a typical pipeline defect.
2003.07892
Shrey Desai
Shrey Desai and Greg Durrett
Calibration of Pre-trained Transformers
Accepted to EMNLP 2020
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-trained Transformers are now ubiquitous in natural language processing, but despite their high end-task performance, little is known empirically about whether they are calibrated. Specifically, do these models' posterior probabilities provide an accurate empirical measure of how likely the model is to be correct on a given example? We focus on BERT and RoBERTa in this work, and analyze their calibration across three tasks: natural language inference, paraphrase detection, and commonsense reasoning. For each task, we consider in-domain as well as challenging out-of-domain settings, where models face more examples they should be uncertain about. We show that: (1) when used out-of-the-box, pre-trained models are calibrated in-domain, and compared to baselines, their calibration error out-of-domain can be as much as 3.5x lower; (2) temperature scaling is effective at further reducing calibration error in-domain, and using label smoothing to deliberately increase empirical uncertainty helps calibrate posteriors out-of-domain.
[ { "created": "Tue, 17 Mar 2020 18:58:44 GMT", "version": "v1" }, { "created": "Fri, 20 Mar 2020 21:35:54 GMT", "version": "v2" }, { "created": "Thu, 15 Oct 2020 17:04:21 GMT", "version": "v3" } ]
2020-10-16
[ [ "Desai", "Shrey", "" ], [ "Durrett", "Greg", "" ] ]
Pre-trained Transformers are now ubiquitous in natural language processing, but despite their high end-task performance, little is known empirically about whether they are calibrated. Specifically, do these models' posterior probabilities provide an accurate empirical measure of how likely the model is to be correct on a given example? We focus on BERT and RoBERTa in this work, and analyze their calibration across three tasks: natural language inference, paraphrase detection, and commonsense reasoning. For each task, we consider in-domain as well as challenging out-of-domain settings, where models face more examples they should be uncertain about. We show that: (1) when used out-of-the-box, pre-trained models are calibrated in-domain, and compared to baselines, their calibration error out-of-domain can be as much as 3.5x lower; (2) temperature scaling is effective at further reducing calibration error in-domain, and using label smoothing to deliberately increase empirical uncertainty helps calibrate posteriors out-of-domain.
2004.05909
Tao Zhang
Tao Zhang, Wei Li
kDecay: Just adding k-decay items on Learning-Rate Schedule to improve Neural Networks
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has shown that optimizing the Learning Rate (LR) schedule can be a very accurate and efficient way to train deep neural networks. We observe that the rate of change (ROC) of the LR is correlated with the training process, but how can this relationship be used to control training in order to improve accuracy? We propose a new method, k-decay, which simply adds an extra term to commonly used LR schedules (exponential, cosine, and polynomial). It effectively improves the performance of these schedules and also outperforms state-of-the-art LR-schedule algorithms such as SGDR, CLR, and AutoLRS. In k-decay, different LR schedules are generated by adjusting the hyper-parameter \(k\); as \(k\) increases, performance improves. We evaluate the k-decay method on the CIFAR and ImageNet datasets with different neural networks (ResNet, Wide ResNet). Our experiments show that the method improves results on most of them: accuracy is improved by 1.08\% on the CIFAR-10 dataset, by 2.07\% on the CIFAR-100 dataset, and by 1.25\% on ImageNet. Our method is not only general enough to be applied to other LR schedules, but also incurs no additional computational cost.
[ { "created": "Mon, 13 Apr 2020 12:58:45 GMT", "version": "v1" }, { "created": "Mon, 20 Apr 2020 06:47:54 GMT", "version": "v2" }, { "created": "Mon, 29 Jun 2020 13:03:36 GMT", "version": "v3" }, { "created": "Fri, 2 Oct 2020 10:17:13 GMT", "version": "v4" }, { "created": "Tue, 22 Mar 2022 02:05:24 GMT", "version": "v5" } ]
2022-03-23
[ [ "Zhang", "Tao", "" ], [ "Li", "Wei", "" ] ]
Recent work has shown that optimizing the Learning Rate (LR) schedule can be a very accurate and efficient way to train deep neural networks. We observe that the rate of change (ROC) of the LR is correlated with the training process, but how can this relationship be used to control training in order to improve accuracy? We propose a new method, k-decay, which simply adds an extra term to commonly used LR schedules (exponential, cosine, and polynomial). It effectively improves the performance of these schedules and also outperforms state-of-the-art LR-schedule algorithms such as SGDR, CLR, and AutoLRS. In k-decay, different LR schedules are generated by adjusting the hyper-parameter \(k\); as \(k\) increases, performance improves. We evaluate the k-decay method on the CIFAR and ImageNet datasets with different neural networks (ResNet, Wide ResNet). Our experiments show that the method improves results on most of them: accuracy is improved by 1.08\% on the CIFAR-10 dataset, by 2.07\% on the CIFAR-100 dataset, and by 1.25\% on ImageNet. Our method is not only general enough to be applied to other LR schedules, but also incurs no additional computational cost.
2306.08906
Lukas Daniel Klausner
Dagmar Gromann, Manuel Lardelli, Katta Spiel, Sabrina Burtscher, Lukas Daniel Klausner, Arthur Mettinger, Igor Miladinovic, Sigrid Schefer-Wenzl, Daniela Duh, Katharina B\"uhn
Participatory Research as a Path to Community-Informed, Gender-Fair Machine Translation
11 pages, 4 figures
Proceedings of the First Workshop on Gender-Inclusive Translation Technologies (GITT 2023), 2023, 49-59
null
null
cs.CL cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have seen strongly increased visibility of non-binary people in public discourse. Accordingly, considerations of gender-fair language go beyond a binary conception of male/female. However, language technology, especially machine translation (MT), still suffers from binary gender bias. Proposing a solution for gender-fair MT beyond the binary from a purely technological perspective might fall short of accommodating different target user groups and, in the worst case, might lead to misgendering. To address this challenge, we propose a method and case study building on participatory action research to include experiential experts, i.e., queer and non-binary people, translators, and MT experts, in the MT design process. The case study focuses on German, where central findings are the importance of context dependency to avoid identity invalidation and a desire for customizable MT solutions.
[ { "created": "Thu, 15 Jun 2023 07:20:14 GMT", "version": "v1" } ]
2023-09-12
[ [ "Gromann", "Dagmar", "" ], [ "Lardelli", "Manuel", "" ], [ "Spiel", "Katta", "" ], [ "Burtscher", "Sabrina", "" ], [ "Klausner", "Lukas Daniel", "" ], [ "Mettinger", "Arthur", "" ], [ "Miladinovic", "Igor", "" ], [ "Schefer-Wenzl", "Sigrid", "" ], [ "Duh", "Daniela", "" ], [ "Bühn", "Katharina", "" ] ]
Recent years have seen strongly increased visibility of non-binary people in public discourse. Accordingly, considerations of gender-fair language go beyond a binary conception of male/female. However, language technology, especially machine translation (MT), still suffers from binary gender bias. Proposing a solution for gender-fair MT beyond the binary from a purely technological perspective might fall short of accommodating different target user groups and, in the worst case, might lead to misgendering. To address this challenge, we propose a method and case study building on participatory action research to include experiential experts, i.e., queer and non-binary people, translators, and MT experts, in the MT design process. The case study focuses on German, where central findings are the importance of context dependency to avoid identity invalidation and a desire for customizable MT solutions.
2109.00471
Ruiqi Zhao
Ruiqi Zhao, Tianyi Wu and Guodong Guo
Sparse to Dense Motion Transfer for Face Image Animation
Accepted by ICCV 2021 Advances in Image Manipulation Workshop
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Face image animation from a single image has achieved remarkable progress. However, it remains challenging when only sparse landmarks are available as the driving signal. Given a source face image and a sequence of sparse face landmarks, our goal is to generate a video of the face imitating the motion of the landmarks. We develop an efficient and effective method for motion transfer from sparse landmarks to the face image. We then combine global and local motion estimation in a unified model to faithfully transfer the motion. The model can learn to segment the moving foreground from the background and generate not only global motion, such as rotation and translation of the face, but also subtle local motion such as gaze changes. We further improve face landmark detection on videos. With temporally better-aligned landmark sequences for training, our method can generate temporally coherent videos with higher visual quality. Experiments suggest we achieve results comparable to the state-of-the-art image-driven method on same-identity testing and better results on cross-identity testing.
[ { "created": "Wed, 1 Sep 2021 16:23:57 GMT", "version": "v1" }, { "created": "Fri, 3 Sep 2021 04:05:08 GMT", "version": "v2" } ]
2021-09-06
[ [ "Zhao", "Ruiqi", "" ], [ "Wu", "Tianyi", "" ], [ "Guo", "Guodong", "" ] ]
Face image animation from a single image has achieved remarkable progress. However, it remains challenging when only sparse landmarks are available as the driving signal. Given a source face image and a sequence of sparse face landmarks, our goal is to generate a video of the face imitating the motion of the landmarks. We develop an efficient and effective method for motion transfer from sparse landmarks to the face image. We then combine global and local motion estimation in a unified model to faithfully transfer the motion. The model can learn to segment the moving foreground from the background and generate not only global motion, such as rotation and translation of the face, but also subtle local motion such as gaze changes. We further improve face landmark detection on videos. With temporally better-aligned landmark sequences for training, our method can generate temporally coherent videos with higher visual quality. Experiments suggest we achieve results comparable to the state-of-the-art image-driven method on same-identity testing and better results on cross-identity testing.
2302.07074
Matthieu Doutreligne Mr.
Matthieu Doutreligne, Adeline Degremont, Pierre-Alain Jachiet, Antoine Lamer, Xavier Tannier
Good practices for clinical data warehouse implementation: a case study in France
16 pages
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Real World Data (RWD) bears great promise to improve the quality of care. However, specific infrastructures and methodologies are required to derive robust knowledge and bring innovations to the patient. Drawing upon a national case study of the governance of the 32 French regional and university hospitals, we highlight key aspects of modern Clinical Data Warehouses (CDWs): governance, transparency, types of data, data reuse, technical tools, documentation, and data quality control processes. Semi-structured interviews, as well as a review of reported studies on French CDWs, were conducted from March to November 2022. Out of the 32 regional and university hospitals in France, 14 have a CDW in production, 5 are experimenting, 5 have a prospective CDW project, and 8 did not have any CDW project at the time of writing. The implementation of CDWs in France dates from 2011 and accelerated in late 2020. From this case study, we draw some general guidelines for CDWs. The current orientation of CDWs towards research requires efforts in governance stabilization, standardization of data schemas, and developments in data quality and data documentation. Particular attention must be paid to the sustainability of the warehouse teams and to multi-level governance. The transparency of the studies and of the data transformation tools must improve to allow successful multi-centric data reuse as well as innovations in routine care.
[ { "created": "Mon, 6 Feb 2023 13:38:12 GMT", "version": "v1" }, { "created": "Tue, 7 Mar 2023 08:35:36 GMT", "version": "v2" } ]
2023-03-08
[ [ "Doutreligne", "Matthieu", "" ], [ "Degremont", "Adeline", "" ], [ "Jachiet", "Pierre-Alain", "" ], [ "Lamer", "Antoine", "" ], [ "Tannier", "Xavier", "" ] ]
Real World Data (RWD) bears great promise to improve the quality of care. However, specific infrastructures and methodologies are required to derive robust knowledge and bring innovations to the patient. Drawing upon a national case study of the governance of the 32 French regional and university hospitals, we highlight key aspects of modern Clinical Data Warehouses (CDWs): governance, transparency, types of data, data reuse, technical tools, documentation, and data quality control processes. Semi-structured interviews, as well as a review of reported studies on French CDWs, were conducted from March to November 2022. Out of the 32 regional and university hospitals in France, 14 have a CDW in production, 5 are experimenting, 5 have a prospective CDW project, and 8 did not have any CDW project at the time of writing. The implementation of CDWs in France dates from 2011 and accelerated in late 2020. From this case study, we draw some general guidelines for CDWs. The current orientation of CDWs towards research requires efforts in governance stabilization, standardization of data schemas, and developments in data quality and data documentation. Particular attention must be paid to the sustainability of the warehouse teams and to multi-level governance. The transparency of the studies and of the data transformation tools must improve to allow successful multi-centric data reuse as well as innovations in routine care.
2308.07473
Neel Patel
Ramiro Deo-Campo Vuong and Shaddin Dughmi and Neel Patel and Aditya Prasad
On Supermodular Contracts and Dense Subgraphs
31 pages, 2 figures
null
null
null
cs.GT
http://creativecommons.org/licenses/by/4.0/
We study the combinatorial contract design problem, introduced and studied by Dutting et al. (2021, 2022), in both the single- and multi-agent settings. Prior work has examined the problem when the principal's utility function is submodular in the actions chosen by the agent(s). We complement this emerging literature with an examination of the problem when the principal's utility is supermodular. In the single-agent setting, we obtain a strongly polynomial time algorithm for the optimal contract. This stands in contrast to the NP-hardness of the problem with submodular principal utility due to Dutting et al. (2021). This result has two technical components, the first of which applies beyond supermodular or submodular utilities. This result strengthens and simplifies analogous enumeration algorithms from Dutting et al. (2021), and applies to any nondecreasing valuation function for the principal. Second, we show that supermodular valuations lead to a polynomial number of breakpoints, analogous to a similar result by Dutting et al. (2021) for gross substitutes valuations. In the multi-agent setting, we obtain a mixed bag of positive and negative results. First, we show that it is NP-hard to obtain any finite multiplicative approximation, or an additive FPTAS. This stands in contrast to the submodular case, where efficient computation of approximately optimal contracts was shown by Dutting et al. (2022). Second, we derive an additive PTAS for the problem in the instructive special case of graph-based supermodular valuations and equal costs. En route to this result, we discover an intimate connection between the multi-agent contract problem and the notorious k-densest-subgraph problem. We build on and combine techniques from the literature on dense subgraph problems to obtain our additive PTAS.
[ { "created": "Mon, 14 Aug 2023 21:57:25 GMT", "version": "v1" } ]
2023-08-16
[ [ "Vuong", "Ramiro Deo-Campo", "" ], [ "Dughmi", "Shaddin", "" ], [ "Patel", "Neel", "" ], [ "Prasad", "Aditya", "" ] ]
We study the combinatorial contract design problem, introduced and studied by Dutting et al. (2021, 2022), in both the single- and multi-agent settings. Prior work has examined the problem when the principal's utility function is submodular in the actions chosen by the agent(s). We complement this emerging literature with an examination of the problem when the principal's utility is supermodular. In the single-agent setting, we obtain a strongly polynomial time algorithm for the optimal contract. This stands in contrast to the NP-hardness of the problem with submodular principal utility due to Dutting et al. (2021). This result has two technical components, the first of which applies beyond supermodular or submodular utilities. This result strengthens and simplifies analogous enumeration algorithms from Dutting et al. (2021), and applies to any nondecreasing valuation function for the principal. Second, we show that supermodular valuations lead to a polynomial number of breakpoints, analogous to a similar result by Dutting et al. (2021) for gross substitutes valuations. In the multi-agent setting, we obtain a mixed bag of positive and negative results. First, we show that it is NP-hard to obtain any finite multiplicative approximation, or an additive FPTAS. This stands in contrast to the submodular case, where efficient computation of approximately optimal contracts was shown by Dutting et al. (2022). Second, we derive an additive PTAS for the problem in the instructive special case of graph-based supermodular valuations and equal costs. En route to this result, we discover an intimate connection between the multi-agent contract problem and the notorious k-densest-subgraph problem. We build on and combine techniques from the literature on dense subgraph problems to obtain our additive PTAS.
2403.20132
Michael F\"arber
Michael F\"arber
A formal specification of the jq language
null
null
null
null
cs.LO cs.PL
http://creativecommons.org/licenses/by/4.0/
jq is a widely used tool that provides a programming language to manipulate JSON data. However, the jq language is currently only specified by its implementation, making it difficult to reason about its behaviour. To this end, we provide a formal syntax and denotational semantics for a large subset of the jq language. Our most significant contribution is to provide a new way to interpret updates that allows for more predictable and performant execution.
[ { "created": "Fri, 29 Mar 2024 11:49:42 GMT", "version": "v1" } ]
2024-04-01
[ [ "Färber", "Michael", "" ] ]
jq is a widely used tool that provides a programming language to manipulate JSON data. However, the jq language is currently only specified by its implementation, making it difficult to reason about its behaviour. To this end, we provide a formal syntax and denotational semantics for a large subset of the jq language. Our most significant contribution is to provide a new way to interpret updates that allows for more predictable and performant execution.
1510.04440
Silvia Crafa
Silvia Crafa
Modelling the Evolution of Programming Languages
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Programming languages are engineered languages that allow one to instruct a machine and share algorithmic information; they have a great influence on society, since they underlie almost every information technology artefact and are at the core of the current explosion of software technology. The history of programming languages is marked by innovations, diversifications, lateral transfers, and social influences; moreover, it represents an intermediate case study between the evolution of human languages and the evolution of technology. In this paper we study the application of the Darwinian explanation to the evolution of programming languages by discussing to what extent the evolutionary mechanisms distinctive of biology can be applied to this area. We show that a number of evolutionary building blocks can be recognised in the realm of computer languages, but we also identify critical issues. Far from being crystal clear, this fine-grained study proves to be a useful tool to assess recent results about programming language phylogenies. Finally, we show that rich evolutionary patterns, such as co-evolution, macro-evolutionary trends, niche construction, and exaptation, can be effectively applied to programming languages and provide interesting explanatory tools.
[ { "created": "Thu, 15 Oct 2015 08:18:54 GMT", "version": "v1" } ]
2015-10-16
[ [ "Crafa", "Silvia", "" ] ]
Programming languages are engineered languages that allow us to instruct a machine and share algorithmic information; they have a great influence on society, since they underlie almost every information technology artefact, and they are at the core of the current explosion of software technology. The history of programming languages is marked by innovations, diversifications, lateral transfers and social influences; moreover, it represents an intermediate case study between the evolution of human languages and the evolution of technology. In this paper we study the application of the Darwinian explanation to the evolution of programming languages by discussing to what extent the evolutionary mechanisms distinctive of biology can be applied to this area. We show that a number of evolutionary building blocks can be recognised in the realm of computer languages, but we also identify critical issues. Far from being crystal clear, this fine-grained study proves to be a useful tool to assess recent results about programming language phylogenies. Finally, we show that rich evolutionary patterns, such as co-evolution, macro-evolutionary trends, niche construction and exaptation, can be effectively applied to programming languages and provide interesting explanatory tools.
2012.06780
Hao Zhang
Fuzhao Xue, Aixin Sun, Hao Zhang, Eng Siong Chng
GDPNet: Refining Latent Multi-View Graph for Relation Extraction
To appear at AAAI 2021
null
10.1609/aaai.v35i16.17670
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relation Extraction (RE) is the task of predicting the relation type of two entities mentioned in a piece of text, e.g., a sentence or a dialogue. When the given text is long, it is challenging to identify the words indicative of the relation. Recent advances on the RE task come from BERT-based sequence modeling and graph-based modeling of relationships among the tokens in the sequence. In this paper, we propose to construct a latent multi-view graph to capture various possible relationships among tokens. We then refine this graph to select important words for relation prediction. Finally, the representation of the refined graph and the BERT-based sequence representation are concatenated for relation extraction. Specifically, in our proposed GDPNet (Gaussian Dynamic Time Warping Pooling Net), we utilize a Gaussian Graph Generator (GGG) to generate the edges of the multi-view graph. The graph is then refined by Dynamic Time Warping Pooling (DTWPool). On DialogRE and TACRED, we show that GDPNet achieves the best performance on dialogue-level RE, and performance comparable with the state of the art on sentence-level RE.
[ { "created": "Sat, 12 Dec 2020 10:43:41 GMT", "version": "v1" } ]
2023-04-26
[ [ "Xue", "Fuzhao", "" ], [ "Sun", "Aixin", "" ], [ "Zhang", "Hao", "" ], [ "Chng", "Eng Siong", "" ] ]
Relation Extraction (RE) is the task of predicting the relation type of two entities mentioned in a piece of text, e.g., a sentence or a dialogue. When the given text is long, it is challenging to identify the words indicative of the relation. Recent advances on the RE task come from BERT-based sequence modeling and graph-based modeling of relationships among the tokens in the sequence. In this paper, we propose to construct a latent multi-view graph to capture various possible relationships among tokens. We then refine this graph to select important words for relation prediction. Finally, the representation of the refined graph and the BERT-based sequence representation are concatenated for relation extraction. Specifically, in our proposed GDPNet (Gaussian Dynamic Time Warping Pooling Net), we utilize a Gaussian Graph Generator (GGG) to generate the edges of the multi-view graph. The graph is then refined by Dynamic Time Warping Pooling (DTWPool). On DialogRE and TACRED, we show that GDPNet achieves the best performance on dialogue-level RE, and performance comparable with the state of the art on sentence-level RE.
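The Gaussian-edge idea can be sketched roughly as follows (a hedged illustration for exposition only; the function and its parameters are our assumptions, not the paper's GGG): edge weights between token representations are derived from a Gaussian kernel over their distances and normalised into a soft adjacency matrix.

```python
import numpy as np

# Illustrative Gaussian edge generator: tokens close in representation
# space get strong edges; rows are normalised into a soft adjacency.
def gaussian_edges(tokens, sigma=1.0):
    d2 = ((tokens[:, None, :] - tokens[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))    # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)       # each row sums to 1
    return w

rng = np.random.default_rng(0)
adj = gaussian_edges(rng.normal(size=(5, 16)))  # 5 tokens, 16-dim features
print(adj.shape)  # (5, 5)
```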
1404.4785
Olegs Verhodubs
Olegs Verhodubs
Ontology as a Source for Rule Generation
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discloses the potential of OWL (Web Ontology Language) ontologies for the generation of rules. The main purpose of this paper is to identify new types of rules that may be generated from OWL ontologies. Rules generated from OWL ontologies are necessary for the functioning of the Semantic Web Expert System (SWES). It is expected that the SWES will be able to process ontologies from the Web in order to supplement or even develop its knowledge base.
[ { "created": "Fri, 18 Apr 2014 13:36:17 GMT", "version": "v1" } ]
2014-04-21
[ [ "Verhodubs", "Olegs", "" ] ]
This paper discloses the potential of OWL (Web Ontology Language) ontologies for the generation of rules. The main purpose of this paper is to identify new types of rules that may be generated from OWL ontologies. Rules generated from OWL ontologies are necessary for the functioning of the Semantic Web Expert System (SWES). It is expected that the SWES will be able to process ontologies from the Web in order to supplement or even develop its knowledge base.
2303.06149
Marcel Matha
Marcel Matha and Christian Morsbach
Improved self-consistency of the Reynolds stress tensor eigenspace perturbation for uncertainty quantification
This article may be downloaded for personal use only. Any other use requires prior permission of the author and AIP Publishing. This article appeared in Physics of Fluids (Vol.35, Issue 6) and may be found at https://doi.org/10.1063/5.0149747
Physics of Fluids, Vol.35, Issue 6, 2023
10.1063/5.0149747
null
cs.CE physics.flu-dyn
http://creativecommons.org/licenses/by/4.0/
The limitations of turbulence closure models in the context of Reynolds-averaged Navier-Stokes (RANS) simulations play a significant part in contributing to the uncertainty of Computational Fluid Dynamics (CFD). Perturbing the spectral representation of the Reynolds stress tensor within physical limits is common practice in several commercial and open-source CFD solvers, in order to obtain estimates for the epistemic uncertainties of RANS turbulence models. Recent research revealed that the amount of Reynolds stress tensor perturbation needs to be moderated due to stability issues of the solver. In this paper we point out that the common implementation that follows can lead to unintended states of the resulting perturbed Reynolds stress tensor. The combination of eigenvector perturbation and moderation factor may actually result in moderated eigenvalues that are no longer linearly dependent on the originally unperturbed and fully perturbed eigenvalues. Hence, the computational implementation is no longer in accordance with the conceptual idea of the Eigenspace Perturbation Framework. We verify the implementation of the conceptual description with respect to its self-consistency. Adequately representing the basic concept results in a computational implementation that improves the self-consistency of the Reynolds stress tensor perturbation.
[ { "created": "Wed, 8 Mar 2023 13:02:26 GMT", "version": "v1" }, { "created": "Wed, 3 May 2023 07:35:52 GMT", "version": "v2" }, { "created": "Fri, 26 May 2023 13:42:20 GMT", "version": "v3" }, { "created": "Tue, 30 May 2023 06:35:01 GMT", "version": "v4" }, { "created": "Tue, 20 Jun 2023 15:00:02 GMT", "version": "v5" } ]
2023-06-21
[ [ "Matha", "Marcel", "" ], [ "Morsbach", "Christian", "" ] ]
The limitations of turbulence closure models in the context of Reynolds-averaged Navier-Stokes (RANS) simulations play a significant part in contributing to the uncertainty of Computational Fluid Dynamics (CFD). Perturbing the spectral representation of the Reynolds stress tensor within physical limits is common practice in several commercial and open-source CFD solvers, in order to obtain estimates for the epistemic uncertainties of RANS turbulence models. Recent research revealed that the amount of Reynolds stress tensor perturbation needs to be moderated due to stability issues of the solver. In this paper we point out that the common implementation that follows can lead to unintended states of the resulting perturbed Reynolds stress tensor. The combination of eigenvector perturbation and moderation factor may actually result in moderated eigenvalues that are no longer linearly dependent on the originally unperturbed and fully perturbed eigenvalues. Hence, the computational implementation is no longer in accordance with the conceptual idea of the Eigenspace Perturbation Framework. We verify the implementation of the conceptual description with respect to its self-consistency. Adequately representing the basic concept results in a computational implementation that improves the self-consistency of the Reynolds stress tensor perturbation.
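The self-consistency requirement can be sketched in a deliberately simplified form (an illustration of the linearity idea, not the paper's full framework; the name `delta` for the moderation factor is our assumption): moderated eigenvalues should remain a linear interpolation between the unperturbed eigenvalues and the fully perturbed target ones.

```python
import numpy as np

# Moderated eigenvalues as a convex combination of the unperturbed and
# fully perturbed eigenvalues, weighted by a moderation factor delta.
def moderate_eigenvalues(lam_unperturbed, lam_perturbed, delta):
    lam_u = np.asarray(lam_unperturbed, dtype=float)
    lam_p = np.asarray(lam_perturbed, dtype=float)
    return (1.0 - delta) * lam_u + delta * lam_p

lam_u = np.array([0.5, -0.1, -0.4])      # anisotropy eigenvalues (trace-free)
lam_p = np.array([2/3, -1/3, -1/3])      # a limiting state (1-component turbulence)
lam_m = moderate_eigenvalues(lam_u, lam_p, delta=0.5)
print(lam_m, lam_m.sum())  # interpolation stays trace-free: sum ~ 0
```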
2306.14708
Mingyu Jin
Mingyu Jin, Chong Zhang, Qinkai Yu, Haochen Xue, Xiaobo Jin, Xi Yang
A Simple and Effective Baseline for Attentional Generative Adversarial Networks
12 pages, 3 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Synthesising high-quality images with a text-to-image model guided by a text description is an innovative and challenging task. In recent years, AttnGAN, which uses an attention mechanism to guide GAN training, has been proposed, along with SD-GAN, which adopts a self-distillation technique to improve the performance of the generator and the quality of image generation, and StackGAN++, which gradually improves the details and quality of the image by stacking multiple generators and discriminators. However, this series of improvements to GANs all carry a certain amount of redundancy, which affects both generation performance and complexity. We apply a simple and effective idea: (1) remove redundant structure and improve the backbone network of AttnGAN, and (2) integrate and reconstruct the multiple losses of DAMSM. Our improvements significantly reduce the model size and improve training efficiency while keeping the model's performance unchanged, resulting in our SEAttnGAN. Code is available at https://github.com/jmyissb/SEAttnGAN.
[ { "created": "Mon, 26 Jun 2023 13:55:57 GMT", "version": "v1" }, { "created": "Thu, 6 Jul 2023 14:07:35 GMT", "version": "v2" } ]
2023-07-07
[ [ "Jin", "Mingyu", "" ], [ "Zhang", "Chong", "" ], [ "Yu", "Qinkai", "" ], [ "Xue", "Haochen", "" ], [ "Jin", "Xiaobo", "" ], [ "Yang", "Xi", "" ] ]
Synthesising high-quality images with a text-to-image model guided by a text description is an innovative and challenging task. In recent years, AttnGAN, which uses an attention mechanism to guide GAN training, has been proposed, along with SD-GAN, which adopts a self-distillation technique to improve the performance of the generator and the quality of image generation, and StackGAN++, which gradually improves the details and quality of the image by stacking multiple generators and discriminators. However, this series of improvements to GANs all carry a certain amount of redundancy, which affects both generation performance and complexity. We apply a simple and effective idea: (1) remove redundant structure and improve the backbone network of AttnGAN, and (2) integrate and reconstruct the multiple losses of DAMSM. Our improvements significantly reduce the model size and improve training efficiency while keeping the model's performance unchanged, resulting in our SEAttnGAN. Code is available at https://github.com/jmyissb/SEAttnGAN.
2408.03397
Pratyush Dhingra
Pratyush Dhingra, Janardhan Rao Doppa, and Partha Pratim Pande
HeTraX: Energy Efficient 3D Heterogeneous Manycore Architecture for Transformer Acceleration
Presented at ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED-24)
null
null
null
cs.AR cs.LG
http://creativecommons.org/licenses/by/4.0/
Transformers have revolutionized deep learning and generative modeling to enable unprecedented advancements in natural language processing tasks and beyond. However, designing hardware accelerators for executing transformer models is challenging due to the wide variety of computing kernels involved in the transformer architecture. Existing accelerators are either inadequate to accelerate end-to-end transformer models or suffer notable thermal limitations. In this paper, we propose the design of a three-dimensional heterogeneous architecture referred to as HeTraX specifically optimized to accelerate end-to-end transformer models. HeTraX employs hardware resources aligned with the computational kernels of transformers and optimizes both performance and energy. Experimental results show that HeTraX outperforms existing state-of-the-art by up to 5.6x in speedup and improves EDP by 14.5x while ensuring thermal feasibility.
[ { "created": "Tue, 6 Aug 2024 18:48:01 GMT", "version": "v1" } ]
2024-08-08
[ [ "Dhingra", "Pratyush", "" ], [ "Doppa", "Janardhan Rao", "" ], [ "Pande", "Partha Pratim", "" ] ]
Transformers have revolutionized deep learning and generative modeling to enable unprecedented advancements in natural language processing tasks and beyond. However, designing hardware accelerators for executing transformer models is challenging due to the wide variety of computing kernels involved in the transformer architecture. Existing accelerators are either inadequate to accelerate end-to-end transformer models or suffer notable thermal limitations. In this paper, we propose the design of a three-dimensional heterogeneous architecture referred to as HeTraX specifically optimized to accelerate end-to-end transformer models. HeTraX employs hardware resources aligned with the computational kernels of transformers and optimizes both performance and energy. Experimental results show that HeTraX outperforms existing state-of-the-art by up to 5.6x in speedup and improves EDP by 14.5x while ensuring thermal feasibility.
2401.05971
Rouwan Wu
Rouwan Wu, Xiaoya Cheng, Juelin Zhu, Xuxiang Liu, Maojun Zhang, Shen Yan
UAVD4L: A Large-Scale Dataset for UAV 6-DoF Localization
null
3DV 2024
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite significant progress in global localization of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments, existing methods remain constrained by the availability of datasets. Current datasets often focus on small-scale scenes and lack viewpoint variability, accurate ground truth (GT) pose, and UAV built-in sensor data. To address these limitations, we introduce a large-scale 6-DoF UAV dataset for localization (UAVD4L) and develop a two-stage 6-DoF localization pipeline (UAVLoc), which consists of offline synthetic data generation and online visual localization. Additionally, based on the 6-DoF estimator, we design a hierarchical system for tracking ground targets in 3D space. Experimental results on the new dataset demonstrate the effectiveness of the proposed approach. Code and dataset are available at https://github.com/RingoWRW/UAVD4L
[ { "created": "Thu, 11 Jan 2024 15:19:21 GMT", "version": "v1" } ]
2024-01-12
[ [ "Wu", "Rouwan", "" ], [ "Cheng", "Xiaoya", "" ], [ "Zhu", "Juelin", "" ], [ "Liu", "Xuxiang", "" ], [ "Zhang", "Maojun", "" ], [ "Yan", "Shen", "" ] ]
Despite significant progress in global localization of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments, existing methods remain constrained by the availability of datasets. Current datasets often focus on small-scale scenes and lack viewpoint variability, accurate ground truth (GT) pose, and UAV built-in sensor data. To address these limitations, we introduce a large-scale 6-DoF UAV dataset for localization (UAVD4L) and develop a two-stage 6-DoF localization pipeline (UAVLoc), which consists of offline synthetic data generation and online visual localization. Additionally, based on the 6-DoF estimator, we design a hierarchical system for tracking ground targets in 3D space. Experimental results on the new dataset demonstrate the effectiveness of the proposed approach. Code and dataset are available at https://github.com/RingoWRW/UAVD4L
2207.05696
Prateek Chhikara
Prateek Chhikara, Anil Goyal, Chirag Sharma
RE-Tagger: A light-weight Real-Estate Image Classifier
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (DEMO TRACK)
null
10.1007/978-3-031-26422-1_44
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-estate image tagging is one of the essential use-cases for saving the effort involved in manual annotation and enhancing the user experience. This paper proposes an end-to-end pipeline (referred to as RE-Tagger) for the real-estate image classification problem. We present a two-stage transfer learning approach using a custom InceptionV3 architecture to classify images into different categories (i.e., bedroom, bathroom, kitchen, balcony, hall, and others). Finally, we released the application as a REST API hosted as a web application running on a 2-core machine with 2 GB of RAM. The demo video is available here.
[ { "created": "Tue, 12 Jul 2022 17:16:06 GMT", "version": "v1" } ]
2023-06-06
[ [ "Chhikara", "Prateek", "" ], [ "Goyal", "Anil", "" ], [ "Sharma", "Chirag", "" ] ]
Real-estate image tagging is one of the essential use-cases for saving the effort involved in manual annotation and enhancing the user experience. This paper proposes an end-to-end pipeline (referred to as RE-Tagger) for the real-estate image classification problem. We present a two-stage transfer learning approach using a custom InceptionV3 architecture to classify images into different categories (i.e., bedroom, bathroom, kitchen, balcony, hall, and others). Finally, we released the application as a REST API hosted as a web application running on a 2-core machine with 2 GB of RAM. The demo video is available here.
2008.09105
Mingkui Tan
Deng Huang, Peihao Chen, Runhao Zeng, Qing Du, Mingkui Tan, Chuang Gan
Location-aware Graph Convolutional Networks for Video Question Answering
null
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the challenging task of video question answering, which requires machines to answer questions about videos in natural language form. Previous state-of-the-art methods attempt to apply a spatio-temporal attention mechanism to video frame features without explicitly modeling the locations of, and relations among, the object interactions occurring in videos. However, the relations among object interactions and their location information are critical for both action recognition and question reasoning. In this work, we propose to represent the contents of the video as a location-aware graph by incorporating the location information of an object into the graph construction. Here, each node is associated with an object represented by its appearance and location features. Based on the constructed graph, we propose to use graph convolution to infer both the category and the temporal locations of an action. As the graph is built on objects, our method is able to focus on the foreground action contents for better video question answering. Lastly, we leverage an attention mechanism to combine the output of the graph convolution and the encoded question features for final answer reasoning. Extensive experiments demonstrate the effectiveness of the proposed methods. Specifically, our method significantly outperforms state-of-the-art methods on the TGIF-QA, Youtube2Text-QA, and MSVD-QA datasets. Code and pre-trained models are publicly available at: https://github.com/SunDoge/L-GCN
[ { "created": "Fri, 7 Aug 2020 02:12:56 GMT", "version": "v1" } ]
2020-08-21
[ [ "Huang", "Deng", "" ], [ "Chen", "Peihao", "" ], [ "Zeng", "Runhao", "" ], [ "Du", "Qing", "" ], [ "Tan", "Mingkui", "" ], [ "Gan", "Chuang", "" ] ]
We address the challenging task of video question answering, which requires machines to answer questions about videos in natural language form. Previous state-of-the-art methods attempt to apply a spatio-temporal attention mechanism to video frame features without explicitly modeling the locations of, and relations among, the object interactions occurring in videos. However, the relations among object interactions and their location information are critical for both action recognition and question reasoning. In this work, we propose to represent the contents of the video as a location-aware graph by incorporating the location information of an object into the graph construction. Here, each node is associated with an object represented by its appearance and location features. Based on the constructed graph, we propose to use graph convolution to infer both the category and the temporal locations of an action. As the graph is built on objects, our method is able to focus on the foreground action contents for better video question answering. Lastly, we leverage an attention mechanism to combine the output of the graph convolution and the encoded question features for final answer reasoning. Extensive experiments demonstrate the effectiveness of the proposed methods. Specifically, our method significantly outperforms state-of-the-art methods on the TGIF-QA, Youtube2Text-QA, and MSVD-QA datasets. Code and pre-trained models are publicly available at: https://github.com/SunDoge/L-GCN
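A location-aware graph-convolution step can be sketched in a few lines of numpy (an illustrative toy, not the paper's L-GCN; all dimensions and the adjacency below are our assumptions): node features concatenate appearance and bounding-box location vectors, and one layer averages neighbour features before a linear map and ReLU.

```python
import numpy as np

# One graph-convolution step: add self-loops, mean-aggregate neighbours,
# project with W, apply ReLU.
def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])              # self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # inverse degree
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
appearance = rng.normal(size=(4, 8))            # 4 objects, 8-dim appearance
location = rng.uniform(size=(4, 4))             # x, y, w, h per object
H = np.concatenate([appearance, location], axis=1)  # location-aware nodes
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
out = gcn_layer(A, H, rng.normal(size=(12, 6)))
print(out.shape)  # (4, 6)
```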
2103.10357
Vincent Vajnovszki
Phan Thuan Do, Thi Thu Huong Tran, Vincent Vajnovszki
The equidistribution of some Mahonian statistics over permutations avoiding a pattern of length three
null
null
null
null
cs.DM math.CO
http://creativecommons.org/licenses/by-nc-nd/4.0/
We prove the equidistribution of several multistatistics over some classes of permutations avoiding a $3$-length pattern. We deduce the equidistribution, on the one hand of inv and foze" statistics, and on the other hand that of maj and makl statistics, over these classes of pattern avoiding permutations. Here inv and maj are the celebrated Mahonian statistics, foze" is one of the statistics defined in terms of generalized patterns in the 2000 pioneering paper of Babson and Steingr\'imsson, and makl is one of the statistics defined by Clarke, Steingr\'imsson and Zeng in 1997. These results solve several conjectures posed by Amini in 2018.
[ { "created": "Thu, 18 Mar 2021 16:25:36 GMT", "version": "v1" }, { "created": "Wed, 11 Aug 2021 15:04:40 GMT", "version": "v2" } ]
2021-08-12
[ [ "Do", "Phan Thuan", "" ], [ "Tran", "Thi Thu Huong", "" ], [ "Vajnovszki", "Vincent", "" ] ]
We prove the equidistribution of several multistatistics over some classes of permutations avoiding a $3$-length pattern. We deduce the equidistribution, on the one hand of inv and foze" statistics, and on the other hand that of maj and makl statistics, over these classes of pattern avoiding permutations. Here inv and maj are the celebrated Mahonian statistics, foze" is one of the statistics defined in terms of generalized patterns in the 2000 pioneering paper of Babson and Steingr\'imsson, and makl is one of the statistics defined by Clarke, Steingr\'imsson and Zeng in 1997. These results solve several conjectures posed by Amini in 2018.
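The statistics involved are easy to check computationally on small cases. As a simpler, classical sanity check in the same spirit (MacMahon's theorem over the whole symmetric group, rather than the paper's pattern-avoiding classes and refined statistics), inv and maj are equidistributed:

```python
# Verify that the Mahonian statistics inv and maj have the same
# distribution over all permutations of {0, ..., 4}.
from itertools import permutations
from collections import Counter

def inv(p):
    """Number of inversions: pairs i < j with p[i] > p[j]."""
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def maj(p):
    """Major index: sum of the (1-based) descent positions."""
    return sum(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

perms = list(permutations(range(5)))
print(Counter(map(inv, perms)) == Counter(map(maj, perms)))  # True
```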
1811.05010
Long Nguyen Msc
Long Nguyen, Zhou Yang, Jiazhen Zhu, Jia Li, Fang Jin
Coordinating Disaster Emergency Response with Heuristic Reinforcement Learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A crucial and time-sensitive task when any disaster occurs is to rescue victims and distribute resources to the right groups and locations. This task is challenging in populated urban areas, due to the huge burst of help requests generated in a very short period. To improve the efficiency of the emergency response in the immediate aftermath of a disaster, we propose a heuristic multi-agent reinforcement learning scheduling algorithm, named ResQ, which can effectively schedule the rapid deployment of volunteers to rescue victims in dynamic settings. The core concept is to quickly identify victims and volunteers from social network data and then schedule rescue parties with an adaptive learning algorithm. This framework performs two key functions: 1) identify trapped victims and rescue volunteers, and 2) optimize the volunteers' rescue strategy in a complex, time-sensitive environment. The proposed ResQ algorithm can speed up the training process through a heuristic function that reduces the state-action space by identifying a particular set of actions to favor over others. Experimental results showed that the proposed heuristic multi-agent reinforcement learning based scheduling outperforms several state-of-the-art methods, in terms of both reward rate and response time.
[ { "created": "Mon, 12 Nov 2018 21:39:07 GMT", "version": "v1" } ]
2018-11-14
[ [ "Nguyen", "Long", "" ], [ "Yang", "Zhou", "" ], [ "Zhu", "Jiazhen", "" ], [ "Li", "Jia", "" ], [ "Jin", "Fang", "" ] ]
A crucial and time-sensitive task when any disaster occurs is to rescue victims and distribute resources to the right groups and locations. This task is challenging in populated urban areas, due to the huge burst of help requests generated in a very short period. To improve the efficiency of the emergency response in the immediate aftermath of a disaster, we propose a heuristic multi-agent reinforcement learning scheduling algorithm, named ResQ, which can effectively schedule the rapid deployment of volunteers to rescue victims in dynamic settings. The core concept is to quickly identify victims and volunteers from social network data and then schedule rescue parties with an adaptive learning algorithm. This framework performs two key functions: 1) identify trapped victims and rescue volunteers, and 2) optimize the volunteers' rescue strategy in a complex, time-sensitive environment. The proposed ResQ algorithm can speed up the training process through a heuristic function that reduces the state-action space by identifying a particular set of actions to favor over others. Experimental results showed that the proposed heuristic multi-agent reinforcement learning based scheduling outperforms several state-of-the-art methods, in terms of both reward rate and response time.
2302.11296
Mashaan Alshammari Dr.
Mashaan Alshammari, John Stavrakakis, Masahiro Takatsuka
Refining a $k$-nearest neighbor graph for a computationally efficient spectral clustering
null
Pattern Recognition, Volume 114, 2021
10.1016/j.patcog.2021.107869
null
cs.LG cs.AI cs.IR cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spectral clustering became a popular choice for data clustering for its ability to uncover clusters of different shapes. However, it is not always preferable to other clustering methods due to its computational demands. One of the effective ways to bypass these computational demands is to perform spectral clustering on a subset of points (data representatives) and then generalize the clustering outcome; this is known as approximate spectral clustering (ASC). ASC uses sampling or quantization to select data representatives. This makes it vulnerable to 1) performance inconsistency (since these methods have a random step either in initialization or training), and 2) local statistics loss (because the pairwise similarities are extracted from data representatives instead of data points). We proposed a refined version of the $k$-nearest neighbor graph, in which we keep the data points and aggressively reduce the number of edges for computational efficiency. Local statistics were exploited to keep the edges that do not violate the intra-cluster distances and to nullify all other edges in the $k$-nearest neighbor graph. We also introduced an optional step to automatically select the number of clusters $C$. The proposed method was tested on synthetic and real datasets. Compared to ASC methods, the proposed method delivered consistent performance despite a significant reduction in the number of edges.
[ { "created": "Wed, 22 Feb 2023 11:31:32 GMT", "version": "v1" } ]
2023-02-23
[ [ "Alshammari", "Mashaan", "" ], [ "Stavrakakis", "John", "" ], [ "Takatsuka", "Masahiro", "" ] ]
Spectral clustering became a popular choice for data clustering for its ability to uncover clusters of different shapes. However, it is not always preferable to other clustering methods due to its computational demands. One of the effective ways to bypass these computational demands is to perform spectral clustering on a subset of points (data representatives) and then generalize the clustering outcome; this is known as approximate spectral clustering (ASC). ASC uses sampling or quantization to select data representatives. This makes it vulnerable to 1) performance inconsistency (since these methods have a random step either in initialization or training), and 2) local statistics loss (because the pairwise similarities are extracted from data representatives instead of data points). We proposed a refined version of the $k$-nearest neighbor graph, in which we keep the data points and aggressively reduce the number of edges for computational efficiency. Local statistics were exploited to keep the edges that do not violate the intra-cluster distances and to nullify all other edges in the $k$-nearest neighbor graph. We also introduced an optional step to automatically select the number of clusters $C$. The proposed method was tested on synthetic and real datasets. Compared to ASC methods, the proposed method delivered consistent performance despite a significant reduction in the number of edges.
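The edge-refinement idea can be sketched as follows (a rough illustration; the pruning threshold below is a simple global proxy, not the paper's exact intra-cluster criterion): keep every data point, build a $k$-nearest-neighbour graph, and nullify edges that are long relative to the edge-length statistics.

```python
import numpy as np

# Build a symmetrised k-NN graph over all points, then drop edges longer
# than mean + std of the kept edge lengths (an illustrative cutoff).
def refined_knn_graph(X, k=3):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per point
    A = np.zeros_like(d)
    for i, js in enumerate(nn):
        A[i, js] = d[i, js]
    A = np.maximum(A, A.T)                     # symmetrise (union of edges)
    lengths = A[A > 0]
    cutoff = lengths.mean() + lengths.std()
    A[A > cutoff] = 0.0                        # aggressively drop long edges
    return A

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (10, 2)),    # two well-separated clusters
               rng.normal(3, 0.1, (10, 2))])
A = refined_knn_graph(X)
print((A > 0).sum() // 2, "edges kept")
```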
2010.01753
Rodrigo Toro Icarte
Rodrigo Toro Icarte, Richard Valenzano, Toryn Q. Klassen, Phillip Christoffersen, Amir-massoud Farahmand, Sheila A. McIlraith
The act of remembering: a study in partially observable reinforcement learning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement Learning (RL) agents typically learn memoryless policies---policies that only consider the last observation when selecting actions. Learning memoryless policies is efficient and optimal in fully observable environments. However, some form of memory is necessary when RL agents are faced with partial observability. In this paper, we study a lightweight approach to tackle partial observability in RL. We provide the agent with an external memory and additional actions to control what, if anything, is written to the memory. At every step, the current memory state is part of the agent's observation, and the agent selects a tuple of actions: one action that modifies the environment and another that modifies the memory. When the external memory is sufficiently expressive, optimal memoryless policies yield globally optimal solutions. Unfortunately, previous attempts to use external memory in the form of binary memory have produced poor results in practice. Here, we investigate alternative forms of memory in support of learning effective memoryless policies. Our novel forms of memory outperform binary and LSTM-based memory in well-established partially observable domains.
[ { "created": "Mon, 5 Oct 2020 02:56:43 GMT", "version": "v1" } ]
2020-10-06
[ [ "Icarte", "Rodrigo Toro", "" ], [ "Valenzano", "Richard", "" ], [ "Klassen", "Toryn Q.", "" ], [ "Christoffersen", "Phillip", "" ], [ "Farahmand", "Amir-massoud", "" ], [ "McIlraith", "Sheila A.", "" ] ]
Reinforcement Learning (RL) agents typically learn memoryless policies---policies that only consider the last observation when selecting actions. Learning memoryless policies is efficient and optimal in fully observable environments. However, some form of memory is necessary when RL agents are faced with partial observability. In this paper, we study a lightweight approach to tackle partial observability in RL. We provide the agent with an external memory and additional actions to control what, if anything, is written to the memory. At every step, the current memory state is part of the agent's observation, and the agent selects a tuple of actions: one action that modifies the environment and another that modifies the memory. When the external memory is sufficiently expressive, optimal memoryless policies yield globally optimal solutions. Unfortunately, previous attempts to use external memory in the form of binary memory have produced poor results in practice. Here, we investigate alternative forms of memory in support of learning effective memoryless policies. Our novel forms of memory outperform binary and LSTM-based memory in well-established partially observable domains.
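The external-memory construction can be sketched as a thin environment wrapper (a toy illustration of the setup, not the paper's agents or memory forms): the agent observes the pair (environment observation, memory state) and each step takes a tuple of actions, one for the environment and one that overwrites or keeps the memory.

```python
# Wrap a partially observable environment with an external memory.
class MemoryWrapper:
    def __init__(self, env, initial_memory="0"):
        self.env = env
        self.initial_memory = initial_memory
        self.memory = initial_memory

    def reset(self):
        self.memory = self.initial_memory
        return (self.env.reset(), self.memory)

    def step(self, env_action, mem_action):
        if mem_action != "keep":        # "keep" leaves the memory untouched
            self.memory = mem_action    # otherwise overwrite it
        obs, reward, done = self.env.step(env_action)
        return (obs, self.memory), reward, done


class TwoStepEnv:
    """Both steps emit the same observation, so the phase is hidden."""
    def reset(self):
        self.t = 0
        return "same_obs"

    def step(self, action):
        self.t += 1
        reward = 1.0 if (self.t == 2 and action == "go") else 0.0
        return "same_obs", reward, self.t >= 2


env = MemoryWrapper(TwoStepEnv())
obs = env.reset()                       # ('same_obs', '0')
obs, r, done = env.step("wait", "1")    # write "1" into the external memory
print(obs)  # ('same_obs', '1') -- the memory disambiguates the aliased observation
```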
1909.00508
Nathan Dahlin
Nathan Dahlin and Rahul Jain
Two-Stage Electricity Markets with Renewable Energy Integration: Market Mechanisms and Equilibrium Analysis
null
null
null
null
cs.GT cs.SY econ.GN eess.SY q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a two-stage market mechanism for trading electricity including renewable generation as an alternative to the widely used multi-settlement market structure. The two-stage market structure allows for recourse decisions by the market operator, which are not possible in today's markets. We allow for different conventional generation cost curves in the forward and the real-time stages. We have considered costs of demand response programs and blackouts, and adopt a DC power flow model to account for network constraints. Our first result is to show existence (by construction) of a sequential competitive equilibrium (SCEq) in such a two-stage market. We argue social welfare properties of such an SCEq, and then design a market mechanism that achieves social welfare maximization when the market participants are non-strategic. We also show that under either a congestion-free or a monopoly-free condition, an efficient Nash equilibrium exists.
[ { "created": "Mon, 2 Sep 2019 01:46:35 GMT", "version": "v1" }, { "created": "Tue, 15 Jun 2021 00:40:53 GMT", "version": "v2" } ]
2021-06-16
[ [ "Dahlin", "Nathan", "" ], [ "Jain", "Rahul", "" ] ]
We consider a two-stage market mechanism for trading electricity including renewable generation as an alternative to the widely used multi-settlement market structure. The two-stage market structure allows for recourse decisions by the market operator, which are not possible in today's markets. We allow for different conventional generation cost curves in the forward and the real-time stages. We have considered costs of demand response programs and blackouts, and adopt a DC power flow model to account for network constraints. Our first result is to show existence (by construction) of a sequential competitive equilibrium (SCEq) in such a two-stage market. We argue social welfare properties of such an SCEq, and then design a market mechanism that achieves social welfare maximization when the market participants are non-strategic. We also show that under either a congestion-free or a monopoly-free condition, an efficient Nash equilibrium exists.
2009.00099
Behrooz Omidvar-Tehrani
Behrooz Omidvar-Tehrani, Sruthi Viswanathan, Jean-Michel Renders
Interactive and Explainable Point-of-Interest Recommendation using Look-alike Groups
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Recommending Points-of-Interest (POIs) is surfacing in many location-based applications. The literature contains personalized and socialized POI recommendation approaches which employ historical check-ins and social links to make recommendations. However, these systems still lack customizability (incorporating session-based user interactions with the system) and contextuality (incorporating the situational context of the user), particularly in cold start situations, where nearly no user information is available. In this paper, we propose LikeMind, a POI recommendation system which tackles the challenges of cold start, customizability, contextuality, and explainability by exploiting look-alike groups mined in public POI datasets. LikeMind reformulates the problem of POI recommendation as recommending explainable look-alike groups (and their POIs) which are in line with the user's interests. LikeMind frames the task of POI recommendation as an exploratory process where users interact with the system by expressing their favorite POIs, and their interactions impact the way look-alike groups are selected. Moreover, LikeMind employs "mindsets", which capture the actual situation and intent of the user, and enforce the semantics of POI interestingness. In an extensive set of experiments, we show the quality of our approach in recommending relevant look-alike groups and their POIs, in terms of efficiency and effectiveness.
[ { "created": "Mon, 31 Aug 2020 21:05:21 GMT", "version": "v1" } ]
2020-09-02
[ [ "Omidvar-Tehrani", "Behrooz", "" ], [ "Viswanathan", "Sruthi", "" ], [ "Renders", "Jean-Michel", "" ] ]
Recommending Points-of-Interest (POIs) is surfacing in many location-based applications. The literature contains personalized and socialized POI recommendation approaches which employ historical check-ins and social links to make recommendations. However, these systems still lack customizability (incorporating session-based user interactions with the system) and contextuality (incorporating the situational context of the user), particularly in cold start situations, where nearly no user information is available. In this paper, we propose LikeMind, a POI recommendation system which tackles the challenges of cold start, customizability, contextuality, and explainability by exploiting look-alike groups mined in public POI datasets. LikeMind reformulates the problem of POI recommendation as recommending explainable look-alike groups (and their POIs) which are in line with the user's interests. LikeMind frames the task of POI recommendation as an exploratory process where users interact with the system by expressing their favorite POIs, and their interactions impact the way look-alike groups are selected. Moreover, LikeMind employs "mindsets", which capture the actual situation and intent of the user, and enforce the semantics of POI interestingness. In an extensive set of experiments, we show the quality of our approach in recommending relevant look-alike groups and their POIs, in terms of efficiency and effectiveness.
1810.09807
Hong Chen
Hong Chen, Zhenhua Fan, Hao Lu, Alan L. Yuille and Shu Rong
PreCo: A Large-scale Dataset in Preschool Vocabulary for Coreference Resolution
EMNLP 2018
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce PreCo, a large-scale English dataset for coreference resolution. The dataset is designed to embody the core challenges in coreference, such as entity representation, by alleviating the challenge of low overlap between training and test sets and enabling separated analysis of mention detection and mention clustering. To strengthen the training-test overlap, we collect a large corpus of about 38K documents and 12.4M words which are mostly from the vocabulary of English-speaking preschoolers. Experiments show that with higher training-test overlap, error analysis on PreCo is more efficient than the one on OntoNotes, a popular existing dataset. Furthermore, we annotate singleton mentions, making it possible for the first time to quantify the influence that a mention detector makes on coreference resolution performance. The dataset is freely available at https://preschool-lab.github.io/PreCo/.
[ { "created": "Tue, 23 Oct 2018 12:09:37 GMT", "version": "v1" } ]
2018-10-24
[ [ "Chen", "Hong", "" ], [ "Fan", "Zhenhua", "" ], [ "Lu", "Hao", "" ], [ "Yuille", "Alan L.", "" ], [ "Rong", "Shu", "" ] ]
We introduce PreCo, a large-scale English dataset for coreference resolution. The dataset is designed to embody the core challenges in coreference, such as entity representation, by alleviating the challenge of low overlap between training and test sets and enabling separated analysis of mention detection and mention clustering. To strengthen the training-test overlap, we collect a large corpus of about 38K documents and 12.4M words which are mostly from the vocabulary of English-speaking preschoolers. Experiments show that with higher training-test overlap, error analysis on PreCo is more efficient than the one on OntoNotes, a popular existing dataset. Furthermore, we annotate singleton mentions, making it possible for the first time to quantify the influence that a mention detector makes on coreference resolution performance. The dataset is freely available at https://preschool-lab.github.io/PreCo/.
2210.01234
Rafid Mahmood
Rafid Mahmood, James Lucas, Jose M. Alvarez, Sanja Fidler, Marc T. Law
Optimizing Data Collection for Machine Learning
Accepted to NeurIPS 2022
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Modern deep learning systems require huge data sets to achieve impressive performance, but there is little guidance on how much or what kind of data to collect. Over-collecting data incurs unnecessary present costs, while under-collecting may incur future costs and delay workflows. We propose a new paradigm for modeling the data collection workflow as a formal optimal data collection problem that allows designers to specify performance targets, collection costs, a time horizon, and penalties for failing to meet the targets. Additionally, this formulation generalizes to tasks requiring multiple data sources, such as labeled and unlabeled data used in semi-supervised learning. To solve our problem, we develop Learn-Optimize-Collect (LOC), which minimizes expected future collection costs. Finally, we numerically compare our framework to the conventional baseline of estimating data requirements by extrapolating from neural scaling laws. We significantly reduce the risks of failing to meet desired performance targets on several classification, segmentation, and detection tasks, while maintaining low total collection costs.
[ { "created": "Mon, 3 Oct 2022 21:19:05 GMT", "version": "v1" } ]
2022-10-05
[ [ "Mahmood", "Rafid", "" ], [ "Lucas", "James", "" ], [ "Alvarez", "Jose M.", "" ], [ "Fidler", "Sanja", "" ], [ "Law", "Marc T.", "" ] ]
Modern deep learning systems require huge data sets to achieve impressive performance, but there is little guidance on how much or what kind of data to collect. Over-collecting data incurs unnecessary present costs, while under-collecting may incur future costs and delay workflows. We propose a new paradigm for modeling the data collection workflow as a formal optimal data collection problem that allows designers to specify performance targets, collection costs, a time horizon, and penalties for failing to meet the targets. Additionally, this formulation generalizes to tasks requiring multiple data sources, such as labeled and unlabeled data used in semi-supervised learning. To solve our problem, we develop Learn-Optimize-Collect (LOC), which minimizes expected future collection costs. Finally, we numerically compare our framework to the conventional baseline of estimating data requirements by extrapolating from neural scaling laws. We significantly reduce the risks of failing to meet desired performance targets on several classification, segmentation, and detection tasks, while maintaining low total collection costs.
1903.10404
Bharat Prakash
Bharat Prakash, Mark Horton, Nicholas R. Waytowich, William David Hairston, Tim Oates, Tinoosh Mohsenin
On the use of Deep Autoencoders for Efficient Embedded Reinforcement Learning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In autonomous embedded systems, it is often vital to reduce the number of actions taken in the real world and the energy required to learn a policy. Training reinforcement learning agents from high dimensional image representations can be very expensive and time-consuming. Autoencoders are deep neural networks used to compress high dimensional data such as pixelated images into small latent representations. This compression model is vital to efficiently learn policies, especially when learning on embedded systems. We have implemented this model on the NVIDIA Jetson TX2 embedded GPU, and evaluated the power consumption, throughput, and energy consumption of the autoencoders for various CPU/GPU core combinations, frequencies, and model parameters. Additionally, we have shown the reconstructions generated by the autoencoder to analyze the quality of the generated compressed representation and also the performance of the reinforcement learning agent. Finally, we have presented an assessment of the viability of training these models on embedded systems and their usefulness in developing autonomous policies. Using autoencoders, we were able to achieve 4-5 $\times$ improved performance compared to a baseline RL agent with a convolutional feature extractor, while using less than 2W of power.
[ { "created": "Mon, 25 Mar 2019 15:38:37 GMT", "version": "v1" } ]
2019-03-26
[ [ "Prakash", "Bharat", "" ], [ "Horton", "Mark", "" ], [ "Waytowich", "Nicholas R.", "" ], [ "Hairston", "William David", "" ], [ "Oates", "Tim", "" ], [ "Mohsenin", "Tinoosh", "" ] ]
In autonomous embedded systems, it is often vital to reduce the number of actions taken in the real world and the energy required to learn a policy. Training reinforcement learning agents from high dimensional image representations can be very expensive and time-consuming. Autoencoders are deep neural networks used to compress high dimensional data such as pixelated images into small latent representations. This compression model is vital to efficiently learn policies, especially when learning on embedded systems. We have implemented this model on the NVIDIA Jetson TX2 embedded GPU, and evaluated the power consumption, throughput, and energy consumption of the autoencoders for various CPU/GPU core combinations, frequencies, and model parameters. Additionally, we have shown the reconstructions generated by the autoencoder to analyze the quality of the generated compressed representation and also the performance of the reinforcement learning agent. Finally, we have presented an assessment of the viability of training these models on embedded systems and their usefulness in developing autonomous policies. Using autoencoders, we were able to achieve 4-5 $\times$ improved performance compared to a baseline RL agent with a convolutional feature extractor, while using less than 2W of power.
2307.07935
Jinlong Li
Jinlong Li, Runsheng Xu, Xinyu Liu, Baolu Li, Qin Zou, Jiaqi Ma, Hongkai Yu
S2R-ViT for Multi-Agent Cooperative Perception: Bridging the Gap from Simulation to Reality
Accepted by the 2024 IEEE International Conference on Robotics and Automation (ICRA)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Due to the lack of real multi-agent data and the time-consuming nature of labeling, existing multi-agent cooperative perception algorithms usually rely on simulated sensor data for training and validation. However, perception performance degrades when these simulation-trained models are deployed in the real world, due to the significant domain gap between the simulated and real data. In this paper, we propose the first Simulation-to-Reality transfer learning framework for multi-agent cooperative perception using a novel Vision Transformer, named S2R-ViT, which considers both the Deployment Gap and the Feature Gap between simulated and real data. We investigate the effects of these two types of domain gaps and propose a novel uncertainty-aware vision transformer to effectively relieve the Deployment Gap, and an agent-based feature adaptation module with inter-agent and ego-agent discriminators to reduce the Feature Gap. Our extensive experiments on the public multi-agent cooperative perception datasets OPV2V and V2V4Real demonstrate that the proposed S2R-ViT can effectively bridge the gap from simulation to reality and significantly outperforms other methods for point cloud-based 3D object detection.
[ { "created": "Sun, 16 Jul 2023 03:54:10 GMT", "version": "v1" }, { "created": "Tue, 18 Jul 2023 22:33:55 GMT", "version": "v2" }, { "created": "Tue, 26 Sep 2023 18:01:44 GMT", "version": "v3" }, { "created": "Tue, 20 Feb 2024 20:50:55 GMT", "version": "v4" } ]
2024-02-22
[ [ "Li", "Jinlong", "" ], [ "Xu", "Runsheng", "" ], [ "Liu", "Xinyu", "" ], [ "Li", "Baolu", "" ], [ "Zou", "Qin", "" ], [ "Ma", "Jiaqi", "" ], [ "Yu", "Hongkai", "" ] ]
Due to the lack of real multi-agent data and the time-consuming nature of labeling, existing multi-agent cooperative perception algorithms usually rely on simulated sensor data for training and validation. However, perception performance degrades when these simulation-trained models are deployed in the real world, due to the significant domain gap between the simulated and real data. In this paper, we propose the first Simulation-to-Reality transfer learning framework for multi-agent cooperative perception using a novel Vision Transformer, named S2R-ViT, which considers both the Deployment Gap and the Feature Gap between simulated and real data. We investigate the effects of these two types of domain gaps and propose a novel uncertainty-aware vision transformer to effectively relieve the Deployment Gap, and an agent-based feature adaptation module with inter-agent and ego-agent discriminators to reduce the Feature Gap. Our extensive experiments on the public multi-agent cooperative perception datasets OPV2V and V2V4Real demonstrate that the proposed S2R-ViT can effectively bridge the gap from simulation to reality and significantly outperforms other methods for point cloud-based 3D object detection.
1811.12099
Felix Rath
Felix Rath, Daniel Schemmel, Klaus Wehrle
Interoperability-Guided Testing of QUIC Implementations using Symbolic Execution
6 pages
null
10.1145/3284850.3284853
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main reason for the standardization of network protocols, like QUIC, is to ensure interoperability between implementations, which poses a challenging task. Manual tests are currently used to test the different existing implementations for interoperability, but given the complex nature of network protocols, it is hard to cover all possible edge cases. State-of-the-art automated software testing techniques, such as Symbolic Execution (SymEx), have proven themselves capable of analyzing complex real-world software and finding hard-to-detect bugs. We present a SymEx-based method for finding interoperability issues in QUIC implementations, and explore its merit in a case study that analyzes the interoperability of picoquic and QUANT. We find that, while SymEx is able to analyze deep interactions between different implementations and uncovers several bugs, in order to enable efficient interoperability testing, implementations need to provide additional information about their current protocol state.
[ { "created": "Thu, 29 Nov 2018 12:41:38 GMT", "version": "v1" } ]
2018-11-30
[ [ "Rath", "Felix", "" ], [ "Schemmel", "Daniel", "" ], [ "Wehrle", "Klaus", "" ] ]
The main reason for the standardization of network protocols, like QUIC, is to ensure interoperability between implementations, which poses a challenging task. Manual tests are currently used to test the different existing implementations for interoperability, but given the complex nature of network protocols, it is hard to cover all possible edge cases. State-of-the-art automated software testing techniques, such as Symbolic Execution (SymEx), have proven themselves capable of analyzing complex real-world software and finding hard-to-detect bugs. We present a SymEx-based method for finding interoperability issues in QUIC implementations, and explore its merit in a case study that analyzes the interoperability of picoquic and QUANT. We find that, while SymEx is able to analyze deep interactions between different implementations and uncovers several bugs, in order to enable efficient interoperability testing, implementations need to provide additional information about their current protocol state.
2107.11208
Simon Burton
Simon Burton
Entropy, Derivation Operators and Huffman Trees
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
We build a theory of binary trees on finite multisets that categorifies, or operationalizes, the entropy of a finite probability distribution. Multisets operationalize probabilities as the event outcomes of an experiment. Huffman trees operationalize the entropy of the distribution of these events. We show how the derivation property of the entropy of a joint distribution lifts to Huffman trees.
[ { "created": "Fri, 23 Jul 2021 13:08:37 GMT", "version": "v1" } ]
2021-07-26
[ [ "Burton", "Simon", "" ] ]
We build a theory of binary trees on finite multisets that categorifies, or operationalizes, the entropy of a finite probability distribution. Multisets operationalize probabilities as the event outcomes of an experiment. Huffman trees operationalize the entropy of the distribution of these events. We show how the derivation property of the entropy of a joint distribution lifts to Huffman trees.
1802.02178
Ruizhou Ding
Ruizhou Ding, Zeye Liu, Rongye Shi, Diana Marculescu, and R. D. Blanton
LightNN: Filling the Gap between Conventional Deep Neural Networks and Binarized Networks
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Application-specific integrated circuit (ASIC) implementations for Deep Neural Networks (DNNs) have been adopted in many systems because of their higher classification speed. However, although they may be characterized by better accuracy, larger DNNs require significant energy and area, thereby limiting their wide adoption. The energy consumption of DNNs is driven by both memory accesses and computation. Binarized Neural Networks (BNNs), as a trade-off between accuracy and energy consumption, can achieve great energy reduction, and have good accuracy for large DNNs due to their regularization effect. However, BNNs show poor accuracy when a smaller DNN configuration is adopted. In this paper, we propose a new DNN model, LightNN, which replaces multiplications with one shift or a constrained number of shifts and adds. For a fixed DNN configuration, LightNNs have better accuracy at a slight energy increase than BNNs, yet are more energy efficient with only slightly less accuracy than conventional DNNs. Therefore, LightNNs provide more options for hardware designers to make trade-offs between accuracy and energy. Moreover, for large DNN configurations, LightNNs have a regularization effect, making them better in accuracy than conventional DNNs. These conclusions are verified by experiments using the MNIST and CIFAR-10 datasets for different DNN configurations.
[ { "created": "Sat, 2 Dec 2017 21:34:39 GMT", "version": "v1" } ]
2018-02-08
[ [ "Ding", "Ruizhou", "" ], [ "Liu", "Zeye", "" ], [ "Shi", "Rongye", "" ], [ "Marculescu", "Diana", "" ], [ "Blanton", "R. D.", "" ] ]
Application-specific integrated circuit (ASIC) implementations for Deep Neural Networks (DNNs) have been adopted in many systems because of their higher classification speed. However, although they may be characterized by better accuracy, larger DNNs require significant energy and area, thereby limiting their wide adoption. The energy consumption of DNNs is driven by both memory accesses and computation. Binarized Neural Networks (BNNs), as a trade-off between accuracy and energy consumption, can achieve great energy reduction, and have good accuracy for large DNNs due to their regularization effect. However, BNNs show poor accuracy when a smaller DNN configuration is adopted. In this paper, we propose a new DNN model, LightNN, which replaces multiplications with one shift or a constrained number of shifts and adds. For a fixed DNN configuration, LightNNs have better accuracy at a slight energy increase than BNNs, yet are more energy efficient with only slightly less accuracy than conventional DNNs. Therefore, LightNNs provide more options for hardware designers to make trade-offs between accuracy and energy. Moreover, for large DNN configurations, LightNNs have a regularization effect, making them better in accuracy than conventional DNNs. These conclusions are verified by experiments using the MNIST and CIFAR-10 datasets for different DNN configurations.
2303.15671
Suncheng Xiang
Qingzhong Chen, Shilun Cai, Crystal Cai, Zefang Yu, Dahong Qian, Suncheng Xiang
Colo-SCRL: Self-Supervised Contrastive Representation Learning for Colonoscopic Video Retrieval
Accepted by ICME 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Colonoscopic video retrieval, which is a critical part of polyp treatment, has great clinical significance for the prevention and treatment of colorectal cancer. However, retrieval models trained on action recognition datasets usually produce unsatisfactory retrieval results on colonoscopic datasets due to the large domain gap between them. To seek a solution to this problem, we construct a large-scale colonoscopic dataset named Colo-Pair for medical practice. Based on this dataset, a simple yet effective training method called Colo-SCRL is proposed for more robust representation learning. It aims to refine general knowledge from colonoscopies through masked autoencoder-based reconstruction and momentum contrast to improve retrieval performance. To the best of our knowledge, this is the first attempt to employ the contrastive learning paradigm for medical video retrieval. Empirical results show that our method significantly outperforms current state-of-the-art methods in the colonoscopic video retrieval task.
[ { "created": "Tue, 28 Mar 2023 01:27:23 GMT", "version": "v1" } ]
2023-03-29
[ [ "Chen", "Qingzhong", "" ], [ "Cai", "Shilun", "" ], [ "Cai", "Crystal", "" ], [ "Yu", "Zefang", "" ], [ "Qian", "Dahong", "" ], [ "Xiang", "Suncheng", "" ] ]
Colonoscopic video retrieval, which is a critical part of polyp treatment, has great clinical significance for the prevention and treatment of colorectal cancer. However, retrieval models trained on action recognition datasets usually produce unsatisfactory retrieval results on colonoscopic datasets due to the large domain gap between them. To seek a solution to this problem, we construct a large-scale colonoscopic dataset named Colo-Pair for medical practice. Based on this dataset, a simple yet effective training method called Colo-SCRL is proposed for more robust representation learning. It aims to refine general knowledge from colonoscopies through masked autoencoder-based reconstruction and momentum contrast to improve retrieval performance. To the best of our knowledge, this is the first attempt to employ the contrastive learning paradigm for medical video retrieval. Empirical results show that our method significantly outperforms current state-of-the-art methods in the colonoscopic video retrieval task.
2006.07486
Zihan Tan
Julia Chuzhoy, Merav Parter, Zihan Tan
On Packing Low-Diameter Spanning Trees
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Edge connectivity of a graph is one of the most fundamental graph-theoretic concepts. The celebrated tree packing theorem of Tutte and Nash-Williams from 1961 states that every $k$-edge connected graph $G$ contains a collection $\cal{T}$ of $\lfloor k/2 \rfloor$ edge-disjoint spanning trees, that we refer to as a tree packing; the diameter of the tree packing $\cal{T}$ is the largest diameter of any tree in $\cal{T}$. A desirable property of a tree packing, that is both sufficient and necessary for leveraging the high connectivity of a graph in distributed communication, is that its diameter is low. Yet, despite extensive research in this area, it is still unclear how to compute a tree packing whose diameter is sublinear in $|V(G)|$ in a low-diameter graph $G$, or alternatively how to show that such a packing does not exist. In this paper we provide the first non-trivial upper and lower bounds on the diameter of tree packings. First, we show that, for every $k$-edge connected $n$-vertex graph $G$ of diameter $D$, there is a tree packing $\cal{T}$ of size $\Omega(k)$, diameter $O((101k\log n)^D)$, that causes edge-congestion at most $2$. Second, we show that for every $k$-edge connected $n$-vertex graph $G$ of diameter $D$, the diameter of $G[p]$ is $O(k^{D(D+1)/2})$ with high probability, where $G[p]$ is obtained by sampling each edge of $G$ independently with probability $p=\Theta(\log n/k)$. This provides a packing of $\Omega(k/\log n)$ edge-disjoint trees of diameter at most $O(k^{(D(D+1)/2)})$ each. We then prove that these two results are nearly tight. Lastly, we show that if every pair of vertices in a graph has $k$ edge-disjoint paths of length at most $D$ connecting them, then there is a tree packing of size $k$, diameter $O(D\log n)$, causing edge-congestion $O(\log n)$. We also provide several applications of low-diameter tree packing in distributed computation.
[ { "created": "Fri, 12 Jun 2020 21:54:03 GMT", "version": "v1" } ]
2020-06-16
[ [ "Chuzhoy", "Julia", "" ], [ "Parter", "Merav", "" ], [ "Tan", "Zihan", "" ] ]
Edge connectivity of a graph is one of the most fundamental graph-theoretic concepts. The celebrated tree packing theorem of Tutte and Nash-Williams from 1961 states that every $k$-edge connected graph $G$ contains a collection $\cal{T}$ of $\lfloor k/2 \rfloor$ edge-disjoint spanning trees, that we refer to as a tree packing; the diameter of the tree packing $\cal{T}$ is the largest diameter of any tree in $\cal{T}$. A desirable property of a tree packing, that is both sufficient and necessary for leveraging the high connectivity of a graph in distributed communication, is that its diameter is low. Yet, despite extensive research in this area, it is still unclear how to compute a tree packing whose diameter is sublinear in $|V(G)|$ in a low-diameter graph $G$, or alternatively how to show that such a packing does not exist. In this paper we provide the first non-trivial upper and lower bounds on the diameter of tree packings. First, we show that, for every $k$-edge connected $n$-vertex graph $G$ of diameter $D$, there is a tree packing $\cal{T}$ of size $\Omega(k)$, diameter $O((101k\log n)^D)$, that causes edge-congestion at most $2$. Second, we show that for every $k$-edge connected $n$-vertex graph $G$ of diameter $D$, the diameter of $G[p]$ is $O(k^{D(D+1)/2})$ with high probability, where $G[p]$ is obtained by sampling each edge of $G$ independently with probability $p=\Theta(\log n/k)$. This provides a packing of $\Omega(k/\log n)$ edge-disjoint trees of diameter at most $O(k^{(D(D+1)/2)})$ each. We then prove that these two results are nearly tight. Lastly, we show that if every pair of vertices in a graph has $k$ edge-disjoint paths of length at most $D$ connecting them, then there is a tree packing of size $k$, diameter $O(D\log n)$, causing edge-congestion $O(\log n)$. We also provide several applications of low-diameter tree packing in distributed computation.
2107.09862
Bruno Benedetti
Bruno Benedetti, Crystal Lai, Davide Lofano, and Frank H. Lutz
Random Simple-Homotopy Theory
23 pages, 6 figures, 5 tables
null
null
null
cs.CG math.AT math.CO math.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We implement an algorithm RSHT (Random Simple-Homotopy) to study the simple-homotopy types of simplicial complexes, with a particular focus on contractible spaces and on finding substructures in higher-dimensional complexes. The algorithm combines elementary simplicial collapses with pure elementary expansions. For triangulated d-manifolds with d < 7, we show that RSHT reduces to (random) bistellar flips. Among the many examples on which we test RSHT, we describe an explicit 15-vertex triangulation of the Abalone, and more generally, (14k+1)-vertex triangulations of Bing's houses with k rooms, which all can be deformed to a point using only six pure elementary expansions.
[ { "created": "Wed, 21 Jul 2021 03:05:11 GMT", "version": "v1" }, { "created": "Sun, 26 Sep 2021 13:48:55 GMT", "version": "v2" } ]
2021-09-28
[ [ "Benedetti", "Bruno", "" ], [ "Lai", "Crystal", "" ], [ "Lofano", "Davide", "" ], [ "Lutz", "Frank H.", "" ] ]
We implement an algorithm RSHT (Random Simple-Homotopy) to study the simple-homotopy types of simplicial complexes, with a particular focus on contractible spaces and on finding substructures in higher-dimensional complexes. The algorithm combines elementary simplicial collapses with pure elementary expansions. For triangulated d-manifolds with d < 7, we show that RSHT reduces to (random) bistellar flips. Among the many examples on which we test RSHT, we describe an explicit 15-vertex triangulation of the Abalone, and more generally, (14k+1)-vertex triangulations of Bing's houses with k rooms, which all can be deformed to a point using only six pure elementary expansions.
1709.10177
Maria-Laura Torrente
Maria-Laura Torrente, Silvia Biasotti, Bianca Falcidieno
Recognition of feature curves on 3D shapes using an algebraic approach to Hough transforms
null
Pattern Recognition, Volume 73, Pages 1-288 (January 2018)
10.1016/j.patcog.2017.08.008
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature curves are largely adopted to highlight shape features, such as sharp lines, or to divide surfaces into meaningful segments, like convex or concave regions. Extracting these curves is not sufficient to convey prominent and meaningful information about a shape. We first have to separate the curves belonging to features from those caused by noise, and then to select the lines that describe non-trivial portions of a surface. The automatic detection of such features is crucial for the identification and/or annotation of relevant parts of a given shape. To this end, we turn to the Hough transform (HT), a feature extraction technique widely used in image analysis, computer vision and digital image processing; for 3D shapes, however, the extraction of salient feature curves is still an open problem. Thanks to algebraic geometry concepts, the HT technique has recently been extended to include a vast class of algebraic curves, thus proving to be a competitive tool for yielding an explicit representation of the equations of the diverse feature lines. In this paper, for the first time we apply this novel extension of the HT technique to the realm of 3D shapes in order to identify and localize semantic features like patterns, decorations or anatomical details on 3D objects (both complete and fragments), even when the features are partially damaged or incomplete. The method recognizes various features, possibly compound, and selects the most suitable feature profiles among families of algebraic curves.
[ { "created": "Thu, 28 Sep 2017 21:36:05 GMT", "version": "v1" } ]
2017-10-02
[ [ "Torrente", "Maria-Laura", "" ], [ "Biasotti", "Silvia", "" ], [ "Falcidieno", "Bianca", "" ] ]
Feature curves are largely adopted to highlight shape features, such as sharp lines, or to divide surfaces into meaningful segments, like convex or concave regions. Extracting these curves is not sufficient to convey prominent and meaningful information about a shape. We first have to separate the curves belonging to features from those caused by noise, and then to select the lines that describe non-trivial portions of a surface. The automatic detection of such features is crucial for the identification and/or annotation of relevant parts of a given shape. To this end, we turn to the Hough transform (HT), a feature extraction technique widely used in image analysis, computer vision and digital image processing; for 3D shapes, however, the extraction of salient feature curves is still an open problem. Thanks to algebraic geometry concepts, the HT technique has recently been extended to include a vast class of algebraic curves, thus proving to be a competitive tool for yielding an explicit representation of the equations of the diverse feature lines. In this paper, for the first time we apply this novel extension of the HT technique to the realm of 3D shapes in order to identify and localize semantic features like patterns, decorations or anatomical details on 3D objects (both complete and fragments), even when the features are partially damaged or incomplete. The method recognizes various features, possibly compound, and selects the most suitable feature profiles among families of algebraic curves.
1901.03396
Ryan Webster
Ryan Webster, Julien Rabin, Loic Simon, Frederic Jurie
Detecting Overfitting of Deep Generative Networks via Latent Recovery
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
State-of-the-art deep generative networks are capable of producing images with such incredible realism that they can be suspected of memorizing training images. This is why it is not uncommon to include visualizations of training-set nearest neighbors, to suggest that generated images are not simply memorized. We demonstrate that this is not sufficient, which motivates the need to study memorization/overfitting of deep generators with more scrutiny. This paper addresses this question by i) showing how simple losses are highly effective at reconstructing images for deep generators and ii) analyzing the statistics of reconstruction errors when reconstructing training and validation images, which is the standard way to analyze overfitting in machine learning. Using this methodology, this paper shows that overfitting is not detectable in the pure GAN models proposed in the literature, in contrast with those using hybrid adversarial losses, which are amongst the most widely applied generative methods. The paper also shows that standard GAN evaluation metrics fail to capture memorization for some deep generators. Finally, the paper also shows how off-the-shelf GAN generators can be successfully applied to face inpainting and face super-resolution using the proposed reconstruction method, without hybrid adversarial losses.
[ { "created": "Wed, 9 Jan 2019 16:29:05 GMT", "version": "v1" } ]
2019-01-14
[ [ "Webster", "Ryan", "" ], [ "Rabin", "Julien", "" ], [ "Simon", "Loic", "" ], [ "Jurie", "Frederic", "" ] ]
State-of-the-art deep generative networks are capable of producing images with such incredible realism that they can be suspected of memorizing training images. This is why it is not uncommon to include visualizations of training-set nearest neighbors, to suggest that generated images are not simply memorized. We demonstrate that this is not sufficient, which motivates the need to study memorization/overfitting of deep generators with more scrutiny. This paper addresses this question by i) showing how simple losses are highly effective at reconstructing images for deep generators and ii) analyzing the statistics of reconstruction errors when reconstructing training and validation images, which is the standard way to analyze overfitting in machine learning. Using this methodology, this paper shows that overfitting is not detectable in the pure GAN models proposed in the literature, in contrast with those using hybrid adversarial losses, which are amongst the most widely applied generative methods. The paper also shows that standard GAN evaluation metrics fail to capture memorization for some deep generators. Finally, the paper also shows how off-the-shelf GAN generators can be successfully applied to face inpainting and face super-resolution using the proposed reconstruction method, without hybrid adversarial losses.
2406.12975
Xiaoze Liu
Xiaoze Liu, Ting Sun, Tianyang Xu, Feijie Wu, Cunxiang Wang, Xiaoqian Wang, Jing Gao
SHIELD: Evaluation and Defense Strategies for Copyright Compliance in LLM Text Generation
null
null
null
null
cs.CL cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have transformed machine learning but raised significant legal concerns due to their potential to produce text that infringes on copyrights, resulting in several high-profile lawsuits. The legal landscape is struggling to keep pace with these rapid advancements, with ongoing debates about whether generated text might plagiarize copyrighted materials. Current LLMs may infringe on copyrights or overly restrict non-copyrighted texts, leading to these challenges: (i) the need for a comprehensive evaluation benchmark to assess copyright compliance from multiple aspects; (ii) evaluating robustness against safeguard bypassing attacks; and (iii) developing effective defenses targeted against the generation of copyrighted text. To tackle these challenges, we introduce a curated dataset to evaluate methods, test attack strategies, and propose lightweight, real-time defenses to prevent the generation of copyrighted text, ensuring the safe and lawful use of LLMs. Our experiments demonstrate that current LLMs frequently output copyrighted text, and that jailbreaking attacks can significantly increase the volume of copyrighted output. Our proposed defense mechanisms significantly reduce the volume of copyrighted text generated by LLMs by effectively refusing malicious requests. Code is publicly available at https://github.com/xz-liu/SHIELD
[ { "created": "Tue, 18 Jun 2024 18:00:03 GMT", "version": "v1" } ]
2024-06-21
[ [ "Liu", "Xiaoze", "" ], [ "Sun", "Ting", "" ], [ "Xu", "Tianyang", "" ], [ "Wu", "Feijie", "" ], [ "Wang", "Cunxiang", "" ], [ "Wang", "Xiaoqian", "" ], [ "Gao", "Jing", "" ] ]
Large Language Models (LLMs) have transformed machine learning but raised significant legal concerns due to their potential to produce text that infringes on copyrights, resulting in several high-profile lawsuits. The legal landscape is struggling to keep pace with these rapid advancements, with ongoing debates about whether generated text might plagiarize copyrighted materials. Current LLMs may infringe on copyrights or overly restrict non-copyrighted texts, leading to these challenges: (i) the need for a comprehensive evaluation benchmark to assess copyright compliance from multiple aspects; (ii) evaluating robustness against safeguard bypassing attacks; and (iii) developing effective defenses targeted against the generation of copyrighted text. To tackle these challenges, we introduce a curated dataset to evaluate methods, test attack strategies, and propose lightweight, real-time defenses to prevent the generation of copyrighted text, ensuring the safe and lawful use of LLMs. Our experiments demonstrate that current LLMs frequently output copyrighted text, and that jailbreaking attacks can significantly increase the volume of copyrighted output. Our proposed defense mechanisms significantly reduce the volume of copyrighted text generated by LLMs by effectively refusing malicious requests. Code is publicly available at https://github.com/xz-liu/SHIELD
2104.14234
Jannis Clausius
Jannis Clausius, Sebastian D\"orner, Sebastian Cammerer, Stephan ten Brink
Serial vs. Parallel Turbo-Autoencoders and Accelerated Training for Learned Channel Codes
Submitted to ISTC 2021
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Attracted by its scalability towards practical codeword lengths, we revisit the idea of Turbo-autoencoders for end-to-end learning of PHY-layer communications. For this, we study the existing concepts of Turbo-autoencoders from the literature and compare the concept with state-of-the-art classical coding schemes. We propose a new component-wise training algorithm based on the idea of Gaussian a priori distributions that reduces the overall training time by almost an order of magnitude. Further, we propose a new serial architecture inspired by classical serially concatenated Turbo code structures and show that a carefully optimized interface between the two component autoencoders is required. To the best of our knowledge, these serial Turbo-autoencoder structures are the best known neural-network-based learned codes that can be trained from scratch without any required expert knowledge in the domain of channel codes.
[ { "created": "Thu, 29 Apr 2021 09:54:22 GMT", "version": "v1" }, { "created": "Thu, 22 Jul 2021 08:04:45 GMT", "version": "v2" } ]
2021-07-23
[ [ "Clausius", "Jannis", "" ], [ "Dörner", "Sebastian", "" ], [ "Cammerer", "Sebastian", "" ], [ "Brink", "Stephan ten", "" ] ]
Attracted by its scalability towards practical codeword lengths, we revisit the idea of Turbo-autoencoders for end-to-end learning of PHY-layer communications. For this, we study the existing concepts of Turbo-autoencoders from the literature and compare the concept with state-of-the-art classical coding schemes. We propose a new component-wise training algorithm based on the idea of Gaussian a priori distributions that reduces the overall training time by almost an order of magnitude. Further, we propose a new serial architecture inspired by classical serially concatenated Turbo code structures and show that a carefully optimized interface between the two component autoencoders is required. To the best of our knowledge, these serial Turbo-autoencoder structures are the best known neural-network-based learned codes that can be trained from scratch without any required expert knowledge in the domain of channel codes.
2312.11556
Juan A. Rodriguez
Juan A. Rodriguez, Shubham Agarwal, Issam H. Laradji, Pau Rodriguez, David Vazquez, Christopher Pal, and Marco Pedersoli
StarVector: Generating Scalable Vector Graphics Code from Images
null
null
null
null
cs.CV cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Scalable Vector Graphics (SVGs) have become integral in modern image rendering applications due to their infinite scalability in resolution, versatile usability, and editing capabilities. SVGs are particularly popular in the fields of web development and graphic design. Existing approaches for SVG modeling using deep learning often struggle with generating complex SVGs and are restricted to simpler ones that require extensive processing and simplification. This paper introduces StarVector, a multimodal SVG generation model that effectively integrates Code Generation Large Language Models (CodeLLMs) and vision models. Our approach utilizes a CLIP image encoder to extract visual representations from pixel-based images, which are then transformed into visual tokens via an adapter module. These visual tokens are pre-pended to the SVG token embeddings, and the sequence is modeled by the StarCoder model using next-token prediction, effectively learning to align the visual and code tokens. This enables StarVector to generate unrestricted SVGs that accurately represent pixel images. To evaluate StarVector's performance, we present SVG-Bench, a comprehensive benchmark for evaluating SVG methods across multiple datasets and relevant metrics. Within this benchmark, we introduce novel datasets including SVG-Stack, a large-scale dataset of real-world SVG examples, and use it to pre-train StarVector as a large foundation model for SVGs. Our results demonstrate significant enhancements in visual quality and complexity handling over current methods, marking a notable advancement in SVG generation technology. Code and models: https://github.com/joanrod/star-vector
[ { "created": "Sun, 17 Dec 2023 08:07:32 GMT", "version": "v1" } ]
2023-12-20
[ [ "Rodriguez", "Juan A.", "" ], [ "Agarwal", "Shubham", "" ], [ "Laradji", "Issam H.", "" ], [ "Rodriguez", "Pau", "" ], [ "Vazquez", "David", "" ], [ "Pal", "Christopher", "" ], [ "Pedersoli", "Marco", "" ] ]
Scalable Vector Graphics (SVGs) have become integral in modern image rendering applications due to their infinite scalability in resolution, versatile usability, and editing capabilities. SVGs are particularly popular in the fields of web development and graphic design. Existing approaches for SVG modeling using deep learning often struggle with generating complex SVGs and are restricted to simpler ones that require extensive processing and simplification. This paper introduces StarVector, a multimodal SVG generation model that effectively integrates Code Generation Large Language Models (CodeLLMs) and vision models. Our approach utilizes a CLIP image encoder to extract visual representations from pixel-based images, which are then transformed into visual tokens via an adapter module. These visual tokens are pre-pended to the SVG token embeddings, and the sequence is modeled by the StarCoder model using next-token prediction, effectively learning to align the visual and code tokens. This enables StarVector to generate unrestricted SVGs that accurately represent pixel images. To evaluate StarVector's performance, we present SVG-Bench, a comprehensive benchmark for evaluating SVG methods across multiple datasets and relevant metrics. Within this benchmark, we introduce novel datasets including SVG-Stack, a large-scale dataset of real-world SVG examples, and use it to pre-train StarVector as a large foundation model for SVGs. Our results demonstrate significant enhancements in visual quality and complexity handling over current methods, marking a notable advancement in SVG generation technology. Code and models: https://github.com/joanrod/star-vector
2011.06680
Pouya Agheli
Pouya Agheli, Mohammad Javad Emadi, Hamzeh Beyranvand
Cognitive RF-FSO Fronthaul Assignment in Cell-Free and User-Centric mMIMO Networks
14 pages, 10 figures, This work has been submitted for possible publication
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
The cell-free massive MIMO (CF-mMIMO) network and its user-centric (UC) version are considered promising techniques for the next generations of wireless networks. However, fronthaul and backhaul assignments are challenging issues in these networks. In this paper, energy efficiencies of uplink transmission for the CF- and UC-mMIMO networks are studied, wherein access points (APs) are connected to aggregation nodes (ANs) through radio frequency (RF) and/or free-space optical (FSO) fronthauls, and the ANs are connected to a central processing unit via fiber backhauls. The achievable data rates are derived by taking into account the effects of hardware non-ideality at the APs and ANs, FSO alignment and weather conditions. To have a robust and energy-efficient network, especially in the presence of FSO misalignment and adverse weather conditions, firstly, a cognitive RF--FSO fronthaul assignment algorithm is proposed at the cost of sharing the available RF bandwidth between the access and fronthaul links. Then, optimal power allocations at the users and APs are investigated, and two analytical approaches are proposed to solve the non-convex optimization problem. Through numerical results, we discuss how the cognitive RF--FSO fronthaul assignment achieves higher energy efficiency compared to that of FSO-only, RF-only, or simultaneously using RF and FSO fronthaul links, e.g., achieving up to $198\%$ higher energy efficiency under unfavorable conditions. Moreover, the effects of FSO misalignment, weather conditions, and power allocations on the performances of the CF- and UC-mMIMO networks are discussed.
[ { "created": "Thu, 12 Nov 2020 22:56:42 GMT", "version": "v1" }, { "created": "Wed, 18 Nov 2020 10:50:01 GMT", "version": "v2" }, { "created": "Tue, 9 Mar 2021 10:37:16 GMT", "version": "v3" } ]
2021-03-10
[ [ "Agheli", "Pouya", "" ], [ "Emadi", "Mohammad Javad", "" ], [ "Beyranvand", "Hamzeh", "" ] ]
The cell-free massive MIMO (CF-mMIMO) network and its user-centric (UC) version are considered promising techniques for the next generations of wireless networks. However, fronthaul and backhaul assignments are challenging issues in these networks. In this paper, energy efficiencies of uplink transmission for the CF- and UC-mMIMO networks are studied, wherein access points (APs) are connected to aggregation nodes (ANs) through radio frequency (RF) and/or free-space optical (FSO) fronthauls, and the ANs are connected to a central processing unit via fiber backhauls. The achievable data rates are derived by taking into account the effects of hardware non-ideality at the APs and ANs, FSO alignment and weather conditions. To have a robust and energy-efficient network, especially in the presence of FSO misalignment and adverse weather conditions, firstly, a cognitive RF--FSO fronthaul assignment algorithm is proposed at the cost of sharing the available RF bandwidth between the access and fronthaul links. Then, optimal power allocations at the users and APs are investigated, and two analytical approaches are proposed to solve the non-convex optimization problem. Through numerical results, we discuss how the cognitive RF--FSO fronthaul assignment achieves higher energy efficiency compared to that of FSO-only, RF-only, or simultaneously using RF and FSO fronthaul links, e.g., achieving up to $198\%$ higher energy efficiency under unfavorable conditions. Moreover, the effects of FSO misalignment, weather conditions, and power allocations on the performances of the CF- and UC-mMIMO networks are discussed.
2104.03631
Daniel Reti
Daniel Reti, Daniel Fraunholz, Janis Zemitis, Daniel Schneider, Hans Dieter Schotten
Deep Down the Rabbit Hole: On References in Networks of Decoy Elements
null
2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security)
10.1109/CyberSecurity49315.2020.9138850
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deception technology has proven to be a sound approach against threats to information systems. Aside from well-established honeypots, decoy elements, also known as honeytokens, are an excellent method to address various types of threats. Decoy elements cause distraction and uncertainty for an attacker and help detect malicious activity. Deception is meant to complement firewalls and intrusion detection systems. Insider threats in particular may be mitigated with deception methods. While current approaches consider the use of multiple decoy elements as well as context-sensitivity, they do not sufficiently describe a relationship between individual elements. In this work, inter-referencing decoy elements are introduced as a plausible extension to existing deception frameworks, leading attackers along a path of decoy elements. A theoretical foundation is introduced, as well as a stochastic model and a reference implementation. It was found that the proposed system is suitable to enhance current decoy frameworks by adding a further dimension of inter-connectivity and therefore improve intrusion detection and prevention.
[ { "created": "Thu, 8 Apr 2021 09:34:05 GMT", "version": "v1" } ]
2021-04-09
[ [ "Reti", "Daniel", "" ], [ "Fraunholz", "Daniel", "" ], [ "Zemitis", "Janis", "" ], [ "Schneider", "Daniel", "" ], [ "Schotten", "Hans Dieter", "" ] ]
Deception technology has proven to be a sound approach against threats to information systems. Aside from well-established honeypots, decoy elements, also known as honeytokens, are an excellent method to address various types of threats. Decoy elements cause distraction and uncertainty for an attacker and help detect malicious activity. Deception is meant to complement firewalls and intrusion detection systems. Insider threats in particular may be mitigated with deception methods. While current approaches consider the use of multiple decoy elements as well as context-sensitivity, they do not sufficiently describe a relationship between individual elements. In this work, inter-referencing decoy elements are introduced as a plausible extension to existing deception frameworks, leading attackers along a path of decoy elements. A theoretical foundation is introduced, as well as a stochastic model and a reference implementation. It was found that the proposed system is suitable to enhance current decoy frameworks by adding a further dimension of inter-connectivity and therefore improve intrusion detection and prevention.