Dataset schema (field: type, length range):

id: stringlengths (9 to 10)
submitter: stringlengths (1 to 64)
authors: stringlengths (4 to 20.7k)
title: stringlengths (4 to 246)
comments: stringlengths (1 to 523)
journal-ref: stringlengths (4 to 404)
doi: stringlengths (11 to 153)
report-no: stringlengths (2 to 254)
categories: stringlengths (5 to 98)
license: stringclasses (9 values)
orig_abstract: stringlengths (14 to 3.35k)
versions: listlengths (1 to 60)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 1.35k)
abstract: stringlengths (11 to 3.34k)
id: 2403.06027
submitter: Adam Mahdi
authors: Felix H. Krones, Ben Walker, Guy Parsons, Terry Lyons, Adam Mahdi
title: Multimodal deep learning approach to predicting neurological recovery from coma after cardiac arrest
comments: 5 figures, 2 tables
journal-ref: null
doi: null
report-no: null
categories: cs.LG eess.SP
license: http://creativecommons.org/licenses/by/4.0/
abstract: This work showcases our team's (The BEEGees) contributions to the 2023 George B. Moody PhysioNet Challenge. The aim was to predict neurological recovery from coma following cardiac arrest using clinical data and time-series such as multi-channel EEG and ECG signals. Our modelling approach is multimodal, based on two-dimensional spectrogram representations derived from numerous EEG channels, alongside the integration of clinical data and features extracted directly from EEG recordings. Our submitted model achieved a Challenge score of $0.53$ on the hidden test set for predictions made $72$ hours after return of spontaneous circulation. Our study shows the efficacy and limitations of employing transfer learning in medical classification. With regard to prospective implementation, our analysis reveals that the performance of the model is strongly linked to the selection of a decision threshold and exhibits strong variability across data splits.
versions: [ { "created": "Sat, 9 Mar 2024 22:29:24 GMT", "version": "v1" } ]
update_date: 2024-03-12
authors_parsed: [ [ "Krones", "Felix H.", "" ], [ "Walker", "Ben", "" ], [ "Parsons", "Guy", "" ], [ "Lyons", "Terry", "" ], [ "Mahdi", "Adam", "" ] ]
id: 2407.02870
submitter: Noam Koren
authors: Noam Koren, Abigail Goldsteen, Ariel Farkash, Guy Amit
title: Membership Inference Attacks Against Time-Series Models
comments: 16 pages
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI
license: http://creativecommons.org/licenses/by/4.0/
abstract: Analyzing time-series data that may contain personal information, particularly in the medical field, presents serious privacy concerns. Sensitive health data from patients is often used to train machine-learning models for diagnostics and ongoing care. Assessing the privacy risk of such models is crucial to making knowledgeable decisions on whether to use a model in production, share it with third parties, or deploy it in patients' homes. Membership Inference Attacks (MIA) are a key method for this kind of evaluation; however, time-series prediction models have not been thoroughly studied in this context. We explore existing MIA techniques on time-series models, and introduce new features, focusing on the seasonality and trend components of the data. Seasonality is estimated using a multivariate Fourier transform, and a low-degree polynomial is used to approximate trends. We applied these techniques to various types of time-series models, using datasets from the health domain. Our results demonstrate that these new features enhance the effectiveness of MIAs in identifying membership, improving the understanding of privacy risks in medical data applications.
versions: [ { "created": "Wed, 3 Jul 2024 07:34:49 GMT", "version": "v1" } ]
update_date: 2024-07-04
authors_parsed: [ [ "Koren", "Noam", "" ], [ "Goldsteen", "Abigail", "" ], [ "Farkash", "Ariel", "" ], [ "Amit", "Guy", "" ] ]
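The trend/seasonality features described in the abstract above can be sketched in a few lines: a low-degree polynomial fit for the trend and the dominant Fourier magnitudes of the residual for seasonality. This is an illustrative reading of the description, not the authors' code; the function name and parameter choices are assumptions.

```python
import numpy as np

def trend_and_seasonality_features(series, poly_degree=2, n_freqs=3):
    """Extract trend and seasonality descriptors from a 1-D time series.

    Sketch only: trend is approximated by a low-degree polynomial,
    seasonality by the strongest Fourier magnitudes of the detrended
    residual, in the spirit of the features described in the abstract.
    """
    series = np.asarray(series, dtype=float)
    t = np.arange(len(series), dtype=float)

    # Trend: least-squares fit of a low-degree polynomial.
    trend_coeffs = np.polyfit(t, series, deg=poly_degree)
    detrended = series - np.polyval(trend_coeffs, t)

    # Seasonality: magnitudes of the strongest frequencies (skip DC).
    spectrum = np.abs(np.fft.rfft(detrended))
    top = np.sort(spectrum[1:])[-n_freqs:]

    return np.concatenate([trend_coeffs, top])
```

Such feature vectors would then be fed, alongside the usual loss-based signals, to the membership-inference classifier.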
id: 2311.12341
submitter: Ziye Qin
authors: Ziye Qin and Ang Ji and Zhanbo Sun and Guoyuan Wu and Peng Hao and Xishun Liao
title: Game Theoretic Application to Intersection Management: A Literature Review
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.GT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The emergence of vehicle-to-everything (V2X) technology offers new insights into intersection management. This, however, has also presented new challenges, such as the need to understand and model the interactions of traffic participants, including their competition and cooperation behaviors. Game theory has been widely adopted to study rationally selfish or cooperative behaviors during interactions and has been applied to advanced intersection management. In this paper, we review the application of game theory to intersection management and sort out relevant studies under various levels of intelligence and connectivity. First, the problem of urban intersection management and its challenges are briefly introduced. The basic elements of game theory specifically for intersection applications are then summarized. Next, we present the game-theoretic models and solutions that have been applied to intersection management. Finally, the limitations and potential opportunities for subsequent studies within the game-theoretic application to intersection management are discussed.
versions: [ { "created": "Tue, 21 Nov 2023 04:25:08 GMT", "version": "v1" } ]
update_date: 2023-11-22
authors_parsed: [ [ "Qin", "Ziye", "" ], [ "Ji", "Ang", "" ], [ "Sun", "Zhanbo", "" ], [ "Wu", "Guoyuan", "" ], [ "Hao", "Peng", "" ], [ "Liao", "Xishun", "" ] ]
id: 1011.5677
submitter: Sachin Adlakha
authors: Sachin Adlakha and Ramesh Johari
title: Mean Field Equilibrium in Dynamic Games with Complementarities
comments: 56 pages, 5 figures
journal-ref: null
doi: null
report-no: null
categories: cs.GT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We study a class of stochastic dynamic games that exhibit strategic complementarities between players; formally, in the games we consider, the payoff of a player has increasing differences between her own state and the empirical distribution of the states of other players. Such games can be used to model a diverse set of applications, including network security models, recommender systems, and dynamic search in markets. Stochastic games are generally difficult to analyze, and these difficulties are only exacerbated when the number of players is large (as might be the case in the preceding examples). We consider an approximation methodology called mean field equilibrium to study these games. In such an equilibrium, each player reacts to only the long run average state of other players. We find necessary conditions for the existence of a mean field equilibrium in such games. Furthermore, as a simple consequence of this existence theorem, we obtain several natural monotonicity properties. We show that there exist a "largest" and a "smallest" equilibrium among all those where the equilibrium strategy used by a player is nondecreasing, and we also show that players converge to each of these equilibria via natural myopic learning dynamics; as we argue, these dynamics are more reasonable than the standard best response dynamics. We also provide sensitivity results, where we quantify how the equilibria of such games move in response to changes in parameters of the game (e.g., the introduction of incentives to players).
versions: [ { "created": "Thu, 25 Nov 2010 19:31:47 GMT", "version": "v1" } ]
update_date: 2010-12-13
authors_parsed: [ [ "Adlakha", "Sachin", "" ], [ "Johari", "Ramesh", "" ] ]
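The "increasing differences" condition named in the abstract above has a standard formal statement (the textbook supermodularity definition, not quoted from the paper): writing $x$ for a player's own state, $\mu$ for the empirical distribution of the other players' states, and $\pi$ for the payoff,

```latex
\pi(x', \mu') - \pi(x, \mu') \;\ge\; \pi(x', \mu) - \pi(x, \mu)
\quad \text{whenever } x' \ge x \text{ and } \mu' \succeq \mu,
```

where $\succeq$ denotes a suitable stochastic order on distributions. Intuitively, the marginal value of raising one's own state grows when the population's states are higher, which is what drives the monotone equilibrium structure the abstract describes.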
id: 2211.04927
submitter: Hanwei Zhu
authors: Hanwei Zhu, Baoliang Chen, Lingyu Zhu, Shiqi Wang, and Weisi Lin
title: DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV eess.IV
license: http://creativecommons.org/licenses/by/4.0/
abstract: ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models. Such a remarkable byproduct has often been identified as an emergent property in previous studies. In this work, we attribute such capability to the intrinsic texture-sensitive characteristic that classifies images using texture features. We fully exploit this characteristic to develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features. Specifically, we compute the distance correlation, a highly promising yet relatively under-investigated statistic, between reference and distorted images in the deep feature domain. In addition, the distance correlation quantifies both linear and nonlinear feature relationships, which is far beyond the widely used first-order and second-order statistics in the feature space. We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets, one perceptual similarity dataset, two texture similarity datasets, and one geometric transformation dataset. Moreover, we optimize the proposed model to generate a broad spectrum of texture patterns, by treating the model as the style loss function for neural style transfer (NST). Extensive experiments demonstrate that the proposed texture synthesis and NST methods achieve the best quantitative and qualitative results. We release our code at https://github.com/h4nwei/DeepDC.
versions: [ { "created": "Wed, 9 Nov 2022 14:57:27 GMT", "version": "v1" }, { "created": "Fri, 24 Nov 2023 12:59:12 GMT", "version": "v2" } ]
update_date: 2023-11-27
authors_parsed: [ [ "Zhu", "Hanwei", "" ], [ "Chen", "Baoliang", "" ], [ "Zhu", "Lingyu", "" ], [ "Wang", "Shiqi", "" ], [ "Lin", "Weisi", "" ] ]
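The distance correlation statistic named in the abstract above has a simple empirical form. Below is a plain NumPy sketch of that statistic on generic feature vectors; the actual DeepDC model applies it to deep features of reference and distorted images (see the authors' repository for their implementation).

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two samples (rows = observations).

    Illustrative implementation of the classical (biased) estimator; it
    captures both linear and nonlinear dependence, equalling 0 only under
    independence in the population setting.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    if x.ndim == 1:
        x = x[:, None]
    if y.ndim == 1:
        y = y[:, None]

    def doubly_centered_distances(z):
        # Pairwise Euclidean distances, minus row, column, and grand means.
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    a = doubly_centered_distances(x)
    b = doubly_centered_distances(y)
    dcov2 = (a * b).mean()                      # squared distance covariance
    dvar = np.sqrt((a * a).mean() * (b * b).mean())
    return 0.0 if dvar == 0 else np.sqrt(max(dcov2, 0.0) / dvar)
```

Note that the statistic is invariant to translation and uniform scaling of either sample, which is convenient when comparing feature maps of different magnitudes.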
id: 2004.14257
submitter: Aman Madaan
authors: Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, Shrimai Prabhumoye
title: Politeness Transfer: A Tag and Generate Approach
comments: To appear at ACL 2020
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract: This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than 1.39 million instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag and generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with a comparable or better performance on style transfer accuracy. Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all the six style transfer tasks. The data and code are located at https://github.com/tag-and-generate.
versions: [ { "created": "Wed, 29 Apr 2020 15:08:53 GMT", "version": "v1" }, { "created": "Fri, 1 May 2020 22:33:41 GMT", "version": "v2" } ]
update_date: 2020-05-05
authors_parsed: [ [ "Madaan", "Aman", "" ], [ "Setlur", "Amrith", "" ], [ "Parekh", "Tanmay", "" ], [ "Poczos", "Barnabas", "" ], [ "Neubig", "Graham", "" ], [ "Yang", "Yiming", "" ], [ "Salakhutdinov", "Ruslan", "" ], [ "Black", "Alan W", "" ], [ "Prabhumoye", "Shrimai", "" ] ]
id: 1612.01492
submitter: Jennifer Iglesias
authors: Jennifer Iglesias and Rajmohan Rajaraman and R Ravi and Ravi Sundaram
title: Plane Gossip: Approximating rumor spread in planar graphs
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We study the design of schedules for multi-commodity multicast; we are given an undirected graph $G$ and a collection of source destination pairs, and the goal is to schedule a minimum-length sequence of matchings that connects every source with its respective destination. Multi-commodity multicast models a classic information dissemination problem in networks where the primary communication constraint is the number of connections that a node can make, not link bandwidth. Multi-commodity multicast is closely related to the problem of finding a subgraph, $H$, of optimal poise, where the poise is defined as the sum of the maximum degree of $H$ and the maximum distance between any source-destination pair in $H$. We first show that the minimum poise subgraph for single-commodity multicast can be approximated to within a factor of $O(\log k)$ with respect to the value of a natural LP relaxation in an instance with $k$ terminals. This is the first upper bound on the integrality gap of the natural LP. Using this poise result and shortest-path separators in planar graphs, we obtain a $O(\log^3 k\log n/(\log\log n))$-approximation for multi-commodity multicast for planar graphs. We also study the minimum-time radio gossip problem in planar graphs where a message from each node must be transmitted to all other nodes under a model where nodes can broadcast to all neighbors in a single step but only nodes with a single broadcasting neighbor get a message. We give an $O(\log^2 n)$-approximation for radio gossip in planar graphs breaking previous barriers. This is the first bound for radio gossip that does not rely on the maximum degree of the graph. Finally, we show that our techniques for planar graphs extend to graphs with excluded minors. We establish polylogarithmic-approximation algorithms for both multi-commodity multicast and radio gossip problems in minor-free graphs.
versions: [ { "created": "Mon, 5 Dec 2016 19:41:00 GMT", "version": "v1" }, { "created": "Fri, 14 Jul 2017 20:07:45 GMT", "version": "v2" } ]
update_date: 2017-07-18
authors_parsed: [ [ "Iglesias", "Jennifer", "" ], [ "Rajaraman", "Rajmohan", "" ], [ "Ravi", "R", "" ], [ "Sundaram", "Ravi", "" ] ]
id: 2303.14476
submitter: Xiaoru Yuan
authors: Can Liu, Yu Zhang, Cong Wu, Chen Li and Xiaoru Yuan
title: A Spatial-Constraint Model for Manipulating Static Visualizations
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.HC
license: http://creativecommons.org/licenses/by/4.0/
abstract: We propose a spatial-constraint approach for modeling spatial-based interactions and enabling interactive visualizations, which involves the manipulation of visualizations through selection, filtering, navigation, arrangement, and aggregation. We propose a system that activates static visualizations by adding intelligent interactions, achieved by associating static visual objects with forces. Our force-directed technique facilitates smooth animated transitions of the visualizations between different interaction states. We showcase the effectiveness of our technique through usage scenarios that involve activating visualizations in real-world settings.
versions: [ { "created": "Sat, 25 Mar 2023 14:09:18 GMT", "version": "v1" }, { "created": "Wed, 20 Mar 2024 06:49:32 GMT", "version": "v2" } ]
update_date: 2024-03-21
authors_parsed: [ [ "Liu", "Can", "" ], [ "Zhang", "Yu", "" ], [ "Wu", "Cong", "" ], [ "Li", "Chen", "" ], [ "Yuan", "Xiaoru", "" ] ]
id: 2406.04356
submitter: Yi Yao
authors: Yi Yao and Jun Wang and Yabai Hu and Lifeng Wang and Yi Zhou and Jack Chen and Xuming Gai and Zhenming Wang and Wenjun Liu
title: BugBlitz-AI: An Intelligent QA Assistant
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SE cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The evolution of software testing from manual to automated methods has significantly influenced quality assurance (QA) practices. However, challenges persist in post-execution phases, particularly in result analysis and reporting. Traditional post-execution validation phases require manual intervention for result analysis and report generation, leading to inefficiencies and potential development cycle delays. This paper introduces BugBlitz-AI, an AI-powered validation toolkit designed to enhance end-to-end test automation by automating result analysis and bug reporting processes. BugBlitz-AI leverages recent advancements in artificial intelligence to reduce the time-intensive tasks of manual result analysis and report generation, allowing QA teams to focus more on crucial aspects of product quality. By adopting BugBlitz-AI, organizations can advance automated testing practices and integrate AI into QA processes, ensuring higher product quality and faster time-to-market. The paper outlines BugBlitz-AI's architecture, discusses related work, details its quality enhancement strategies, and presents results demonstrating its effectiveness in real-world scenarios.
versions: [ { "created": "Fri, 17 May 2024 11:09:10 GMT", "version": "v1" } ]
update_date: 2024-06-10
authors_parsed: [ [ "Yao", "Yi", "" ], [ "Wang", "Jun", "" ], [ "Hu", "Yabai", "" ], [ "Wang", "Lifeng", "" ], [ "Zhou", "Yi", "" ], [ "Chen", "Jack", "" ], [ "Gai", "Xuming", "" ], [ "Wang", "Zhenming", "" ], [ "Liu", "Wenjun", "" ] ]
id: 1601.00286
submitter: Christian Lorenz Staudt
authors: Michael Hamann, Gerd Lindner, Henning Meyerhenke, Christian L. Staudt, Dorothea Wagner
title: Structure-Preserving Sparsification Methods for Social Networks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SI cs.DC physics.soc-ph
license: http://creativecommons.org/licenses/by/4.0/
abstract: Sparsification reduces the size of networks while preserving structural and statistical properties of interest. Various sparsifying algorithms have been proposed in different contexts. We contribute the first systematic conceptual and experimental comparison of \textit{edge sparsification} methods on a diverse set of network properties. It is shown that they can be understood as methods for rating edges by importance and then filtering globally or locally by these scores. We show that applying a local filtering technique improves the preservation of all kinds of properties. In addition, we propose a new sparsification method (\textit{Local Degree}) which preserves edges leading to local hub nodes. All methods are evaluated on a set of social networks from Facebook, Google+, Twitter and LiveJournal with respect to network properties including diameter, connected components, community structure, multiple node centrality measures and the behavior of epidemic simulations. In order to assess the preservation of the community structure, we also include experiments on synthetically generated networks with ground truth communities. Experiments with our implementations of the sparsification methods (included in the open-source network analysis tool suite NetworKit) show that many network properties can be preserved down to about 20\% of the original set of edges for sparse graphs with a reasonable density. The experimental results allow us to differentiate the behavior of different methods and show which method is suitable with respect to which property. While our Local Degree method is best for preserving connectivity and short distances, other newly introduced local variants are best for preserving the community structure.
versions: [ { "created": "Sun, 3 Jan 2016 12:28:37 GMT", "version": "v1" } ]
update_date: 2016-01-05
authors_parsed: [ [ "Hamann", "Michael", "" ], [ "Lindner", "Gerd", "" ], [ "Meyerhenke", "Henning", "" ], [ "Staudt", "Christian L.", "" ], [ "Wagner", "Dorothea", "" ] ]
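The Local Degree idea summarized above ("preserve edges leading to local hub nodes") can be sketched as a rate-then-filter pass: each node marks edges to its highest-degree neighbors, and an edge survives if either endpoint marks it. This is an illustrative plain-Python reading of the description with an assumed exponent parameter, not the authors' NetworKit implementation.

```python
import math
from collections import defaultdict

def local_degree_sparsify(edges, alpha=0.5):
    """Local Degree sparsification sketch.

    Each node v marks edges to its ceil(deg(v)**alpha) highest-degree
    neighbors; an edge is kept if either endpoint marks it. Smaller
    alpha keeps fewer edges.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}

    kept = set()
    for v, nbrs in adj.items():
        k = math.ceil(deg[v] ** alpha)
        # Prefer edges leading toward local hubs (high-degree neighbors).
        for u in sorted(nbrs, key=deg.get, reverse=True)[:k]:
            kept.add(tuple(sorted((u, v))))
    return sorted(kept)
```

Because every node retains at least one incident edge, this filter tends to preserve connectivity and short distances, consistent with the behavior reported in the abstract.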
2301.01413
Yuren Cong
Yuren Cong, Martin Renqiang Min, Li Erran Li, Bodo Rosenhahn, Michael Ying Yang
Attribute-Centric Compositional Text-to-Image Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the recent impressive breakthroughs in text-to-image generation, generative models have difficulty in capturing the data distribution of underrepresented attribute compositions while over-memorizing overrepresented attribute compositions, which raises public concerns about their robustness and fairness. To tackle this challenge, we propose ACTIG, an attribute-centric compositional text-to-image generation framework. We present an attribute-centric feature augmentation and a novel image-free training scheme, which greatly improves model's ability to generate images with underrepresented attributes. We further propose an attribute-centric contrastive loss to avoid overfitting to overrepresented attribute compositions. We validate our framework on the CelebA-HQ and CUB datasets. Extensive experiments show that the compositional generalization of ACTIG is outstanding, and our framework outperforms previous works in terms of image quality and text-image consistency.
[ { "created": "Wed, 4 Jan 2023 03:03:08 GMT", "version": "v1" } ]
2023-01-05
[ [ "Cong", "Yuren", "" ], [ "Min", "Martin Renqiang", "" ], [ "Li", "Li Erran", "" ], [ "Rosenhahn", "Bodo", "" ], [ "Yang", "Michael Ying", "" ] ]
Despite the recent impressive breakthroughs in text-to-image generation, generative models have difficulty in capturing the data distribution of underrepresented attribute compositions while over-memorizing overrepresented attribute compositions, which raises public concerns about their robustness and fairness. To tackle this challenge, we propose ACTIG, an attribute-centric compositional text-to-image generation framework. We present an attribute-centric feature augmentation and a novel image-free training scheme, which greatly improve the model's ability to generate images with underrepresented attributes. We further propose an attribute-centric contrastive loss to avoid overfitting to overrepresented attribute compositions. We validate our framework on the CelebA-HQ and CUB datasets. Extensive experiments show that the compositional generalization of ACTIG is outstanding, and our framework outperforms previous works in terms of image quality and text-image consistency.
1810.01489
Paul Liu
Paul Liu, Jan Vondrak
Submodular Optimization in the MapReduce Model
10 pages
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Submodular optimization has received significant attention in both practice and theory, as a wide array of problems in machine learning, auction theory, and combinatorial optimization have submodular structure. In practice, these problems often involve large amounts of data, and must be solved in a distributed way. One popular framework for running such distributed algorithms is MapReduce. In this paper, we present two simple algorithms for cardinality constrained submodular optimization in the MapReduce model: the first is a $(1/2-o(1))$-approximation in 2 MapReduce rounds, and the second is a $(1-1/e-\epsilon)$-approximation in $\frac{1+o(1)}{\epsilon}$ MapReduce rounds.
[ { "created": "Tue, 2 Oct 2018 20:08:27 GMT", "version": "v1" } ]
2018-10-04
[ [ "Liu", "Paul", "" ], [ "Vondrak", "Jan", "" ] ]
Submodular optimization has received significant attention in both practice and theory, as a wide array of problems in machine learning, auction theory, and combinatorial optimization have submodular structure. In practice, these problems often involve large amounts of data, and must be solved in a distributed way. One popular framework for running such distributed algorithms is MapReduce. In this paper, we present two simple algorithms for cardinality constrained submodular optimization in the MapReduce model: the first is a $(1/2-o(1))$-approximation in 2 MapReduce rounds, and the second is a $(1-1/e-\epsilon)$-approximation in $\frac{1+o(1)}{\epsilon}$ MapReduce rounds.
1904.11799
Mohit Sharma
Mohit Sharma, Jiayu Zhou, Junling Hu, George Karypis
Feature-based factorized Bilinear Similarity Model for Cold-Start Top-n Item Recommendation
9 pages, Proceedings of the 2015 SIAM International Conference on Data Mining
null
10.1137/1.9781611974010.22
null
cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommending new items to existing users has remained a challenging problem due to the absence of users' past preferences for these items. User-personalized non-collaborative methods based on item features can be used to address this item cold-start problem. These methods rely on similarities between the target item and the user's previously preferred items. While computing similarities based on item features, these methods overlook the interactions among the features of the items and consider them independently. Modeling interactions among features can be helpful, as some features, when considered together, provide a stronger signal on the relevance of an item compared to the case where features are considered independently. To address this important issue, in this work we introduce the Feature-based factorized Bilinear Similarity Model (FBSM), which learns a factorized bilinear similarity model for Top-n recommendation of new items, given the information about items preferred by users in the past as well as the features of these items. We carry out extensive empirical evaluations on benchmark datasets, and we find that the proposed FBSM approach improves upon traditional non-collaborative methods in terms of recommendation performance. Moreover, the proposed approach also learns insightful interactions among item features from data, which lead to a deeper understanding of how these interactions contribute to personalized recommendation.
[ { "created": "Mon, 22 Apr 2019 05:10:48 GMT", "version": "v1" } ]
2019-04-29
[ [ "Sharma", "Mohit", "" ], [ "Zhou", "Jiayu", "" ], [ "Hu", "Junling", "" ], [ "Karypis", "George", "" ] ]
Recommending new items to existing users has remained a challenging problem due to the absence of users' past preferences for these items. User-personalized non-collaborative methods based on item features can be used to address this item cold-start problem. These methods rely on similarities between the target item and the user's previously preferred items. While computing similarities based on item features, these methods overlook the interactions among the features of the items and consider them independently. Modeling interactions among features can be helpful, as some features, when considered together, provide a stronger signal on the relevance of an item compared to the case where features are considered independently. To address this important issue, in this work we introduce the Feature-based factorized Bilinear Similarity Model (FBSM), which learns a factorized bilinear similarity model for Top-n recommendation of new items, given the information about items preferred by users in the past as well as the features of these items. We carry out extensive empirical evaluations on benchmark datasets, and we find that the proposed FBSM approach improves upon traditional non-collaborative methods in terms of recommendation performance. Moreover, the proposed approach also learns insightful interactions among item features from data, which lead to a deeper understanding of how these interactions contribute to personalized recommendation.
1803.01221
Bhavya Kailkhura
Bhavya Kailkhura, Priyadip Ray, Deepak Rajan, Anton Yen, Peter Barnes, Ryan Goldhahn
Byzantine-Resilient Locally Optimum Detection Using Collaborative Autonomous Networks
Proceedings of the 2017 IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2017), 10.-13. December 2017, Curacao, Dutch Antilles
null
null
null
cs.SY stat.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a locally optimum detection (LOD) scheme for detecting a weak radioactive source buried in background clutter. We develop a decentralized algorithm, based on the alternating direction method of multipliers (ADMM), for implementing the proposed scheme in autonomous sensor networks. Results show that the algorithm's performance approaches that of the centralized clairvoyant detection algorithm in the low-SNR regime, and exhibits excellent convergence rate and scaling behavior (w.r.t. number of nodes). We also devise a low-overhead, robust ADMM algorithm for Byzantine-resilient detection, and demonstrate its robustness to data falsification attacks.
[ { "created": "Sat, 3 Mar 2018 19:34:29 GMT", "version": "v1" } ]
2018-03-06
[ [ "Kailkhura", "Bhavya", "" ], [ "Ray", "Priyadip", "" ], [ "Rajan", "Deepak", "" ], [ "Yen", "Anton", "" ], [ "Barnes", "Peter", "" ], [ "Goldhahn", "Ryan", "" ] ]
In this paper, we propose a locally optimum detection (LOD) scheme for detecting a weak radioactive source buried in background clutter. We develop a decentralized algorithm, based on the alternating direction method of multipliers (ADMM), for implementing the proposed scheme in autonomous sensor networks. Results show that the algorithm's performance approaches that of the centralized clairvoyant detection algorithm in the low-SNR regime, and exhibits excellent convergence rate and scaling behavior (w.r.t. number of nodes). We also devise a low-overhead, robust ADMM algorithm for Byzantine-resilient detection, and demonstrate its robustness to data falsification attacks.
2303.14078
Jisoo Jeong
Jisoo Jeong, Hong Cai, Risheek Garrepalli, Fatih Porikli
DistractFlow: Improving Optical Flow Estimation via Realistic Distractions and Pseudo-Labeling
CVPR 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We propose a novel data augmentation approach, DistractFlow, for training optical flow estimation models by introducing realistic distractions to the input frames. Based on a mixing ratio, we combine one of the frames in the pair with a distractor image depicting a similar domain, which allows for inducing visual perturbations congruent with natural objects and scenes. We refer to such pairs as distracted pairs. Our intuition is that using semantically meaningful distractors enables the model to learn related variations and attain robustness against challenging deviations, compared to conventional augmentation schemes focusing only on low-level aspects and modifications. More specifically, in addition to the supervised loss computed between the estimated flow for the original pair and its ground-truth flow, we include a second supervised loss defined between the distracted pair's flow and the original pair's ground-truth flow, weighted with the same mixing ratio. Furthermore, when unlabeled data is available, we extend our augmentation approach to self-supervised settings through pseudo-labeling and cross-consistency regularization. Given an original pair and its distracted version, we enforce the estimated flow on the distracted pair to agree with the flow of the original pair. Our approach allows increasing the number of available training pairs significantly without requiring additional annotations. It is agnostic to the model architecture and can be applied to training any optical flow estimation models. Our extensive evaluations on multiple benchmarks, including Sintel, KITTI, and SlowFlow, show that DistractFlow improves existing models consistently, outperforming the latest state of the art.
[ { "created": "Fri, 24 Mar 2023 15:42:54 GMT", "version": "v1" } ]
2023-03-27
[ [ "Jeong", "Jisoo", "" ], [ "Cai", "Hong", "" ], [ "Garrepalli", "Risheek", "" ], [ "Porikli", "Fatih", "" ] ]
We propose a novel data augmentation approach, DistractFlow, for training optical flow estimation models by introducing realistic distractions to the input frames. Based on a mixing ratio, we combine one of the frames in the pair with a distractor image depicting a similar domain, which allows for inducing visual perturbations congruent with natural objects and scenes. We refer to such pairs as distracted pairs. Our intuition is that using semantically meaningful distractors enables the model to learn related variations and attain robustness against challenging deviations, compared to conventional augmentation schemes focusing only on low-level aspects and modifications. More specifically, in addition to the supervised loss computed between the estimated flow for the original pair and its ground-truth flow, we include a second supervised loss defined between the distracted pair's flow and the original pair's ground-truth flow, weighted with the same mixing ratio. Furthermore, when unlabeled data is available, we extend our augmentation approach to self-supervised settings through pseudo-labeling and cross-consistency regularization. Given an original pair and its distracted version, we enforce the estimated flow on the distracted pair to agree with the flow of the original pair. Our approach allows increasing the number of available training pairs significantly without requiring additional annotations. It is agnostic to the model architecture and can be applied to training any optical flow estimation models. Our extensive evaluations on multiple benchmarks, including Sintel, KITTI, and SlowFlow, show that DistractFlow improves existing models consistently, outperforming the latest state of the art.
cs/0607095
Hyundong Shin
Hyundong Shin, Moe Z. Win
Gallager's Exponent for MIMO Channels: A Reliability-Rate Tradeoff
Submitted to the IEEE Transactions on Communications
null
null
null
cs.IT math.IT
null
In this paper, we derive Gallager's random coding error exponent for multiple-input multiple-output (MIMO) channels, assuming no channel-state information (CSI) at the transmitter and perfect CSI at the receiver. This measure gives insight into a fundamental tradeoff between the communication reliability and information rate of MIMO channels, enabling one to determine the required codeword length to achieve a prescribed error probability at a given rate below the channel capacity. We quantify the effects of the number of antennas, channel coherence time, and spatial fading correlation on the MIMO exponent. In addition, general formulae for the ergodic capacity and the cutoff rate in the presence of spatial correlation are deduced from the exponent expressions. These formulae are applicable to arbitrary structures of transmit and receive correlation, encompassing all the previously known results as special cases of our expressions.
[ { "created": "Thu, 20 Jul 2006 06:56:02 GMT", "version": "v1" } ]
2007-07-13
[ [ "Shin", "Hyundong", "" ], [ "Win", "Moe Z.", "" ] ]
In this paper, we derive Gallager's random coding error exponent for multiple-input multiple-output (MIMO) channels, assuming no channel-state information (CSI) at the transmitter and perfect CSI at the receiver. This measure gives insight into a fundamental tradeoff between the communication reliability and information rate of MIMO channels, enabling one to determine the required codeword length to achieve a prescribed error probability at a given rate below the channel capacity. We quantify the effects of the number of antennas, channel coherence time, and spatial fading correlation on the MIMO exponent. In addition, general formulae for the ergodic capacity and the cutoff rate in the presence of spatial correlation are deduced from the exponent expressions. These formulae are applicable to arbitrary structures of transmit and receive correlation, encompassing all the previously known results as special cases of our expressions.
cs/0608098
Arvind Parthasarathy
Arvind Parthasarathy
Improved Content Based Image Watermarking
24 pages
null
null
null
cs.CR
null
This paper presents a robust and transparent scheme of watermarking that exploits the human visual system's sensitivity to frequency, along with local image characteristics obtained from the spatial domain. The underlying idea is to generate a visual mask based on the visual system's perception of image content. This mask is used to embed a decimal sequence while keeping its amplitude below the distortion sensitivity of the image pixel. We consider texture, luminance, corner and edge information in the image to generate a mask that makes the addition of the watermark imperceptible to the human eye.
[ { "created": "Fri, 25 Aug 2006 12:55:42 GMT", "version": "v1" } ]
2007-05-23
[ [ "Parthasarathy", "Arvind", "" ] ]
This paper presents a robust and transparent scheme of watermarking that exploits the human visual system's sensitivity to frequency, along with local image characteristics obtained from the spatial domain. The underlying idea is to generate a visual mask based on the visual system's perception of image content. This mask is used to embed a decimal sequence while keeping its amplitude below the distortion sensitivity of the image pixel. We consider texture, luminance, corner and edge information in the image to generate a mask that makes the addition of the watermark imperceptible to the human eye.
1905.02265
Xusen Yin
Xusen Yin and Jonathan May
Comprehensible Context-driven Text Game Playing
IEEE Conference on Games 2019 Long Paper
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to train a computer agent to play a text-based computer game, we must represent each hidden state of the game. A Long Short-Term Memory (LSTM) model running over observed texts is a common choice for state construction. However, a normal Deep Q-learning Network (DQN) for such an agent requires millions of steps of training or more to converge. As such, an LSTM-based DQN can take tens of days to finish the training process. Though we can use a Convolutional Neural Network (CNN) as a text-encoder to construct states much faster than the LSTM, doing so without an understanding of the syntactic context of the words being analyzed can slow convergence. In this paper, we use a fast CNN to encode position- and syntax-oriented structures extracted from observed texts as states. We additionally augment the reward signal in a universal and practical manner. Together, we show that our improvements can not only speed up the process by one order of magnitude but also learn a superior agent.
[ { "created": "Mon, 6 May 2019 21:14:41 GMT", "version": "v1" }, { "created": "Sun, 2 Jun 2019 02:48:31 GMT", "version": "v2" }, { "created": "Thu, 29 Aug 2019 11:50:00 GMT", "version": "v3" } ]
2019-08-30
[ [ "Yin", "Xusen", "" ], [ "May", "Jonathan", "" ] ]
In order to train a computer agent to play a text-based computer game, we must represent each hidden state of the game. A Long Short-Term Memory (LSTM) model running over observed texts is a common choice for state construction. However, a normal Deep Q-learning Network (DQN) for such an agent requires millions of steps of training or more to converge. As such, an LSTM-based DQN can take tens of days to finish the training process. Though we can use a Convolutional Neural Network (CNN) as a text-encoder to construct states much faster than the LSTM, doing so without an understanding of the syntactic context of the words being analyzed can slow convergence. In this paper, we use a fast CNN to encode position- and syntax-oriented structures extracted from observed texts as states. We additionally augment the reward signal in a universal and practical manner. Together, we show that our improvements can not only speed up the process by one order of magnitude but also learn a superior agent.
2008.12813
Sanxing Chen
Sanxing Chen, Xiaodong Liu, Jianfeng Gao, Jian Jiao, Ruofei Zhang and Yangfeng Ji
HittER: Hierarchical Transformers for Knowledge Graph Embeddings
EMNLP 2021
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines the challenging problem of learning representations of entities and relations in a complex multi-relational knowledge graph. We propose HittER, a Hierarchical Transformer model to jointly learn Entity-relation composition and Relational contextualization based on a source entity's neighborhood. Our proposed model consists of two different Transformer blocks: the bottom block extracts features of each entity-relation pair in the local neighborhood of the source entity and the top block aggregates the relational information from outputs of the bottom block. We further design a masked entity prediction task to balance information from the relational context and the source entity itself. Experimental results show that HittER achieves new state-of-the-art results on multiple link prediction datasets. We additionally propose a simple approach to integrate HittER into BERT and demonstrate its effectiveness on two Freebase factoid question answering datasets.
[ { "created": "Fri, 28 Aug 2020 18:58:15 GMT", "version": "v1" }, { "created": "Wed, 6 Oct 2021 04:52:07 GMT", "version": "v2" } ]
2021-10-07
[ [ "Chen", "Sanxing", "" ], [ "Liu", "Xiaodong", "" ], [ "Gao", "Jianfeng", "" ], [ "Jiao", "Jian", "" ], [ "Zhang", "Ruofei", "" ], [ "Ji", "Yangfeng", "" ] ]
This paper examines the challenging problem of learning representations of entities and relations in a complex multi-relational knowledge graph. We propose HittER, a Hierarchical Transformer model to jointly learn Entity-relation composition and Relational contextualization based on a source entity's neighborhood. Our proposed model consists of two different Transformer blocks: the bottom block extracts features of each entity-relation pair in the local neighborhood of the source entity and the top block aggregates the relational information from outputs of the bottom block. We further design a masked entity prediction task to balance information from the relational context and the source entity itself. Experimental results show that HittER achieves new state-of-the-art results on multiple link prediction datasets. We additionally propose a simple approach to integrate HittER into BERT and demonstrate its effectiveness on two Freebase factoid question answering datasets.
2207.12872
Ishaan Bhat
Ishaan Bhat, Josien P.W. Pluim, Hugo J. Kuijf
Generalized Probabilistic U-Net for medical image segmentation
Accepted at Uncertainty for Safe Utilization of Machine Learning in Medical Imaging (UNSURE) 2022
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
We propose the Generalized Probabilistic U-Net, which extends the Probabilistic U-Net by allowing more general forms of the Gaussian distribution as the latent space distribution that can better approximate the uncertainty in the reference segmentations. We study the effect the choice of latent space distribution has on capturing the uncertainty in the reference segmentations using the LIDC-IDRI dataset. We show that the choice of distribution affects the sample diversity of the predictions and their overlap with respect to the reference segmentations. For the LIDC-IDRI dataset, we show that using a mixture of Gaussians results in a statistically significant improvement in the generalized energy distance (GED) metric with respect to the standard Probabilistic U-Net. We have made our implementation available at https://github.com/ishaanb92/GeneralizedProbabilisticUNet
[ { "created": "Tue, 26 Jul 2022 13:03:37 GMT", "version": "v1" } ]
2022-07-27
[ [ "Bhat", "Ishaan", "" ], [ "Pluim", "Josien P. W.", "" ], [ "Kuijf", "Hugo J.", "" ] ]
We propose the Generalized Probabilistic U-Net, which extends the Probabilistic U-Net by allowing more general forms of the Gaussian distribution as the latent space distribution that can better approximate the uncertainty in the reference segmentations. We study the effect the choice of latent space distribution has on capturing the uncertainty in the reference segmentations using the LIDC-IDRI dataset. We show that the choice of distribution affects the sample diversity of the predictions and their overlap with respect to the reference segmentations. For the LIDC-IDRI dataset, we show that using a mixture of Gaussians results in a statistically significant improvement in the generalized energy distance (GED) metric with respect to the standard Probabilistic U-Net. We have made our implementation available at https://github.com/ishaanb92/GeneralizedProbabilisticUNet
2302.13394
Ahmed Abulila
Ahmed Abulila, Izzat El Hajj, Myoungsoo Jung, Nam Sung Kim
Asynchronous Persistence with ASAP
2 pages, 2 figures, 14th Annual Non-Volatile Memories Workshop
null
null
null
cs.AR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Supporting atomic durability of updates for persistent memories is typically achieved with Write-Ahead Logging (WAL). WAL flushes log entries to persistent memory before making the actual data persistent to ensure that a consistent state can be recovered if a crash occurs. Performing WAL in hardware is attractive because it makes most aspects of log management transparent to software, and it completes log persist operations (LPs) and data persist operations (DPs) in the background, overlapping them with the execution of other instructions. Prior hardware logging solutions commit atomic regions synchronously. Once the end of a region is reached, all outstanding persist operations required for the region to commit must be completed before instruction execution may proceed. For undo logging, LPs and DPs are both performed synchronously to ensure that the region commits synchronously. For redo logging, DPs can be performed asynchronously, but LPs are performed synchronously to ensure that the region commits synchronously. In both cases, waiting for synchronous persist operations (LP or DP) at the end of an atomic region causes atomic regions to incur high latency. To tackle this limitation, we propose ASAP, a hardware logging solution that allows atomic regions to commit asynchronously. That is, once the end of an atomic region is reached, instruction execution may proceed without waiting for outstanding persist operations to complete. As such, both LPs and DPs can be performed asynchronously. The challenge with allowing atomic regions to commit asynchronously is that it can lead to control and data dependence violations in the commit order of the atomic regions, leaving data in an unrecoverable state in case of a crash. To address this issue, ASAP tracks and enforces control and data dependencies between atomic regions in hardware to ensure that the regions commit in the proper order.
[ { "created": "Sun, 26 Feb 2023 19:34:59 GMT", "version": "v1" } ]
2023-02-28
[ [ "Abulila", "Ahmed", "" ], [ "Hajj", "Izzat El", "" ], [ "Jung", "Myoungsoo", "" ], [ "Kim", "Nam Sung", "" ] ]
Supporting atomic durability of updates for persistent memories is typically achieved with Write-Ahead Logging (WAL). WAL flushes log entries to persistent memory before making the actual data persistent to ensure that a consistent state can be recovered if a crash occurs. Performing WAL in hardware is attractive because it makes most aspects of log management transparent to software, and it completes log persist operations (LPs) and data persist operations (DPs) in the background, overlapping them with the execution of other instructions. Prior hardware logging solutions commit atomic regions synchronously. Once the end of a region is reached, all outstanding persist operations required for the region to commit must be completed before instruction execution may proceed. For undo logging, LPs and DPs are both performed synchronously to ensure that the region commits synchronously. For redo logging, DPs can be performed asynchronously, but LPs are performed synchronously to ensure that the region commits synchronously. In both cases, waiting for synchronous persist operations (LP or DP) at the end of an atomic region causes atomic regions to incur high latency. To tackle this limitation, we propose ASAP, a hardware logging solution that allows atomic regions to commit asynchronously. That is, once the end of an atomic region is reached, instruction execution may proceed without waiting for outstanding persist operations to complete. As such, both LPs and DPs can be performed asynchronously. The challenge with allowing atomic regions to commit asynchronously is that it can lead to control and data dependence violations in the commit order of the atomic regions, leaving data in an unrecoverable state in case of a crash. To address this issue, ASAP tracks and enforces control and data dependencies between atomic regions in hardware to ensure that the regions commit in the proper order.
2105.02432
Taotao Jing
Taotao Jing, Hongfu Liu, Zhengming Ding
Towards Novel Target Discovery Through Open-Set Domain Adaptation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open-set domain adaptation (OSDA) considers that the target domain contains samples from novel categories unobserved in the external source domain. Unfortunately, existing OSDA methods always ignore the demand for information about unseen categories and simply recognize them as an "unknown" set without further explanation. This motivates us to understand the unknown categories more specifically by exploring their underlying structures and recovering their interpretable semantic attributes. In this paper, we propose a novel framework to accurately identify the seen categories in the target domain, and effectively recover the semantic attributes for unseen categories. Specifically, structure-preserving partial alignment is developed to recognize the seen categories through domain-invariant feature learning. Attribute propagation over a visual graph is designed to smoothly transit attributes from seen to unseen categories via visual-semantic mapping. Moreover, two new cross-domain benchmarks are constructed to evaluate the proposed framework on this novel and practical challenge. Experimental results on open-set recognition and semantic recovery demonstrate the superiority of the proposed method over other compared baselines.
[ { "created": "Thu, 6 May 2021 04:22:29 GMT", "version": "v1" }, { "created": "Sun, 16 May 2021 22:32:43 GMT", "version": "v2" }, { "created": "Mon, 9 Aug 2021 17:12:45 GMT", "version": "v3" }, { "created": "Wed, 11 Aug 2021 18:32:16 GMT", "version": "v4" } ]
2021-08-13
[ [ "Jing", "Taotao", "" ], [ "Liu", "Hongfu", "" ], [ "Ding", "Zhengming", "" ] ]
Open-set domain adaptation (OSDA) considers that the target domain contains samples from novel categories unobserved in the external source domain. Unfortunately, existing OSDA methods always ignore the demand for information about unseen categories and simply recognize them as an "unknown" set without further explanation. This motivates us to understand the unknown categories more specifically by exploring their underlying structures and recovering their interpretable semantic attributes. In this paper, we propose a novel framework to accurately identify the seen categories in the target domain, and effectively recover the semantic attributes for unseen categories. Specifically, structure-preserving partial alignment is developed to recognize the seen categories through domain-invariant feature learning. Attribute propagation over a visual graph is designed to smoothly transit attributes from seen to unseen categories via visual-semantic mapping. Moreover, two new cross-domain benchmarks are constructed to evaluate the proposed framework on this novel and practical challenge. Experimental results on open-set recognition and semantic recovery demonstrate the superiority of the proposed method over other compared baselines.
2302.14533
Rajiv Kumar V
Rajiv Kumar, G. Sivakumar
DEff-GAN: Diverse Attribute Transfer for Few-Shot Image Synthesis
null
null
10.5220/0011799600003417
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The requirement for large amounts of data is a difficulty in training many GANs. Data-efficient GANs involve fitting a generator's continuous target distribution with a limited discrete set of data samples, which is a difficult task. Single-image methods have focused on modeling the internal distribution of a single image and generating its samples. While single-image methods can synthesize image samples with diversity, they do not model multiple images or capture the inherent relationship possible between two images. Given only a handful of images, we are interested in generating samples and exploiting the commonalities in the input images. In this work, we extend the single-image GAN method to model multiple images for sample synthesis. We modify the discriminator with an auxiliary classifier branch, which helps to generate a wide variety of samples and to classify the input labels. Our Data-Efficient GAN (DEff-GAN) generates excellent results when similarities and correspondences can be drawn between the input images or classes.
[ { "created": "Tue, 28 Feb 2023 12:43:52 GMT", "version": "v1" } ]
2023-03-09
[ [ "Kumar", "Rajiv", "" ], [ "Sivakumar", "G.", "" ] ]
The requirement for large amounts of data is a difficulty in training many GANs. Data-efficient GANs involve fitting a generator's continuous target distribution with a limited discrete set of data samples, which is a difficult task. Single-image methods have focused on modeling the internal distribution of a single image and generating its samples. While single-image methods can synthesize image samples with diversity, they do not model multiple images or capture the inherent relationship possible between two images. Given only a handful of images, we are interested in generating samples and exploiting the commonalities in the input images. In this work, we extend the single-image GAN method to model multiple images for sample synthesis. We modify the discriminator with an auxiliary classifier branch, which helps to generate a wide variety of samples and to classify the input labels. Our Data-Efficient GAN (DEff-GAN) generates excellent results when similarities and correspondences can be drawn between the input images or classes.
1806.02681
Wanderson Ten\'orio
Carlos Munuera, Wanderson Ten\'orio, Fernando Torres
Locally Recoverable codes from algebraic curves with separated variables
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Locally Recoverable code is an error-correcting code such that any erasure in a single coordinate of a codeword can be recovered from a small subset of other coordinates. We study Locally Recoverable Algebraic Geometry codes arising from certain curves defined by equations with separated variables. The recovery of erasures is obtained by means of Lagrangian interpolation in general, and simply by one addition in some particular cases.
[ { "created": "Thu, 7 Jun 2018 13:48:35 GMT", "version": "v1" } ]
2018-06-08
[ [ "Munuera", "Carlos", "" ], [ "Tenório", "Wanderson", "" ], [ "Torres", "Fernando", "" ] ]
A Locally Recoverable code is an error-correcting code such that any erasure in a single coordinate of a codeword can be recovered from a small subset of other coordinates. We study Locally Recoverable Algebraic Geometry codes arising from certain curves defined by equations with separated variables. The recovery of erasures is obtained by means of Lagrangian interpolation in general, and simply by one addition in some particular cases.
2104.00675
Hsin-Ying Lee
Yen-Chi Cheng, Chieh Hubert Lin, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Ming-Hsuan Yang
In&Out : Diverse Image Outpainting via GAN Inversion
Project Page: https://yccyenchicheng.github.io/InOut/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image outpainting seeks a semantically consistent extension of the input image beyond its available content. Compared to inpainting -- filling in missing pixels in a way coherent with the neighboring pixels -- outpainting can be achieved in more diverse ways since the problem is less constrained by the surrounding pixels. Existing image outpainting methods pose the problem as a conditional image-to-image translation task, often generating repetitive structures and textures by replicating the content available in the input image. In this work, we formulate the problem from the perspective of inverting generative adversarial networks. Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image. To outpaint an image, we seek multiple latent codes not only recovering available patches but also synthesizing diverse outpainting by patch-based generation. This leads to richer structure and content in the outpainted regions. Furthermore, our formulation allows for outpainting conditioned on categorical input, thereby enabling flexible user controls. Extensive experimental results demonstrate that the proposed method performs favorably against existing in- and outpainting methods, featuring higher visual quality and diversity.
[ { "created": "Thu, 1 Apr 2021 17:59:10 GMT", "version": "v1" } ]
2021-04-02
[ [ "Cheng", "Yen-Chi", "" ], [ "Lin", "Chieh Hubert", "" ], [ "Lee", "Hsin-Ying", "" ], [ "Ren", "Jian", "" ], [ "Tulyakov", "Sergey", "" ], [ "Yang", "Ming-Hsuan", "" ] ]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content. Compared to inpainting -- filling in missing pixels in a way coherent with the neighboring pixels -- outpainting can be achieved in more diverse ways since the problem is less constrained by the surrounding pixels. Existing image outpainting methods pose the problem as a conditional image-to-image translation task, often generating repetitive structures and textures by replicating the content available in the input image. In this work, we formulate the problem from the perspective of inverting generative adversarial networks. Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image. To outpaint an image, we seek multiple latent codes not only recovering available patches but also synthesizing diverse outpainting by patch-based generation. This leads to richer structure and content in the outpainted regions. Furthermore, our formulation allows for outpainting conditioned on categorical input, thereby enabling flexible user controls. Extensive experimental results demonstrate that the proposed method performs favorably against existing in- and outpainting methods, featuring higher visual quality and diversity.
2112.09647
Matteo Dunnhofer
Matteo Dunnhofer, Alberto Zurini, Maurizio Dunnhofer, Christian Micheloni
Video-Based Reconstruction of the Trajectories Performed by Skiers
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Trajectories are fundamental in different skiing disciplines. Tools enabling the analysis of such curves can enhance the training activity and enrich broadcast content. However, the solutions currently available are based on geo-localized sensors and surface models. In this short paper, we propose a video-based approach to reconstruct the sequence of points traversed by an athlete during their performance. Our prototype consists of a pipeline of deep learning-based algorithms to reconstruct the athlete's motion and to visualize it according to the camera perspective. This is achieved for different skiing disciplines in the wild without any camera calibration. We tested our solution on broadcast and smartphone-captured videos of alpine skiing and ski jumping professional competitions. The qualitative results achieved show the potential of our solution.
[ { "created": "Fri, 17 Dec 2021 17:40:06 GMT", "version": "v1" } ]
2021-12-20
[ [ "Dunnhofer", "Matteo", "" ], [ "Zurini", "Alberto", "" ], [ "Dunnhofer", "Maurizio", "" ], [ "Micheloni", "Christian", "" ] ]
Trajectories are fundamental in different skiing disciplines. Tools enabling the analysis of such curves can enhance the training activity and enrich broadcast content. However, the solutions currently available are based on geo-localized sensors and surface models. In this short paper, we propose a video-based approach to reconstruct the sequence of points traversed by an athlete during their performance. Our prototype consists of a pipeline of deep learning-based algorithms to reconstruct the athlete's motion and to visualize it according to the camera perspective. This is achieved for different skiing disciplines in the wild without any camera calibration. We tested our solution on broadcast and smartphone-captured videos of alpine skiing and ski jumping professional competitions. The qualitative results achieved show the potential of our solution.
2109.00958
Juan David Guerrero Balaguera
Juan-David Guerrero-Balaguera, Josie E. Rodriguez Condia, Matteo Sonza Reorda
A Novel Compaction Approach for SBST Test Programs
Paper accepted to be presented in The 30th IEEE Asian Test Symposium (ATS 2021) November 22 - 25, 2021, Japan. to be published in the IEEE xplorer after the presentation in the event
null
null
null
cs.AR
http://creativecommons.org/licenses/by-nc-nd/4.0/
In-field test of processor-based devices is a must when considering safety-critical systems (e.g., in robotics, aerospace, and automotive applications). During in-field testing, different solutions can be adopted, depending on the specific constraints of each scenario. In the last years, Self-Test Libraries (STLs) developed by IP or semiconductor companies became widely adopted. Given the strict constraints of in-field test, the size and time duration of a STL is a crucial parameter. This work introduces a novel approach to compress functional test programs belonging to an STL. The proposed approach is based on analyzing (via logic simulation) the interaction between the micro-architectural operation performed by each instruction and its capacity to propagate fault effects on any observable output, reducing the required fault simulations to only one. The proposed compaction strategy was validated by resorting to a RISC-V processor and several test programs stemming from diverse generation strategies. Results showed that the proposed compaction approach can reduce the length of test programs by up to 93.9% and their duration by up to 95%, with minimal effect on fault coverage.
[ { "created": "Thu, 2 Sep 2021 13:58:02 GMT", "version": "v1" }, { "created": "Wed, 8 Sep 2021 12:07:03 GMT", "version": "v2" } ]
2021-09-09
[ [ "Guerrero-Balaguera", "Juan-David", "" ], [ "Condia", "Josie E. Rodriguez", "" ], [ "Reorda", "Matteo Sonza", "" ] ]
In-field test of processor-based devices is a must when considering safety-critical systems (e.g., in robotics, aerospace, and automotive applications). During in-field testing, different solutions can be adopted, depending on the specific constraints of each scenario. In the last years, Self-Test Libraries (STLs) developed by IP or semiconductor companies became widely adopted. Given the strict constraints of in-field test, the size and time duration of a STL is a crucial parameter. This work introduces a novel approach to compress functional test programs belonging to an STL. The proposed approach is based on analyzing (via logic simulation) the interaction between the micro-architectural operation performed by each instruction and its capacity to propagate fault effects on any observable output, reducing the required fault simulations to only one. The proposed compaction strategy was validated by resorting to a RISC-V processor and several test programs stemming from diverse generation strategies. Results showed that the proposed compaction approach can reduce the length of test programs by up to 93.9% and their duration by up to 95%, with minimal effect on fault coverage.
2309.00504
Manuel Sorge
Alexander Firbas and Alexander Dobler and Fabian Holzer and Jakob Schafellner and Manuel Sorge and Ana\"is Villedieu and Monika Wi{\ss}mann
The Complexity of Cluster Vertex Splitting and Company
30 pages, 9 figures. Appears in SOFSEM 2024
null
null
null
cs.DS cs.CC cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering a graph when the clusters can overlap can be seen from three different angles: We may look for cliques that cover the edges of the graph with bounded overlap, we may look to add or delete few edges to uncover the cluster structure, or we may split vertices to separate the clusters from each other. Splitting a vertex $v$ means to remove it and to add two new copies of $v$ and to make each previous neighbor of $v$ adjacent with at least one of the copies. In this work, we study underlying computational problems regarding the three angles to overlapping clusterings, in particular when the overlap is small. We show that the above-mentioned covering problem is NP-complete. We then make structural observations that show that the covering viewpoint and the vertex-splitting viewpoint are equivalent, yielding NP-hardness for the vertex-splitting problem. On the positive side, we show that splitting at most $k$ vertices to obtain a cluster graph has a problem kernel with $O(k)$ vertices. Finally, we observe that combining our hardness results with the so-called critical-clique lemma yields NP-hardness for Cluster Editing with Vertex Splitting, which was previously open (Abu-Khzam et al. [ISCO 2018]) and independently shown to be NP-hard by Arrighi et al. [IPEC 2023]. We observe that a previous version of the critical-clique lemma was flawed; a corrected version has appeared in the meantime on which our hardness result is based.
[ { "created": "Fri, 1 Sep 2023 14:51:28 GMT", "version": "v1" }, { "created": "Wed, 6 Sep 2023 12:24:07 GMT", "version": "v2" }, { "created": "Wed, 3 Apr 2024 13:20:13 GMT", "version": "v3" } ]
2024-04-04
[ [ "Firbas", "Alexander", "" ], [ "Dobler", "Alexander", "" ], [ "Holzer", "Fabian", "" ], [ "Schafellner", "Jakob", "" ], [ "Sorge", "Manuel", "" ], [ "Villedieu", "Anaïs", "" ], [ "Wißmann", "Monika", "" ] ]
Clustering a graph when the clusters can overlap can be seen from three different angles: We may look for cliques that cover the edges of the graph with bounded overlap, we may look to add or delete few edges to uncover the cluster structure, or we may split vertices to separate the clusters from each other. Splitting a vertex $v$ means to remove it and to add two new copies of $v$ and to make each previous neighbor of $v$ adjacent with at least one of the copies. In this work, we study underlying computational problems regarding the three angles to overlapping clusterings, in particular when the overlap is small. We show that the above-mentioned covering problem is NP-complete. We then make structural observations that show that the covering viewpoint and the vertex-splitting viewpoint are equivalent, yielding NP-hardness for the vertex-splitting problem. On the positive side, we show that splitting at most $k$ vertices to obtain a cluster graph has a problem kernel with $O(k)$ vertices. Finally, we observe that combining our hardness results with the so-called critical-clique lemma yields NP-hardness for Cluster Editing with Vertex Splitting, which was previously open (Abu-Khzam et al. [ISCO 2018]) and independently shown to be NP-hard by Arrighi et al. [IPEC 2023]. We observe that a previous version of the critical-clique lemma was flawed; a corrected version has appeared in the meantime on which our hardness result is based.
1605.01880
Kittipong Kittichokechai
Kittipong Kittichokechai and Giuseppe Caire
Privacy-Constrained Remote Source Coding
10 pages, 1 figure, to be presented at ISIT 2016
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of revealing/sharing data in an efficient and secure way via a compact representation. The representation should ensure reliable reconstruction of the desired features/attributes while still preserving the privacy of the secret parts of the data. The problem is formulated as remote lossy source coding with a privacy constraint where the remote source consists of public and secret parts. Inner and outer bounds for the optimal tradeoff region of compression rate, distortion, and privacy leakage rate are given and shown to coincide for some special cases. When specializing the distortion measure to a logarithmic loss function, the resulting rate-distortion-leakage tradeoff for the case of identical side information forms an optimization problem which corresponds to the "secure" version of the so-called information bottleneck.
[ { "created": "Fri, 6 May 2016 10:15:57 GMT", "version": "v1" } ]
2016-05-09
[ [ "Kittichokechai", "Kittipong", "" ], [ "Caire", "Giuseppe", "" ] ]
We consider the problem of revealing/sharing data in an efficient and secure way via a compact representation. The representation should ensure reliable reconstruction of the desired features/attributes while still preserving the privacy of the secret parts of the data. The problem is formulated as remote lossy source coding with a privacy constraint where the remote source consists of public and secret parts. Inner and outer bounds for the optimal tradeoff region of compression rate, distortion, and privacy leakage rate are given and shown to coincide for some special cases. When specializing the distortion measure to a logarithmic loss function, the resulting rate-distortion-leakage tradeoff for the case of identical side information forms an optimization problem which corresponds to the "secure" version of the so-called information bottleneck.
1804.00755
Mahmoud Mohammadi
Mahmoud Mohammadi, Bill Chu, Heather Richter Lipford
Detecting Cross-Site Scripting Vulnerabilities through Automated Unit Testing
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The best practice to prevent Cross Site Scripting (XSS) attacks is to apply encoders to sanitize untrusted data. To balance security and functionality, encoders should be applied to match the web page context, such as HTML body, JavaScript, and style sheets. A common programming error is the use of a wrong encoder to sanitize untrusted data, leaving the application vulnerable. We present a security unit testing approach to detect XSS vulnerabilities caused by improper encoding of untrusted data. Unit tests for the XSS vulnerability are automatically constructed out of each web page and then evaluated by a unit test execution framework. A grammar-based attack generator is used to automatically generate test inputs. We evaluate our approach on a large open source medical records application, demonstrating that we can detect many 0-day XSS vulnerabilities with very low false positives, and that the grammar-based attack generator has better test coverage than industry best practices.
[ { "created": "Mon, 2 Apr 2018 22:59:18 GMT", "version": "v1" } ]
2018-04-04
[ [ "Mohammadi", "Mahmoud", "" ], [ "Chu", "Bill", "" ], [ "Lipford", "Heather Richter", "" ] ]
The best practice to prevent Cross Site Scripting (XSS) attacks is to apply encoders to sanitize untrusted data. To balance security and functionality, encoders should be applied to match the web page context, such as HTML body, JavaScript, and style sheets. A common programming error is the use of a wrong encoder to sanitize untrusted data, leaving the application vulnerable. We present a security unit testing approach to detect XSS vulnerabilities caused by improper encoding of untrusted data. Unit tests for the XSS vulnerability are automatically constructed out of each web page and then evaluated by a unit test execution framework. A grammar-based attack generator is used to automatically generate test inputs. We evaluate our approach on a large open source medical records application, demonstrating that we can detect many 0-day XSS vulnerabilities with very low false positives, and that the grammar-based attack generator has better test coverage than industry best practices.
2008.13300
Michael Luby
Michael Luby
SOPI design and analysis for LDN
This is a companion paper to the LDN paper that appears in ACM ICN 2020
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Liquid Data Networking (LDN) is an ICN architecture that is designed to enable the benefits of erasure-code enabled object delivery. A primary contribution of LDN is the introduction of SOPIs, which enables clients to concurrently download encoded data for the same object from multiple edge nodes, optimizes caching efficiency, and enables seamless mobility. This paper provides an enhanced design and analysis of SOPIs.
[ { "created": "Mon, 31 Aug 2020 00:16:20 GMT", "version": "v1" }, { "created": "Sat, 5 Sep 2020 00:04:23 GMT", "version": "v2" } ]
2020-09-08
[ [ "Luby", "Michael", "" ] ]
Liquid Data Networking (LDN) is an ICN architecture that is designed to enable the benefits of erasure-code enabled object delivery. A primary contribution of LDN is the introduction of SOPIs, which enables clients to concurrently download encoded data for the same object from multiple edge nodes, optimizes caching efficiency, and enables seamless mobility. This paper provides an enhanced design and analysis of SOPIs.
2109.02941
Chandni Saxena
Mudit Chaudhary, Chandni Saxena, Helen Meng
Countering Online Hate Speech: An NLP Perspective
12 pages
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Online hate speech has caught everyone's attention from the news related to the COVID-19 pandemic, US elections, and worldwide protests. Online toxicity - an umbrella term for online hateful behavior - manifests itself in forms such as online hate speech. Hate speech is a deliberate attack directed towards an individual or a group motivated by the targeted entity's identity or opinions. The rising mass communication through social media further exacerbates the harmful consequences of online hate speech. While there has been significant research on hate-speech identification using Natural Language Processing (NLP), work on utilizing NLP for the prevention of and intervention against online hate speech is relatively lacking. This paper presents a holistic conceptual framework on hate-speech NLP countering methods along with a thorough survey on the current progress of NLP for countering online hate speech. It classifies the countering techniques based on their time of action, and identifies potential future research areas on this topic.
[ { "created": "Tue, 7 Sep 2021 08:48:13 GMT", "version": "v1" } ]
2021-09-08
[ [ "Chaudhary", "Mudit", "" ], [ "Saxena", "Chandni", "" ], [ "Meng", "Helen", "" ] ]
Online hate speech has caught everyone's attention from the news related to the COVID-19 pandemic, US elections, and worldwide protests. Online toxicity - an umbrella term for online hateful behavior - manifests itself in forms such as online hate speech. Hate speech is a deliberate attack directed towards an individual or a group motivated by the targeted entity's identity or opinions. The rising mass communication through social media further exacerbates the harmful consequences of online hate speech. While there has been significant research on hate-speech identification using Natural Language Processing (NLP), work on utilizing NLP for the prevention of and intervention against online hate speech is relatively lacking. This paper presents a holistic conceptual framework on hate-speech NLP countering methods along with a thorough survey on the current progress of NLP for countering online hate speech. It classifies the countering techniques based on their time of action, and identifies potential future research areas on this topic.
2405.02580
Ye Liu
Ye Liu, Yue Xue, Daoyuan Wu, Yuqiang Sun, Yi Li, Miaolei Shi, Yang Liu
PropertyGPT: LLM-driven Formal Verification of Smart Contracts through Retrieval-Augmented Property Generation
null
null
null
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With recent advances in large language models (LLMs), this paper explores the potential of leveraging state-of-the-art LLMs, such as GPT-4, to transfer existing human-written properties (e.g., those from Certora auditing reports) and automatically generate customized properties for unknown code. To this end, we embed existing properties into a vector database and retrieve a reference property for LLM-based in-context learning to generate a new property for a given code. While this basic process is relatively straightforward, ensuring that the generated properties are (i) compilable, (ii) appropriate, and (iii) runtime-verifiable presents challenges. To address (i), we use the compilation and static analysis feedback as an external oracle to guide LLMs in iteratively revising the generated properties. For (ii), we consider multiple dimensions of similarity to rank the properties and employ a weighted algorithm to identify the top-K properties as the final result. For (iii), we design a dedicated prover to formally verify the correctness of the generated properties. We have implemented these strategies into a novel system called PropertyGPT, with 623 human-written properties collected from 23 Certora projects. Our experiments show that PropertyGPT can generate comprehensive and high-quality properties, achieving an 80% recall compared to the ground truth. It successfully detected 26 CVEs/attack incidents out of 37 tested and also uncovered 12 zero-day vulnerabilities, resulting in $8,256 bug bounty rewards.
[ { "created": "Sat, 4 May 2024 06:28:27 GMT", "version": "v1" } ]
2024-05-07
[ [ "Liu", "Ye", "" ], [ "Xue", "Yue", "" ], [ "Wu", "Daoyuan", "" ], [ "Sun", "Yuqiang", "" ], [ "Li", "Yi", "" ], [ "Shi", "Miaolei", "" ], [ "Liu", "Yang", "" ] ]
With recent advances in large language models (LLMs), this paper explores the potential of leveraging state-of-the-art LLMs, such as GPT-4, to transfer existing human-written properties (e.g., those from Certora auditing reports) and automatically generate customized properties for unknown code. To this end, we embed existing properties into a vector database and retrieve a reference property for LLM-based in-context learning to generate a new property for a given code. While this basic process is relatively straightforward, ensuring that the generated properties are (i) compilable, (ii) appropriate, and (iii) runtime-verifiable presents challenges. To address (i), we use the compilation and static analysis feedback as an external oracle to guide LLMs in iteratively revising the generated properties. For (ii), we consider multiple dimensions of similarity to rank the properties and employ a weighted algorithm to identify the top-K properties as the final result. For (iii), we design a dedicated prover to formally verify the correctness of the generated properties. We have implemented these strategies into a novel system called PropertyGPT, with 623 human-written properties collected from 23 Certora projects. Our experiments show that PropertyGPT can generate comprehensive and high-quality properties, achieving an 80% recall compared to the ground truth. It successfully detected 26 CVEs/attack incidents out of 37 tested and also uncovered 12 zero-day vulnerabilities, resulting in $8,256 bug bounty rewards.
1708.00980
Yudong Guo
Yudong Guo, Juyong Zhang, Jianfei Cai, Boyi Jiang and Jianmin Zheng
CNN-based Real-time Dense Face Reconstruction with Inverse-rendered Photo-realistic Face Images
Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the power of convolutional neural networks (CNNs), CNN-based face reconstruction has recently shown promising performance in reconstructing detailed face shape from 2D face images. The success of CNN-based methods relies on a large amount of labeled data. The state-of-the-art synthesizes such data using a coarse morphable face model, which, however, has difficulty generating detailed photo-realistic images of faces (with wrinkles). This paper presents a novel face data generation method. Specifically, we render a large number of photo-realistic face images with different attributes based on inverse rendering. Furthermore, we construct a fine-detailed face image dataset by transferring different scales of details from one image to another. We also construct a large number of video-type adjacent frame pairs by simulating the distribution of real video data. With these nicely constructed datasets, we propose a coarse-to-fine learning framework consisting of three convolutional networks. The networks are trained for real-time detailed 3D face reconstruction from monocular video as well as from a single image. Extensive experimental results demonstrate that our framework can produce high-quality reconstruction but with much less computation time compared to the state-of-the-art. Moreover, our method is robust to pose, expression and lighting due to the diversity of data.
[ { "created": "Thu, 3 Aug 2017 03:18:34 GMT", "version": "v1" }, { "created": "Mon, 11 Sep 2017 11:32:01 GMT", "version": "v2" }, { "created": "Tue, 15 May 2018 07:02:35 GMT", "version": "v3" } ]
2018-05-16
[ [ "Guo", "Yudong", "" ], [ "Zhang", "Juyong", "" ], [ "Cai", "Jianfei", "" ], [ "Jiang", "Boyi", "" ], [ "Zheng", "Jianmin", "" ] ]
With the power of convolutional neural networks (CNNs), CNN-based face reconstruction has recently shown promising performance in reconstructing detailed face shape from 2D face images. The success of CNN-based methods relies on a large amount of labeled data. The state-of-the-art synthesizes such data using a coarse morphable face model, which, however, has difficulty generating detailed photo-realistic images of faces (with wrinkles). This paper presents a novel face data generation method. Specifically, we render a large number of photo-realistic face images with different attributes based on inverse rendering. Furthermore, we construct a fine-detailed face image dataset by transferring different scales of details from one image to another. We also construct a large number of video-type adjacent frame pairs by simulating the distribution of real video data. With these nicely constructed datasets, we propose a coarse-to-fine learning framework consisting of three convolutional networks. The networks are trained for real-time detailed 3D face reconstruction from monocular video as well as from a single image. Extensive experimental results demonstrate that our framework can produce high-quality reconstruction but with much less computation time compared to the state-of-the-art. Moreover, our method is robust to pose, expression and lighting due to the diversity of data.
2309.04175
Haochun Wang
Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu
Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese
11 pages, 5 figures
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have demonstrated remarkable success in diverse natural language processing (NLP) tasks in general domains. However, LLMs sometimes generate responses with hallucinations about medical facts due to limited domain knowledge. Such shortcomings pose potential risks in the utilization of LLMs within medical contexts. To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation. We also release cMedKnowQA, a Chinese medical knowledge question-answering dataset constructed from medical knowledge bases to assess the medical knowledge proficiency of LLMs. Experimental results show that LLMs that are knowledge-tuned with cMedKnowQA can exhibit higher levels of accuracy in response generation compared with vanilla instruction-tuning and offer a new reliable way for the domain adaptation of LLMs.
[ { "created": "Fri, 8 Sep 2023 07:42:57 GMT", "version": "v1" } ]
2023-09-11
[ [ "Wang", "Haochun", "" ], [ "Zhao", "Sendong", "" ], [ "Qiang", "Zewen", "" ], [ "Li", "Zijian", "" ], [ "Xi", "Nuwa", "" ], [ "Du", "Yanrui", "" ], [ "Cai", "MuZhen", "" ], [ "Guo", "Haoqiang", "" ], [ "Chen", "Yuhan", "" ], [ "Xu", "Haoming", "" ], [ "Qin", "Bing", "" ], [ "Liu", "Ting", "" ] ]
Large Language Models (LLMs) have demonstrated remarkable success in diverse natural language processing (NLP) tasks in general domains. However, LLMs sometimes generate responses with hallucinations about medical facts due to limited domain knowledge. Such shortcomings pose potential risks in the utilization of LLMs within medical contexts. To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation. We also release cMedKnowQA, a Chinese medical knowledge question-answering dataset constructed from medical knowledge bases to assess the medical knowledge proficiency of LLMs. Experimental results show that LLMs that are knowledge-tuned with cMedKnowQA can exhibit higher levels of accuracy in response generation compared with vanilla instruction-tuning and offer a new reliable way for the domain adaptation of LLMs.
2303.16611
Sebastien Valette Dr
Kaifeng Zou, Sylvain Faisan, Boyang Yu, S\'ebastien Valette, Hyewon Seo
4D Facial Expression Diffusion Model
null
null
10.1145/3653455
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Facial expression generation is one of the most challenging and long-sought aspects of character animation, with many interesting applications. The challenging task, traditionally having relied heavily on digital craftspersons, remains yet to be explored. In this paper, we introduce a generative framework for generating 3D facial expression sequences (i.e. 4D faces) that can be conditioned on different inputs to animate an arbitrary 3D face mesh. It is composed of two tasks: (1) Learning the generative model that is trained over a set of 3D landmark sequences, and (2) Generating 3D mesh sequences of an input facial mesh driven by the generated landmark sequences. The generative model is based on a Denoising Diffusion Probabilistic Model (DDPM), which has achieved remarkable success in generative tasks of other domains. While it can be trained unconditionally, its reverse process can still be conditioned by various condition signals. This allows us to efficiently develop several downstream tasks involving various conditional generation, by using expression labels, text, partial sequences, or simply a facial geometry. To obtain the full mesh deformation, we then develop a landmark-guided encoder-decoder to apply the geometrical deformation embedded in landmarks on a given facial mesh. Experiments show that our model has learned to generate realistic, quality expressions solely from the dataset of relatively small size, improving over the state-of-the-art methods. Videos and qualitative comparisons with other methods can be found at \url{https://github.com/ZOUKaifeng/4DFM}.
[ { "created": "Wed, 29 Mar 2023 11:50:21 GMT", "version": "v1" }, { "created": "Mon, 15 Apr 2024 13:29:47 GMT", "version": "v2" } ]
2024-04-16
[ [ "Zou", "Kaifeng", "" ], [ "Faisan", "Sylvain", "" ], [ "Yu", "Boyang", "" ], [ "Valette", "Sébastien", "" ], [ "Seo", "Hyewon", "" ] ]
Facial expression generation is one of the most challenging and long-sought aspects of character animation, with many interesting applications. This challenging task has traditionally relied heavily on digital craftspersons and remains largely unexplored. In this paper, we introduce a generative framework for generating 3D facial expression sequences (i.e. 4D faces) that can be conditioned on different inputs to animate an arbitrary 3D face mesh. It is composed of two tasks: (1) learning a generative model trained over a set of 3D landmark sequences, and (2) generating 3D mesh sequences of an input facial mesh driven by the generated landmark sequences. The generative model is based on a Denoising Diffusion Probabilistic Model (DDPM), which has achieved remarkable success in generative tasks in other domains. While it can be trained unconditionally, its reverse process can still be conditioned on various signals. This allows us to efficiently develop several downstream tasks involving various forms of conditional generation, using expression labels, text, partial sequences, or simply a facial geometry. To obtain the full mesh deformation, we then develop a landmark-guided encoder-decoder that applies the geometrical deformation embedded in the landmarks to a given facial mesh. Experiments show that our model has learned to generate realistic, high-quality expressions solely from a dataset of relatively small size, improving over the state-of-the-art methods. Videos and qualitative comparisons with other methods can be found at \url{https://github.com/ZOUKaifeng/4DFM}.
2405.03734
Xiaoning Wang
Silan Hu, Xiaoning Wang
FOKE: A Personalized and Explainable Education Framework Integrating Foundation Models, Knowledge Graphs, and Prompt Engineering
null
null
null
null
cs.HC cs.AI stat.AP
http://creativecommons.org/licenses/by/4.0/
Integrating large language models (LLMs) and knowledge graphs (KGs) holds great promise for revolutionizing intelligent education, but challenges remain in achieving personalization, interactivity, and explainability. We propose FOKE, a Forest Of Knowledge and Education framework that synergizes foundation models, knowledge graphs, and prompt engineering to address these challenges. FOKE introduces three key innovations: (1) a hierarchical knowledge forest for structured domain knowledge representation; (2) a multi-dimensional user profiling mechanism for comprehensive learner modeling; and (3) an interactive prompt engineering scheme for generating precise and tailored learning guidance. We showcase FOKE's application in programming education, homework assessment, and learning path planning, demonstrating its effectiveness and practicality. Additionally, we implement Scholar Hero, a real-world instantiation of FOKE. Our research highlights the potential of integrating foundation models, knowledge graphs, and prompt engineering to revolutionize intelligent education practices, ultimately benefiting learners worldwide. FOKE provides a principled and unified approach to harnessing cutting-edge AI technologies for personalized, interactive, and explainable educational services, paving the way for further research and development in this critical direction.
[ { "created": "Mon, 6 May 2024 15:11:05 GMT", "version": "v1" } ]
2024-05-08
[ [ "Hu", "Silan", "" ], [ "Wang", "Xiaoning", "" ] ]
Integrating large language models (LLMs) and knowledge graphs (KGs) holds great promise for revolutionizing intelligent education, but challenges remain in achieving personalization, interactivity, and explainability. We propose FOKE, a Forest Of Knowledge and Education framework that synergizes foundation models, knowledge graphs, and prompt engineering to address these challenges. FOKE introduces three key innovations: (1) a hierarchical knowledge forest for structured domain knowledge representation; (2) a multi-dimensional user profiling mechanism for comprehensive learner modeling; and (3) an interactive prompt engineering scheme for generating precise and tailored learning guidance. We showcase FOKE's application in programming education, homework assessment, and learning path planning, demonstrating its effectiveness and practicality. Additionally, we implement Scholar Hero, a real-world instantiation of FOKE. Our research highlights the potential of integrating foundation models, knowledge graphs, and prompt engineering to revolutionize intelligent education practices, ultimately benefiting learners worldwide. FOKE provides a principled and unified approach to harnessing cutting-edge AI technologies for personalized, interactive, and explainable educational services, paving the way for further research and development in this critical direction.
1612.03052
Joe Yue-Hei Ng
Joe Yue-Hei Ng, Jonghyun Choi, Jan Neumann, Larry S. Davis
ActionFlowNet: Learning Motion Representation for Action Recognition
WACV 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Even with the recent advances in convolutional neural networks (CNNs) in various visual recognition tasks, the state-of-the-art action recognition system still relies on hand-crafted motion features such as optical flow to achieve the best performance. We propose a multitask learning model, ActionFlowNet, to train a single-stream network directly from raw pixels that jointly estimates optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. We additionally provide insights into how the quality of the learned optical flow affects action recognition. Our model significantly improves action recognition accuracy by a large margin of 31% compared to state-of-the-art CNN-based action recognition models trained without external large-scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves recognition accuracy competitive with models trained on large labeled datasets such as ImageNet and Sport-1M.
[ { "created": "Fri, 9 Dec 2016 15:20:23 GMT", "version": "v1" }, { "created": "Fri, 21 Apr 2017 01:45:42 GMT", "version": "v2" }, { "created": "Fri, 16 Feb 2018 22:15:25 GMT", "version": "v3" } ]
2018-02-20
[ [ "Ng", "Joe Yue-Hei", "" ], [ "Choi", "Jonghyun", "" ], [ "Neumann", "Jan", "" ], [ "Davis", "Larry S.", "" ] ]
Even with the recent advances in convolutional neural networks (CNNs) in various visual recognition tasks, the state-of-the-art action recognition system still relies on hand-crafted motion features such as optical flow to achieve the best performance. We propose a multitask learning model, ActionFlowNet, to train a single-stream network directly from raw pixels that jointly estimates optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. We additionally provide insights into how the quality of the learned optical flow affects action recognition. Our model significantly improves action recognition accuracy by a large margin of 31% compared to state-of-the-art CNN-based action recognition models trained without external large-scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves recognition accuracy competitive with models trained on large labeled datasets such as ImageNet and Sport-1M.
1904.10386
Mario Gleirscher
Mario Gleirscher
Risk Structures: Towards Engineering Risk-aware Autonomous Systems
null
null
10.1007/s00165-021-00545-4
null
cs.SE cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Inspired by widely-used techniques of causal modelling in risk, failure, and accident analysis, this work discusses a compositional framework for risk modelling. Risk models capture fragments of the space of risky events likely to occur when operating a machine in a given environment. Moreover, one can build such models into machines such as autonomous robots, to equip them with the ability of risk-aware perception, monitoring, decision making, and control. With the notion of a risk factor as the modelling primitive, the framework provides several means to construct and shape risk models. Relational and algebraic properties are investigated and proofs support the validity and consistency of these properties over the corresponding models. Several examples throughout the discussion illustrate the applicability of the concepts. Overall, this work focuses on the qualitative treatment of risk with the outlook of transferring these results to probabilistic refinements of the discussed framework.
[ { "created": "Tue, 23 Apr 2019 15:29:00 GMT", "version": "v1" } ]
2021-12-22
[ [ "Gleirscher", "Mario", "" ] ]
Inspired by widely-used techniques of causal modelling in risk, failure, and accident analysis, this work discusses a compositional framework for risk modelling. Risk models capture fragments of the space of risky events likely to occur when operating a machine in a given environment. Moreover, one can build such models into machines such as autonomous robots, to equip them with the ability of risk-aware perception, monitoring, decision making, and control. With the notion of a risk factor as the modelling primitive, the framework provides several means to construct and shape risk models. Relational and algebraic properties are investigated and proofs support the validity and consistency of these properties over the corresponding models. Several examples throughout the discussion illustrate the applicability of the concepts. Overall, this work focuses on the qualitative treatment of risk with the outlook of transferring these results to probabilistic refinements of the discussed framework.
2301.07634
Panagiotis Meletis
Panagiotis Meletis, Gijs Dubbelman
Training Semantic Segmentation on Heterogeneous Datasets
Submitted 2021 (under review)
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We explore semantic segmentation beyond the conventional, single-dataset homogeneous training and bring forward the problem of Heterogeneous Training of Semantic Segmentation (HTSS). HTSS involves simultaneous training on multiple heterogeneous datasets, i.e. datasets with conflicting label spaces and different (weak) annotation types from the perspective of semantic segmentation. The HTSS formulation exposes deep networks to a larger and previously unexplored aggregation of information that can potentially enhance semantic segmentation in three directions: i) performance: increased segmentation metrics on seen datasets, ii) generalization: improved segmentation metrics on unseen datasets, and iii) knowledgeability: an increased number of recognizable semantic concepts. To investigate these benefits of HTSS, we propose a unified framework that incorporates heterogeneous datasets in a single-network training pipeline following the established FCN standard. Our framework first curates heterogeneous datasets to bring them into a common format and then trains a single-backbone FCN on all of them simultaneously. To achieve this, it transforms weak annotations, which are incompatible with semantic segmentation, into per-pixel labels, and hierarchizes their label spaces into a universal taxonomy. The trained HTSS models demonstrate performance and generalization gains over a wide range of datasets and extend the inference label space to hundreds of semantic classes.
[ { "created": "Wed, 18 Jan 2023 16:22:40 GMT", "version": "v1" } ]
2023-01-19
[ [ "Meletis", "Panagiotis", "" ], [ "Dubbelman", "Gijs", "" ] ]
We explore semantic segmentation beyond the conventional, single-dataset homogeneous training and bring forward the problem of Heterogeneous Training of Semantic Segmentation (HTSS). HTSS involves simultaneous training on multiple heterogeneous datasets, i.e. datasets with conflicting label spaces and different (weak) annotation types from the perspective of semantic segmentation. The HTSS formulation exposes deep networks to a larger and previously unexplored aggregation of information that can potentially enhance semantic segmentation in three directions: i) performance: increased segmentation metrics on seen datasets, ii) generalization: improved segmentation metrics on unseen datasets, and iii) knowledgeability: an increased number of recognizable semantic concepts. To investigate these benefits of HTSS, we propose a unified framework that incorporates heterogeneous datasets in a single-network training pipeline following the established FCN standard. Our framework first curates heterogeneous datasets to bring them into a common format and then trains a single-backbone FCN on all of them simultaneously. To achieve this, it transforms weak annotations, which are incompatible with semantic segmentation, into per-pixel labels, and hierarchizes their label spaces into a universal taxonomy. The trained HTSS models demonstrate performance and generalization gains over a wide range of datasets and extend the inference label space to hundreds of semantic classes.
1906.10104
Weilian Song
Weilian Song, Tawfiq Salem, Hunter Blanton, Nathan Jacobs
Remote Estimation of Free-Flow Speeds
4 pages, 4 figures, IGARSS 2019 camera-ready submission
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an automated method to estimate a road segment's free-flow speed from overhead imagery and road metadata. The free-flow speed of a road segment is the average observed vehicle speed in ideal conditions, without congestion or adverse weather. Standard practice for estimating free-flow speeds depends on several road attributes, including grade, curve, and width of the right of way. Unfortunately, many of these fine-grained labels are not always readily available and are costly to manually annotate. To compensate, our model uses a small, easy-to-obtain subset of road features along with aerial imagery to directly estimate free-flow speed with a deep convolutional neural network (CNN). We evaluate our approach on a large dataset, and demonstrate that using imagery alone performs nearly as well as the road features and that the combination of imagery with road features leads to the highest accuracy.
[ { "created": "Mon, 24 Jun 2019 17:41:46 GMT", "version": "v1" } ]
2019-06-25
[ [ "Song", "Weilian", "" ], [ "Salem", "Tawfiq", "" ], [ "Blanton", "Hunter", "" ], [ "Jacobs", "Nathan", "" ] ]
We propose an automated method to estimate a road segment's free-flow speed from overhead imagery and road metadata. The free-flow speed of a road segment is the average observed vehicle speed in ideal conditions, without congestion or adverse weather. Standard practice for estimating free-flow speeds depends on several road attributes, including grade, curve, and width of the right of way. Unfortunately, many of these fine-grained labels are not always readily available and are costly to manually annotate. To compensate, our model uses a small, easy-to-obtain subset of road features along with aerial imagery to directly estimate free-flow speed with a deep convolutional neural network (CNN). We evaluate our approach on a large dataset, and demonstrate that using imagery alone performs nearly as well as the road features and that the combination of imagery with road features leads to the highest accuracy.
1508.06710
EPTCS
Pedro R. D'Argenio (FaMAF, Universidad Nacional de C\'ordoba - CONICET), Matias David Lee (FaMAF, Universidad Nacional de C\'ordoba - CONICET), Daniel Gebler (Department of Computer Science, VU University Amsterdam)
SOS rule formats for convex and abstract probabilistic bisimulations
In Proceedings EXPRESS/SOS 2015, arXiv:1508.06347
EPTCS 190, 2015, pp. 31-45
10.4204/EPTCS.190.3
null
cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic transition system specifications (PTSSs) in the $nt \mu f\theta / nt\mu x\theta$ format provide structural operational semantics for Segala-type systems that exhibit both probabilistic and nondeterministic behavior and guarantee that bisimilarity is a congruence for all operators defined in such a format. Starting from the $nt \mu f\theta / nt\mu x\theta$ format, we obtain restricted formats that guarantee that three coarser bisimulation equivalences are congruences. We focus on (i) Segala's variant of bisimulation that considers combined transitions, which we call here "convex bisimulation"; (ii) the bisimulation equivalence resulting from considering Park & Milner's bisimulation on the usual stripped probabilistic transition system (translated into a labelled transition system), which we call here "probability obliterated bisimulation"; and (iii) a "probability abstracted bisimulation", which, like bisimulation, preserves the structure of the distributions but instead ignores the probability values. In addition, we compare these bisimulation equivalences and provide a logical characterization for each of them.
[ { "created": "Thu, 27 Aug 2015 03:21:26 GMT", "version": "v1" } ]
2015-08-28
[ [ "D'Argenio", "Pedro R.", "", "FaMAF, Universidad Nacional de Córdoba -\n CONICET" ], [ "Lee", "Matias David", "", "FaMAF, Universidad Nacional de Córdoba -\n CONICET" ], [ "Gebler", "Daniel", "", "Department of Computer Science, VU University\n Amsterdam" ] ]
Probabilistic transition system specifications (PTSSs) in the $nt \mu f\theta / nt\mu x\theta$ format provide structural operational semantics for Segala-type systems that exhibit both probabilistic and nondeterministic behavior and guarantee that bisimilarity is a congruence for all operators defined in such a format. Starting from the $nt \mu f\theta / nt\mu x\theta$ format, we obtain restricted formats that guarantee that three coarser bisimulation equivalences are congruences. We focus on (i) Segala's variant of bisimulation that considers combined transitions, which we call here "convex bisimulation"; (ii) the bisimulation equivalence resulting from considering Park & Milner's bisimulation on the usual stripped probabilistic transition system (translated into a labelled transition system), which we call here "probability obliterated bisimulation"; and (iii) a "probability abstracted bisimulation", which, like bisimulation, preserves the structure of the distributions but instead ignores the probability values. In addition, we compare these bisimulation equivalences and provide a logical characterization for each of them.
2205.00772
Jonas Falkner
Jonas K. Falkner, Daniela Thyssens, Lars Schmidt-Thieme
Large Neighborhood Search based on Neural Construction Heuristics
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
We propose a Large Neighborhood Search (LNS) approach utilizing a learned construction heuristic based on neural networks as a repair operator to solve the vehicle routing problem with time windows (VRPTW). Our method uses graph neural networks to encode the problem, auto-regressively decodes a solution, and is trained with reinforcement learning on the construction task without requiring any labels for supervision. The neural repair operator is combined with a local search routine, heuristic destruction operators, and a selection procedure applied to a small population to arrive at a sophisticated solution approach. The key idea is to use the learned model to re-construct the partially destructed solution and to introduce randomness via the destruction heuristics (or the stochastic policy itself) to effectively explore a large neighborhood.
[ { "created": "Mon, 2 May 2022 09:38:19 GMT", "version": "v1" }, { "created": "Tue, 10 May 2022 12:02:44 GMT", "version": "v2" } ]
2022-05-11
[ [ "Falkner", "Jonas K.", "" ], [ "Thyssens", "Daniela", "" ], [ "Schmidt-Thieme", "Lars", "" ] ]
We propose a Large Neighborhood Search (LNS) approach utilizing a learned construction heuristic based on neural networks as a repair operator to solve the vehicle routing problem with time windows (VRPTW). Our method uses graph neural networks to encode the problem, auto-regressively decodes a solution, and is trained with reinforcement learning on the construction task without requiring any labels for supervision. The neural repair operator is combined with a local search routine, heuristic destruction operators, and a selection procedure applied to a small population to arrive at a sophisticated solution approach. The key idea is to use the learned model to re-construct the partially destructed solution and to introduce randomness via the destruction heuristics (or the stochastic policy itself) to effectively explore a large neighborhood.
2406.07393
Peng Hu
Peng Hu, Changjiang Gao, Ruiqi Gao, Jiajun Chen, and Shujian Huang
Limited Out-of-Context Knowledge Reasoning in Large Language Models
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have demonstrated strong capabilities as knowledge bases and significant in-context reasoning capabilities. However, previous work challenges their out-of-context reasoning ability, i.e., the ability to infer information from their training data, instead of from the context or prompt. This paper focuses on a significant facet of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge. We designed a synthetic dataset with seven representative OCKR tasks to systematically assess the OCKR capabilities of LLMs. Using this dataset, we evaluated the LLaMA2-13B-chat model and discovered that its proficiency in this aspect is limited, regardless of whether the knowledge is trained in separate or adjacent training settings. Moreover, training the model to reason with complete reasoning data did not result in significant improvement. Training the model to perform explicit knowledge retrieval helps in only one of the tasks, indicating that the model's limited OCKR capabilities are due to difficulties in retrieving relevant knowledge. Furthermore, we treat cross-lingual knowledge transfer as a distinct form of OCKR and evaluate this ability. Our results show that the evaluated model also exhibits limited ability in transferring knowledge across languages. The dataset used in this study is available at https://github.com/NJUNLP/ID-OCKR.
[ { "created": "Tue, 11 Jun 2024 15:58:59 GMT", "version": "v1" }, { "created": "Mon, 24 Jun 2024 14:59:54 GMT", "version": "v2" } ]
2024-06-25
[ [ "Hu", "Peng", "" ], [ "Gao", "Changjiang", "" ], [ "Gao", "Ruiqi", "" ], [ "Chen", "Jiajun", "" ], [ "Huang", "Shujian", "" ] ]
Large Language Models (LLMs) have demonstrated strong capabilities as knowledge bases and significant in-context reasoning capabilities. However, previous work challenges their out-of-context reasoning ability, i.e., the ability to infer information from their training data, instead of from the context or prompt. This paper focuses on a significant facet of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge. We designed a synthetic dataset with seven representative OCKR tasks to systematically assess the OCKR capabilities of LLMs. Using this dataset, we evaluated the LLaMA2-13B-chat model and discovered that its proficiency in this aspect is limited, regardless of whether the knowledge is trained in separate or adjacent training settings. Moreover, training the model to reason with complete reasoning data did not result in significant improvement. Training the model to perform explicit knowledge retrieval helps in only one of the tasks, indicating that the model's limited OCKR capabilities are due to difficulties in retrieving relevant knowledge. Furthermore, we treat cross-lingual knowledge transfer as a distinct form of OCKR and evaluate this ability. Our results show that the evaluated model also exhibits limited ability in transferring knowledge across languages. The dataset used in this study is available at https://github.com/NJUNLP/ID-OCKR.
2203.08737
Giorgos Armeniakos
Giorgos Armeniakos, Georgios Zervakis, Dimitrios Soudris, J\"org Henkel
Hardware Approximate Techniques for Deep Neural Network Accelerators: A Survey
This paper has been accepted by ACM Computing Surveys (CSUR), 2022
ACM Computing Surveys 2022
10.1145/3527156
null
cs.AR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Neural Networks (DNNs) are very popular because of their high performance in various cognitive tasks in Machine Learning (ML). Recent advancements in DNNs have achieved beyond-human accuracy in many tasks, but at the cost of high computational complexity. To enable efficient execution of DNN inference, more and more research works therefore exploit the inherent error resilience of DNNs and employ Approximate Computing (AC) principles to address the elevated energy demands of DNN accelerators. This article provides a comprehensive survey and analysis of hardware approximation techniques for DNN accelerators. First, we analyze the state of the art and, by identifying approximation families, cluster the respective works with respect to the approximation type. Next, we analyze the complexity of the performed evaluations (with respect to the dataset and DNN size) to assess the efficiency, the potential, and the limitations of approximate DNN accelerators. Moreover, a broad discussion is provided regarding error metrics that are more suitable for designing approximate units for DNN accelerators, as well as accuracy recovery approaches that are tailored to DNN inference. Finally, we present how Approximate Computing for DNN accelerators can go beyond energy efficiency and address reliability and security issues as well.
[ { "created": "Wed, 16 Mar 2022 16:33:13 GMT", "version": "v1" } ]
2022-03-18
[ [ "Armeniakos", "Giorgos", "" ], [ "Zervakis", "Georgios", "" ], [ "Soudris", "Dimitrios", "" ], [ "Henkel", "Jörg", "" ] ]
Deep Neural Networks (DNNs) are very popular because of their high performance in various cognitive tasks in Machine Learning (ML). Recent advancements in DNNs have achieved beyond-human accuracy in many tasks, but at the cost of high computational complexity. To enable efficient execution of DNN inference, more and more research works therefore exploit the inherent error resilience of DNNs and employ Approximate Computing (AC) principles to address the elevated energy demands of DNN accelerators. This article provides a comprehensive survey and analysis of hardware approximation techniques for DNN accelerators. First, we analyze the state of the art and, by identifying approximation families, cluster the respective works with respect to the approximation type. Next, we analyze the complexity of the performed evaluations (with respect to the dataset and DNN size) to assess the efficiency, the potential, and the limitations of approximate DNN accelerators. Moreover, a broad discussion is provided regarding error metrics that are more suitable for designing approximate units for DNN accelerators, as well as accuracy recovery approaches that are tailored to DNN inference. Finally, we present how Approximate Computing for DNN accelerators can go beyond energy efficiency and address reliability and security issues as well.
2401.15966
Kenta Izumi
Kenta Izumi, Hiroki Tanaka, Kazuhiro Shidara, Hiroyoshi Adachi, Daisuke Kanayama, Takashi Kudo, and Satoshi Nakamura
Response Generation for Cognitive Behavioral Therapy with Large Language Models: Comparative Study with Socratic Questioning
Accepted by IWSDS2024
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Dialogue systems controlled by predefined or rule-based scenarios derived from counseling techniques, such as cognitive behavioral therapy (CBT), play an important role in mental health apps. Despite the need for responsible responses, it is conceivable that using the newly emerging LLMs to generate contextually relevant utterances will enhance these apps. In this study, we construct dialogue modules based on a CBT scenario focused on conventional Socratic questioning using two kinds of LLMs: a Transformer-based dialogue model further trained with a social media empathetic counseling dataset, provided by Osaka Prefecture (OsakaED), and GPT-4, a state-of-the-art LLM created by OpenAI. By comparing systems that use LLM-generated responses with those that do not, we investigate the impact of generated responses on subjective evaluations such as mood change, cognitive change, and dialogue quality (e.g., empathy). As a result, no notable improvements are observed when using the OsakaED model. When using GPT-4, the amount of mood change, empathy, and other dialogue qualities improve significantly. Results suggest that GPT-4 possesses a high counseling ability. However, they also indicate that even when using a dialogue model trained with a human counseling dataset, it does not necessarily yield better outcomes compared to scenario-based dialogues. While presenting LLM-generated responses, including GPT-4, and having them interact directly with users in real-life mental health care services may raise ethical issues, it is still possible for human professionals to produce example responses or response templates using LLMs in advance in systems that use rules, scenarios, or example responses.
[ { "created": "Mon, 29 Jan 2024 08:53:41 GMT", "version": "v1" } ]
2024-01-30
[ [ "Izumi", "Kenta", "" ], [ "Tanaka", "Hiroki", "" ], [ "Shidara", "Kazuhiro", "" ], [ "Adachi", "Hiroyoshi", "" ], [ "Kanayama", "Daisuke", "" ], [ "Kudo", "Takashi", "" ], [ "Nakamura", "Satoshi", "" ] ]
Dialogue systems controlled by predefined or rule-based scenarios derived from counseling techniques, such as cognitive behavioral therapy (CBT), play an important role in mental health apps. Despite the need for responsible responses, it is conceivable that using the newly emerging LLMs to generate contextually relevant utterances will enhance these apps. In this study, we construct dialogue modules based on a CBT scenario focused on conventional Socratic questioning using two kinds of LLMs: a Transformer-based dialogue model further trained with a social media empathetic counseling dataset, provided by Osaka Prefecture (OsakaED), and GPT-4, a state-of-the-art LLM created by OpenAI. By comparing systems that use LLM-generated responses with those that do not, we investigate the impact of generated responses on subjective evaluations such as mood change, cognitive change, and dialogue quality (e.g., empathy). As a result, no notable improvements are observed when using the OsakaED model. When using GPT-4, the amount of mood change, empathy, and other dialogue qualities improve significantly. Results suggest that GPT-4 possesses a high counseling ability. However, they also indicate that even when using a dialogue model trained with a human counseling dataset, it does not necessarily yield better outcomes compared to scenario-based dialogues. While presenting LLM-generated responses, including GPT-4, and having them interact directly with users in real-life mental health care services may raise ethical issues, it is still possible for human professionals to produce example responses or response templates using LLMs in advance in systems that use rules, scenarios, or example responses.
0708.1150
Marko Antonio Rodriguez
Marko A. Rodriguez, Johah Bollen, Herbert Van de Sompel
A Practical Ontology for the Large-Scale Modeling of Scholarly Artifacts and their Usage
null
Proceedings of the IEEE/ACM Joint Conference on Digital Libraries (JCDL'07), pp. 278-287, 2007
10.1145/1255175.1255229
null
cs.DL cs.AI
null
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
[ { "created": "Wed, 8 Aug 2007 17:06:55 GMT", "version": "v1" } ]
2007-08-09
[ [ "Rodriguez", "Marko A.", "" ], [ "Bollen", "Johan", "" ], [ "Van de Sompel", "Herbert", "" ] ]
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
1905.09514
Min Qiu
Min Qiu, Yu-Chih Huang, Jinhong Yuan
Downlink Non-Orthogonal Multiple Access without SIC for Block Fading Channels: An Algebraic Rotation Approach
15 pages, 8 figures, accepted by IEEE Transactions on Wireless Communications
null
10.1109/TWC.2019.2919292
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the problem of downlink non-orthogonal multiple access (NOMA) over block fading channels. For the single antenna case, we propose a class of NOMA schemes where all the users' signals are mapped into $n$-dimensional constellations corresponding to the same algebraic lattices from a number field, allowing every user to attain full diversity gain with single-user decoding, i.e., no successive interference cancellation (SIC). The minimum product distances of the proposed scheme with an arbitrary power allocation factor are analyzed and their upper bounds are derived. Within the proposed class of schemes, we also identify a special family of NOMA schemes based on lattice partitions of the underlying ideal lattices, whose minimum product distances can be easily controlled. Our analysis shows that among the proposed schemes, the lattice-partition-based schemes achieve the largest minimum product distances of the superimposed constellations, which are closely related to the symbol error rates for receivers with single-user decoding. Simulation results are presented to verify our analysis and to show the effectiveness of the proposed schemes as compared to benchmark NOMA schemes. Extensions of our design to the multi-antenna case are also considered, where similar analysis and results are presented.
[ { "created": "Thu, 23 May 2019 07:45:07 GMT", "version": "v1" } ]
2019-06-18
[ [ "Qiu", "Min", "" ], [ "Huang", "Yu-Chih", "" ], [ "Yuan", "Jinhong", "" ] ]
In this paper, we investigate the problem of downlink non-orthogonal multiple access (NOMA) over block fading channels. For the single antenna case, we propose a class of NOMA schemes where all the users' signals are mapped into $n$-dimensional constellations corresponding to the same algebraic lattices from a number field, allowing every user to attain full diversity gain with single-user decoding, i.e., no successive interference cancellation (SIC). The minimum product distances of the proposed scheme with an arbitrary power allocation factor are analyzed and their upper bounds are derived. Within the proposed class of schemes, we also identify a special family of NOMA schemes based on lattice partitions of the underlying ideal lattices, whose minimum product distances can be easily controlled. Our analysis shows that among the proposed schemes, the lattice-partition-based schemes achieve the largest minimum product distances of the superimposed constellations, which are closely related to the symbol error rates for receivers with single-user decoding. Simulation results are presented to verify our analysis and to show the effectiveness of the proposed schemes as compared to benchmark NOMA schemes. Extensions of our design to the multi-antenna case are also considered, where similar analysis and results are presented.
2002.07418
Peng Zhang
Peng Zhang, Jianye Hao, Weixun Wang, Hongyao Tang, Yi Ma, Yihai Duan, Yan Zheng
KoGuN: Accelerating Deep Reinforcement Learning via Integrating Human Suboptimal Knowledge
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning agents usually learn from scratch, which requires a large number of interactions with the environment. This is quite different from the learning process of humans. When faced with a new task, humans naturally draw on common sense and prior knowledge to derive an initial policy and guide the learning process afterwards. Although the prior knowledge may not be fully applicable to the new task, the learning process is significantly sped up, since the initial policy ensures a quick start of learning and intermediate guidance avoids unnecessary exploration. Taking this inspiration, we propose the knowledge guided policy network (KoGuN), a novel framework that combines human prior suboptimal knowledge with reinforcement learning. Our framework consists of a fuzzy rule controller to represent human knowledge and a refine module to fine-tune suboptimal prior knowledge. The proposed framework is end-to-end and can be combined with existing policy-based reinforcement learning algorithms. We conduct experiments on both discrete and continuous control tasks. The empirical results show that our approach, which combines human suboptimal knowledge and RL, achieves significant improvement in the learning efficiency of flat RL algorithms, even with very low-performance human prior knowledge.
[ { "created": "Tue, 18 Feb 2020 07:58:27 GMT", "version": "v1" }, { "created": "Thu, 21 May 2020 07:02:41 GMT", "version": "v2" } ]
2020-05-22
[ [ "Zhang", "Peng", "" ], [ "Hao", "Jianye", "" ], [ "Wang", "Weixun", "" ], [ "Tang", "Hongyao", "" ], [ "Ma", "Yi", "" ], [ "Duan", "Yihai", "" ], [ "Zheng", "Yan", "" ] ]
Reinforcement learning agents usually learn from scratch, which requires a large number of interactions with the environment. This is quite different from the learning process of humans. When faced with a new task, humans naturally draw on common sense and prior knowledge to derive an initial policy and guide the learning process afterwards. Although the prior knowledge may not be fully applicable to the new task, the learning process is significantly sped up, since the initial policy ensures a quick start of learning and intermediate guidance avoids unnecessary exploration. Taking this inspiration, we propose the knowledge guided policy network (KoGuN), a novel framework that combines human prior suboptimal knowledge with reinforcement learning. Our framework consists of a fuzzy rule controller to represent human knowledge and a refine module to fine-tune suboptimal prior knowledge. The proposed framework is end-to-end and can be combined with existing policy-based reinforcement learning algorithms. We conduct experiments on both discrete and continuous control tasks. The empirical results show that our approach, which combines human suboptimal knowledge and RL, achieves significant improvement in the learning efficiency of flat RL algorithms, even with very low-performance human prior knowledge.
2108.01154
Ali Raheem Mandeel
Ali Raheem Mandeel, Mohammed Salah Al-Radhi, Tam\'as G\'abor Csap\'o
Speaker Adaptation with Continuous Vocoder-based DNN-TTS
10 pages, 3 figures, 23RD INTERNATIONAL CONFERENCE ON SPEECH AND COMPUTER SPECOM 2021
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Traditional vocoder-based statistical parametric speech synthesis can be advantageous in applications that require low computational complexity. Recent neural vocoders, which can produce high naturalness, still cannot fulfill the requirement of being real-time during synthesis. In this paper, we experiment with our earlier continuous vocoder, in which the excitation is modeled with two one-dimensional parameters: continuous F0 and Maximum Voiced Frequency. We show on the data of 9 speakers that an average voice can be trained for DNN-TTS, and that speaker adaptation is feasible with 400 utterances (about 14 minutes). Objective experiments support that the quality of speaker adaptation with Continuous Vocoder-based DNN-TTS is similar to the quality of speaker adaptation with a WORLD Vocoder-based baseline.
[ { "created": "Mon, 2 Aug 2021 20:08:07 GMT", "version": "v1" } ]
2021-08-04
[ [ "Mandeel", "Ali Raheem", "" ], [ "Al-Radhi", "Mohammed Salah", "" ], [ "Csapó", "Tamás Gábor", "" ] ]
Traditional vocoder-based statistical parametric speech synthesis can be advantageous in applications that require low computational complexity. Recent neural vocoders, which can produce high naturalness, still cannot fulfill the requirement of being real-time during synthesis. In this paper, we experiment with our earlier continuous vocoder, in which the excitation is modeled with two one-dimensional parameters: continuous F0 and Maximum Voiced Frequency. We show on the data of 9 speakers that an average voice can be trained for DNN-TTS, and that speaker adaptation is feasible with 400 utterances (about 14 minutes). Objective experiments support that the quality of speaker adaptation with Continuous Vocoder-based DNN-TTS is similar to the quality of speaker adaptation with a WORLD Vocoder-based baseline.
1609.08154
Zhiyong Shan
Zhiyong Shan, Yu-fang Sun
Implementing RBAC model in An Operating System Kernel
in Chinese
null
null
null
cs.OS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the implementation of an operating system oriented RBAC model is discussed. Firstly, on the basis of the RBAC96 model, a new RBAC model named OSR is presented. Secondly, the OSR model is enforced in the RFSOS kernel by integrating the GFAC method and the Capability mechanism together. All parts of the OSR implementation are described in detail.
[ { "created": "Mon, 26 Sep 2016 17:36:18 GMT", "version": "v1" } ]
2016-09-28
[ [ "Shan", "Zhiyong", "" ], [ "Sun", "Yu-fang", "" ] ]
In this paper, the implementation of an operating system oriented RBAC model is discussed. Firstly, on the basis of the RBAC96 model, a new RBAC model named OSR is presented. Secondly, the OSR model is enforced in the RFSOS kernel by integrating the GFAC method and the Capability mechanism together. All parts of the OSR implementation are described in detail.
1903.00172
Yoshihiko Suhara
Nikita Bhutani, Yoshihiko Suhara, Wang-Chiew Tan, Alon Halevy, H. V. Jagadish
Open Information Extraction from Question-Answer Pairs
NAACL 2019
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open Information Extraction (OpenIE) extracts meaningful structured tuples from free-form text. Most previous work on OpenIE considers extracting data from one sentence at a time. We describe NeurON, a system for extracting tuples from question-answer pairs. Since real questions and answers often contain precisely the information that users care about, such information is particularly desirable to extend a knowledge base with. NeurON addresses several challenges. First, an answer text is often hard to understand without knowing the question, and second, relevant information can span multiple sentences. To address these, NeurON formulates extraction as a multi-source sequence-to-sequence learning task, wherein it combines distributed representations of a question and an answer to generate knowledge facts. We describe experiments on two real-world datasets that demonstrate that NeurON can find a significant number of new and interesting facts to extend a knowledge base compared to state-of-the-art OpenIE methods.
[ { "created": "Fri, 1 Mar 2019 06:26:50 GMT", "version": "v1" }, { "created": "Sat, 6 Apr 2019 08:56:40 GMT", "version": "v2" } ]
2019-04-09
[ [ "Bhutani", "Nikita", "" ], [ "Suhara", "Yoshihiko", "" ], [ "Tan", "Wang-Chiew", "" ], [ "Halevy", "Alon", "" ], [ "Jagadish", "H. V.", "" ] ]
Open Information Extraction (OpenIE) extracts meaningful structured tuples from free-form text. Most previous work on OpenIE considers extracting data from one sentence at a time. We describe NeurON, a system for extracting tuples from question-answer pairs. Since real questions and answers often contain precisely the information that users care about, such information is particularly desirable to extend a knowledge base with. NeurON addresses several challenges. First, an answer text is often hard to understand without knowing the question, and second, relevant information can span multiple sentences. To address these, NeurON formulates extraction as a multi-source sequence-to-sequence learning task, wherein it combines distributed representations of a question and an answer to generate knowledge facts. We describe experiments on two real-world datasets that demonstrate that NeurON can find a significant number of new and interesting facts to extend a knowledge base compared to state-of-the-art OpenIE methods.
1910.09579
Steven W.T. Cheung
Steven W.T. Cheung, Dan R. Ghica, Koko Muroya
Transparent Synchronous Dataflow
null
The Art, Science, and Engineering of Programming, 2021, Vol. 5, Issue 3, Article 12
10.22152/programming-journal.org/2021/5/12
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dataflow programming is a popular and convenient programming paradigm in systems modelling, optimisation, and machine learning. It has a number of advantages; for instance, the lack of control flow allows computation to be carried out in parallel as well as on distributed machines. More recently the idea of dataflow graphs has also been brought into the design of various deep learning frameworks. They facilitate an easy and efficient implementation of automatic differentiation, which is the heart of the modern deep learning paradigm. [abstract abridged]
[ { "created": "Mon, 21 Oct 2019 18:12:46 GMT", "version": "v1" }, { "created": "Mon, 1 Mar 2021 21:12:23 GMT", "version": "v2" } ]
2021-03-03
[ [ "Cheung", "Steven W. T.", "" ], [ "Ghica", "Dan R.", "" ], [ "Muroya", "Koko", "" ] ]
Dataflow programming is a popular and convenient programming paradigm in systems modelling, optimisation, and machine learning. It has a number of advantages; for instance, the lack of control flow allows computation to be carried out in parallel as well as on distributed machines. More recently the idea of dataflow graphs has also been brought into the design of various deep learning frameworks. They facilitate an easy and efficient implementation of automatic differentiation, which is the heart of the modern deep learning paradigm. [abstract abridged]
2004.03424
Joon Sik Kim
Joon Sik Kim, Jiahao Chen, Ameet Talwalkar
FACT: A Diagnostic for Group Fairness Trade-offs
Accepted to International Conference on Machine Learning (ICML 2020)
null
null
null
cs.LG cs.CY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Group fairness notions, a class of fairness criteria that measure how differently groups of individuals are treated according to their protected attributes, have been shown to conflict with one another, often at a necessary cost in the model's predictive performance. We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness. We observe that the majority of group fairness notions can be expressed via the fairness-confusion tensor, which is the confusion matrix split according to the protected attribute values. We frame several optimization problems that directly optimize both accuracy and fairness objectives over the elements of this tensor, which yield a general perspective for understanding multiple trade-offs, including group fairness incompatibilities. It also suggests an alternate post-processing method for designing fair classifiers. On synthetic and real datasets, we demonstrate the use cases of our diagnostic, particularly for understanding the trade-off landscape between accuracy and fairness.
[ { "created": "Tue, 7 Apr 2020 14:15:51 GMT", "version": "v1" }, { "created": "Wed, 8 Apr 2020 17:55:32 GMT", "version": "v2" }, { "created": "Tue, 7 Jul 2020 17:34:11 GMT", "version": "v3" } ]
2020-07-08
[ [ "Kim", "Joon Sik", "" ], [ "Chen", "Jiahao", "" ], [ "Talwalkar", "Ameet", "" ] ]
Group fairness notions, a class of fairness criteria that measure how differently groups of individuals are treated according to their protected attributes, have been shown to conflict with one another, often at a necessary cost in the model's predictive performance. We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness. We observe that the majority of group fairness notions can be expressed via the fairness-confusion tensor, which is the confusion matrix split according to the protected attribute values. We frame several optimization problems that directly optimize both accuracy and fairness objectives over the elements of this tensor, which yield a general perspective for understanding multiple trade-offs, including group fairness incompatibilities. It also suggests an alternate post-processing method for designing fair classifiers. On synthetic and real datasets, we demonstrate the use cases of our diagnostic, particularly for understanding the trade-off landscape between accuracy and fairness.
2307.08233
Liu Liu
Liu Liu, Shuaifeng Zhi, Zhenhua Du, Li Liu, Xinyu Zhang, Kai Huo, and Weidong Jiang
ROFusion: Efficient Object Detection using Hybrid Point-wise Radar-Optical Fusion
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radars, due to their robustness to adverse weather conditions and ability to measure object motions, have served in autonomous driving and intelligent agents for years. However, Radar-based perception suffers from its unintuitive sensing data, which lacks the semantic and structural information of scenes. To tackle this problem, camera and Radar sensor fusion has been investigated as a trending strategy with low cost, high reliability and easy maintenance. While most recent works explore how to exploit Radar point clouds and images, the rich contextual information within Radar observations is discarded. In this paper, we propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios. The framework benefits from dense contextual information from both the range-doppler spectrum and images, which are integrated to learn a multi-modal feature representation. Furthermore, we propose a novel local coordinate formulation, tackling the object detection task in an object-centric coordinate system. Extensive results show that with the information gained from optical images, we achieve leading performance in object detection (97.69\% recall) compared to the recent state-of-the-art method FFT-RadNet (82.86\% recall). Ablation studies verify the key design choices and the practicability of our approach given machine-generated imperfect detections. The code will be available at https://github.com/LiuLiu-55/ROFusion.
[ { "created": "Mon, 17 Jul 2023 04:25:46 GMT", "version": "v1" } ]
2023-07-18
[ [ "Liu", "Liu", "" ], [ "Zhi", "Shuaifeng", "" ], [ "Du", "Zhenhua", "" ], [ "Liu", "Li", "" ], [ "Zhang", "Xinyu", "" ], [ "Huo", "Kai", "" ], [ "Jiang", "Weidong", "" ] ]
Radars, due to their robustness to adverse weather conditions and ability to measure object motions, have served in autonomous driving and intelligent agents for years. However, Radar-based perception suffers from its unintuitive sensing data, which lacks the semantic and structural information of scenes. To tackle this problem, camera and Radar sensor fusion has been investigated as a trending strategy with low cost, high reliability and easy maintenance. While most recent works explore how to exploit Radar point clouds and images, the rich contextual information within Radar observations is discarded. In this paper, we propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios. The framework benefits from dense contextual information from both the range-doppler spectrum and images, which are integrated to learn a multi-modal feature representation. Furthermore, we propose a novel local coordinate formulation, tackling the object detection task in an object-centric coordinate system. Extensive results show that with the information gained from optical images, we achieve leading performance in object detection (97.69\% recall) compared to the recent state-of-the-art method FFT-RadNet (82.86\% recall). Ablation studies verify the key design choices and the practicability of our approach given machine-generated imperfect detections. The code will be available at https://github.com/LiuLiu-55/ROFusion.
cs/0106006
Aspassia Daskalopulu
Aspassia Daskalopulu, Marek Sergot
A Constraint-Driven System for Contract Assembly
null
Proc. 5th International Conference on Artificial Intelligence and Law, ACM Press, pp. 62-69, 1995
null
null
cs.AI
null
We present an approach for modelling the structure and coarse content of legal documents with a view to providing automated support for the drafting of contracts and contract database retrieval. The approach is designed to be applicable where contract drafting is based on model-form contracts or on existing examples of a similar type. The main features of the approach are: (1) the representation addresses the structure and the interrelationships between the constituent parts of contracts, but not the text of the document itself; (2) the representation of documents is separated from the mechanisms that manipulate it; and (3) the drafting process is subject to a collection of explicitly stated constraints that govern the structure of the documents. We describe the representation of document instances and of 'generic documents', which are data structures used to drive the creation of new document instances, and we show extracts from a sample session to illustrate the features of a prototype system implemented in MacProlog.
[ { "created": "Thu, 7 Jun 2001 14:27:30 GMT", "version": "v1" } ]
2007-05-23
[ [ "Daskalopulu", "Aspassia", "" ], [ "Sergot", "Marek", "" ] ]
We present an approach for modelling the structure and coarse content of legal documents with a view to providing automated support for the drafting of contracts and contract database retrieval. The approach is designed to be applicable where contract drafting is based on model-form contracts or on existing examples of a similar type. The main features of the approach are: (1) the representation addresses the structure and the interrelationships between the constituent parts of contracts, but not the text of the document itself; (2) the representation of documents is separated from the mechanisms that manipulate it; and (3) the drafting process is subject to a collection of explicitly stated constraints that govern the structure of the documents. We describe the representation of document instances and of 'generic documents', which are data structures used to drive the creation of new document instances, and we show extracts from a sample session to illustrate the features of a prototype system implemented in MacProlog.
2210.03154
Loukas Ilias
Konstantinos Psychogyios, Loukas Ilias, Dimitris Askounis
Comparison of Missing Data Imputation Methods using the Framingham Heart study dataset
2022 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI)
null
10.1109/BHI56158.2022.9926882
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Cardiovascular disease (CVD) is a class of diseases that involve the heart or blood vessels and, according to the World Health Organization, is the leading cause of death worldwide. EHR data regarding this case, as well as medical cases in general, contain missing values very frequently. The percentage of missingness may vary and is linked with instrument errors, manual data entry procedures, etc. Even though the missing rate is usually significant, in many cases the missing value imputation part is handled poorly, either with case-deletion or with simple statistical approaches such as mode and median imputation. These methods are known to introduce significant bias, since they do not account for the relationships between the dataset's variables. Within the medical framework, many datasets consist of lab tests or patient medical tests, where these relationships are present and strong. To address these limitations, in this paper we test and modify state-of-the-art missing value imputation methods based on Generative Adversarial Networks (GANs) and Autoencoders. The evaluation is accomplished for both the tasks of data imputation and post-imputation prediction. Regarding the imputation task, we achieve improvements of 0.20 and 7.00% in normalised Root Mean Squared Error (RMSE) and Area Under the Receiver Operating Characteristic Curve (AUROC), respectively. In terms of the post-imputation prediction task, our models outperform the standard approaches by 2.50% in F1-score.
[ { "created": "Thu, 6 Oct 2022 18:35:08 GMT", "version": "v1" }, { "created": "Mon, 10 Oct 2022 07:22:00 GMT", "version": "v2" } ]
2022-11-08
[ [ "Psychogyios", "Konstantinos", "" ], [ "Ilias", "Loukas", "" ], [ "Askounis", "Dimitris", "" ] ]
Cardiovascular disease (CVD) is a class of diseases that involve the heart or blood vessels and, according to the World Health Organization, is the leading cause of death worldwide. EHR data regarding this case, as well as medical cases in general, contain missing values very frequently. The percentage of missingness may vary and is linked with instrument errors, manual data entry procedures, etc. Even though the missing rate is usually significant, in many cases the missing value imputation part is handled poorly, either with case-deletion or with simple statistical approaches such as mode and median imputation. These methods are known to introduce significant bias, since they do not account for the relationships between the dataset's variables. Within the medical framework, many datasets consist of lab tests or patient medical tests, where these relationships are present and strong. To address these limitations, in this paper we test and modify state-of-the-art missing value imputation methods based on Generative Adversarial Networks (GANs) and Autoencoders. The evaluation is accomplished for both the tasks of data imputation and post-imputation prediction. Regarding the imputation task, we achieve improvements of 0.20 and 7.00% in normalised Root Mean Squared Error (RMSE) and Area Under the Receiver Operating Characteristic Curve (AUROC), respectively. In terms of the post-imputation prediction task, our models outperform the standard approaches by 2.50% in F1-score.
2405.17914
Xiumei Deng
Xiumei Deng, Jun Li, Long Shi, Kang Wei, Ming Ding, Yumeng Shao, Wen Chen, Shi Jin
Trustworthy DNN Partition for Blockchain-enabled Digital Twin in Wireless IIoT Networks
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Digital twin (DT) has emerged as a promising solution to enhance manufacturing efficiency in industrial Internet of Things (IIoT) networks. To promote the efficiency and trustworthiness of DT for wireless IIoT networks, we propose a blockchain-enabled DT (B-DT) framework that employs a deep neural network (DNN) partitioning technique and a reputation-based consensus mechanism, wherein the DTs maintained at the gateway side execute DNN inference tasks using the data collected from their associated IIoT devices. First, we employ a DNN partitioning technique to offload the top-layer DNN inference tasks to the access point (AP) side, which alleviates the computation burden at the gateway side and thereby improves the efficiency of DNN inference. Second, we propose a reputation-based consensus mechanism that integrates Proof of Work (PoW) and Proof of Stake (PoS). Specifically, the proposed consensus mechanism evaluates the off-chain reputation of each AP according to its computation resource contributions to the DNN inference tasks, and utilizes the off-chain reputation as a stake to adjust the block generation difficulty. Third, we formulate a stochastic optimization problem of communication resource (i.e., partition point) and computation resource allocation (i.e., computation frequency of APs for top-layer DNN inference and block generation) to minimize system latency under time-varying channel states and long-term constraints on off-chain reputation, and solve the problem using the Lyapunov optimization method. Experimental results show that the proposed dynamic DNN partitioning and resource allocation (DPRA) algorithm outperforms the baselines in terms of reducing the overall latency while guaranteeing the trustworthiness of the B-DT system.
[ { "created": "Tue, 28 May 2024 07:34:12 GMT", "version": "v1" } ]
2024-05-29
[ [ "Deng", "Xiumei", "" ], [ "Li", "Jun", "" ], [ "Shi", "Long", "" ], [ "Wei", "Kang", "" ], [ "Ding", "Ming", "" ], [ "Shao", "Yumeng", "" ], [ "Chen", "Wen", "" ], [ "Jin", "Shi", "" ] ]
Digital twin (DT) has emerged as a promising solution to enhance manufacturing efficiency in industrial Internet of Things (IIoT) networks. To promote the efficiency and trustworthiness of DT for wireless IIoT networks, we propose a blockchain-enabled DT (B-DT) framework that employs a deep neural network (DNN) partitioning technique and a reputation-based consensus mechanism, wherein the DTs maintained at the gateway side execute DNN inference tasks using the data collected from their associated IIoT devices. First, we employ a DNN partitioning technique to offload the top-layer DNN inference tasks to the access point (AP) side, which alleviates the computation burden at the gateway side and thereby improves the efficiency of DNN inference. Second, we propose a reputation-based consensus mechanism that integrates Proof of Work (PoW) and Proof of Stake (PoS). Specifically, the proposed consensus mechanism evaluates the off-chain reputation of each AP according to its computation resource contributions to the DNN inference tasks, and utilizes the off-chain reputation as a stake to adjust the block generation difficulty. Third, we formulate a stochastic optimization problem of communication resource (i.e., partition point) and computation resource allocation (i.e., computation frequency of APs for top-layer DNN inference and block generation) to minimize system latency under time-varying channel states and long-term constraints on off-chain reputation, and solve the problem using the Lyapunov optimization method. Experimental results show that the proposed dynamic DNN partitioning and resource allocation (DPRA) algorithm outperforms the baselines in terms of reducing the overall latency while guaranteeing the trustworthiness of the B-DT system.
1906.12140
Swanand Kadhe
Swanand Kadhe, Jichan Chung, and Kannan Ramchandran
SeF: A Secure Fountain Architecture for Slashing Storage Costs in Blockchains
null
null
null
null
cs.CR cs.DC cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Full nodes, which synchronize the entire blockchain history and independently validate all the blocks, form the backbone of any blockchain network by playing a vital role in ensuring security properties. On the other hand, a user running a full node needs to pay a heavy price in terms of storage costs. E.g., the Bitcoin blockchain size has grown to over 215GB, in spite of its low throughput. The ledger size for a high throughput blockchain Ripple has already reached 9TB, and it is growing at an astonishing rate of 12GB per day! In this paper, we propose an architecture based on 'fountain codes', a class of erasure codes, that enables any full node to 'encode' validated blocks into a small number of 'coded blocks', thereby reducing its storage costs by orders of magnitude. In particular, our proposed "Secure Fountain (SeF)" architecture can achieve a near-optimal trade-off between the storage savings per node and the 'bootstrap cost' in terms of the number of (honest) storage-constrained nodes a new node needs to contact to recover the blockchain. A key technical innovation in SeF codes is to make fountain codes secure against adversarial nodes that can provide maliciously formed coded blocks. Our idea is to use the header-chain as a 'side-information' to check whether a coded block is maliciously formed while it is getting decoded. Further, the 'rateless property' of fountain codes helps in achieving high decentralization and scalability. Our experiments demonstrate that SeF codes tuned to achieve 1000x storage savings enable full nodes to encode the 191GB Bitcoin blockchain into 195MB on average. A new node can recover the blockchain from an arbitrary set of storage-constrained nodes as long as the set contains ~1100 honest nodes on average. Note that for a 1000x storage savings, the fundamental bound on the number of honest nodes to contact is 1000: we need about 10% more in practice.
[ { "created": "Fri, 28 Jun 2019 11:32:33 GMT", "version": "v1" } ]
2019-07-01
[ [ "Kadhe", "Swanand", "" ], [ "Chung", "Jichan", "" ], [ "Ramchandran", "Kannan", "" ] ]
Full nodes, which synchronize the entire blockchain history and independently validate all the blocks, form the backbone of any blockchain network by playing a vital role in ensuring security properties. On the other hand, a user running a full node needs to pay a heavy price in terms of storage costs. E.g., the Bitcoin blockchain size has grown to over 215GB, in spite of its low throughput. The ledger size for a high throughput blockchain Ripple has already reached 9TB, and it is growing at an astonishing rate of 12GB per day! In this paper, we propose an architecture based on 'fountain codes', a class of erasure codes, that enables any full node to 'encode' validated blocks into a small number of 'coded blocks', thereby reducing its storage costs by orders of magnitude. In particular, our proposed "Secure Fountain (SeF)" architecture can achieve a near-optimal trade-off between the storage savings per node and the 'bootstrap cost' in terms of the number of (honest) storage-constrained nodes a new node needs to contact to recover the blockchain. A key technical innovation in SeF codes is to make fountain codes secure against adversarial nodes that can provide maliciously formed coded blocks. Our idea is to use the header-chain as a 'side-information' to check whether a coded block is maliciously formed while it is getting decoded. Further, the 'rateless property' of fountain codes helps in achieving high decentralization and scalability. Our experiments demonstrate that SeF codes tuned to achieve 1000x storage savings enable full nodes to encode the 191GB Bitcoin blockchain into 195MB on average. A new node can recover the blockchain from an arbitrary set of storage-constrained nodes as long as the set contains ~1100 honest nodes on average. Note that for a 1000x storage savings, the fundamental bound on the number of honest nodes to contact is 1000: we need about 10% more in practice.
2407.12024
Jordan Rey-Jouanchicot
Jordan Rey-Jouanchicot (IRIT-ELIPSE, LAAS), Andr\'e Bottaro, Eric Campo (LAAS-S4M), Jean-L\'eon Bouraoui, Nadine Vigouroux (IRIT-ELIPSE), Fr\'ed\'eric Vella (IRIT-ELIPSE)
Leveraging Large Language Models for enhanced personalised user experience in Smart Homes
null
null
null
null
cs.HC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart home automation systems aim to improve the comfort and convenience of users in their living environment. However, adapting automation to user needs remains a challenge. Indeed, many systems still rely on hand-crafted routines for each smart object. This paper presents an original smart home architecture leveraging Large Language Models (LLMs) and user preferences to push the boundaries of personalisation and intuitiveness in the home environment. This article explores a human-centred approach that uses the general knowledge provided by LLMs to learn and facilitate interactions with the environment. The advantages of the proposed model are demonstrated on a set of scenarios, as well as a comparative analysis with various LLM implementations. Some metrics are assessed to determine the system's ability to maintain comfort, safety, and user preferences. The paper details the approach to real-world implementation and evaluation. The proposed approach of using preferences shows up to a 52.3% increase in average grade, with an average processing time reduced by 35.6% on the Starling 7B Alpha LLM. In addition, performance is 26.4% better than the results of the larger models without preferences, with processing time almost 20 times faster.
[ { "created": "Fri, 28 Jun 2024 07:08:20 GMT", "version": "v1" } ]
2024-07-18
[ [ "Rey-Jouanchicot", "Jordan", "", "IRIT-ELIPSE, LAAS" ], [ "Bottaro", "André", "", "LAAS-S4M" ], [ "Campo", "Eric", "", "LAAS-S4M" ], [ "Bouraoui", "Jean-Léon", "", "IRIT-ELIPSE" ], [ "Vigouroux", "Nadine", "", "IRIT-ELIPSE" ], [ "Vella", "Frédéric", "", "IRIT-ELIPSE" ] ]
Smart home automation systems aim to improve the comfort and convenience of users in their living environment. However, adapting automation to user needs remains a challenge. Indeed, many systems still rely on hand-crafted routines for each smart object. This paper presents an original smart home architecture leveraging Large Language Models (LLMs) and user preferences to push the boundaries of personalisation and intuitiveness in the home environment. This article explores a human-centred approach that uses the general knowledge provided by LLMs to learn and facilitate interactions with the environment. The advantages of the proposed model are demonstrated on a set of scenarios, as well as a comparative analysis with various LLM implementations. Some metrics are assessed to determine the system's ability to maintain comfort, safety, and user preferences. The paper details the approach to real-world implementation and evaluation. The proposed approach of using preferences shows up to a 52.3% increase in average grade, with an average processing time reduced by 35.6% on the Starling 7B Alpha LLM. In addition, performance is 26.4% better than the results of the larger models without preferences, with processing time almost 20 times faster.
2401.11257
Tianyi Hu
Tianyi Hu, Zhiqiang Pu, Xiaolin Ai, Tenghai Qiu, Jianqiang Yi
Measuring Policy Distance for Multi-Agent Reinforcement Learning
9 pages, 6 figures
null
null
null
cs.MA cs.AI
http://creativecommons.org/licenses/by/4.0/
Diversity plays a crucial role in improving the performance of multi-agent reinforcement learning (MARL). Currently, many diversity-based methods have been developed to overcome the drawbacks of excessive parameter sharing in traditional MARL. However, there remains a lack of a general metric to quantify policy differences among agents. Such a metric would not only facilitate the evaluation of the diversity evolution in multi-agent systems, but also provide guidance for the design of diversity-based MARL algorithms. In this paper, we propose the multi-agent policy distance (MAPD), a general tool for measuring policy differences in MARL. By learning the conditional representations of agents' decisions, MAPD can compute the policy distance between any pair of agents. Furthermore, we extend MAPD to a customizable version, which can quantify differences among agent policies on specified aspects. Based on the online deployment of MAPD, we design a multi-agent dynamic parameter sharing (MADPS) algorithm as an example of MAPD's applications. Extensive experiments demonstrate that our method is effective in measuring differences in agent policies and specific behavioral tendencies. Moreover, in comparison to other methods of parameter sharing, MADPS exhibits superior performance.
[ { "created": "Sat, 20 Jan 2024 15:34:51 GMT", "version": "v1" }, { "created": "Sun, 28 Jan 2024 15:37:54 GMT", "version": "v2" } ]
2024-01-30
[ [ "Hu", "Tianyi", "" ], [ "Pu", "Zhiqiang", "" ], [ "Ai", "Xiaolin", "" ], [ "Qiu", "Tenghai", "" ], [ "Yi", "Jianqiang", "" ] ]
Diversity plays a crucial role in improving the performance of multi-agent reinforcement learning (MARL). Currently, many diversity-based methods have been developed to overcome the drawbacks of excessive parameter sharing in traditional MARL. However, there remains a lack of a general metric to quantify policy differences among agents. Such a metric would not only facilitate the evaluation of the diversity evolution in multi-agent systems, but also provide guidance for the design of diversity-based MARL algorithms. In this paper, we propose the multi-agent policy distance (MAPD), a general tool for measuring policy differences in MARL. By learning the conditional representations of agents' decisions, MAPD can compute the policy distance between any pair of agents. Furthermore, we extend MAPD to a customizable version, which can quantify differences among agent policies on specified aspects. Based on the online deployment of MAPD, we design a multi-agent dynamic parameter sharing (MADPS) algorithm as an example of MAPD's applications. Extensive experiments demonstrate that our method is effective in measuring differences in agent policies and specific behavioral tendencies. Moreover, in comparison to other methods of parameter sharing, MADPS exhibits superior performance.
1210.6112
EPTCS
James Smith
The Jasper Framework: Towards a Platform Independent, Formal Treatment of Web Programming
In Proceedings WWV 2012, arXiv:1210.5783. Added doi references where possible
EPTCS 98, 2012, pp. 31-45
10.4204/EPTCS.98.5
null
cs.SE cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces Jasper, a web programming framework which allows web applications to be developed in an essentially platform independent manner and which is also suited to a formal treatment. It outlines Jasper conceptually and shows how Jasper is implemented on several commonplace platforms. It also introduces the Jasper Music Store, a web application powered by Jasper and implemented on each of these platforms. And it briefly describes a formal treatment and outlines the tools and languages planned that will allow this treatment to be automated.
[ { "created": "Tue, 23 Oct 2012 02:54:58 GMT", "version": "v1" } ]
2012-12-07
[ [ "Smith", "James", "" ] ]
This paper introduces Jasper, a web programming framework which allows web applications to be developed in an essentially platform independent manner and which is also suited to a formal treatment. It outlines Jasper conceptually and shows how Jasper is implemented on several commonplace platforms. It also introduces the Jasper Music Store, a web application powered by Jasper and implemented on each of these platforms. And it briefly describes a formal treatment and outlines the tools and languages planned that will allow this treatment to be automated.
2105.04749
Aron Laszka
Shanto Roy, Nazia Sharmin, Jaime C. Acosta, Christopher Kiekintveld, Aron Laszka
Survey and Taxonomy of Adversarial Reconnaissance Techniques
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversaries are often able to penetrate networks and compromise systems by exploiting vulnerabilities in people and systems. The key to the success of these attacks is information that adversaries collect throughout the phases of the cyber kill chain. We summarize and analyze the methods, tactics, and tools that adversaries use to conduct reconnaissance activities throughout the attack process. First, we discuss what types of information adversaries seek, and how and when they can obtain this information. Then, we provide a taxonomy and detailed overview of adversarial reconnaissance techniques. The taxonomy introduces a categorization of reconnaissance techniques based on the source as third-party, human-, and system-based information gathering. This paper provides a comprehensive view of adversarial reconnaissance that can help in understanding and modeling this complex but vital aspect of cyber attacks as well as insights that can improve defensive strategies, such as cyber deception.
[ { "created": "Tue, 11 May 2021 02:09:12 GMT", "version": "v1" }, { "created": "Fri, 29 Apr 2022 00:11:54 GMT", "version": "v2" } ]
2022-05-02
[ [ "Roy", "Shanto", "" ], [ "Sharmin", "Nazia", "" ], [ "Acosta", "Jaime C.", "" ], [ "Kiekintveld", "Christopher", "" ], [ "Laszka", "Aron", "" ] ]
Adversaries are often able to penetrate networks and compromise systems by exploiting vulnerabilities in people and systems. The key to the success of these attacks is information that adversaries collect throughout the phases of the cyber kill chain. We summarize and analyze the methods, tactics, and tools that adversaries use to conduct reconnaissance activities throughout the attack process. First, we discuss what types of information adversaries seek, and how and when they can obtain this information. Then, we provide a taxonomy and detailed overview of adversarial reconnaissance techniques. The taxonomy introduces a categorization of reconnaissance techniques based on the source as third-party, human-, and system-based information gathering. This paper provides a comprehensive view of adversarial reconnaissance that can help in understanding and modeling this complex but vital aspect of cyber attacks as well as insights that can improve defensive strategies, such as cyber deception.
1501.06802
Gamal Abd El-Nasser A. Said
Gamal Abd El-Nasser A. Said, El-Sayed M. El-Horbaty
A Simulation Modeling Approach for Optimization of Storage Space Allocation in Container Terminal
International Journal of Computer, Information, Systems and Control Engineering Vol:9 No:1, 2015
International Journal of Computer, Information, Systems and Control Engineering, Vol. 9, No. 1, 2015, pp. 168-173
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Container handling problems at container terminals are NP-hard problems. This paper presents an approach using discrete-event simulation modeling to optimize the solution of the storage space allocation problem, taking into account all the various interrelated container terminal handling activities. The proposed approach is applied to real case-study data from the container terminal at Alexandria port. The computational results show the effectiveness of the proposed model for optimization of storage space allocation in a container terminal, where a 54% reduction in container handling time in port is achieved.
[ { "created": "Tue, 27 Jan 2015 16:10:23 GMT", "version": "v1" } ]
2015-04-14
[ [ "Said", "Gamal Abd El-Nasser A.", "" ], [ "El-Horbaty", "El-Sayed M.", "" ] ]
Container handling problems at container terminals are NP-hard problems. This paper presents an approach using discrete-event simulation modeling to optimize the solution of the storage space allocation problem, taking into account all the various interrelated container terminal handling activities. The proposed approach is applied to real case-study data from the container terminal at Alexandria port. The computational results show the effectiveness of the proposed model for optimization of storage space allocation in a container terminal, where a 54% reduction in container handling time in port is achieved.
1011.5065
Woohyuk Chang
Woohyuk Chang, Sae-Young Chung, Yong H. Lee
Gaussian Relay Channel Capacity to Within a Fixed Number of Bits
6 pages, 7 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show that the capacity of the three-node Gaussian relay channel can be achieved to within 1 and 2 bit/sec/Hz using compress-and-forward and amplify-and-forward relaying, respectively.
[ { "created": "Tue, 23 Nov 2010 11:27:46 GMT", "version": "v1" } ]
2010-11-24
[ [ "Chang", "Woohyuk", "" ], [ "Chung", "Sae-Young", "" ], [ "Lee", "Yong H.", "" ] ]
In this paper, we show that the capacity of the three-node Gaussian relay channel can be achieved to within 1 and 2 bit/sec/Hz using compress-and-forward and amplify-and-forward relaying, respectively.
2104.13869
Francisco Romero
Francisco Romero, Gohar Irfan Chaudhry, \'I\~nigo Goiri, Pragna Gopa, Paul Batum, Neeraja J. Yadwadkar, Rodrigo Fonseca, Christos Kozyrakis, Ricardo Bianchini
Faa$T: A Transparent Auto-Scaling Cache for Serverless Applications
18 pages, 15 figures
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Function-as-a-Service (FaaS) has become an increasingly popular way for users to deploy their applications without the burden of managing the underlying infrastructure. However, existing FaaS platforms rely on remote storage to maintain state, limiting the set of applications that can be run efficiently. Recent caching work for FaaS platforms has tried to address this problem, but has fallen short: it disregards the widely different characteristics of FaaS applications, does not scale the cache based on data access patterns, or requires changes to applications. To address these limitations, we present Faa\$T, a transparent auto-scaling distributed cache for serverless applications. Each application gets its own Faa\$T cache. After a function executes and the application becomes inactive, the cache is unloaded from memory with the application. Upon reloading for the next invocation, Faa\$T pre-warms the cache with objects likely to be accessed. In addition to traditional compute-based scaling, Faa\$T scales based on working set and object sizes to manage cache space and I/O bandwidth. We motivate our design with a comprehensive study of data access patterns in a large-scale commercial FaaS provider. We implement Faa\$T for the provider's production FaaS platform. Our experiments show that Faa\$T can improve performance by up to 92% (57% on average) for challenging applications, and reduce cost for most users compared to state-of-the-art caching systems, i.e. the cost of having to stand up additional serverful resources.
[ { "created": "Wed, 28 Apr 2021 16:31:19 GMT", "version": "v1" } ]
2021-04-29
[ [ "Romero", "Francisco", "" ], [ "Chaudhry", "Gohar Irfan", "" ], [ "Goiri", "Íñigo", "" ], [ "Gopa", "Pragna", "" ], [ "Batum", "Paul", "" ], [ "Yadwadkar", "Neeraja J.", "" ], [ "Fonseca", "Rodrigo", "" ], [ "Kozyrakis", "Christos", "" ], [ "Bianchini", "Ricardo", "" ] ]
Function-as-a-Service (FaaS) has become an increasingly popular way for users to deploy their applications without the burden of managing the underlying infrastructure. However, existing FaaS platforms rely on remote storage to maintain state, limiting the set of applications that can be run efficiently. Recent caching work for FaaS platforms has tried to address this problem, but has fallen short: it disregards the widely different characteristics of FaaS applications, does not scale the cache based on data access patterns, or requires changes to applications. To address these limitations, we present Faa\$T, a transparent auto-scaling distributed cache for serverless applications. Each application gets its own Faa\$T cache. After a function executes and the application becomes inactive, the cache is unloaded from memory with the application. Upon reloading for the next invocation, Faa\$T pre-warms the cache with objects likely to be accessed. In addition to traditional compute-based scaling, Faa\$T scales based on working set and object sizes to manage cache space and I/O bandwidth. We motivate our design with a comprehensive study of data access patterns in a large-scale commercial FaaS provider. We implement Faa\$T for the provider's production FaaS platform. Our experiments show that Faa\$T can improve performance by up to 92% (57% on average) for challenging applications, and reduce cost for most users compared to state-of-the-art caching systems, i.e. the cost of having to stand up additional serverful resources.
2001.03102
Roy Miles
Roy Miles, Krystian Mikolajczyk
Compression of descriptor models for mobile applications
ICASSP 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks have demonstrated state-of-the-art performance for feature-based image matching through the advent of new large and diverse datasets. However, there has been little work on evaluating the computational cost, model size, and matching accuracy tradeoffs for these models. This paper explicitly addresses these practical metrics by considering the state-of-the-art HardNet model. We observe a significant redundancy in the learned weights, which we exploit through the use of depthwise separable layers and an efficient Tucker decomposition. We demonstrate that a combination of these methods is very effective, but still sacrifices the top-end accuracy. To resolve this, we propose the Convolution-Depthwise-Pointwise (CDP) layer, which provides a means of interpolating between the standard and depthwise separable convolutions. With this proposed layer, we can achieve an 8 times reduction in the number of parameters on the HardNet model, 13 times reduction in the computational complexity, while sacrificing less than 1% on the overall accuracy across the HPatches benchmarks. To further demonstrate the generalisation of this approach, we apply it to the state-of-the-art SuperPoint model, where we can significantly reduce the number of parameters and floating-point operations, with minimal degradation in the matching accuracy.
[ { "created": "Thu, 9 Jan 2020 17:00:21 GMT", "version": "v1" }, { "created": "Sun, 29 Mar 2020 20:37:33 GMT", "version": "v2" }, { "created": "Fri, 5 Feb 2021 10:41:09 GMT", "version": "v3" } ]
2021-02-08
[ [ "Miles", "Roy", "" ], [ "Mikolajczyk", "Krystian", "" ] ]
Deep neural networks have demonstrated state-of-the-art performance for feature-based image matching through the advent of new large and diverse datasets. However, there has been little work on evaluating the computational cost, model size, and matching accuracy tradeoffs for these models. This paper explicitly addresses these practical metrics by considering the state-of-the-art HardNet model. We observe a significant redundancy in the learned weights, which we exploit through the use of depthwise separable layers and an efficient Tucker decomposition. We demonstrate that a combination of these methods is very effective, but still sacrifices the top-end accuracy. To resolve this, we propose the Convolution-Depthwise-Pointwise (CDP) layer, which provides a means of interpolating between the standard and depthwise separable convolutions. With this proposed layer, we can achieve an 8 times reduction in the number of parameters on the HardNet model, 13 times reduction in the computational complexity, while sacrificing less than 1% on the overall accuracy across the HPatches benchmarks. To further demonstrate the generalisation of this approach, we apply it to the state-of-the-art SuperPoint model, where we can significantly reduce the number of parameters and floating-point operations, with minimal degradation in the matching accuracy.
2009.04336
Gabriele Farina
Gabriele Farina and Tuomas Sandholm
Polynomial-Time Computation of Optimal Correlated Equilibria in Two-Player Extensive-Form Games with Public Chance Moves and Beyond
null
null
null
null
cs.GT cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unlike normal-form games, where correlated equilibria have been studied for more than 45 years, extensive-form correlation is still generally not well understood. Part of the reason for this gap is that the sequential nature of extensive-form games allows for a richness of behaviors and incentives that are not possible in normal-form settings. This richness translates to a significantly different complexity landscape surrounding extensive-form correlated equilibria. As of today, it is known that finding an optimal extensive-form correlated equilibrium (EFCE), extensive-form coarse correlated equilibrium (EFCCE), or normal-form coarse correlated equilibrium (NFCCE) in a two-player extensive-form game is computationally tractable when the game does not include chance moves, and intractable when the game involves chance moves. In this paper we significantly refine this complexity threshold by showing that, in two-player games, an optimal correlated equilibrium can be computed in polynomial time, provided that a certain condition is satisfied. We show that the condition holds, for example, when all chance moves are public, that is, both players observe all chance moves. This implies that an optimal EFCE, EFCCE and NFCCE can be computed in polynomial time in the game size in two-player games with public chance moves, providing the biggest positive complexity result surrounding extensive-form correlation in more than a decade.
[ { "created": "Wed, 9 Sep 2020 14:51:58 GMT", "version": "v1" } ]
2020-09-10
[ [ "Farina", "Gabriele", "" ], [ "Sandholm", "Tuomas", "" ] ]
Unlike normal-form games, where correlated equilibria have been studied for more than 45 years, extensive-form correlation is still generally not well understood. Part of the reason for this gap is that the sequential nature of extensive-form games allows for a richness of behaviors and incentives that are not possible in normal-form settings. This richness translates to a significantly different complexity landscape surrounding extensive-form correlated equilibria. As of today, it is known that finding an optimal extensive-form correlated equilibrium (EFCE), extensive-form coarse correlated equilibrium (EFCCE), or normal-form coarse correlated equilibrium (NFCCE) in a two-player extensive-form game is computationally tractable when the game does not include chance moves, and intractable when the game involves chance moves. In this paper we significantly refine this complexity threshold by showing that, in two-player games, an optimal correlated equilibrium can be computed in polynomial time, provided that a certain condition is satisfied. We show that the condition holds, for example, when all chance moves are public, that is, both players observe all chance moves. This implies that an optimal EFCE, EFCCE and NFCCE can be computed in polynomial time in the game size in two-player games with public chance moves, providing the biggest positive complexity result surrounding extensive-form correlation in more than a decade.
1308.6505
Anna Huber
Anna Huber and Andrei Krokhin
Oracle Tractability of Skew Bisubmodular Functions
null
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider skew bisubmodular functions as introduced in [9]. We construct a convex extension of a skew bisubmodular function which we call Lov\'asz extension in correspondence to the submodular case. We use this extension to show that skew bisubmodular functions given by an oracle can be minimised in polynomial time.
[ { "created": "Thu, 29 Aug 2013 16:03:49 GMT", "version": "v1" } ]
2013-08-30
[ [ "Huber", "Anna", "" ], [ "Krokhin", "Andrei", "" ] ]
In this paper we consider skew bisubmodular functions as introduced in [9]. We construct a convex extension of a skew bisubmodular function which we call Lov\'asz extension in correspondence to the submodular case. We use this extension to show that skew bisubmodular functions given by an oracle can be minimised in polynomial time.
2007.03204
Pashootan Vaezipoor
Pashootan Vaezipoor, Gil Lederman, Yuhuai Wu, Chris J. Maddison, Roger Grosse, Sanjit A. Seshia, Fahiem Bacchus
Learning Branching Heuristics for Propositional Model Counting
null
Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 2021, pp. 12427-12435
10.1609/aaai.v35i14.17474
null
cs.LG cs.AI cs.LO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, are often not scalable to industrial size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We experimentally show that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It is able to achieve these results on a number of different problem families having very different structures. In addition to step count improvements, Neuro# can also achieve orders of magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.
[ { "created": "Tue, 7 Jul 2020 05:20:29 GMT", "version": "v1" }, { "created": "Thu, 8 Sep 2022 21:47:20 GMT", "version": "v2" } ]
2022-09-12
[ [ "Vaezipoor", "Pashootan", "" ], [ "Lederman", "Gil", "" ], [ "Wu", "Yuhuai", "" ], [ "Maddison", "Chris J.", "" ], [ "Grosse", "Roger", "" ], [ "Seshia", "Sanjit A.", "" ], [ "Bacchus", "Fahiem", "" ] ]
Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, are often not scalable to industrial size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We experimentally show that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It is able to achieve these results on a number of different problem families having very different structures. In addition to step count improvements, Neuro# can also achieve orders of magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.
cs/0601116
Laurent Noe
Gregory Kucherov (LIFL), Laurent No\'e (LIFL), Mihkail Roytberg (LIFL)
A unifying framework for seed sensitivity and its application to subset seeds
null
Journal of Bioinformatics and Computational Biology 4 (2006) 2, pp 553--569
10.1142/S0219720006001977
null
cs.DS q-bio.QM
null
We propose a general approach to compute the seed sensitivity, that can be applied to different definitions of seeds. It treats separately three components of the seed sensitivity problem -- a set of target alignments, an associated probability distribution, and a seed model -- that are specified by distinct finite automata. The approach is then applied to a new concept of subset seeds for which we propose an efficient automaton construction. Experimental results confirm that sensitive subset seeds can be efficiently designed using our approach, and can then be used in similarity search producing better results than ordinary spaced seeds.
[ { "created": "Fri, 27 Jan 2006 18:53:01 GMT", "version": "v1" }, { "created": "Fri, 15 Sep 2006 07:05:58 GMT", "version": "v2" } ]
2010-01-19
[ [ "Kucherov", "Gregory", "", "LIFL" ], [ "Noé", "Laurent", "", "LIFL" ], [ "Roytberg", "Mihkail", "", "LIFL" ] ]
We propose a general approach to compute the seed sensitivity, that can be applied to different definitions of seeds. It treats separately three components of the seed sensitivity problem -- a set of target alignments, an associated probability distribution, and a seed model -- that are specified by distinct finite automata. The approach is then applied to a new concept of subset seeds for which we propose an efficient automaton construction. Experimental results confirm that sensitive subset seeds can be efficiently designed using our approach, and can then be used in similarity search producing better results than ordinary spaced seeds.
1903.07377
Johannes Michael
Johannes Michael, Roger Labahn, Tobias Gr\"uning, Jochen Z\"ollner
Evaluating Sequence-to-Sequence Models for Handwritten Text Recognition
8 pages, 1 figure, 8 tables
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Encoder-decoder models have become an effective approach for sequence learning tasks like machine translation, image captioning and speech recognition, but have yet to show competitive results for handwritten text recognition. To this end, we propose an attention-based sequence-to-sequence model. It combines a convolutional neural network as a generic feature extractor with a recurrent neural network to encode both the visual information, as well as the temporal context between characters in the input image, and uses a separate recurrent neural network to decode the actual character sequence. We make experimental comparisons between various attention mechanisms and positional encodings, in order to find an appropriate alignment between the input and output sequence. The model can be trained end-to-end and the optional integration of a hybrid loss allows the encoder to retain an interpretable and usable output, if desired. We achieve competitive results on the IAM and ICFHR2016 READ data sets compared to the state-of-the-art without the use of a language model, and we significantly improve over any recent sequence-to-sequence approaches.
[ { "created": "Mon, 18 Mar 2019 11:51:33 GMT", "version": "v1" }, { "created": "Mon, 15 Jul 2019 11:40:53 GMT", "version": "v2" } ]
2019-07-16
[ [ "Michael", "Johannes", "" ], [ "Labahn", "Roger", "" ], [ "Grüning", "Tobias", "" ], [ "Zöllner", "Jochen", "" ] ]
Encoder-decoder models have become an effective approach for sequence learning tasks like machine translation, image captioning and speech recognition, but have yet to show competitive results for handwritten text recognition. To this end, we propose an attention-based sequence-to-sequence model. It combines a convolutional neural network as a generic feature extractor with a recurrent neural network to encode both the visual information, as well as the temporal context between characters in the input image, and uses a separate recurrent neural network to decode the actual character sequence. We make experimental comparisons between various attention mechanisms and positional encodings, in order to find an appropriate alignment between the input and output sequence. The model can be trained end-to-end and the optional integration of a hybrid loss allows the encoder to retain an interpretable and usable output, if desired. We achieve competitive results on the IAM and ICFHR2016 READ data sets compared to the state-of-the-art without the use of a language model, and we significantly improve over any recent sequence-to-sequence approaches.
2305.02484
Shilun Li
Venkatesan Guruswami and Shilun Li
A Deterministic Construction of a Large Distance Code from the Wozencraft Ensemble
null
null
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an explicit construction of a sequence of rate $1/2$ Wozencraft ensemble codes (over any fixed finite field $\mathbb{F}_q$) that achieve minimum distance $\Omega(\sqrt{k})$ where $k$ is the message length. The coefficients of the Wozencraft ensemble codes are constructed using Sidon Sets and the cyclic structure of $\mathbb{F}_{q^{k}}$ where $k+1$ is prime with $q$ a primitive root modulo $k+1$. Assuming Artin's conjecture, there are infinitely many such $k$ for any prime power $q$.
[ { "created": "Thu, 4 May 2023 01:29:34 GMT", "version": "v1" }, { "created": "Tue, 11 Jul 2023 08:03:43 GMT", "version": "v2" } ]
2023-07-12
[ [ "Guruswami", "Venkatesan", "" ], [ "Li", "Shilun", "" ] ]
We present an explicit construction of a sequence of rate $1/2$ Wozencraft ensemble codes (over any fixed finite field $\mathbb{F}_q$) that achieve minimum distance $\Omega(\sqrt{k})$ where $k$ is the message length. The coefficients of the Wozencraft ensemble codes are constructed using Sidon Sets and the cyclic structure of $\mathbb{F}_{q^{k}}$ where $k+1$ is prime with $q$ a primitive root modulo $k+1$. Assuming Artin's conjecture, there are infinitely many such $k$ for any prime power $q$.
2307.13018
Artur Tarassow
Artur Tarassow
The potential of LLMs for coding with low-resource and domain-specific programming languages
null
null
null
null
cs.CL cs.SE
http://creativecommons.org/licenses/by/4.0/
This paper presents a study on the feasibility of using large language models (LLM) for coding with low-resource and domain-specific programming languages that typically lack the amount of data required for effective LLM processing techniques. This study focuses on the econometric scripting language named hansl of the open-source software gretl and employs a proprietary LLM based on GPT-3.5. Our findings suggest that LLMs can be a useful tool for writing, understanding, improving, and documenting gretl code, which includes generating descriptive docstrings for functions and providing precise explanations for abstract and poorly documented econometric code. While the LLM showcased promising docstring-to-code translation capability, we also identify some limitations, such as its inability to improve certain sections of code and to write accurate unit tests. This study is a step towards leveraging the power of LLMs to facilitate software development in low-resource programming languages and ultimately to lower barriers to entry for their adoption.
[ { "created": "Mon, 24 Jul 2023 17:17:13 GMT", "version": "v1" } ]
2023-07-26
[ [ "Tarassow", "Artur", "" ] ]
This paper presents a study on the feasibility of using large language models (LLM) for coding with low-resource and domain-specific programming languages that typically lack the amount of data required for effective LLM processing techniques. This study focuses on the econometric scripting language named hansl of the open-source software gretl and employs a proprietary LLM based on GPT-3.5. Our findings suggest that LLMs can be a useful tool for writing, understanding, improving, and documenting gretl code, which includes generating descriptive docstrings for functions and providing precise explanations for abstract and poorly documented econometric code. While the LLM showcased promising docstring-to-code translation capability, we also identify some limitations, such as its inability to improve certain sections of code and to write accurate unit tests. This study is a step towards leveraging the power of LLMs to facilitate software development in low-resource programming languages and ultimately to lower barriers to entry for their adoption.
1803.05120
Yufan He
Yufan He, Aaron Carass, Bruno M. Jedynak, Sharon D. Solomon, Shiv Saidha, Peter A. Calabresi, Jerry L. Prince
Topology guaranteed segmentation of the human retina from OCT using convolutional neural networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optical coherence tomography (OCT) is a noninvasive imaging modality which can be used to obtain depth images of the retina. The changing layer thicknesses can thus be quantified by analyzing these OCT images, moreover these changes have been shown to correlate with disease progression in multiple sclerosis. Recent automated retinal layer segmentation tools use machine learning methods to perform pixel-wise labeling and graph methods to guarantee the layer hierarchy or topology. However, graph parameters like distance and smoothness constraints must be experimentally assigned by retinal region and pathology, thus degrading the flexibility and time efficiency of the whole framework. In this paper, we develop cascaded deep networks to provide a topologically correct segmentation of the retinal layers in a single feed forward propagation. The first network (S-Net) performs pixel-wise labeling and the second regression network (R-Net) takes the topologically unconstrained S-Net results and outputs layer thicknesses for each layer and each position. Relu activation is used as the final operation of the R-Net which guarantees non-negativity of the output layer thickness. Since the segmentation boundary position is acquired by summing up the corresponding non-negative layer thicknesses, the layer ordering (i.e., topology) of the reconstructed boundaries is guaranteed even at the fovea where the distances between boundaries can be zero. The R-Net is trained using simulated masks and thus can be generalized to provide topology guaranteed segmentation for other layered structures. This deep network has achieved comparable mean absolute boundary error (2.82 {\mu}m) to state-of-the-art graph methods (2.83 {\mu}m).
[ { "created": "Wed, 14 Mar 2018 03:21:01 GMT", "version": "v1" } ]
2018-03-15
[ [ "He", "Yufan", "" ], [ "Carass", "Aaron", "" ], [ "Jedynak", "Bruno M.", "" ], [ "Solomon", "Sharon D.", "" ], [ "Saidha", "Shiv", "" ], [ "Calabresi", "Peter A.", "" ], [ "Prince", "Jerry L.", "" ] ]
Optical coherence tomography (OCT) is a noninvasive imaging modality which can be used to obtain depth images of the retina. The changing layer thicknesses can thus be quantified by analyzing these OCT images, moreover these changes have been shown to correlate with disease progression in multiple sclerosis. Recent automated retinal layer segmentation tools use machine learning methods to perform pixel-wise labeling and graph methods to guarantee the layer hierarchy or topology. However, graph parameters like distance and smoothness constraints must be experimentally assigned by retinal region and pathology, thus degrading the flexibility and time efficiency of the whole framework. In this paper, we develop cascaded deep networks to provide a topologically correct segmentation of the retinal layers in a single feed forward propagation. The first network (S-Net) performs pixel-wise labeling and the second regression network (R-Net) takes the topologically unconstrained S-Net results and outputs layer thicknesses for each layer and each position. Relu activation is used as the final operation of the R-Net which guarantees non-negativity of the output layer thickness. Since the segmentation boundary position is acquired by summing up the corresponding non-negative layer thicknesses, the layer ordering (i.e., topology) of the reconstructed boundaries is guaranteed even at the fovea where the distances between boundaries can be zero. The R-Net is trained using simulated masks and thus can be generalized to provide topology guaranteed segmentation for other layered structures. This deep network has achieved comparable mean absolute boundary error (2.82 {\mu}m) to state-of-the-art graph methods (2.83 {\mu}m).
2011.09577
Sadra Naddaf
Sadra Naddaf-Sh, M-Mahdi Naddaf-Sh, Amir R. Kashani and Hassan Zargarzadeh
An Efficient and Scalable Deep Learning Approach for Road Damage Detection
removed redundant postscripts
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Pavement condition evaluation is essential to time the preventative or rehabilitative actions and control distress propagation. Failing to conduct timely evaluations can lead to severe structural and financial loss of the infrastructure and complete reconstructions. Automated computer-aided surveying measures can provide a database of road damage patterns and their locations. This database can be utilized for timely road repairs to gain the minimum cost of maintenance and the asphalt's maximum durability. This paper introduces a deep learning-based surveying scheme to analyze the image-based distress data in real-time. A database consisting of a diverse population of crack distress types such as longitudinal, transverse, and alligator cracks, photographed using mobile-device is used. Then, a family of efficient and scalable models that are tuned for pavement crack detection is trained, and various augmentation policies are explored. Proposed models resulted in F1-scores ranging from 52% to 56%, and average inference time from 178-10 images per second. Finally, the performance of the object detectors is examined, and error analysis is reported against various images. The source code is available at https://github.com/mahdi65/roadDamageDetection2020.
[ { "created": "Wed, 18 Nov 2020 23:05:41 GMT", "version": "v1" }, { "created": "Wed, 25 Nov 2020 18:58:18 GMT", "version": "v2" }, { "created": "Thu, 17 Dec 2020 17:58:08 GMT", "version": "v3" } ]
2020-12-18
[ [ "Naddaf-Sh", "Sadra", "" ], [ "Naddaf-Sh", "M-Mahdi", "" ], [ "Kashani", "Amir R.", "" ], [ "Zargarzadeh", "Hassan", "" ] ]
Pavement condition evaluation is essential to time the preventative or rehabilitative actions and control distress propagation. Failing to conduct timely evaluations can lead to severe structural and financial loss of the infrastructure and complete reconstructions. Automated computer-aided surveying measures can provide a database of road damage patterns and their locations. This database can be utilized for timely road repairs to gain the minimum cost of maintenance and the asphalt's maximum durability. This paper introduces a deep learning-based surveying scheme to analyze the image-based distress data in real-time. A database consisting of a diverse population of crack distress types such as longitudinal, transverse, and alligator cracks, photographed using mobile-device is used. Then, a family of efficient and scalable models that are tuned for pavement crack detection is trained, and various augmentation policies are explored. Proposed models resulted in F1-scores ranging from 52% to 56%, and average inference time from 178-10 images per second. Finally, the performance of the object detectors is examined, and error analysis is reported against various images. The source code is available at https://github.com/mahdi65/roadDamageDetection2020.
2010.03108
Pan Ji
Pengfei Fang, Pan Ji, Jieming Zhou, Lars Petersson, Mehrtash Harandi
Channel Recurrent Attention Networks for Video Pedestrian Retrieval
To appear in ACCV 2020
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Full attention, which generates an attention value per element of the input feature maps, has been successfully demonstrated to be beneficial in visual tasks. In this work, we propose a fully attentional network, termed {\it channel recurrent attention network}, for the task of video pedestrian retrieval. The main attention unit, \textit{channel recurrent attention}, identifies attention maps at the frame level by jointly leveraging spatial and channel patterns via a recurrent neural network. This channel recurrent attention is designed to build a global receptive field by recurrently receiving and learning the spatial vectors. Then, a \textit{set aggregation} cell is employed to generate a compact video representation. Empirical experimental results demonstrate the superior performance of the proposed deep network, outperforming current state-of-the-art results across standard video person retrieval benchmarks, and a thorough ablation study shows the effectiveness of the proposed units.
[ { "created": "Wed, 7 Oct 2020 02:01:13 GMT", "version": "v1" } ]
2020-10-08
[ [ "Fang", "Pengfei", "" ], [ "Ji", "Pan", "" ], [ "Zhou", "Jieming", "" ], [ "Petersson", "Lars", "" ], [ "Harandi", "Mehrtash", "" ] ]
Full attention, which generates an attention value per element of the input feature maps, has been successfully demonstrated to be beneficial in visual tasks. In this work, we propose a fully attentional network, termed {\it channel recurrent attention network}, for the task of video pedestrian retrieval. The main attention unit, \textit{channel recurrent attention}, identifies attention maps at the frame level by jointly leveraging spatial and channel patterns via a recurrent neural network. This channel recurrent attention is designed to build a global receptive field by recurrently receiving and learning the spatial vectors. Then, a \textit{set aggregation} cell is employed to generate a compact video representation. Empirical experimental results demonstrate the superior performance of the proposed deep network, outperforming current state-of-the-art results across standard video person retrieval benchmarks, and a thorough ablation study shows the effectiveness of the proposed units.
1710.06785
Ramviyas Parasuraman
Ramviyas Parasuraman, Sergio Caccamo, Fredrik B{\aa}berg, Petter \"Ogren and Mark Neerincx
A New UGV Teleoperation Interface for Improved Awareness of Network Connectivity and Physical Surroundings
Accepted for publication in the Journal of Human-Robot Interaction (JHRI)
null
null
null
cs.RO cs.HC cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A reliable wireless connection between the operator and the teleoperated Unmanned Ground Vehicle (UGV) is critical in many Urban Search and Rescue (USAR) missions. Unfortunately, as was seen in e.g. the Fukushima disaster, the networks available in areas where USAR missions take place are often severely limited in range and coverage. Therefore, during mission execution, the operator needs to keep track of not only the physical parts of the mission, such as navigating through an area or searching for victims, but also the variations in network connectivity across the environment. In this paper, we propose and evaluate a new teleoperation User Interface (UI) that includes a way of estimating the Direction of Arrival (DoA) of the Radio Signal Strength (RSS) and integrating the DoA information in the interface. The evaluation shows that using the interface results in more objects found, and less aborted missions due to connectivity problems, as compared to a standard interface. The proposed interface is an extension to an existing interface centered around the video stream captured by the UGV. But instead of just showing the network signal strength in terms of percent and a set of bars, the additional information of DoA is added in terms of a color bar surrounding the video feed. With this information, the operator knows what movement directions are safe, even when moving in regions close to the connectivity threshold.
[ { "created": "Wed, 4 Oct 2017 21:39:53 GMT", "version": "v1" }, { "created": "Thu, 26 Oct 2017 17:50:45 GMT", "version": "v2" }, { "created": "Sun, 5 Nov 2017 08:57:19 GMT", "version": "v3" }, { "created": "Tue, 7 Nov 2017 19:34:45 GMT", "version": "v4" } ]
2017-11-09
[ [ "Parasuraman", "Ramviyas", "" ], [ "Caccamo", "Sergio", "" ], [ "Båberg", "Fredrik", "" ], [ "Ögren", "Petter", "" ], [ "Neerincx", "Mark", "" ] ]
A reliable wireless connection between the operator and the teleoperated Unmanned Ground Vehicle (UGV) is critical in many Urban Search and Rescue (USAR) missions. Unfortunately, as was seen in e.g. the Fukushima disaster, the networks available in areas where USAR missions take place are often severely limited in range and coverage. Therefore, during mission execution, the operator needs to keep track of not only the physical parts of the mission, such as navigating through an area or searching for victims, but also the variations in network connectivity across the environment. In this paper, we propose and evaluate a new teleoperation User Interface (UI) that includes a way of estimating the Direction of Arrival (DoA) of the Radio Signal Strength (RSS) and integrating the DoA information in the interface. The evaluation shows that using the interface results in more objects found, and less aborted missions due to connectivity problems, as compared to a standard interface. The proposed interface is an extension to an existing interface centered around the video stream captured by the UGV. But instead of just showing the network signal strength in terms of percent and a set of bars, the additional information of DoA is added in terms of a color bar surrounding the video feed. With this information, the operator knows what movement directions are safe, even when moving in regions close to the connectivity threshold.
2305.19894
Zhongwei Wan
Zhongwei Wan, Che Liu, Mi Zhang, Jie Fu, Benyou Wang, Sibo Cheng, Lei Ma, C\'esar Quilodr\'an-Casas, Rossella Arcucci
Med-UniC: Unifying Cross-Lingual Medical Vision-Language Pre-Training by Diminishing Bias
NeurIPS 2023 Main track
null
null
null
cs.CL cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scarcity of data presents a critical obstacle to the efficacy of medical vision-language pre-training (VLP). A potential solution lies in the combination of datasets from various language communities. Nevertheless, the main challenge stems from the complexity of integrating diverse syntax and semantics, language-specific medical terminology, and culture-specific implicit knowledge. Therefore, one crucial aspect to consider is the presence of community bias caused by different languages. This paper presents a novel framework named Unifying Cross-Lingual Medical Vision-Language Pre-Training (Med-UniC), designed to integrate multimodal medical data from the two most prevalent languages, English and Spanish. Specifically, we propose Cross-lingual Text Alignment Regularization (CTR) to explicitly unify cross-lingual semantic representations of medical reports originating from diverse language communities. CTR is optimized through latent language disentanglement, rendering our optimization objective to not depend on negative samples, thereby significantly mitigating the bias from determining positive-negative sample pairs within analogous medical reports. Furthermore, it ensures that the cross-lingual representation is not biased toward any specific language community. Med-UniC reaches superior performance across 5 medical image tasks and 10 datasets encompassing over 30 diseases, offering a versatile framework for unifying multi-modal medical data within diverse linguistic communities. The experimental outcomes highlight the presence of community bias in cross-lingual VLP. Reducing this bias enhances the performance not only in vision-language tasks but also in uni-modal visual tasks.
[ { "created": "Wed, 31 May 2023 14:28:19 GMT", "version": "v1" }, { "created": "Mon, 25 Sep 2023 18:58:36 GMT", "version": "v2" }, { "created": "Sat, 17 Feb 2024 19:49:54 GMT", "version": "v3" } ]
2024-02-20
[ [ "Wan", "Zhongwei", "" ], [ "Liu", "Che", "" ], [ "Zhang", "Mi", "" ], [ "Fu", "Jie", "" ], [ "Wang", "Benyou", "" ], [ "Cheng", "Sibo", "" ], [ "Ma", "Lei", "" ], [ "Quilodrán-Casas", "César", "" ], [ "Arcucci", "Rossella", "" ] ]
The scarcity of data presents a critical obstacle to the efficacy of medical vision-language pre-training (VLP). A potential solution lies in the combination of datasets from various language communities. Nevertheless, the main challenge stems from the complexity of integrating diverse syntax and semantics, language-specific medical terminology, and culture-specific implicit knowledge. Therefore, one crucial aspect to consider is the presence of community bias caused by different languages. This paper presents a novel framework named Unifying Cross-Lingual Medical Vision-Language Pre-Training (Med-UniC), designed to integrate multimodal medical data from the two most prevalent languages, English and Spanish. Specifically, we propose Cross-lingual Text Alignment Regularization (CTR) to explicitly unify cross-lingual semantic representations of medical reports originating from diverse language communities. CTR is optimized through latent language disentanglement, rendering our optimization objective to not depend on negative samples, thereby significantly mitigating the bias from determining positive-negative sample pairs within analogous medical reports. Furthermore, it ensures that the cross-lingual representation is not biased toward any specific language community. Med-UniC reaches superior performance across 5 medical image tasks and 10 datasets encompassing over 30 diseases, offering a versatile framework for unifying multi-modal medical data within diverse linguistic communities. The experimental outcomes highlight the presence of community bias in cross-lingual VLP. Reducing this bias enhances the performance not only in vision-language tasks but also in uni-modal visual tasks.
1902.02139
Anton Pirogov
Christof L\"oding, Anton Pirogov
Determinization of B\"uchi Automata: Unifying the Approaches of Safra and Muller-Schupp
Full version of ICALP 2019 paper
null
10.4230/LIPIcs.ICALP.2019.120
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determinization of B\"uchi automata is a long-known difficult problem, and after the seminal result of Safra, who developed the first asymptotically optimal construction from B\"uchi into Rabin automata, much work went into improving, simplifying or avoiding Safra's construction. A different, less known determinization construction was derived by Muller and Schupp and appears to be unrelated to Safra's construction at first sight. In this paper we propose a new meta-construction from nondeterministic B\"uchi to deterministic parity automata which strictly subsumes both the construction of Safra and the construction of Muller and Schupp. It is based on a correspondence between structures that are encoded in the macrostates of the determinization procedures - Safra trees on one hand, and levels of the split-tree, which underlies the Muller and Schupp construction, on the other. Our construction allows for combining the mentioned constructions and opens up new directions for the development of heuristics.
[ { "created": "Wed, 6 Feb 2019 12:31:09 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2019 07:47:34 GMT", "version": "v2" } ]
2020-04-30
[ [ "Löding", "Christof", "" ], [ "Pirogov", "Anton", "" ] ]
Determinization of B\"uchi automata is a long-known difficult problem, and after the seminal result of Safra, who developed the first asymptotically optimal construction from B\"uchi into Rabin automata, much work went into improving, simplifying or avoiding Safra's construction. A different, less known determinization construction was derived by Muller and Schupp and appears to be unrelated to Safra's construction at first sight. In this paper we propose a new meta-construction from nondeterministic B\"uchi to deterministic parity automata which strictly subsumes both the construction of Safra and the construction of Muller and Schupp. It is based on a correspondence between structures that are encoded in the macrostates of the determinization procedures - Safra trees on one hand, and levels of the split-tree, which underlies the Muller and Schupp construction, on the other. Our construction allows for combining the mentioned constructions and opens up new directions for the development of heuristics.
2006.01358
Sandra Ramirez
Sandra L. Ram\'irez-Mora, Hanna Oktaba, Helena G\'omez-Adorno
Descriptions of issues and comments for predicting issue success in software projects
65 pages; 15 figures
Journal of Systems and Software, Vol. 168, 2020, 110663, ISSN 0164-1212
10.1016/j.jss.2020.110663
null
cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Software development tasks must be performed successfully to achieve software quality and customer satisfaction. Knowing whether software tasks are likely to fail is essential to ensure the success of software projects. Issue Tracking Systems store information of software tasks (issues) and comments, which can be useful to predict issue success; however, almost no research on this topic exists. This work studies the usefulness of textual descriptions of issues and comments for predicting whether issues will be resolved successfully or not. Issues and comments of 588 software projects were extracted from four popular Issue Tracking Systems. Seven machine learning classifiers were trained on 30k issues and more than 120k comments, and more than 6000 experiments were performed to predict the success of three types of issues: bugs, improvements and new features. The results provided evidence that descriptions of issues and comments are useful for predicting issue success with more than 85% of accuracy and precision, and that the predictions of issue success vary over time. Words related to software development were particularly relevant for predicting issue success. Other communication aspects and their relationship to the success of software projects must be researched in detail using data from software tools.
[ { "created": "Tue, 2 Jun 2020 02:49:22 GMT", "version": "v1" } ]
2020-06-03
[ [ "Ramírez-Mora", "Sandra L.", "" ], [ "Oktaba", "Hanna", "" ], [ "Gómez-Adorno", "Helena", "" ] ]
Software development tasks must be performed successfully to achieve software quality and customer satisfaction. Knowing whether software tasks are likely to fail is essential to ensure the success of software projects. Issue Tracking Systems store information of software tasks (issues) and comments, which can be useful to predict issue success; however, almost no research on this topic exists. This work studies the usefulness of textual descriptions of issues and comments for predicting whether issues will be resolved successfully or not. Issues and comments of 588 software projects were extracted from four popular Issue Tracking Systems. Seven machine learning classifiers were trained on 30k issues and more than 120k comments, and more than 6000 experiments were performed to predict the success of three types of issues: bugs, improvements and new features. The results provided evidence that descriptions of issues and comments are useful for predicting issue success with more than 85% of accuracy and precision, and that the predictions of issue success vary over time. Words related to software development were particularly relevant for predicting issue success. Other communication aspects and their relationship to the success of software projects must be researched in detail using data from software tools.
2406.16250
Prerana Khatiwada
Prerana Khatiwada, Pranjal Dhakal
Evaluating Serverless Machine Learning Performance on Google Cloud Run
5 pages, 12 figures
null
null
null
cs.DC cs.OS
http://creativecommons.org/licenses/by/4.0/
End-users can get functions-as-a-service from serverless platforms, which promise lower hosting costs, high availability, fault tolerance, and dynamic flexibility for hosting individual functions known as microservices. Machine learning tools are seen to be reliably useful, and the services created using these tools are in increasing demand on a large scale. The serverless platforms are uniquely suited for hosting these machine learning services to be used for large-scale applications. These platforms are well known for their cost efficiency, fault tolerance, resource scaling, robust APIs for communication, and global reach. However, machine learning services are different from the web-services in that these serverless platforms were originally designed to host web services. We aimed to understand how these serverless platforms handle machine learning workloads with our study. We examine machine learning performance on one of the serverless platforms - Google Cloud Run, which is a GPU-less infrastructure that is not designed for machine learning application deployment.
[ { "created": "Mon, 24 Jun 2024 01:10:20 GMT", "version": "v1" } ]
2024-06-25
[ [ "Khatiwada", "Prerana", "" ], [ "Dhakal", "Pranjal", "" ] ]
End-users can get functions-as-a-service from serverless platforms, which promise lower hosting costs, high availability, fault tolerance, and dynamic flexibility for hosting individual functions known as microservices. Machine learning tools are seen to be reliably useful, and the services created using these tools are in increasing demand on a large scale. The serverless platforms are uniquely suited for hosting these machine learning services to be used for large-scale applications. These platforms are well known for their cost efficiency, fault tolerance, resource scaling, robust APIs for communication, and global reach. However, machine learning services differ from the web services that these serverless platforms were originally designed to host. With our study, we aimed to understand how these serverless platforms handle machine learning workloads. We examine machine learning performance on one of the serverless platforms - Google Cloud Run, which is a GPU-less infrastructure that is not designed for machine learning application deployment.
2407.10725
Xiaoyuan Yi
Jing Yao, Xiaoyuan Yi, Xing Xie
CLAVE: An Adaptive Framework for Evaluating Values of LLM Generated Responses
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The rapid progress in Large Language Models (LLMs) poses potential risks such as generating unethical content. Assessing LLMs' values can help expose their misalignment, but relies on reference-free evaluators, e.g., fine-tuned LLMs or closed-source ones like GPT-4, to identify values reflected in generated responses. Nevertheless, these evaluators face two challenges in open-ended value evaluation: they should align with changing human value definitions with minimal annotation, against their own bias (adaptability), and detect varying value expressions and scenarios robustly (generalizability). To handle these challenges, we introduce CLAVE, a novel framework which integrates two complementary LLMs, a large one to extract high-level value concepts from a few human labels, leveraging its extensive knowledge and generalizability, and a smaller one fine-tuned on such concepts to better align with human value understanding. This dual-model approach enables calibration with any value system using <100 human-labeled samples per value type. Then we present ValEval, a comprehensive dataset comprising 13k+ (text,value,label) tuples across diverse domains, covering three major value systems. We benchmark the capabilities of 12+ popular LLM evaluators and analyze their strengths and weaknesses. Our findings reveal that combining fine-tuned small models and prompt-based large ones serves as a superior balance in value evaluation.
[ { "created": "Mon, 15 Jul 2024 13:51:37 GMT", "version": "v1" } ]
2024-07-16
[ [ "Yao", "Jing", "" ], [ "Yi", "Xiaoyuan", "" ], [ "Xie", "Xing", "" ] ]
The rapid progress in Large Language Models (LLMs) poses potential risks such as generating unethical content. Assessing LLMs' values can help expose their misalignment, but relies on reference-free evaluators, e.g., fine-tuned LLMs or closed-source ones like GPT-4, to identify values reflected in generated responses. Nevertheless, these evaluators face two challenges in open-ended value evaluation: they should align with changing human value definitions with minimal annotation, against their own bias (adaptability), and detect varying value expressions and scenarios robustly (generalizability). To handle these challenges, we introduce CLAVE, a novel framework which integrates two complementary LLMs, a large one to extract high-level value concepts from a few human labels, leveraging its extensive knowledge and generalizability, and a smaller one fine-tuned on such concepts to better align with human value understanding. This dual-model approach enables calibration with any value system using <100 human-labeled samples per value type. Then we present ValEval, a comprehensive dataset comprising 13k+ (text,value,label) tuples across diverse domains, covering three major value systems. We benchmark the capabilities of 12+ popular LLM evaluators and analyze their strengths and weaknesses. Our findings reveal that combining fine-tuned small models and prompt-based large ones serves as a superior balance in value evaluation.
1807.11618
Kamal Al-Sabahi Ph.D.
Kamal Al-Sabahi, Zuping Zhang, Jun Long, Khaled Alwesabi
An Enhanced Latent Semantic Analysis Approach for Arabic Document Summarization
This is a pre-print of an article published in Arabian Journal for Science and Engineering. The final authenticated version is available online at: https://doi.org/10.1007/s13369-018-3286-z
K. Al-Sabahi, Z. Zhang, J. Long, and K. Alwesabi, "An Enhanced Latent Semantic Analysis Approach for Arabic Document Summarization," Arabian Journal for Science and Engineering, journal article May 05 2018
10.1007/s13369-018-3286-z
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fast-growing amount of information on the Internet makes the research in automatic document summarization very urgent. It is an effective solution for information overload. Many approaches have been proposed based on different strategies, such as latent semantic analysis (LSA). However, LSA, when applied to document summarization, has some limitations which diminish its performance. In this work, we try to overcome these limitations by applying statistical and linear algebraic approaches combined with syntactic and semantic processing of text. First, a part-of-speech tagger is utilized to reduce the dimension of LSA. Then, the weight of the term in four adjacent sentences is added to the weighting schemes while calculating the input matrix to take into account the word order and the syntactic relations. In addition, a new LSA-based sentence selection algorithm is proposed, in which the term description is combined with sentence description for each topic, which in turn makes the generated summary more informative and diverse. To ensure the effectiveness of the proposed LSA-based sentence selection algorithm, extensive experiments on Arabic and English are conducted. Four datasets are used to evaluate the new model: Linguistic Data Consortium (LDC) Arabic Newswire-a corpus, Essex Arabic Summaries Corpus (EASC), DUC2002, and Multilingual MSS 2015 dataset. Experimental results on the four datasets show the effectiveness of the proposed model on Arabic and English datasets. It performs comprehensively better compared to the state-of-the-art methods.
[ { "created": "Tue, 31 Jul 2018 00:50:15 GMT", "version": "v1" } ]
2018-08-01
[ [ "Al-Sabahi", "Kamal", "" ], [ "Zhang", "Zuping", "" ], [ "Long", "Jun", "" ], [ "Alwesabi", "Khaled", "" ] ]
The fast-growing amount of information on the Internet makes the research in automatic document summarization very urgent. It is an effective solution for information overload. Many approaches have been proposed based on different strategies, such as latent semantic analysis (LSA). However, LSA, when applied to document summarization, has some limitations which diminish its performance. In this work, we try to overcome these limitations by applying statistical and linear algebraic approaches combined with syntactic and semantic processing of text. First, a part-of-speech tagger is utilized to reduce the dimension of LSA. Then, the weight of the term in four adjacent sentences is added to the weighting schemes while calculating the input matrix to take into account the word order and the syntactic relations. In addition, a new LSA-based sentence selection algorithm is proposed, in which the term description is combined with sentence description for each topic, which in turn makes the generated summary more informative and diverse. To ensure the effectiveness of the proposed LSA-based sentence selection algorithm, extensive experiments on Arabic and English are conducted. Four datasets are used to evaluate the new model: Linguistic Data Consortium (LDC) Arabic Newswire-a corpus, Essex Arabic Summaries Corpus (EASC), DUC2002, and Multilingual MSS 2015 dataset. Experimental results on the four datasets show the effectiveness of the proposed model on Arabic and English datasets. It performs comprehensively better compared to the state-of-the-art methods.
2204.08935
Xinyue Shen
Xinyue Shen, Xinlei He, Michael Backes, Jeremy Blackburn, Savvas Zannettou, Yang Zhang
On Xing Tian and the Perseverance of Anti-China Sentiment Online
To Appear in the 16th International Conference on Web and Social Media (ICWSM), 2022
null
null
null
cs.SI cs.CY
http://creativecommons.org/licenses/by/4.0/
Sinophobia, anti-Chinese sentiment, has existed on the Web for a long time. The outbreak of COVID-19 and the extended quarantine has further amplified it. However, we lack a quantitative understanding of the cause of Sinophobia as well as how it evolves over time. In this paper, we conduct a large-scale longitudinal measurement of Sinophobia, between 2016 and 2021, on two mainstream and fringe Web communities. By analyzing 8B posts from Reddit and 206M posts from 4chan's /pol/, we investigate the origins, evolution, and content of Sinophobia. We find that anti-Chinese content may be evoked by political events not directly related to China, e.g., the U.S. withdrawal from the Paris Agreement. And during the COVID-19 pandemic, daily usage of Sinophobic slurs has significantly increased even with the hate-speech ban policy. We also show that the semantic meaning of the words "China" and "Chinese" are shifting towards Sinophobic slurs with the rise of COVID-19 and remain the same in the pandemic period. We further use topic modeling to show the topics of Sinophobic discussion are pretty diverse and broad. We find that both Web communities share some common Sinophobic topics like ethnicity, economics and commerce, weapons and military, foreign relations, etc. However, compared to 4chan's /pol/, more daily life-related topics including food, games, and stocks are found on Reddit. Our finding also reveals that the topics related to COVID-19 and blaming the Chinese government are more prevalent in the pandemic period. To the best of our knowledge, this paper is the longest quantitative measurement of Sinophobia.
[ { "created": "Tue, 19 Apr 2022 15:17:28 GMT", "version": "v1" } ]
2022-04-20
[ [ "Shen", "Xinyue", "" ], [ "He", "Xinlei", "" ], [ "Backes", "Michael", "" ], [ "Blackburn", "Jeremy", "" ], [ "Zannettou", "Savvas", "" ], [ "Zhang", "Yang", "" ] ]
Sinophobia, anti-Chinese sentiment, has existed on the Web for a long time. The outbreak of COVID-19 and the extended quarantine has further amplified it. However, we lack a quantitative understanding of the cause of Sinophobia as well as how it evolves over time. In this paper, we conduct a large-scale longitudinal measurement of Sinophobia, between 2016 and 2021, on two mainstream and fringe Web communities. By analyzing 8B posts from Reddit and 206M posts from 4chan's /pol/, we investigate the origins, evolution, and content of Sinophobia. We find that anti-Chinese content may be evoked by political events not directly related to China, e.g., the U.S. withdrawal from the Paris Agreement. And during the COVID-19 pandemic, daily usage of Sinophobic slurs has significantly increased even with the hate-speech ban policy. We also show that the semantic meaning of the words "China" and "Chinese" are shifting towards Sinophobic slurs with the rise of COVID-19 and remain the same in the pandemic period. We further use topic modeling to show the topics of Sinophobic discussion are pretty diverse and broad. We find that both Web communities share some common Sinophobic topics like ethnicity, economics and commerce, weapons and military, foreign relations, etc. However, compared to 4chan's /pol/, more daily life-related topics including food, games, and stocks are found on Reddit. Our finding also reveals that the topics related to COVID-19 and blaming the Chinese government are more prevalent in the pandemic period. To the best of our knowledge, this paper is the longest quantitative measurement of Sinophobia.
1601.03295
Gabriela Csurka
Gabriela Csurka
Document image classification, with a specific view on applications of patent images
Paper submitted in 2014 as book chapter of Current Challenges in Patent Information Retrieval, Second edition by M. Lupu et al (eds.). To appear in 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main focus of this paper is document image classification and retrieval, where we analyze and compare different parameters for the RunLength Histogram (RL) and Fisher Vector (FV) based image representations. We do an exhaustive experimental study using different document image datasets, including the MARG benchmarks, two datasets built on customer data and the images from the Patent Image Classification task of the Clef-IP 2011. The aim of the study is to give guidelines on how to best choose the parameters such that the same features perform well on different tasks. As an example of such need, we describe the Image-based Patent Retrieval task of Clef-IP 2011, where we used the same image representation to predict the image type and retrieve relevant patents.
[ { "created": "Wed, 13 Jan 2016 16:02:13 GMT", "version": "v1" } ]
2016-01-14
[ [ "Csurka", "Gabriela", "" ] ]
The main focus of this paper is document image classification and retrieval, where we analyze and compare different parameters for the RunLength Histogram (RL) and Fisher Vector (FV) based image representations. We do an exhaustive experimental study using different document image datasets, including the MARG benchmarks, two datasets built on customer data and the images from the Patent Image Classification task of the Clef-IP 2011. The aim of the study is to give guidelines on how to best choose the parameters such that the same features perform well on different tasks. As an example of such need, we describe the Image-based Patent Retrieval task of Clef-IP 2011, where we used the same image representation to predict the image type and retrieve relevant patents.
2105.11134
Jiacheng Ye
Jiacheng Ye, Tao Gui, Yichao Luo, Yige Xu, Qi Zhang
One2Set: Generating Diverse Keyphrases as a Set
Accepted by ACL 2021
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, the sequence-to-sequence models have made remarkable progress on the task of keyphrase generation (KG) by concatenating multiple keyphrases in a predefined order as a target sequence during training. However, the keyphrases are inherently an unordered set rather than an ordered sequence. Imposing a predefined order introduces a wrong bias during training, which can heavily penalize shifts in the order between keyphrases. In this work, we propose a new training paradigm One2Set without predefining an order to concatenate the keyphrases. To fit this paradigm, we propose a novel model that utilizes a fixed set of learned control codes as conditions to generate a set of keyphrases in parallel. To solve the problem that there is no correspondence between each prediction and target during training, we propose a $K$-step target assignment mechanism via bipartite matching, which greatly increases the diversity and reduces the duplication ratio of generated keyphrases. The experimental results on multiple benchmarks demonstrate that our approach significantly outperforms the state-of-the-art methods.
[ { "created": "Mon, 24 May 2021 07:29:47 GMT", "version": "v1" } ]
2021-05-25
[ [ "Ye", "Jiacheng", "" ], [ "Gui", "Tao", "" ], [ "Luo", "Yichao", "" ], [ "Xu", "Yige", "" ], [ "Zhang", "Qi", "" ] ]
Recently, the sequence-to-sequence models have made remarkable progress on the task of keyphrase generation (KG) by concatenating multiple keyphrases in a predefined order as a target sequence during training. However, the keyphrases are inherently an unordered set rather than an ordered sequence. Imposing a predefined order introduces a wrong bias during training, which can heavily penalize shifts in the order between keyphrases. In this work, we propose a new training paradigm One2Set without predefining an order to concatenate the keyphrases. To fit this paradigm, we propose a novel model that utilizes a fixed set of learned control codes as conditions to generate a set of keyphrases in parallel. To solve the problem that there is no correspondence between each prediction and target during training, we propose a $K$-step target assignment mechanism via bipartite matching, which greatly increases the diversity and reduces the duplication ratio of generated keyphrases. The experimental results on multiple benchmarks demonstrate that our approach significantly outperforms the state-of-the-art methods.
2201.09377
Ali Emami Mr.
Darren Abramson and Ali Emami
An Application of Pseudo-Log-Likelihoods to Natural Language Scoring
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Language models built using semi-supervised machine learning on large corpora of natural language have very quickly enveloped the fields of natural language generation and understanding. In this paper we apply a zero-shot approach independently developed by a number of researchers now gaining recognition as a significant alternative to fine-tuning for evaluation on common sense tasks. A language model with relatively few parameters and training steps compared to a more recent language model (T5) can outperform it on a recent large data set (TimeDial), while displaying robustness in its performance across a similar class of language tasks. Surprisingly, this result is achieved by using a hyperparameter-free zero-shot method with the smaller model, compared to fine-tuning the larger model. We argue that robustness of the smaller model ought to be understood in terms of compositionality, in a sense that we draw from recent literature on a class of similar models. We identify a practical cost for our method and model: high GPU-time for natural language evaluation. The zero-shot measurement technique that produces remarkable stability, both for ALBERT and other BERT variants, is an application of pseudo-log-likelihoods to masked language models for the relative measurement of probability for substitution alternatives in forced choice language tasks such as the Winograd Schema Challenge, Winogrande, and others. One contribution of this paper is to bring together a number of similar, but independent strands of research. We produce some absolute state-of-the-art results for common sense reasoning in binary choice tasks, performing better than any published result in the literature, including fine-tuned efforts. We show a remarkable consistency of the model's performance under adversarial settings, which we argue is best explained by the model's compositionality of representations.
[ { "created": "Sun, 23 Jan 2022 22:00:54 GMT", "version": "v1" } ]
2022-01-25
[ [ "Abramson", "Darren", "" ], [ "Emami", "Ali", "" ] ]
Language models built using semi-supervised machine learning on large corpora of natural language have very quickly enveloped the fields of natural language generation and understanding. In this paper we apply a zero-shot approach independently developed by a number of researchers now gaining recognition as a significant alternative to fine-tuning for evaluation on common sense tasks. A language model with relatively few parameters and training steps compared to a more recent language model (T5) can outperform it on a recent large data set (TimeDial), while displaying robustness in its performance across a similar class of language tasks. Surprisingly, this result is achieved by using a hyperparameter-free zero-shot method with the smaller model, compared to fine-tuning the larger model. We argue that robustness of the smaller model ought to be understood in terms of compositionality, in a sense that we draw from recent literature on a class of similar models. We identify a practical cost for our method and model: high GPU-time for natural language evaluation. The zero-shot measurement technique that produces remarkable stability, both for ALBERT and other BERT variants, is an application of pseudo-log-likelihoods to masked language models for the relative measurement of probability for substitution alternatives in forced choice language tasks such as the Winograd Schema Challenge, Winogrande, and others. One contribution of this paper is to bring together a number of similar, but independent strands of research. We produce some absolute state-of-the-art results for common sense reasoning in binary choice tasks, performing better than any published result in the literature, including fine-tuned efforts. We show a remarkable consistency of the model's performance under adversarial settings, which we argue is best explained by the model's compositionality of representations.
2402.02211
Guangmo Tong
Guangmo Tong, Peng Zhao, Mina Samizadeh
Query-decision Regression between Shortest Path and Minimum Steiner Tree
PAKDD 2024
null
null
null
cs.LG cs.DS
http://creativecommons.org/licenses/by/4.0/
Considering a graph with unknown weights, can we find the shortest path for a pair of nodes if we know the minimal Steiner trees associated with some subset of nodes? That is, with respect to a fixed latent decision-making system (e.g., a weighted graph), we seek to solve one optimization problem (e.g., the shortest path problem) by leveraging information associated with another optimization problem (e.g., the minimal Steiner tree problem). In this paper, we study such a prototype problem called \textit{query-decision regression with task shifts}, focusing on the shortest path problem and the minimum Steiner tree problem. We provide theoretical insights regarding the design of realizable hypothesis spaces for building scoring models, and present two principled learning frameworks. Our experimental studies show that such problems can be solved to a decent extent with statistical significance.
[ { "created": "Sat, 3 Feb 2024 17:05:01 GMT", "version": "v1" } ]
2024-02-06
[ [ "Tong", "Guangmo", "" ], [ "Zhao", "Peng", "" ], [ "Samizadeh", "Mina", "" ] ]
Considering a graph with unknown weights, can we find the shortest path for a pair of nodes if we know the minimal Steiner trees associated with some subset of nodes? That is, with respect to a fixed latent decision-making system (e.g., a weighted graph), we seek to solve one optimization problem (e.g., the shortest path problem) by leveraging information associated with another optimization problem (e.g., the minimal Steiner tree problem). In this paper, we study such a prototype problem called \textit{query-decision regression with task shifts}, focusing on the shortest path problem and the minimum Steiner tree problem. We provide theoretical insights regarding the design of realizable hypothesis spaces for building scoring models, and present two principled learning frameworks. Our experimental studies show that such problems can be solved to a decent extent with statistical significance.
2003.06773
Jinnan Piao
Jinnan Piao, Kai Niu, Jincheng Dai, and Chao Dong
Sphere Constraint based Enumeration Methods to Analyze the Minimum Weight Distribution of Polar Codes
11 pages, 6 figures. Submitted to IEEE Transactions on Vehicular Technology
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the minimum weight distributions (MWDs) of polar codes and concatenated polar codes are exactly enumerated according to the distance property of codewords. We first propose a sphere constraint based enumeration method (SCEM) to analyze the MWD of polar codes with moderate complexity. The SCEM exploits the distance property that all the codewords with the identical Hamming weight are distributed on a spherical shell. Then, based on the SCEM and Plotkin's construction of polar codes, a sphere constraint based recursive enumeration method (SCREM) is proposed to recursively calculate the MWD with a lower complexity. Finally, we propose a parity-check SCEM (PC-SCEM) to analyze the MWD of concatenated polar codes by introducing the parity-check equations of outer codes. Moreover, due to the distance property of codewords, the proposed three methods can exactly enumerate all the codewords belonging to the MWD. The enumeration results show that the SCREM can enumerate the MWD of polar codes with code length up to $2^{14}$ and the PC-SCEM can be used to optimize CRC-polar concatenated codes.
[ { "created": "Sun, 15 Mar 2020 07:34:29 GMT", "version": "v1" } ]
2020-03-17
[ [ "Piao", "Jinnan", "" ], [ "Niu", "Kai", "" ], [ "Dai", "Jincheng", "" ], [ "Dong", "Chao", "" ] ]
In this paper, the minimum weight distributions (MWDs) of polar codes and concatenated polar codes are exactly enumerated according to the distance property of codewords. We first propose a sphere constraint based enumeration method (SCEM) to analyze the MWD of polar codes with moderate complexity. The SCEM exploits the distance property that all the codewords with the identical Hamming weight are distributed on a spherical shell. Then, based on the SCEM and Plotkin's construction of polar codes, a sphere constraint based recursive enumeration method (SCREM) is proposed to recursively calculate the MWD with a lower complexity. Finally, we propose a parity-check SCEM (PC-SCEM) to analyze the MWD of concatenated polar codes by introducing the parity-check equations of outer codes. Moreover, due to the distance property of codewords, the proposed three methods can exactly enumerate all the codewords belonging to the MWD. The enumeration results show that the SCREM can enumerate the MWD of polar codes with code length up to $2^{14}$ and the PC-SCEM can be used to optimize CRC-polar concatenated codes.
2006.09205
Andrew Dowsey
William Andrew, Jing Gao, Siobhan Mullan, Neill Campbell, Andrew W Dowsey, Tilo Burghardt
Visual Identification of Individual Holstein-Friesian Cattle via Deep Metric Learning
41 pages, 18 figures, 2 tables; Submitted to Computers and Electronics in Agriculture ; Source code and network weights available at https://github.com/CWOA/MetricLearningIdentification ; OpenCows2020 dataset available at https://doi.org/10.5523/bris.10m32xl88x2b61zlkkgz3fml17
Computers and Electronics in Agriculture 185, 106133 (2021)
10.1016/j.compag.2021.106133
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Holstein-Friesian cattle exhibit individually-characteristic black and white coat patterns visually akin to those arising from Turing's reaction-diffusion systems. This work takes advantage of these natural markings in order to automate visual detection and biometric identification of individual Holstein-Friesians via convolutional neural networks and deep metric learning techniques. Existing approaches rely on markings, tags or wearables with a variety of maintenance requirements, whereas we present a totally hands-off method for the automated detection, localisation, and identification of individual animals from overhead imaging in an open herd setting, i.e. where new additions to the herd are identified without re-training. We propose the use of SoftMax-based reciprocal triplet loss to address the identification problem and evaluate the techniques in detail against fixed herd paradigms. We find that deep metric learning systems show strong performance even when many cattle unseen during system training are to be identified and re-identified -- achieving 93.8% accuracy when trained on just half of the population. This work paves the way for facilitating the non-intrusive monitoring of cattle applicable to precision farming and surveillance for automated productivity, health and welfare monitoring, and to veterinary research such as behavioural analysis, disease outbreak tracing, and more. Key parts of the source code, network weights and datasets are available publicly.
[ { "created": "Tue, 16 Jun 2020 14:41:55 GMT", "version": "v1" }, { "created": "Sat, 4 Jul 2020 11:38:09 GMT", "version": "v2" }, { "created": "Wed, 14 Oct 2020 10:58:30 GMT", "version": "v3" } ]
2021-05-04
[ [ "Andrew", "William", "" ], [ "Gao", "Jing", "" ], [ "Mullan", "Siobhan", "" ], [ "Campbell", "Neill", "" ], [ "Dowsey", "Andrew W", "" ], [ "Burghardt", "Tilo", "" ] ]
Holstein-Friesian cattle exhibit individually-characteristic black and white coat patterns visually akin to those arising from Turing's reaction-diffusion systems. This work takes advantage of these natural markings in order to automate visual detection and biometric identification of individual Holstein-Friesians via convolutional neural networks and deep metric learning techniques. Existing approaches rely on markings, tags or wearables with a variety of maintenance requirements, whereas we present a totally hands-off method for the automated detection, localisation, and identification of individual animals from overhead imaging in an open herd setting, i.e. where new additions to the herd are identified without re-training. We propose the use of SoftMax-based reciprocal triplet loss to address the identification problem and evaluate the techniques in detail against fixed herd paradigms. We find that deep metric learning systems show strong performance even when many cattle unseen during system training are to be identified and re-identified -- achieving 93.8% accuracy when trained on just half of the population. This work paves the way for facilitating the non-intrusive monitoring of cattle applicable to precision farming and surveillance for automated productivity, health and welfare monitoring, and to veterinary research such as behavioural analysis, disease outbreak tracing, and more. Key parts of the source code, network weights and datasets are available publicly.
2104.11057
Lie Ju
Lie Ju, Xin Wang, Lin Wang, Tongliang Liu, Xin Zhao, Tom Drummond, Dwarikanath Mahapatra, Zongyuan Ge
Relational Subsets Knowledge Distillation for Long-tailed Retinal Diseases Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the real world, medical datasets often exhibit a long-tailed data distribution (i.e., a few classes occupy most of the data, while most classes have only a few samples), which results in a challenging imbalance learning scenario. For example, there are estimated to be more than 40 different kinds of retinal diseases with variable morbidity; however, more than 30 of these conditions are very rare across global patient cohorts, which results in a typical long-tailed learning problem for deep learning-based screening models. In this study, we propose class subset learning by dividing the long-tailed data into multiple class subsets according to prior knowledge, such as regions and phenotype information. It enforces the model to focus on learning the subset-specific knowledge. More specifically, there are some relational classes that reside in fixed retinal regions, or some common pathological features that are observed in both the majority and minority conditions. With those subset-learnt teacher models, we are then able to distill the multiple teacher models into a unified model with a weighted knowledge distillation loss. The proposed framework proved to be effective for the long-tailed retinal disease recognition task. The experimental results on two different datasets demonstrate that our method is flexible and can be easily plugged into many other state-of-the-art techniques with significant improvements.
[ { "created": "Thu, 22 Apr 2021 13:39:33 GMT", "version": "v1" } ]
2021-04-23
[ [ "Ju", "Lie", "" ], [ "Wang", "Xin", "" ], [ "Wang", "Lin", "" ], [ "Liu", "Tongliang", "" ], [ "Zhao", "Xin", "" ], [ "Drummond", "Tom", "" ], [ "Mahapatra", "Dwarikanath", "" ], [ "Ge", "Zongyuan", "" ] ]
In the real world, medical datasets often exhibit a long-tailed data distribution (i.e., a few classes occupy most of the data, while most classes have rarely few samples), which results in a challenging imbalance learning scenario. For example, there are estimated more than 40 different kinds of retinal diseases with variable morbidity, however with more than 30+ conditions are very rare from the global patient cohorts, which results in a typical long-tailed learning problem for deep learning-based screening models. In this study, we propose class subset learning by dividing the long-tailed data into multiple class subsets according to prior knowledge, such as regions and phenotype information. It enforces the model to focus on learning the subset-specific knowledge. More specifically, there are some relational classes that reside in the fixed retinal regions, or some common pathological features are observed in both the majority and minority conditions. With those subsets learnt teacher models, then we are able to distill the multiple teacher models into a unified model with weighted knowledge distillation loss. The proposed framework proved to be effective for the long-tailed retinal diseases recognition task. The experimental results on two different datasets demonstrate that our method is flexible and can be easily plugged into many other state-of-the-art techniques with significant improvements.
1605.00313
Konstantin Kobylkin S.
Konstantin Kobylkin
Stabbing line segments with disks: complexity and approximation algorithms
12 pages, 1 appendix, 15 bibliography items, 6th International Conference on Analysis of Images, Social Networks and Texts (AIST-2017)
Kobylkin K.Stabbing Line Segments with Disks: Complexity and Approximation Algorithms. // Lecture Notes in Computer Science, 2018. vol 10716. pp 356-367 Springer
10.1007/978-3-319-73013-4_33
Eng21
cs.CG cs.CC cs.DM
http://creativecommons.org/licenses/by/4.0/
Computational complexity and approximation algorithms are reported for a problem of stabbing a set of straight line segments with the least cardinality set of disks of fixed radii $r>0$ where the set of segments forms a straight line drawing $G=(V,E)$ of a planar graph without edge crossings. Close geometric problems arise in network security applications. We give strong NP-hardness of the problem for edge sets of Delaunay triangulations, Gabriel graphs and other subgraphs (which are often used in network design) for $r\in [d_{\min},\eta d_{\max}]$ and some constant $\eta$ where $d_{\max}$ and $d_{\min}$ are Euclidean lengths of the longest and shortest graph edges respectively. Fast $O(|E|\log|E|)$-time $O(1)$-approximation algorithm is proposed within the class of straight line drawings of planar graphs for which the inequality $r\geq \eta d_{\max}$ holds uniformly for some constant $\eta>0,$ i.e. when lengths of edges of $G$ are uniformly bounded from above by some linear function of $r.$
[ { "created": "Sun, 1 May 2016 21:54:15 GMT", "version": "v1" }, { "created": "Wed, 4 May 2016 14:06:50 GMT", "version": "v2" }, { "created": "Tue, 26 Jul 2016 09:32:56 GMT", "version": "v3" }, { "created": "Thu, 20 Jul 2017 08:56:24 GMT", "version": "v4" } ]
2018-03-23
[ [ "Kobylkin", "Konstantin", "" ] ]
Computational complexity and approximation algorithms are reported for a problem of stabbing a set of straight line segments with the least cardinality set of disks of fixed radii $r>0$ where the set of segments forms a straight line drawing $G=(V,E)$ of a planar graph without edge crossings. Close geometric problems arise in network security applications. We give strong NP-hardness of the problem for edge sets of Delaunay triangulations, Gabriel graphs and other subgraphs (which are often used in network design) for $r\in [d_{\min},\eta d_{\max}]$ and some constant $\eta$ where $d_{\max}$ and $d_{\min}$ are Euclidean lengths of the longest and shortest graph edges respectively. Fast $O(|E|\log|E|)$-time $O(1)$-approximation algorithm is proposed within the class of straight line drawings of planar graphs for which the inequality $r\geq \eta d_{\max}$ holds uniformly for some constant $\eta>0,$ i.e. when lengths of edges of $G$ are uniformly bounded from above by some linear function of $r.$
2307.11412
Weiyu Zhang
Weiyu Zhang
Hybrid deliberation: Citizen dialogues in a post-pandemic era
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
This report first provides a brief review of various forms of dialogue-based participation, e.g., Citizen Assembly, Citizen Lottery, Citizen Jury, Deliberative Polling, and Participatory Budgeting. Challenges associated with these long-lasting practices are identified and hybrid deliberation is proposed as a concept to address the challenges. The report then analyzes six leading examples of digital or hybrid formats of citizen dialogues. Through the comparison of the cases, the report concludes about the hurdles/risks, success factors/opportunities, and best practices for a complementary use of digital and analogue participation formats. Hybrid deliberation is proposed to be the future direction for dialogue-based participation that involves masses and generates high-quality outcomes.
[ { "created": "Fri, 21 Jul 2023 08:13:53 GMT", "version": "v1" } ]
2023-07-24
[ [ "Zhang", "Weiyu", "" ] ]
This report first provides a brief review of various forms of dialogue-based participation, e.g., Citizen Assembly, Citizen Lottery, Citizen Jury, Deliberative Polling, and Participatory Budgeting. Challenges associated with these long-lasting practices are identified and hybrid deliberation is proposed as a concept to address the challenges. The report then analyzes six leading examples of digital or hybrid formats of citizen dialogues. Through the comparison of the cases, the report concludes about the hurdles/risks, success factors/opportunities, and best practices for a complementary use of digital and analogue participation formats. Hybrid deliberation is proposed to be the future direction for dialogue-based participation that involves masses and generates high-quality outcomes.
1306.4037
Travis Gagie
H. Ferrada, T. Gagie, T. Hirvola and S. J. Puglisi
Hybrid Indexes for Repetitive Datasets
null
null
10.1098/rsta.2013.0137
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in DNA sequencing mean databases of thousands of human genomes will soon be commonplace. In this paper we introduce a simple technique for reducing the size of conventional indexes on such highly repetitive texts. Given upper bounds on pattern lengths and edit distances, we preprocess the text with LZ77 to obtain a filtered text, for which we store a conventional index. Later, given a query, we find all matches in the filtered text, then use their positions and the structure of the LZ77 parse to find all matches in the original text. Our experiments show this also significantly reduces query times.
[ { "created": "Mon, 17 Jun 2013 22:48:15 GMT", "version": "v1" } ]
2015-06-16
[ [ "Ferrada", "H.", "" ], [ "Gagie", "T.", "" ], [ "Hirvola", "T.", "" ], [ "Puglisi", "S. J.", "" ] ]
Advances in DNA sequencing mean databases of thousands of human genomes will soon be commonplace. In this paper we introduce a simple technique for reducing the size of conventional indexes on such highly repetitive texts. Given upper bounds on pattern lengths and edit distances, we preprocess the text with LZ77 to obtain a filtered text, for which we store a conventional index. Later, given a query, we find all matches in the filtered text, then use their positions and the structure of the LZ77 parse to find all matches in the original text. Our experiments show this also significantly reduces query times.
2405.14106
Emiliano De Cristofaro
Meenatchi Sundaram Muthu Selva Annamalai and Emiliano De Cristofaro
Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a nearly tight audit of the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box model. Our auditing procedure empirically estimates the privacy leakage from DP-SGD using membership inference attacks; unlike prior work, the estimates are appreciably close to the theoretical DP bounds. The main intuition is to craft worst-case initial model parameters, as DP-SGD's privacy analysis is agnostic to the choice of the initial model parameters. For models trained with theoretical $\varepsilon=10.0$ on MNIST and CIFAR-10, our auditing procedure yields empirical estimates of $7.21$ and $6.95$, respectively, on 1,000-record samples and $6.48$ and $4.96$ on the full datasets. By contrast, previous work achieved tight audits only in stronger (i.e., less realistic) white-box models that allow the adversary to access the model's inner parameters and insert arbitrary gradients. Our auditing procedure can be used to detect bugs and DP violations more easily and offers valuable insight into how the privacy analysis of DP-SGD can be further improved.
[ { "created": "Thu, 23 May 2024 02:24:52 GMT", "version": "v1" } ]
2024-05-24
[ [ "Annamalai", "Meenatchi Sundaram Muthu Selva", "" ], [ "De Cristofaro", "Emiliano", "" ] ]
This paper presents a nearly tight audit of the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box model. Our auditing procedure empirically estimates the privacy leakage from DP-SGD using membership inference attacks; unlike prior work, the estimates are appreciably close to the theoretical DP bounds. The main intuition is to craft worst-case initial model parameters, as DP-SGD's privacy analysis is agnostic to the choice of the initial model parameters. For models trained with theoretical $\varepsilon=10.0$ on MNIST and CIFAR-10, our auditing procedure yields empirical estimates of $7.21$ and $6.95$, respectively, on 1,000-record samples and $6.48$ and $4.96$ on the full datasets. By contrast, previous work achieved tight audits only in stronger (i.e., less realistic) white-box models that allow the adversary to access the model's inner parameters and insert arbitrary gradients. Our auditing procedure can be used to detect bugs and DP violations more easily and offers valuable insight into how the privacy analysis of DP-SGD can be further improved.
1907.05560
Vahid Noormofidi
Vahid Noormofidi
Simulating Nonlinear Neutrino Oscillations on Next-Generation Many-Core Architectures
null
null
null
null
cs.DC cs.CE cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work an astrophysical simulation code, XFLAT, is developed to study neutrino oscillations in supernovae. XFLAT is designed to utilize multiple levels of parallelism through MPI, OpenMP, and SIMD instructions (vectorization). It can run on both the CPU and the Xeon Phi co-processor, the latter of which is based on the Intel Many Integrated Core Architecture (MIC). The performance of XFLAT on configurations and scenarios has been analyzed. In addition, the impact of I/O and the multi-node configuration on the Xeon Phi-equipped heterogeneous supercomputers such as Stampede at the Texas Advanced Computing Center (TACC) was investigated.
[ { "created": "Fri, 12 Jul 2019 03:21:54 GMT", "version": "v1" } ]
2019-07-15
[ [ "Noormofidi", "Vahid", "" ] ]
In this work an astrophysical simulation code, XFLAT, is developed to study neutrino oscillations in supernovae. XFLAT is designed to utilize multiple levels of parallelism through MPI, OpenMP, and SIMD instructions (vectorization). It can run on both the CPU and the Xeon Phi co-processor, the latter of which is based on the Intel Many Integrated Core Architecture (MIC). The performance of XFLAT on configurations and scenarios has been analyzed. In addition, the impact of I/O and the multi-node configuration on the Xeon Phi-equipped heterogeneous supercomputers such as Stampede at the Texas Advanced Computing Center (TACC) was investigated.
2404.14465
Ilias Siniosoglou
Dimitris Asimopoulos, Ilias Siniosoglou, Vasileios Argyriou, Thomai Karamitsou, Eleftherios Fountoukidis, Sotirios K. Goudos, Ioannis D. Moscholios, Konstantinos E. Psannis, Panagiotis Sarigiannidis
Benchmarking Advanced Text Anonymisation Methods: A Comparative Study on Novel and Traditional Approaches
null
null
null
null
cs.CL cs.AI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the realm of data privacy, the ability to effectively anonymise text is paramount. With the proliferation of deep learning and, in particular, transformer architectures, there is a burgeoning interest in leveraging these advanced models for text anonymisation tasks. This paper presents a comprehensive benchmarking study comparing the performance of transformer-based models and Large Language Models(LLM) against traditional architectures for text anonymisation. Utilising the CoNLL-2003 dataset, known for its robustness and diversity, we evaluate several models. Our results showcase the strengths and weaknesses of each approach, offering a clear perspective on the efficacy of modern versus traditional methods. Notably, while modern models exhibit advanced capabilities in capturing con textual nuances, certain traditional architectures still keep high performance. This work aims to guide researchers in selecting the most suitable model for their anonymisation needs, while also shedding light on potential paths for future advancements in the field.
[ { "created": "Mon, 22 Apr 2024 12:06:54 GMT", "version": "v1" } ]
2024-04-24
[ [ "Asimopoulos", "Dimitris", "" ], [ "Siniosoglou", "Ilias", "" ], [ "Argyriou", "Vasileios", "" ], [ "Karamitsou", "Thomai", "" ], [ "Fountoukidis", "Eleftherios", "" ], [ "Goudos", "Sotirios K.", "" ], [ "Moscholios", "Ioannis D.", "" ], [ "Psannis", "Konstantinos E.", "" ], [ "Sarigiannidis", "Panagiotis", "" ] ]
In the realm of data privacy, the ability to effectively anonymise text is paramount. With the proliferation of deep learning and, in particular, transformer architectures, there is a burgeoning interest in leveraging these advanced models for text anonymisation tasks. This paper presents a comprehensive benchmarking study comparing the performance of transformer-based models and Large Language Models(LLM) against traditional architectures for text anonymisation. Utilising the CoNLL-2003 dataset, known for its robustness and diversity, we evaluate several models. Our results showcase the strengths and weaknesses of each approach, offering a clear perspective on the efficacy of modern versus traditional methods. Notably, while modern models exhibit advanced capabilities in capturing con textual nuances, certain traditional architectures still keep high performance. This work aims to guide researchers in selecting the most suitable model for their anonymisation needs, while also shedding light on potential paths for future advancements in the field.
2010.12723
Yuning Mao
Yuning Mao, Xiang Ren, Heng Ji, Jiawei Han
Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite significant progress, state-of-the-art abstractive summarization methods are still prone to hallucinate content inconsistent with the source document. In this paper, we propose Constrained Abstractive Summarization (CAS), a general setup that preserves the factual consistency of abstractive summarization by specifying tokens as constraints that must be present in the summary. We adopt lexically constrained decoding, a technique generally applicable to autoregressive generative models, to fulfill CAS and conduct experiments in two scenarios: (1) automatic summarization without human involvement, where keyphrases are extracted from the source document and used as constraints; (2) human-guided interactive summarization, where human feedback in the form of manual constraints are used to guide summary generation. Automatic and human evaluations on two benchmark datasets demonstrate that CAS improves both lexical overlap (ROUGE) and factual consistency of abstractive summarization. In particular, we observe up to 13.8 ROUGE-2 gains when only one manual constraint is used in interactive summarization.
[ { "created": "Sat, 24 Oct 2020 00:27:44 GMT", "version": "v1" }, { "created": "Thu, 16 Dec 2021 05:20:15 GMT", "version": "v2" } ]
2021-12-17
[ [ "Mao", "Yuning", "" ], [ "Ren", "Xiang", "" ], [ "Ji", "Heng", "" ], [ "Han", "Jiawei", "" ] ]
Despite significant progress, state-of-the-art abstractive summarization methods are still prone to hallucinate content inconsistent with the source document. In this paper, we propose Constrained Abstractive Summarization (CAS), a general setup that preserves the factual consistency of abstractive summarization by specifying tokens as constraints that must be present in the summary. We adopt lexically constrained decoding, a technique generally applicable to autoregressive generative models, to fulfill CAS and conduct experiments in two scenarios: (1) automatic summarization without human involvement, where keyphrases are extracted from the source document and used as constraints; (2) human-guided interactive summarization, where human feedback in the form of manual constraints are used to guide summary generation. Automatic and human evaluations on two benchmark datasets demonstrate that CAS improves both lexical overlap (ROUGE) and factual consistency of abstractive summarization. In particular, we observe up to 13.8 ROUGE-2 gains when only one manual constraint is used in interactive summarization.
1210.1630
Herbert Tanner
Jie Fu, Herbert G. Tanner, Jeffrey Heinz, Jane Chandlee, Konstantinos Karydis, and Cesar Koirala
Symbolic Planning and Control Using Game Theory and Grammatical Inference
null
null
null
null
cs.RO cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an approach that brings together game theory with grammatical inference and discrete abstractions in order to synthesize control strategies for hybrid dynamical systems performing tasks in partially unknown but rule-governed adversarial environments. The combined formulation guarantees that a system specification is met if (a) the true model of the environment is in the class of models inferable from a positive presentation, (b) a characteristic sample is observed, and (c) the task specification is satisfiable given the capabilities of the system (agent) and the environment.
[ { "created": "Fri, 5 Oct 2012 02:40:39 GMT", "version": "v1" } ]
2012-10-08
[ [ "Fu", "Jie", "" ], [ "Tanner", "Herbert G.", "" ], [ "Heinz", "Jeffrey", "" ], [ "Chandlee", "Jane", "" ], [ "Karydis", "Konstantinos", "" ], [ "Koirala", "Cesar", "" ] ]
This paper presents an approach that brings together game theory with grammatical inference and discrete abstractions in order to synthesize control strategies for hybrid dynamical systems performing tasks in partially unknown but rule-governed adversarial environments. The combined formulation guarantees that a system specification is met if (a) the true model of the environment is in the class of models inferable from a positive presentation, (b) a characteristic sample is observed, and (c) the task specification is satisfiable given the capabilities of the system (agent) and the environment.