Dataset schema (field: type, observed length range):
  id             string, 9 to 10 chars
  submitter      string, 1 to 64
  authors        string, 4 to 20.7k
  title          string, 4 to 246
  comments       string, 1 to 523
  journal-ref    string, 4 to 404
  doi            string, 11 to 153
  report-no      string, 2 to 254
  categories     string, 5 to 98
  license        string, 9 distinct values
  orig_abstract  string, 14 to 3.35k
  versions       list, 1 to 60 entries
  update_date    string, 10 chars (fixed)
  authors_parsed list, 1 to 1.35k entries
  abstract       string, 11 to 3.34k
1703.08651
Xuanyi Dong
Xuanyi Dong, Junshi Huang, Yi Yang, Shuicheng Yan
More is Less: A More Complicated Network with Less Inference Complexity
This paper has been accepted by the IEEE CVPR 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps will remain zero after element-wise multiplication, and thus it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer, 2) LCCL is very fast if it is implemented as a 1*1 convolution or only a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSCRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32\% on average with negligible performance drop.
[ { "created": "Sat, 25 Mar 2017 05:51:42 GMT", "version": "v1" }, { "created": "Mon, 15 May 2017 07:56:20 GMT", "version": "v2" } ]
2017-05-16
[ [ "Dong", "Xuanyi", "" ], [ "Huang", "Junshi", "" ], [ "Yang", "Yi", "" ], [ "Yan", "Shuicheng", "" ] ]
In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps will remain zero after element-wise multiplication, and thus it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer; 2) the LCCL is very fast if it is implemented as a 1*1 convolution or only a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSVRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32\% on average with negligible performance drop.
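The skip logic described in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: for simplicity both the costly path and the collaborative path are 1x1 convolutions, so each output pixel is a plain matrix-vector product, and the expensive path is only evaluated where the cheap ReLU activation is non-zero.

```python
import numpy as np

def lccl_layer(x, w_main, w_cheap):
    """Sketch of an LCCL-style layer pair.

    x:       input feature map, shape (C_in, H, W)
    w_main:  weights of the costly path, here simplified to 1x1, shape (C_out, C_in)
    w_cheap: weights of the cheap collaborative 1x1 conv, shape (C_out, C_in)
    """
    C_out = w_cheap.shape[0]
    H, W = x.shape[1], x.shape[2]
    # Cheap path: 1x1 conv + ReLU, computed everywhere.
    cheap = np.maximum(0.0, np.tensordot(w_cheap, x, axes=([1], [0])))
    out = np.zeros((C_out, H, W))
    # Costly path: evaluated only where the cheap activation is non-zero,
    # since zeros survive the element-wise product anyway.
    mask = cheap > 0
    for c in range(C_out):
        ys, xs = np.nonzero(mask[c])
        for i, j in zip(ys, xs):
            main = max(0.0, float(w_main[c] @ x[:, i, j]))
            out[c, i, j] = cheap[c, i, j] * main
    return out
```

The result matches the dense computation (elementwise product of the two ReLU outputs) exactly; the saving comes from never touching the costly path at positions the cheap path has already zeroed.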
1912.04734
Angshul Majumdar Dr.
Jyoti Maggu, Angshul Majumdar and Emilie Chouzenoux
Transformed Subspace Clustering
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Subspace clustering assumes that the data is sepa-rable into separate subspaces. Such a simple as-sumption, does not always hold. We assume that, even if the raw data is not separable into subspac-es, one can learn a representation (transform coef-ficients) such that the learnt representation is sep-arable into subspaces. To achieve the intended goal, we embed subspace clustering techniques (locally linear manifold clustering, sparse sub-space clustering and low rank representation) into transform learning. The entire formulation is jointly learnt; giving rise to a new class of meth-ods called transformed subspace clustering (TSC). In order to account for non-linearity, ker-nelized extensions of TSC are also proposed. To test the performance of the proposed techniques, benchmarking is performed on image clustering and document clustering datasets. Comparison with state-of-the-art clustering techniques shows that our formulation improves upon them.
[ { "created": "Tue, 10 Dec 2019 14:57:14 GMT", "version": "v1" } ]
2019-12-11
[ [ "Maggu", "Jyoti", "" ], [ "Majumdar", "Angshul", "" ], [ "Chouzenoux", "Emilie", "" ] ]
Subspace clustering assumes that the data is separable into separate subspaces. Such a simple assumption does not always hold. We assume that, even if the raw data is not separable into subspaces, one can learn a representation (transform coefficients) such that the learnt representation is separable into subspaces. To achieve the intended goal, we embed subspace clustering techniques (locally linear manifold clustering, sparse subspace clustering and low rank representation) into transform learning. The entire formulation is jointly learnt, giving rise to a new class of methods called transformed subspace clustering (TSC). In order to account for non-linearity, kernelized extensions of TSC are also proposed. To test the performance of the proposed techniques, benchmarking is performed on image clustering and document clustering datasets. Comparison with state-of-the-art clustering techniques shows that our formulation improves upon them.
1611.01873
Carlos Enrique Frasser Mr.
Carlos E. Frasser and George N. Vostrov
Geodetic Graphs Homeomorphic to a Given Geodetic Graph
28 pages, 8 Figures
International Journal of Graph Theory and its Applications 3(1) (2020) pp. 13-44
null
https://www.mililink.com/upload/article/107744499ijgta_v3i1_13-44.pdf
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a new approach to the problem of generating the class of all geodetic graphs homeomorphic to a given geodetic one. An algorithmic procedure is elaborated to carry out a systematic finding of such a class of graphs. As a result, the enumeration of the class of geodetic graphs homeomorphic to certain Moore graphs has been performed.
[ { "created": "Mon, 7 Nov 2016 02:02:23 GMT", "version": "v1" } ]
2023-06-21
[ [ "Frasser", "Carlos E.", "" ], [ "Vostrov", "George N.", "" ] ]
This paper describes a new approach to the problem of generating the class of all geodetic graphs homeomorphic to a given geodetic one. An algorithmic procedure is elaborated to carry out a systematic finding of such a class of graphs. As a result, the enumeration of the class of geodetic graphs homeomorphic to certain Moore graphs has been performed.
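For readers unfamiliar with the property central to the abstract above: a graph is geodetic when every pair of vertices is joined by a unique shortest path. A small BFS-based checker (an illustrative sketch, not code from the paper) makes the definition concrete by counting shortest paths from every source:

```python
from collections import deque

def is_geodetic(adj):
    """Return True iff every vertex pair has a unique shortest path.

    adj: dict mapping each vertex to the set of its neighbours.
    Runs one BFS per vertex, propagating shortest-path counts."""
    for s in adj:
        dist = {s: 0}
        count = {s: 1}  # number of shortest s->v paths found so far
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    count[v] = count[u]
                    q.append(v)
                elif dist[v] == dist[u] + 1:
                    # a second shortest path into v
                    count[v] += count[u]
        if any(c != 1 for c in count.values()):
            return False
    return True
```

For example, an odd cycle is geodetic while an even cycle is not (opposite vertices have two shortest paths), and Moore graphs such as the Petersen graph pass the check.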
2405.07781
Andreas Vogelsang
Adrian Bajraktari, Michelle Binder, Andreas Vogelsang
Requirements Engineering for Research Software: A Vision
Accepted at the 32nd IEEE International Requirements Engineering 2024 (RE) conference
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern science is relying on software more than ever. The behavior and outcomes of this software shape the scientific and public discourse on important topics like climate change, economic growth, or the spread of infections. Most researchers creating software for scientific purposes are not trained in Software Engineering. As a consequence, research software is often developed ad hoc without following stringent processes. With this paper, we want to characterize research software as a new application domain that needs attention from the Requirements Engineering community. We conducted an exploratory study based on 8 interviews with 12 researchers who develop software. We describe how researchers elicit, document, and analyze requirements for research software and what processes they follow. From this, we derive specific challenges and describe a vision of Requirements Engineering for research software.
[ { "created": "Mon, 13 May 2024 14:25:01 GMT", "version": "v1" } ]
2024-05-14
[ [ "Bajraktari", "Adrian", "" ], [ "Binder", "Michelle", "" ], [ "Vogelsang", "Andreas", "" ] ]
Modern science is relying on software more than ever. The behavior and outcomes of this software shape the scientific and public discourse on important topics like climate change, economic growth, or the spread of infections. Most researchers creating software for scientific purposes are not trained in Software Engineering. As a consequence, research software is often developed ad hoc without following stringent processes. With this paper, we want to characterize research software as a new application domain that needs attention from the Requirements Engineering community. We conducted an exploratory study based on 8 interviews with 12 researchers who develop software. We describe how researchers elicit, document, and analyze requirements for research software and what processes they follow. From this, we derive specific challenges and describe a vision of Requirements Engineering for research software.
2110.01880
Shailza Sharma
Shailza Sharma, Abhinav Dhall, and Vinay Kumar
Frequency Aware Face Hallucination Generative Adversarial Network with Semantic Structural Constraint
12 pages, 12 figures, submitted to IEEE Transactions on Computational Imaging
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address the issue of face hallucination. Most current face hallucination methods rely on two-dimensional facial priors to generate high resolution face images from low resolution face images. These methods are only capable of assimilating global information into the generated image. Still there exist some inherent problems in these methods; such as, local features, subtle structural details and missing depth information in final output image. Present work proposes a Generative Adversarial Network (GAN) based novel progressive Face Hallucination (FH) network to address these issues present among current methods. The generator of the proposed model comprises of FH network and two sub-networks, assisting FH network to generate high resolution images. The first sub-network leverages on explicitly adding high frequency components into the model. To explicitly encode the high frequency components, an auto encoder is proposed to generate high resolution coefficients of Discrete Cosine Transform (DCT). To add three dimensional parametric information into the network, second sub-network is proposed. This network uses a shape model of 3D Morphable Models (3DMM) to add structural constraint to the FH network. Extensive experimentation results in the paper shows that the proposed model outperforms the state-of-the-art methods.
[ { "created": "Tue, 5 Oct 2021 08:51:29 GMT", "version": "v1" } ]
2021-10-06
[ [ "Sharma", "Shailza", "" ], [ "Dhall", "Abhinav", "" ], [ "Kumar", "Vinay", "" ] ]
In this paper, we address the issue of face hallucination. Most current face hallucination methods rely on two-dimensional facial priors to generate high resolution face images from low resolution face images. These methods are only capable of assimilating global information into the generated image, and suffer from inherent problems such as lost local features, missing subtle structural details and absent depth information in the final output image. The present work proposes a Generative Adversarial Network (GAN) based progressive Face Hallucination (FH) network to address these issues. The generator of the proposed model comprises the FH network and two sub-networks that assist the FH network in generating high resolution images. The first sub-network explicitly adds high frequency components into the model: an auto-encoder is proposed to generate high resolution coefficients of the Discrete Cosine Transform (DCT). The second sub-network adds three-dimensional parametric information into the network, using the shape model of a 3D Morphable Model (3DMM) to impose a structural constraint on the FH network. Extensive experimental results show that the proposed model outperforms the state-of-the-art methods.
2405.10765
Leon Tolksdorf
Leon Tolksdorf, Christian Birkner, Arturo Tejada, Nathan van de Wouw
Fast Collision Probability Estimation for Automated Driving using Multi-circular Shape Approximations
Accepted for the 2024 Intelligent Vehicles Symposium, 8 pages
null
null
null
cs.RO math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many state-of-the-art methods for safety assessment and motion planning for automated driving require estimation of the probability of collision (POC). To estimate the POC, a shape approximation of the colliding actors and probability density functions of the associated uncertain kinematic variables are required. Even with such information available, the derivation of the POC is in general, i.e., for any shape and density, only possible with Monte Carlo sampling (MCS). Random sampling of the POC, however, is challenging as computational resources are limited in real-world applications. We present expressions for the POC in the presence of Gaussian uncertainties, based on multi-circular shape approximations. In addition, we show that the proposed approach is computationally more efficient than MCS. Lastly, we provide a method for upper and lower bounding the estimation error for the POC induced by the used shape approximations.
[ { "created": "Fri, 17 May 2024 13:27:14 GMT", "version": "v1" }, { "created": "Wed, 22 May 2024 06:22:11 GMT", "version": "v2" } ]
2024-05-24
[ [ "Tolksdorf", "Leon", "" ], [ "Birkner", "Christian", "" ], [ "Tejada", "Arturo", "" ], [ "van de Wouw", "Nathan", "" ] ]
Many state-of-the-art methods for safety assessment and motion planning for automated driving require estimation of the probability of collision (POC). To estimate the POC, a shape approximation of the colliding actors and probability density functions of the associated uncertain kinematic variables are required. Even with such information available, the derivation of the POC is in general, i.e., for any shape and density, only possible with Monte Carlo sampling (MCS). Random sampling of the POC, however, is challenging as computational resources are limited in real-world applications. We present expressions for the POC in the presence of Gaussian uncertainties, based on multi-circular shape approximations. In addition, we show that the proposed approach is computationally more efficient than MCS. Lastly, we provide a method for upper and lower bounding the estimation error for the POC induced by the used shape approximations.
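The Monte Carlo sampling (MCS) baseline that the abstract above compares against can be sketched directly. This is a hedged toy version for two circular shape approximations whose relative position is Gaussian-distributed; the function name and parameters are illustrative, not from the paper:

```python
import numpy as np

def collision_probability_mcs(mu, cov, r_ego, r_obj, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the probability of collision (POC)
    between two circles with Gaussian relative-position uncertainty.

    mu:  mean relative position, shape (2,)
    cov: 2x2 covariance of the relative position
    r_ego, r_obj: radii of the two circular shape approximations
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=n_samples)
    dist = np.linalg.norm(samples, axis=1)
    # Collision iff the circles overlap: centre distance < sum of radii.
    return float(np.mean(dist < r_ego + r_obj))
```

The estimate converges only as O(1/sqrt(n_samples)), which is exactly the cost the paper's closed-form expressions avoid.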
2405.12252
Canh Pham Van
Canh V. Pham
Enhanced Deterministic Approximation Algorithm for Non-monotone Submodular Maximization under Knapsack Constraint with Linear Query Complexity
null
null
null
null
cs.DS cs.AI
http://creativecommons.org/licenses/by/4.0/
In this work, we consider the Submodular Maximization under Knapsack (SMK) constraint problem over the ground set of size $n$. The problem recently attracted a lot of attention due to its applications in various domains of combination optimization, artificial intelligence, and machine learning. We improve the approximation factor of the fastest deterministic algorithm from $6+\epsilon$ to $5+\epsilon$ while keeping the best query complexity of $O(n)$, where $\epsilon >0$ is a constant parameter. Our technique is based on optimizing the performance of two components: the threshold greedy subroutine and the building of two disjoint sets as candidate solutions. Besides, by carefully analyzing the cost of candidate solutions, we obtain a tighter approximation factor.
[ { "created": "Mon, 20 May 2024 02:24:58 GMT", "version": "v1" } ]
2024-05-22
[ [ "Pham", "Canh V.", "" ] ]
In this work, we consider the Submodular Maximization under Knapsack (SMK) constraint problem over the ground set of size $n$. The problem recently attracted a lot of attention due to its applications in various domains of combinatorial optimization, artificial intelligence, and machine learning. We improve the approximation factor of the fastest deterministic algorithm from $6+\epsilon$ to $5+\epsilon$ while keeping the best query complexity of $O(n)$, where $\epsilon >0$ is a constant parameter. Our technique is based on optimizing the performance of two components: the threshold greedy subroutine and the building of two disjoint sets as candidate solutions. In addition, by carefully analyzing the cost of candidate solutions, we obtain a tighter approximation factor.
2212.10276
Shashank Srivastava
Graham Caron and Shashank Srivastava
Identifying and Manipulating the Personality Traits of Language Models
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Psychology research has long explored aspects of human personality such as extroversion, agreeableness and emotional stability. Categorizations like the `Big Five' personality traits are commonly used to assess and diagnose personality types. In this work, we explore the question of whether the perceived personality in language models is exhibited consistently in their language generation. For example, is a language model such as GPT2 likely to respond in a consistent way if asked to go out to a party? We also investigate whether such personality traits can be controlled. We show that when provided different types of contexts (such as personality descriptions, or answers to diagnostic questions about personality traits), language models such as BERT and GPT2 can consistently identify and reflect personality markers in those contexts. This behavior illustrates an ability to be manipulated in a highly predictable way, and frames them as tools for identifying personality traits and controlling personas in applications such as dialog systems. We also contribute a crowd-sourced data-set of personality descriptions of human subjects paired with their `Big Five' personality assessment data, and a data-set of personality descriptions collated from Reddit.
[ { "created": "Tue, 20 Dec 2022 14:24:11 GMT", "version": "v1" } ]
2022-12-21
[ [ "Caron", "Graham", "" ], [ "Srivastava", "Shashank", "" ] ]
Psychology research has long explored aspects of human personality such as extroversion, agreeableness and emotional stability. Categorizations like the `Big Five' personality traits are commonly used to assess and diagnose personality types. In this work, we explore the question of whether the perceived personality in language models is exhibited consistently in their language generation. For example, is a language model such as GPT2 likely to respond in a consistent way if asked to go out to a party? We also investigate whether such personality traits can be controlled. We show that when provided different types of contexts (such as personality descriptions, or answers to diagnostic questions about personality traits), language models such as BERT and GPT2 can consistently identify and reflect personality markers in those contexts. This behavior illustrates an ability to be manipulated in a highly predictable way, and frames them as tools for identifying personality traits and controlling personas in applications such as dialog systems. We also contribute a crowd-sourced data-set of personality descriptions of human subjects paired with their `Big Five' personality assessment data, and a data-set of personality descriptions collated from Reddit.
1911.06889
Tristan Pollner
Andrei Graur, Tristan Pollner, Vidhya Ramaswamy, and S. Matthew Weinberg
New Query Lower Bounds for Submodular Function Minimization
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider submodular function minimization in the oracle model: given black-box access to a submodular set function $f:2^{[n]}\rightarrow \mathbb{R}$, find an element of $\arg\min_S \{f(S)\}$ using as few queries to $f(\cdot)$ as possible. State-of-the-art algorithms succeed with $\tilde{O}(n^2)$ queries [LeeSW15], yet the best-known lower bound has never been improved beyond $n$ [Harvey08]. We provide a query lower bound of $2n$ for submodular function minimization, a $3n/2-2$ query lower bound for the non-trivial minimizer of a symmetric submodular function, and a $\binom{n}{2}$ query lower bound for the non-trivial minimizer of an asymmetric submodular function. Our $3n/2-2$ lower bound results from a connection between SFM lower bounds and a novel concept we term the cut dimension of a graph. Interestingly, this yields a $3n/2-2$ cut-query lower bound for finding the global mincut in an undirected, weighted graph, but we also prove it cannot yield a lower bound better than $n+1$ for $s$-$t$ mincut, even in a directed, weighted graph.
[ { "created": "Fri, 15 Nov 2019 21:45:14 GMT", "version": "v1" } ]
2019-11-19
[ [ "Graur", "Andrei", "" ], [ "Pollner", "Tristan", "" ], [ "Ramaswamy", "Vidhya", "" ], [ "Weinberg", "S. Matthew", "" ] ]
We consider submodular function minimization in the oracle model: given black-box access to a submodular set function $f:2^{[n]}\rightarrow \mathbb{R}$, find an element of $\arg\min_S \{f(S)\}$ using as few queries to $f(\cdot)$ as possible. State-of-the-art algorithms succeed with $\tilde{O}(n^2)$ queries [LeeSW15], yet the best-known lower bound has never been improved beyond $n$ [Harvey08]. We provide a query lower bound of $2n$ for submodular function minimization, a $3n/2-2$ query lower bound for the non-trivial minimizer of a symmetric submodular function, and a $\binom{n}{2}$ query lower bound for the non-trivial minimizer of an asymmetric submodular function. Our $3n/2-2$ lower bound results from a connection between SFM lower bounds and a novel concept we term the cut dimension of a graph. Interestingly, this yields a $3n/2-2$ cut-query lower bound for finding the global mincut in an undirected, weighted graph, but we also prove it cannot yield a lower bound better than $n+1$ for $s$-$t$ mincut, even in a directed, weighted graph.
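The oracle model in the abstract above can be made concrete with a brute-force minimizer that counts its queries. This is an illustrative sketch only: it spends all $2^n$ queries, whereas the algorithms and lower bounds discussed above concern polynomial query counts.

```python
from itertools import combinations

def minimize_submodular(f, n):
    """Exhaustive submodular function minimization in the oracle model.

    f: black-box set function, called on frozensets of {0, ..., n-1}
    n: ground-set size
    Returns (minimum value, a minimizer, number of oracle queries)."""
    queries = 0
    best_val, best_set = None, None
    for k in range(n + 1):
        for S in combinations(range(n), k):
            val = f(frozenset(S))  # one oracle query
            queries += 1
            if best_val is None or val < best_val:
                best_val, best_set = val, frozenset(S)
    return best_val, best_set, queries
```

A natural test oracle is the cut function of a graph, which is symmetric submodular and has the trivial minimizers (empty set and full vertex set) at value zero.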
1907.09755
Erik Daniel
Erik Daniel, Elias Rohrer and Florian Tschorsch
Map-Z: Exposing the Zcash Network in Times of Transition
8 pages, 6 Figures, accepted at 2019 IEEE 44th Conference on Local Computer Networks (LCN) for publication
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Zcash is a privacy-preserving cryptocurrency that provides anonymous monetary transactions. While Zcash's anonymity is part of a rigorous scientific discussion, information on the underlying peer-to-peer network are missing. In this paper, we provide the first long-term measurement study of the Zcash network to capture key metrics such as the network size and node distribution as well as deeper insights on the centralization of the network. Furthermore, we present an inference method based on a timing analysis of block arrivals that we use to determine interconnections of nodes. We evaluate and verify our method through simulations and real-world experiments, yielding a precision of 50 % with a recall of 82 % in the real-world scenario. By adjusting the parameters, the topology inference model is adaptable to the conditions found in other cryptocurrencies and therefore also contributes to the broader discussion of topology hiding in general.
[ { "created": "Tue, 23 Jul 2019 08:39:11 GMT", "version": "v1" }, { "created": "Fri, 26 Jul 2019 13:58:18 GMT", "version": "v2" } ]
2019-07-29
[ [ "Daniel", "Erik", "" ], [ "Rohrer", "Elias", "" ], [ "Tschorsch", "Florian", "" ] ]
Zcash is a privacy-preserving cryptocurrency that provides anonymous monetary transactions. While Zcash's anonymity is part of a rigorous scientific discussion, information on the underlying peer-to-peer network is missing. In this paper, we provide the first long-term measurement study of the Zcash network to capture key metrics such as the network size and node distribution as well as deeper insights on the centralization of the network. Furthermore, we present an inference method based on a timing analysis of block arrivals that we use to determine interconnections of nodes. We evaluate and verify our method through simulations and real-world experiments, yielding a precision of 50% with a recall of 82% in the real-world scenario. By adjusting the parameters, the topology inference model is adaptable to the conditions found in other cryptocurrencies and therefore also contributes to the broader discussion of topology hiding in general.
2211.05364
Chao Hu
Chao Hu, Liqiang Zhu
Efficient Unsupervised Video Object Segmentation Network Based on Motion Guidance
The 10th International Conference on Information Systems and Computing Technology
2022-The 10th International Conference on Information Systems and Computing Technology
null
ID: ISCT-2022-0052
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the problem of performance constraints of unsupervised video object detection, its large-scale application is limited. In response to this pain point, we propose another excellent method to solve this problematic point. By incorporating motion characterization in unsupervised video object detection, detection accuracy is improved while reducing the computational amount of the network. The whole network structure consists of dual-stream network, motion guidance module, and multi-scale progressive fusion module. The appearance and motion representations of the detection target are obtained through a dual-stream network. Then, the semantic features of the motion representation are obtained through the local attention mechanism in the motion guidance module to obtain the high-level semantic features of the appearance representation. The multi-scale progressive fusion module then fuses the features of different deep semantic features in the dual-stream network further to improve the detection effect of the overall network. We have conducted numerous experiments on the three datasets of DAVIS 16, FBMS, and ViSal. The verification results show that the proposed method achieves superior accuracy and performance and proves the superiority and robustness of the algorithm.
[ { "created": "Thu, 10 Nov 2022 06:13:23 GMT", "version": "v1" }, { "created": "Mon, 21 Nov 2022 04:36:07 GMT", "version": "v2" } ]
2022-11-22
[ [ "Hu", "Chao", "" ], [ "Zhu", "Liqiang", "" ] ]
The large-scale application of unsupervised video object detection is limited by its performance constraints. To address this limitation, we propose a method that incorporates motion characterization into unsupervised video object detection, improving detection accuracy while reducing the network's computational cost. The overall network consists of a dual-stream network, a motion guidance module, and a multi-scale progressive fusion module. The appearance and motion representations of the detection target are obtained through the dual-stream network. The motion guidance module then extracts semantic features from the motion representation through a local attention mechanism to obtain high-level semantic features of the appearance representation. Finally, the multi-scale progressive fusion module fuses semantic features from different depths of the dual-stream network to further improve the overall detection quality. We have conducted extensive experiments on the DAVIS 16, FBMS, and ViSal datasets. The results show that the proposed method achieves superior accuracy and performance, demonstrating its superiority and robustness.
1904.00058
Andrey Rivkin
Marco Montali, Andrey Rivkin
From DB-nets to Coloured Petri Nets with Priorities (Extended Version)
null
null
null
null
cs.LO cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recently introduced formalism of DB-nets has brought in a new conceptual way of modelling complex dynamic systems that equally account for the process and data dimensions, considering local data as well as persistent, transactional data. DB-nets combine a coloured variant of Petri nets with name creation and management (which we call nu-CPN), with a relational database. The integration of these two components is realized by equipping the net with special ``view'' places that query the database and expose the resulting answers to the net, with actions that allow transitions to update the content of the database, and with special arcs capturing compensation in case of transaction failure. In this work, we study whether this sophisticated model can be encoded back into nu-CPNs. In particular, we show that the meaningful fragment of DB-nets where database queries are expressed using unions of conjunctive queries with inequalities can be faithfully encoded into $\nu$-CPNs with transition priorities. This allows us to directly exploit state-of-the-art technologies such as CPN Tools to simulate and analyse this relevant class of DB-nets.
[ { "created": "Fri, 29 Mar 2019 19:11:42 GMT", "version": "v1" } ]
2019-04-02
[ [ "Montali", "Marco", "" ], [ "Rivkin", "Andrey", "" ] ]
The recently introduced formalism of DB-nets has brought in a new conceptual way of modelling complex dynamic systems that equally account for the process and data dimensions, considering local data as well as persistent, transactional data. DB-nets combine a coloured variant of Petri nets with name creation and management (which we call nu-CPN), with a relational database. The integration of these two components is realized by equipping the net with special ``view'' places that query the database and expose the resulting answers to the net, with actions that allow transitions to update the content of the database, and with special arcs capturing compensation in case of transaction failure. In this work, we study whether this sophisticated model can be encoded back into nu-CPNs. In particular, we show that the meaningful fragment of DB-nets where database queries are expressed using unions of conjunctive queries with inequalities can be faithfully encoded into $\nu$-CPNs with transition priorities. This allows us to directly exploit state-of-the-art technologies such as CPN Tools to simulate and analyse this relevant class of DB-nets.
2010.04933
Yue Zheng
Yue Zheng, Tianyi Yang, Wen Zhang and Dengji Zhao
Barter Exchange via Friends' Friends
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Barter exchange studies the setting where each agent owns a good, and they can exchange with each other if that gives them more preferred goods. This exchange will give better outcomes if there are more participants. The challenge here is how to get more participants and our goal is to incentivize the existing participants to invite new participants. However, new participants might be competitors for the existing participants. Therefore, we design an exchange mechanism based on the classical Top Trading Cycle (TTC) algorithm to solve their conflicts. Our mechanism is truthful in terms of revealing their preferences and also guarantees that inviting all their neighbors is a dominant strategy for all participants. The mechanism can be applied in settings where more participants are preferred but no extra budget to reach new participants.
[ { "created": "Sat, 10 Oct 2020 07:53:18 GMT", "version": "v1" } ]
2020-10-13
[ [ "Zheng", "Yue", "" ], [ "Yang", "Tianyi", "" ], [ "Zhang", "Wen", "" ], [ "Zhao", "Dengji", "" ] ]
Barter exchange studies the setting where each agent owns a good, and agents can exchange goods with one another whenever doing so yields goods they prefer. Such an exchange gives better outcomes when there are more participants. The challenge is how to attract more participants, and our goal is to incentivize the existing participants to invite new ones. However, new participants might be competitors of the existing participants. We therefore design an exchange mechanism based on the classical Top Trading Cycle (TTC) algorithm to resolve this conflict. Our mechanism is truthful in terms of revealing preferences and also guarantees that inviting all of one's neighbors is a dominant strategy for every participant. The mechanism can be applied in settings where more participants are preferred but there is no extra budget to reach new ones.
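The classical Top Trading Cycle algorithm that the abstract above builds on can be sketched as follows. This is a textbook version for the basic one-good-per-agent setting, not the paper's network extension: each agent points at the owner of their favourite remaining good, cycles are traded out, and the process repeats.

```python
def top_trading_cycle(preferences):
    """Classic Top Trading Cycle (TTC).

    Each agent i initially owns good i and ranks all goods.
    preferences: dict agent -> list of goods, most preferred first.
    Returns dict agent -> good received."""
    remaining = set(preferences)
    allocation = {}
    while remaining:
        # Each remaining agent points at the owner of their favourite remaining good.
        points_to = {a: next(g for g in preferences[a] if g in remaining)
                     for a in remaining}
        # Follow pointers from any agent; a cycle must exist in a functional graph.
        seen, a = [], next(iter(remaining))
        while a not in seen:
            seen.append(a)
            a = points_to[a]
        cycle = seen[seen.index(a):]
        # Trade along the cycle and remove its agents.
        for agent in cycle:
            allocation[agent] = points_to[agent]
        remaining -= set(cycle)
    return allocation
```

TTC is strategyproof and produces the unique core allocation in this setting, which is why it is a natural base for a truthful invitation mechanism.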
1010.4281
Vijay Vazirani
Vijay V. Vazirani
Non-Separable, Quasiconcave Utilities are Easy -- in a Perfect Price Discrimination Market Model
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent results, establishing evidence of intractability for such restrictive utility functions as additively separable, piecewise-linear and concave, under both Fisher and Arrow-Debreu market models, have prompted the question of whether we have failed to capture some essential elements of real markets, which seem to do a good job of finding prices that maintain parity between supply and demand. The main point of this paper is to show that even non-separable, quasiconcave utility functions can be handled efficiently in a suitably chosen, though natural, realistic and useful, market model; our model allows for perfect price discrimination. Our model supports unique equilibrium prices and, for the restriction to concave utilities, satisfies both welfare theorems.
[ { "created": "Wed, 20 Oct 2010 19:13:15 GMT", "version": "v1" } ]
2010-10-21
[ [ "Vazirani", "Vijay V.", "" ] ]
Recent results, establishing evidence of intractability for such restrictive utility functions as additively separable, piecewise-linear and concave, under both Fisher and Arrow-Debreu market models, have prompted the question of whether we have failed to capture some essential elements of real markets, which seem to do a good job of finding prices that maintain parity between supply and demand. The main point of this paper is to show that even non-separable, quasiconcave utility functions can be handled efficiently in a suitably chosen, though natural, realistic and useful, market model; our model allows for perfect price discrimination. Our model supports unique equilibrium prices and, for the restriction to concave utilities, satisfies both welfare theorems.
2306.06051
Alessandro Wollek
Alessandro Wollek, Sardi Hyska, Bastian Sabel, Michael Ingrisch, Tobias Lasser
Higher Chest X-ray Resolution Improves Classification Performance
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep learning models for image classification are often trained at a resolution of 224 x 224 pixels for historical and efficiency reasons. However, chest X-rays are acquired at a much higher resolution to display subtle pathologies. This study investigates the effect of training resolution on chest X-ray classification performance, using the chest X-ray 14 dataset. The results show that training with a higher image resolution, specifically 1024 x 1024 pixels, results in the best overall classification performance with a mean AUC of 84.2 % compared to 82.7 % when trained with 256 x 256 pixel images. Additionally, comparison of bounding boxes and GradCAM saliency maps suggest that low resolutions, such as 256 x 256 pixels, are insufficient for identifying small pathologies and force the model to use spurious discriminating features. Our code is publicly available at https://gitlab.lrz.de/IP/cxr-resolution
[ { "created": "Fri, 9 Jun 2023 17:21:52 GMT", "version": "v1" }, { "created": "Thu, 3 Aug 2023 17:58:20 GMT", "version": "v2" } ]
2023-08-04
[ [ "Wollek", "Alessandro", "" ], [ "Hyska", "Sardi", "" ], [ "Sabel", "Bastian", "" ], [ "Ingrisch", "Michael", "" ], [ "Lasser", "Tobias", "" ] ]
Deep learning models for image classification are often trained at a resolution of 224 x 224 pixels for historical and efficiency reasons. However, chest X-rays are acquired at a much higher resolution to display subtle pathologies. This study investigates the effect of training resolution on chest X-ray classification performance, using the chest X-ray 14 dataset. The results show that training with a higher image resolution, specifically 1024 x 1024 pixels, results in the best overall classification performance with a mean AUC of 84.2 % compared to 82.7 % when trained with 256 x 256 pixel images. Additionally, comparison of bounding boxes and GradCAM saliency maps suggest that low resolutions, such as 256 x 256 pixels, are insufficient for identifying small pathologies and force the model to use spurious discriminating features. Our code is publicly available at https://gitlab.lrz.de/IP/cxr-resolution
2406.05505
Georgina Cosma Professor
Mohit Kumar Singh, Georgina Cosma, Patrick Waterson, Jonathan Back, Gyuchan Thomas Jun
I-SIRch: AI-Powered Concept Annotation Tool For Equitable Extraction And Analysis Of Safety Insights From Maternity Investigations
null
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Maternity care is a complex system involving treatments and interactions between patients, providers, and the care environment. To improve patient safety and outcomes, understanding the human factors (e.g. individuals' decisions, local facilities) influencing healthcare delivery is crucial. However, most current tools for analysing healthcare data focus only on biomedical concepts (e.g. health conditions, procedures and tests), overlooking the importance of human factors. We developed a new approach called I-SIRch, using artificial intelligence to automatically identify and label human factors concepts in maternity healthcare investigation reports describing adverse maternity incidents produced by England's Healthcare Safety Investigation Branch (HSIB). These incident investigation reports aim to identify opportunities for learning and improving maternal safety across the entire healthcare system. I-SIRch was trained using real data and tested on both real and simulated data to evaluate its performance in identifying human factors concepts. When applied to real reports, the model achieved a high level of accuracy, correctly identifying relevant concepts in 90\% of the sentences from 97 reports. Applying I-SIRch to analyse these reports revealed that certain human factors disproportionately affected mothers from different ethnic groups. Our work demonstrates the potential of using automated tools to identify human factors concepts in maternity incident investigation reports, rather than focusing solely on biomedical concepts. This approach opens up new possibilities for understanding the complex interplay between social, technical, and organisational factors influencing maternal safety and population health outcomes. By taking a more comprehensive view of maternal healthcare delivery, we can develop targeted interventions to address disparities and improve maternal outcomes.
[ { "created": "Sat, 8 Jun 2024 16:05:31 GMT", "version": "v1" } ]
2024-06-11
[ [ "Singh", "Mohit Kumar", "" ], [ "Cosma", "Georgina", "" ], [ "Waterson", "Patrick", "" ], [ "Back", "Jonathan", "" ], [ "Jun", "Gyuchan Thomas", "" ] ]
Maternity care is a complex system involving treatments and interactions between patients, providers, and the care environment. To improve patient safety and outcomes, understanding the human factors (e.g. individuals' decisions, local facilities) influencing healthcare delivery is crucial. However, most current tools for analysing healthcare data focus only on biomedical concepts (e.g. health conditions, procedures and tests), overlooking the importance of human factors. We developed a new approach called I-SIRch, using artificial intelligence to automatically identify and label human factors concepts in maternity healthcare investigation reports describing adverse maternity incidents produced by England's Healthcare Safety Investigation Branch (HSIB). These incident investigation reports aim to identify opportunities for learning and improving maternal safety across the entire healthcare system. I-SIRch was trained using real data and tested on both real and simulated data to evaluate its performance in identifying human factors concepts. When applied to real reports, the model achieved a high level of accuracy, correctly identifying relevant concepts in 90\% of the sentences from 97 reports. Applying I-SIRch to analyse these reports revealed that certain human factors disproportionately affected mothers from different ethnic groups. Our work demonstrates the potential of using automated tools to identify human factors concepts in maternity incident investigation reports, rather than focusing solely on biomedical concepts. This approach opens up new possibilities for understanding the complex interplay between social, technical, and organisational factors influencing maternal safety and population health outcomes. By taking a more comprehensive view of maternal healthcare delivery, we can develop targeted interventions to address disparities and improve maternal outcomes.
2011.10653
Abe Leite
Abe Leite and Sa\'ul A. Blanco
Effects of Human vs. Automatic Feedback on Students' Understanding of AI Concepts and Programming Style
Published in SIGCSE '20: Proceedings of the 51st ACM Technical Symposium on Computer Science Education
SIGCSE '20: Proceedings of the 51st ACM Technical Symposium on Computer Science Education (Feb 2020) 44-50
10.1145/3328778.3366921
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses, and recent work has focused on improving the quality of automatically generated feedback. However, there is a relative lack of data directly comparing student outcomes when receiving computer-generated feedback and human-written feedback. This paper addresses this gap by splitting one 90-student class into two feedback groups and analyzing differences in the two cohorts' performance. The class is an introductory AI course with programming homework assignments. One group of students received detailed computer-generated feedback on their programming assignments describing which parts of the algorithms' logic were missing; the other group additionally received human-written feedback describing how their programs' syntax relates to issues with their logic, and qualitative (style) recommendations for improving their code. Results on quizzes and exam questions suggest that human feedback helps students obtain a better conceptual understanding, but analyses found no difference between the groups' ability to collaborate on the final project. The course grade distribution revealed that students who received human-written feedback performed better overall; this effect was the most pronounced in the middle two quartiles of each group. These results suggest that feedback about the syntax-logic relation may be a primary mechanism by which human feedback improves student outcomes.
[ { "created": "Fri, 20 Nov 2020 21:40:32 GMT", "version": "v1" } ]
2020-11-24
[ [ "Leite", "Abe", "" ], [ "Blanco", "Saúl A.", "" ] ]
The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses, and recent work has focused on improving the quality of automatically generated feedback. However, there is a relative lack of data directly comparing student outcomes when receiving computer-generated feedback and human-written feedback. This paper addresses this gap by splitting one 90-student class into two feedback groups and analyzing differences in the two cohorts' performance. The class is an introductory AI course with programming homework assignments. One group of students received detailed computer-generated feedback on their programming assignments describing which parts of the algorithms' logic were missing; the other group additionally received human-written feedback describing how their programs' syntax relates to issues with their logic, and qualitative (style) recommendations for improving their code. Results on quizzes and exam questions suggest that human feedback helps students obtain a better conceptual understanding, but analyses found no difference between the groups' ability to collaborate on the final project. The course grade distribution revealed that students who received human-written feedback performed better overall; this effect was the most pronounced in the middle two quartiles of each group. These results suggest that feedback about the syntax-logic relation may be a primary mechanism by which human feedback improves student outcomes.
2203.15601
Christian Sigg
Christian Sigg, Flavia Cavallaro, Tobias G\"unther and Martin R. Oswald
Photographic Visualization of Weather Forecasts with Generative Adversarial Networks
null
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Outdoor webcam images are an information-dense yet accessible visualization of past and present weather conditions, and are consulted by meteorologists and the general public alike. Weather forecasts, however, are still communicated as text, pictograms or charts. We therefore introduce a novel method that uses photographic images to also visualize future weather conditions. This is challenging, because photographic visualizations of weather forecasts should look real, be free of obvious artifacts, and should match the predicted weather conditions. The transition from observation to forecast should be seamless, and there should be visual continuity between images for consecutive lead times. We use conditional Generative Adversarial Networks to synthesize such visualizations. The generator network, conditioned on the analysis and the forecasting state of the numerical weather prediction (NWP) model, transforms the present camera image into the future. The discriminator network judges whether a given image is the real image of the future, or whether it has been synthesized. Training the two networks against each other results in a visualization method that scores well on all four evaluation criteria. We present results for three camera sites across Switzerland that differ in climatology and terrain. We show that users find it challenging to distinguish real from generated images, performing not much better than if they guessed randomly. The generated images match the atmospheric, ground and illumination conditions of the COSMO-1 NWP model forecast in at least 89 % of the examined cases. Nowcasting sequences of generated images achieve a seamless transition from observation to forecast and attain visual continuity.
[ { "created": "Tue, 29 Mar 2022 14:10:29 GMT", "version": "v1" } ]
2022-03-30
[ [ "Sigg", "Christian", "" ], [ "Cavallaro", "Flavia", "" ], [ "Günther", "Tobias", "" ], [ "Oswald", "Martin R.", "" ] ]
Outdoor webcam images are an information-dense yet accessible visualization of past and present weather conditions, and are consulted by meteorologists and the general public alike. Weather forecasts, however, are still communicated as text, pictograms or charts. We therefore introduce a novel method that uses photographic images to also visualize future weather conditions. This is challenging, because photographic visualizations of weather forecasts should look real, be free of obvious artifacts, and should match the predicted weather conditions. The transition from observation to forecast should be seamless, and there should be visual continuity between images for consecutive lead times. We use conditional Generative Adversarial Networks to synthesize such visualizations. The generator network, conditioned on the analysis and the forecasting state of the numerical weather prediction (NWP) model, transforms the present camera image into the future. The discriminator network judges whether a given image is the real image of the future, or whether it has been synthesized. Training the two networks against each other results in a visualization method that scores well on all four evaluation criteria. We present results for three camera sites across Switzerland that differ in climatology and terrain. We show that users find it challenging to distinguish real from generated images, performing not much better than if they guessed randomly. The generated images match the atmospheric, ground and illumination conditions of the COSMO-1 NWP model forecast in at least 89 % of the examined cases. Nowcasting sequences of generated images achieve a seamless transition from observation to forecast and attain visual continuity.
1811.08271
Zhitao Guan
Jing Li, Zhitao Guan, Xiaojiang Du, Zijian Zhang, Zhenyu Zhou
A Low-latency Secure Data Outsourcing Scheme for Cloud-WSN
arXiv admin note: text overlap with arXiv:1810.10746
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the support of cloud computing, large quantities of data collected from various WSN applications can be managed efficiently. However, maintaining data security and efficiency of data processing in cloud-WSN (C-WSN) are important and challenging issues. In this paper, we present an efficient data outsourcing scheme based on CP-ABE, which can not only guarantee secure data access, but also reduce overall data processing time. In our proposed scheme, a large file is first divided into several data blocks by the data owner (DO). Then, the data blocks are encrypted and transferred to the cloud server in parallel. For the data receiver (DR), data decryption and data transmission are also processed in parallel. In addition, data integrity can be checked by the DR without any master key components. The security analysis shows that the proposed scheme can meet the security requirements of C-WSN. Performance evaluation shows that our scheme can dramatically improve data processing efficiency compared to the traditional CP-ABE method.
[ { "created": "Thu, 25 Oct 2018 07:49:13 GMT", "version": "v1" } ]
2018-11-21
[ [ "Li", "Jing", "" ], [ "Guan", "Zhitao", "" ], [ "Du", "Xiaojiang", "" ], [ "Zhang", "Zijian", "" ], [ "Zhou", "Zhenyu", "" ] ]
With the support of cloud computing, large quantities of data collected from various WSN applications can be managed efficiently. However, maintaining data security and efficiency of data processing in cloud-WSN (C-WSN) are important and challenging issues. In this paper, we present an efficient data outsourcing scheme based on CP-ABE, which can not only guarantee secure data access, but also reduce overall data processing time. In our proposed scheme, a large file is first divided into several data blocks by the data owner (DO). Then, the data blocks are encrypted and transferred to the cloud server in parallel. For the data receiver (DR), data decryption and data transmission are also processed in parallel. In addition, data integrity can be checked by the DR without any master key components. The security analysis shows that the proposed scheme can meet the security requirements of C-WSN. Performance evaluation shows that our scheme can dramatically improve data processing efficiency compared to the traditional CP-ABE method.
2307.03347
Qing Xu
Qing Xu, Min Wu, Xiaoli Li, Kezhi Mao, Zhenghua Chen
Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data
Accepted by IJCAI 2023
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For many real-world time series tasks, the computational complexity of prevalent deep learning models often hinders deployment in resource-limited environments (e.g., smartphones). Moreover, due to the inevitable domain shift between the model training (source) and deployment (target) stages, compressing those deep models under cross-domain scenarios becomes more challenging. Although some existing works have explored cross-domain knowledge distillation for model compression, they are either biased to source data or heavily tangled between source and target data. To this end, we design a novel end-to-end framework called Universal and joint knowledge distillation (UNI-KD) for cross-domain model compression. In particular, we propose to transfer both the universal feature-level knowledge across source and target domains and the joint logit-level knowledge shared by both domains from the teacher to the student model via an adversarial learning scheme. More specifically, a feature-domain discriminator is employed to align teacher's and student's representations for universal knowledge transfer. A data-domain discriminator is utilized to prioritize the domain-shared samples for joint knowledge transfer. Extensive experimental results on four time series datasets demonstrate the superiority of our proposed method over state-of-the-art (SOTA) benchmarks.
[ { "created": "Fri, 7 Jul 2023 01:48:02 GMT", "version": "v1" } ]
2023-07-10
[ [ "Xu", "Qing", "" ], [ "Wu", "Min", "" ], [ "Li", "Xiaoli", "" ], [ "Mao", "Kezhi", "" ], [ "Chen", "Zhenghua", "" ] ]
For many real-world time series tasks, the computational complexity of prevalent deep learning models often hinders deployment in resource-limited environments (e.g., smartphones). Moreover, due to the inevitable domain shift between the model training (source) and deployment (target) stages, compressing those deep models under cross-domain scenarios becomes more challenging. Although some existing works have explored cross-domain knowledge distillation for model compression, they are either biased to source data or heavily tangled between source and target data. To this end, we design a novel end-to-end framework called Universal and joint knowledge distillation (UNI-KD) for cross-domain model compression. In particular, we propose to transfer both the universal feature-level knowledge across source and target domains and the joint logit-level knowledge shared by both domains from the teacher to the student model via an adversarial learning scheme. More specifically, a feature-domain discriminator is employed to align teacher's and student's representations for universal knowledge transfer. A data-domain discriminator is utilized to prioritize the domain-shared samples for joint knowledge transfer. Extensive experimental results on four time series datasets demonstrate the superiority of our proposed method over state-of-the-art (SOTA) benchmarks.
cs/0006005
Stephen Marsand
Stephen Marsland, Ulrich Nehmzow and Jonathan Shapiro
Novelty Detection for Robot Neotaxis
7 pages, 5 figures. In Proceedings of the Second International Conference on Neural Computation, 2000
null
null
null
cs.RO cs.NE nlin.AO
null
The ability of a robot to detect and respond to changes in its environment is potentially very useful, as it draws attention to new and potentially important features. We describe an algorithm for learning to filter out previously experienced stimuli to allow further concentration on novel features. The algorithm uses a model of habituation, a biological process which causes a decrement in response with repeated presentation. Experiments with a mobile robot are presented in which the robot detects the most novel stimulus and turns towards it (`neotaxis').
[ { "created": "Fri, 2 Jun 2000 11:32:17 GMT", "version": "v1" } ]
2007-05-23
[ [ "Marsland", "Stephen", "" ], [ "Nehmzow", "Ulrich", "" ], [ "Shapiro", "Jonathan", "" ] ]
The ability of a robot to detect and respond to changes in its environment is potentially very useful, as it draws attention to new and potentially important features. We describe an algorithm for learning to filter out previously experienced stimuli to allow further concentration on novel features. The algorithm uses a model of habituation, a biological process which causes a decrement in response with repeated presentation. Experiments with a mobile robot are presented in which the robot detects the most novel stimulus and turns towards it (`neotaxis').
2201.06170
Phillip Benjamin Str\"obel
Phillip Benjamin Str\"obel, Simon Clematide, Martin Volk, Raphael Schwitter, Tobias Hodel, David Schoch
Evaluation of HTR models without Ground Truth Material
Accepted at LREC 2022. Final version submitted to LREC 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The evaluation of Handwritten Text Recognition (HTR) models during their development is straightforward: because HTR is a supervised problem, the usual data split into training, validation, and test data sets allows the evaluation of models in terms of accuracy or error rates. However, the evaluation process becomes tricky as soon as we switch from development to application. A compilation of a new (and forcibly smaller) ground truth (GT) from a sample of the data that we want to apply the model on and the subsequent evaluation of models thereon only provides hints about the quality of the recognised text, as do confidence scores (if available) the models return. Moreover, if we have several models at hand, we face a model selection problem since we want to obtain the best possible result during the application phase. This calls for GT-free metrics to select the best model, which is why we (re-)introduce and compare different metrics, from simple, lexicon-based to more elaborate ones using standard language models and masked language models (MLM). We show that MLM-based evaluation can compete with lexicon-based methods, with the advantage that large and multilingual transformers are readily available, thus making compiling lexical resources for other metrics superfluous.
[ { "created": "Mon, 17 Jan 2022 01:26:09 GMT", "version": "v1" }, { "created": "Fri, 29 Apr 2022 09:59:29 GMT", "version": "v2" } ]
2022-05-02
[ [ "Ströbel", "Phillip Benjamin", "" ], [ "Clematide", "Simon", "" ], [ "Volk", "Martin", "" ], [ "Schwitter", "Raphael", "" ], [ "Hodel", "Tobias", "" ], [ "Schoch", "David", "" ] ]
The evaluation of Handwritten Text Recognition (HTR) models during their development is straightforward: because HTR is a supervised problem, the usual data split into training, validation, and test data sets allows the evaluation of models in terms of accuracy or error rates. However, the evaluation process becomes tricky as soon as we switch from development to application. A compilation of a new (and forcibly smaller) ground truth (GT) from a sample of the data that we want to apply the model on and the subsequent evaluation of models thereon only provides hints about the quality of the recognised text, as do confidence scores (if available) the models return. Moreover, if we have several models at hand, we face a model selection problem since we want to obtain the best possible result during the application phase. This calls for GT-free metrics to select the best model, which is why we (re-)introduce and compare different metrics, from simple, lexicon-based to more elaborate ones using standard language models and masked language models (MLM). We show that MLM-based evaluation can compete with lexicon-based methods, with the advantage that large and multilingual transformers are readily available, thus making compiling lexical resources for other metrics superfluous.
2312.01105
Patrick Ruhkamp
Patrick Ruhkamp, Daoyi Gao, Nassir Navab, Benjamin Busam
S2P3: Self-Supervised Polarimetric Pose Prediction
Accepted at IJCV
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper proposes the first self-supervised 6D object pose prediction from multimodal RGB+polarimetric images. The novel training paradigm comprises 1) a physical model to extract geometric information of polarized light, 2) a teacher-student knowledge distillation scheme and 3) a self-supervised loss formulation through differentiable rendering and an invertible physical constraint. Both networks leverage the physical properties of polarized light to learn robust geometric representations by encoding shape priors and polarization characteristics derived from our physical model. Geometric pseudo-labels from the teacher support the student network without the need for annotated real data. Dense appearance and geometric information of objects are obtained through a differentiable renderer with the predicted pose for self-supervised direct coupling. The student network additionally features our proposed invertible formulation of the physical shape priors that enables end-to-end self-supervised training through physical constraints of derived polarization characteristics compared against polarimetric input images. We specifically focus on photometrically challenging objects with texture-less or reflective surfaces and transparent materials for which the most prominent performance gain is reported.
[ { "created": "Sat, 2 Dec 2023 10:46:40 GMT", "version": "v1" } ]
2023-12-05
[ [ "Ruhkamp", "Patrick", "" ], [ "Gao", "Daoyi", "" ], [ "Navab", "Nassir", "" ], [ "Busam", "Benjamin", "" ] ]
This paper proposes the first self-supervised 6D object pose prediction from multimodal RGB+polarimetric images. The novel training paradigm comprises 1) a physical model to extract geometric information of polarized light, 2) a teacher-student knowledge distillation scheme and 3) a self-supervised loss formulation through differentiable rendering and an invertible physical constraint. Both networks leverage the physical properties of polarized light to learn robust geometric representations by encoding shape priors and polarization characteristics derived from our physical model. Geometric pseudo-labels from the teacher support the student network without the need for annotated real data. Dense appearance and geometric information of objects are obtained through a differentiable renderer with the predicted pose for self-supervised direct coupling. The student network additionally features our proposed invertible formulation of the physical shape priors that enables end-to-end self-supervised training through physical constraints of derived polarization characteristics compared against polarimetric input images. We specifically focus on photometrically challenging objects with texture-less or reflective surfaces and transparent materials for which the most prominent performance gain is reported.
2105.14427
Tianhao Wang
Salil Vadhan, Tianhao Wang
Concurrent Composition of Differential Privacy
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We initiate a study of the composition properties of interactive differentially private mechanisms. An interactive differentially private mechanism is an algorithm that allows an analyst to adaptively ask queries about a sensitive dataset, with the property that an adversarial analyst's view of the interaction is approximately the same regardless of whether or not any individual's data is in the dataset. Previous studies of composition of differential privacy have focused on non-interactive algorithms, but interactive mechanisms are needed to capture many of the intended applications of differential privacy and a number of the important differentially private primitives. We focus on concurrent composition, where an adversary can arbitrarily interleave its queries to several differentially private mechanisms, which may be feasible when differentially private query systems are deployed in practice. We prove that when the interactive mechanisms being composed are pure differentially private, their concurrent composition achieves privacy parameters (with respect to pure or approximate differential privacy) that match the (optimal) composition theorem for noninteractive differential privacy. We also prove a composition theorem for interactive mechanisms that satisfy approximate differential privacy. That bound is weaker than even the basic (suboptimal) composition theorem for noninteractive differential privacy, and we leave closing the gap as a direction for future research, along with understanding concurrent composition for other variants of differential privacy.
[ { "created": "Sun, 30 May 2021 04:35:50 GMT", "version": "v1" }, { "created": "Wed, 15 Sep 2021 18:50:12 GMT", "version": "v2" } ]
2021-09-17
[ [ "Vadhan", "Salil", "" ], [ "Wang", "Tianhao", "" ] ]
We initiate a study of the composition properties of interactive differentially private mechanisms. An interactive differentially private mechanism is an algorithm that allows an analyst to adaptively ask queries about a sensitive dataset, with the property that an adversarial analyst's view of the interaction is approximately the same regardless of whether or not any individual's data is in the dataset. Previous studies of composition of differential privacy have focused on non-interactive algorithms, but interactive mechanisms are needed to capture many of the intended applications of differential privacy and a number of the important differentially private primitives. We focus on concurrent composition, where an adversary can arbitrarily interleave its queries to several differentially private mechanisms, which may be feasible when differentially private query systems are deployed in practice. We prove that when the interactive mechanisms being composed are pure differentially private, their concurrent composition achieves privacy parameters (with respect to pure or approximate differential privacy) that match the (optimal) composition theorem for noninteractive differential privacy. We also prove a composition theorem for interactive mechanisms that satisfy approximate differential privacy. That bound is weaker than even the basic (suboptimal) composition theorem for noninteractive differential privacy, and we leave closing the gap as a direction for future research, along with understanding concurrent composition for other variants of differential privacy.
2211.01542
Shuhao Gu
Shuhao Gu, Bojie Hu, Yang Feng
Continual Learning of Neural Machine Translation within Low Forgetting Risk Regions
EMNLP 2022 Main Conference Long Paper
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper considers continual learning of a large-scale pretrained neural machine translation model without accessing the previous training data or introducing model separation. We argue that the widely used regularization-based methods, which perform multi-objective learning with an auxiliary loss, suffer from the misestimate problem and cannot always achieve a good balance between the previous and new tasks. To solve the problem, we propose a two-stage training method based on the local features of the real loss. We first search for low forgetting risk regions, where the model can retain the performance on the previous task as the parameters are updated, to avoid the catastrophic forgetting problem. Then we can continually train the model within this region only with the new training data to fit the new task. Specifically, we propose two methods to search for the low forgetting risk regions, which are based on the curvature of the loss and the impacts of the parameters on the model output, respectively. We conduct experiments on domain adaptation and more challenging language adaptation tasks, and the experimental results show that our method can achieve significant improvements compared with several strong baselines.
[ { "created": "Thu, 3 Nov 2022 01:21:10 GMT", "version": "v1" }, { "created": "Fri, 4 Nov 2022 02:10:02 GMT", "version": "v2" } ]
2022-11-07
[ [ "Gu", "Shuhao", "" ], [ "Hu", "Bojie", "" ], [ "Feng", "Yang", "" ] ]
This paper considers continual learning of a large-scale pretrained neural machine translation model without accessing the previous training data or introducing model separation. We argue that the widely used regularization-based methods, which perform multi-objective learning with an auxiliary loss, suffer from the misestimate problem and cannot always achieve a good balance between the previous and new tasks. To solve the problem, we propose a two-stage training method based on the local features of the real loss. We first search for low forgetting risk regions, where the model can retain the performance on the previous task as the parameters are updated, to avoid the catastrophic forgetting problem. Then we can continually train the model within this region only with the new training data to fit the new task. Specifically, we propose two methods to search for the low forgetting risk regions, which are based on the curvature of the loss and the impacts of the parameters on the model output, respectively. We conduct experiments on domain adaptation and more challenging language adaptation tasks, and the experimental results show that our method can achieve significant improvements compared with several strong baselines.
2406.18176
Robert-Jan Bruintjes
Robert-Jan Bruintjes, Attila Lengyel, Marcos Baptista Rios, Osman Semih Kayhan, Davide Zambrano, Nergis Tomen, Jan van Gemert
VIPriors 4: Visual Inductive Priors for Data-Efficient Deep Learning Challenges
arXiv admin note: text overlap with arXiv:2305.19688
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The fourth edition of the "VIPriors: Visual Inductive Priors for Data-Efficient Deep Learning" workshop features two data-impaired challenges. These challenges address the problem of training deep learning models for computer vision tasks with limited data. Participants are limited to training models from scratch using a low number of training samples and are not allowed to use any form of transfer learning. We aim to stimulate the development of novel approaches that incorporate inductive biases to improve the data efficiency of deep learning models. Significant advancements are made compared to the provided baselines, where winning solutions surpass the baselines by a considerable margin in both tasks. As in previous editions, these achievements are primarily attributed to heavy use of data augmentation policies and large model ensembles, though novel prior-based methods seem to contribute more to successful solutions compared to last year. This report highlights the key aspects of the challenges and their outcomes.
[ { "created": "Wed, 26 Jun 2024 08:50:51 GMT", "version": "v1" }, { "created": "Mon, 1 Jul 2024 07:59:13 GMT", "version": "v2" } ]
2024-07-02
[ [ "Bruintjes", "Robert-Jan", "" ], [ "Lengyel", "Attila", "" ], [ "Rios", "Marcos Baptista", "" ], [ "Kayhan", "Osman Semih", "" ], [ "Zambrano", "Davide", "" ], [ "Tomen", "Nergis", "" ], [ "van Gemert", "Jan", "" ] ]
The fourth edition of the "VIPriors: Visual Inductive Priors for Data-Efficient Deep Learning" workshop features two data-impaired challenges. These challenges address the problem of training deep learning models for computer vision tasks with limited data. Participants are limited to training models from scratch using a low number of training samples and are not allowed to use any form of transfer learning. We aim to stimulate the development of novel approaches that incorporate inductive biases to improve the data efficiency of deep learning models. Significant advancements are made compared to the provided baselines, where winning solutions surpass the baselines by a considerable margin in both tasks. As in previous editions, these achievements are primarily attributed to heavy use of data augmentation policies and large model ensembles, though novel prior-based methods seem to contribute more to successful solutions compared to last year. This report highlights the key aspects of the challenges and their outcomes.
2012.10151
Wenjun Mei
Wenjun Mei, Ge Chen, Noah E. Friedkin, Florian D\"orfler
Structural Balance and Interpersonal Appraisals Dynamics: Beyond All-to-All and Two-Faction Networks
null
null
null
null
cs.SI cs.DM cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structural balance theory describes stable configurations of topologies of signed interpersonal appraisal networks. Existing models explaining the convergence of appraisal networks to structural balance either diverge in finite time, get stuck in jammed states, or converge only to complete graphs. In this paper, we study the open problem of how steady non-all-to-all structural balance emerges via local dynamics of interpersonal appraisals. We first compare two well-justified definitions of structural balance for general non-all-to-all graphs, i.e., the triad-wise structural balance and the two-faction structural balance, and thoroughly study their relations. Secondly, based on three widely adopted sociological mechanisms, namely the symmetry mechanism, the influence mechanism, and the homophily mechanism, we propose two simple models of gossip-like appraisal dynamics: the symmetry-influence-homophily (SIH) dynamics and the symmetry-influence-opinion-homophily (SIOH) dynamics. In these models, the appraisal network starting from any initial condition almost surely achieves non-all-to-all triad-wise and two-faction structural balance, respectively, in finite time. Moreover, the SIOH dynamics capture the co-evolution of interpersonal appraisals and individuals' opinions. Regarding the theoretical contributions, we show that the equilibrium set of the SIH (resp. SIOH) dynamics corresponds to the set of all the possible triad-wise (resp. two-faction) structural balance configurations of the appraisal networks. Moreover, we prove that, for any initial condition, the appraisal networks in the SIH (resp. SIOH) dynamics almost surely achieve triad-wise (resp. two-faction) structural balance in finite time. Numerical studies of the SIH dynamics also imply some insightful take-home messages on whether multilateral relations reduce or exacerbate conflicts.
[ { "created": "Fri, 18 Dec 2020 10:28:15 GMT", "version": "v1" }, { "created": "Wed, 23 Dec 2020 14:54:47 GMT", "version": "v2" } ]
2020-12-24
[ [ "Mei", "Wenjun", "" ], [ "Chen", "Ge", "" ], [ "Friedkin", "Noah E.", "" ], [ "Dörfler", "Florian", "" ] ]
Structural balance theory describes stable configurations of topologies of signed interpersonal appraisal networks. Existing models explaining the convergence of appraisal networks to structural balance either diverge in finite time, get stuck in jammed states, or converge only to complete graphs. In this paper, we study the open problem of how steady non-all-to-all structural balance emerges via local dynamics of interpersonal appraisals. We first compare two well-justified definitions of structural balance for general non-all-to-all graphs, i.e., the triad-wise structural balance and the two-faction structural balance, and thoroughly study their relations. Secondly, based on three widely adopted sociological mechanisms, namely the symmetry mechanism, the influence mechanism, and the homophily mechanism, we propose two simple models of gossip-like appraisal dynamics: the symmetry-influence-homophily (SIH) dynamics and the symmetry-influence-opinion-homophily (SIOH) dynamics. In these models, the appraisal network starting from any initial condition almost surely achieves non-all-to-all triad-wise and two-faction structural balance, respectively, in finite time. Moreover, the SIOH dynamics capture the co-evolution of interpersonal appraisals and individuals' opinions. Regarding the theoretical contributions, we show that the equilibrium set of the SIH (resp. SIOH) dynamics corresponds to the set of all the possible triad-wise (resp. two-faction) structural balance configurations of the appraisal networks. Moreover, we prove that, for any initial condition, the appraisal networks in the SIH (resp. SIOH) dynamics almost surely achieve triad-wise (resp. two-faction) structural balance in finite time. Numerical studies of the SIH dynamics also imply some insightful take-home messages on whether multilateral relations reduce or exacerbate conflicts.
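The triad-wise balance condition this abstract builds on is the classical one from structural balance theory: every complete triangle in the signed graph must have a positive product of edge signs. A small sketch of that check (the helper name and edge encoding are my own; the paper's definitions additionally handle general non-all-to-all graphs, which this toy only touches by skipping incomplete triads):

```python
from itertools import combinations

def is_triad_balanced(signs):
    """Triad-wise structural balance check for a signed graph.
    `signs` maps frozenset({i, j}) -> +1 or -1; a missing key means no edge.
    Balanced iff every fully-present triangle has a positive sign product."""
    nodes = set()
    for edge in signs:
        nodes |= edge
    for i, j, k in combinations(sorted(nodes), 3):
        e1, e2, e3 = frozenset({i, j}), frozenset({j, k}), frozenset({i, k})
        if e1 in signs and e2 in signs and e3 in signs:
            if signs[e1] * signs[e2] * signs[e3] < 0:
                return False
    return True
```

Under this check, an all-positive clique and a two-faction split (positive within factions, negative across) are balanced, while a triangle with exactly one negative edge is not.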
2101.03539
Glauco Carneiro
Glauco de Figueiredo Carneiro, Rafael Antonio Lima Cardoso, Antonio Pedro Dores, Jos\'e Euclimar Xavier Menezes
Perspectives and Challenges in the Analysis of Prison Systems Data: A Systematic Mapping
Submitted to Dialogos Poss\'iveis Journal
2021
null
Dialogos Poss\'iveis v. 20, n. 1 (2021)
cs.CY
http://creativecommons.org/licenses/by/4.0/
Context: Open public data enable different stakeholders to perform analysis and uncover information from different perspectives. The identification and analysis of data from prison systems is not a trivial task. It raises the need for the research community to know how these data have been produced and used. Goal: Analyze prison systems data for the purpose of characterizing its use with respect to data sources, purpose and availability. Method: We performed a systematic mapping of existing evidence on prison systems' original data from peer-reviewed studies published between 2000 and 2019. Results: Out of the 531 records, 196 articles were selected from the literature. Conclusion: The vast majority of the analyzed papers (75%) used restricted data. Only 18 studies (9%) provided data, which hampers replication initiatives. This indicates the need to analyze prison systems in an integrated fashion, in which multidisciplinarity and transparency are relevant issues to consider in such studies.
[ { "created": "Sun, 10 Jan 2021 13:03:40 GMT", "version": "v1" }, { "created": "Thu, 25 Feb 2021 22:51:23 GMT", "version": "v2" }, { "created": "Fri, 5 Nov 2021 01:12:14 GMT", "version": "v3" } ]
2021-12-09
[ [ "Carneiro", "Glauco de Figueiredo", "" ], [ "Cardoso", "Rafael Antonio Lima", "" ], [ "Dores", "Antonio Pedro", "" ], [ "Menezes", "José Euclimar Xavier", "" ] ]
Context: Open public data enable different stakeholders to perform analysis and uncover information from different perspectives. The identification and analysis of data from prison systems is not a trivial task. It raises the need for the research community to know how these data have been produced and used. Goal: Analyze prison systems data for the purpose of characterizing its use with respect to data sources, purpose and availability. Method: We performed a systematic mapping of existing evidence on prison systems' original data from peer-reviewed studies published between 2000 and 2019. Results: Out of the 531 records, 196 articles were selected from the literature. Conclusion: The vast majority of the analyzed papers (75%) used restricted data. Only 18 studies (9%) provided data, which hampers replication initiatives. This indicates the need to analyze prison systems in an integrated fashion, in which multidisciplinarity and transparency are relevant issues to consider in such studies.
1507.02226
Quentin Stout
Quentin F. Stout
L infinity Isotonic Regression for Linear, Multidimensional, and Tree Orders
updated references, minor modifications
null
null
null
cs.DS stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithms are given for determining $L_\infty$ isotonic regression of weighted data. For a linear order, grid in multidimensional space, or tree, of $n$ vertices, optimal algorithms are given, taking $\Theta(n)$ time. These improve upon previous algorithms by a factor of $\Omega(\log n)$. For vertices at arbitrary positions in $d$-dimensional space a $\Theta(n \log^{d-1} n)$ algorithm employs iterative sorting to yield the functionality of a multidimensional structure while using only $\Theta(n)$ space. The algorithms utilize a new non-constructive feasibility test on a rendezvous graph, with bounded error envelopes at each vertex.
[ { "created": "Wed, 8 Jul 2015 17:16:28 GMT", "version": "v1" }, { "created": "Thu, 22 Jun 2017 22:06:37 GMT", "version": "v2" } ]
2017-06-26
[ [ "Stout", "Quentin F.", "" ] ]
Algorithms are given for determining $L_\infty$ isotonic regression of weighted data. For a linear order, grid in multidimensional space, or tree, of $n$ vertices, optimal algorithms are given, taking $\Theta(n)$ time. These improve upon previous algorithms by a factor of $\Omega(\log n)$. For vertices at arbitrary positions in $d$-dimensional space a $\Theta(n \log^{d-1} n)$ algorithm employs iterative sorting to yield the functionality of a multidimensional structure while using only $\Theta(n)$ space. The algorithms utilize a new non-constructive feasibility test on a rendezvous graph, with bounded error envelopes at each vertex.
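The abstract above concerns optimal Θ(n) algorithms for weighted L∞ isotonic regression on linear orders, grids, and trees. As a toy illustration of the unweighted linear-order special case only (this sketch is mine, not the paper's algorithm), the classical prefix-max/suffix-min midpoint rule already gives an O(n) L∞ regression:

```python
def linf_isotonic_linear(y):
    """Unweighted L-infinity isotonic regression on a linear order, O(n).
    Classical midpoint rule: g[i] = (max(y[:i+1]) + min(y[i:])) / 2.
    Both running extremes are nondecreasing in i, so g is isotonic,
    and g minimizes max_i |g[i] - y[i]| over isotonic fits."""
    n = len(y)
    prefix_max = [0.0] * n
    suffix_min = [0.0] * n
    m = float('-inf')
    for i, v in enumerate(y):          # running maximum from the left
        m = max(m, v)
        prefix_max[i] = m
    m = float('inf')
    for i in range(n - 1, -1, -1):     # running minimum from the right
        m = min(m, y[i])
        suffix_min[i] = m
    return [(a + b) / 2 for a, b in zip(prefix_max, suffix_min)]
```

For example, `[3, 1, 2]` regresses to `[2.0, 2.0, 2.5]`, with maximum error 1, which equals half the largest order violation (3 − 1)/2. Handling weights, grids, and trees in Θ(n) is the paper's contribution and needs more machinery than this.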
2308.06663
Md Abul Bashar
Md Abul Bashar, Richi Nayak
ALGAN: Time Series Anomaly Detection with Adjusted-LSTM GAN
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anomaly detection in time series data, to identify points that deviate from normal behaviour, is a common problem in various domains such as manufacturing, medical imaging, and cybersecurity. Recently, Generative Adversarial Networks (GANs) have been shown to be effective in detecting anomalies in time series data. The neural network architecture of GANs (i.e. Generator and Discriminator) can significantly improve anomaly detection accuracy. In this paper, we propose a new GAN model, named Adjusted-LSTM GAN (ALGAN), which adjusts the output of an LSTM network for improved anomaly detection in both univariate and multivariate time series data in an unsupervised setting. We evaluate the performance of ALGAN on 46 real-world univariate time series datasets and a large multivariate dataset that spans multiple domains. Our experiments demonstrate that ALGAN outperforms traditional, neural network-based, and other GAN-based methods for anomaly detection in time series data.
[ { "created": "Sun, 13 Aug 2023 02:17:19 GMT", "version": "v1" }, { "created": "Wed, 1 Nov 2023 02:00:10 GMT", "version": "v2" } ]
2023-11-02
[ [ "Bashar", "Md Abul", "" ], [ "Nayak", "Richi", "" ] ]
Anomaly detection in time series data, to identify points that deviate from normal behaviour, is a common problem in various domains such as manufacturing, medical imaging, and cybersecurity. Recently, Generative Adversarial Networks (GANs) have been shown to be effective in detecting anomalies in time series data. The neural network architecture of GANs (i.e. Generator and Discriminator) can significantly improve anomaly detection accuracy. In this paper, we propose a new GAN model, named Adjusted-LSTM GAN (ALGAN), which adjusts the output of an LSTM network for improved anomaly detection in both univariate and multivariate time series data in an unsupervised setting. We evaluate the performance of ALGAN on 46 real-world univariate time series datasets and a large multivariate dataset that spans multiple domains. Our experiments demonstrate that ALGAN outperforms traditional, neural network-based, and other GAN-based methods for anomaly detection in time series data.
2401.03491
Arshiya Khan
Sarah Alharbi, Arshiya Khan
Ensemble Defense System: A Hybrid IDS Approach for Effective Cyber Threat Detection
null
2023 33rd International Telecommunication Networks and Applications Conference, Melbourne, Australia, 2023, pp. 267-270
10.1109/ITNAC59571.2023.10368510
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Sophisticated cyber attacks present significant challenges for organizations in detecting and preventing such threats. To address this critical need for advanced defense mechanisms, we propose an Ensemble Defense System (EDS). An EDS is a cybersecurity framework aggregating multiple security tools designed to monitor and alert an organization during cyber attacks. The proposed EDS leverages a comprehensive range of Intrusion Detection System (IDS) capabilities by introducing a hybrid of signature-based IDS and anomaly-based IDS tools. It also incorporates Elasticsearch, an open-source Security Information and Event Management (SIEM) tool, to facilitate data analysis and interactive visualization of alerts generated from IDSs. The effectiveness of the EDS is evaluated through a payload from a bash script that executes various attacks, including port scanning, privilege escalation, and Denial-of-Service (DoS). The evaluation demonstrates the EDS's ability to detect diverse cyber attacks.
[ { "created": "Sun, 7 Jan 2024 14:07:00 GMT", "version": "v1" } ]
2024-01-09
[ [ "Alharbi", "Sarah", "" ], [ "Khan", "Arshiya", "" ] ]
Sophisticated cyber attacks present significant challenges for organizations in detecting and preventing such threats. To address this critical need for advanced defense mechanisms, we propose an Ensemble Defense System (EDS). An EDS is a cybersecurity framework aggregating multiple security tools designed to monitor and alert an organization during cyber attacks. The proposed EDS leverages a comprehensive range of Intrusion Detection System (IDS) capabilities by introducing a hybrid of signature-based IDS and anomaly-based IDS tools. It also incorporates Elasticsearch, an open-source Security Information and Event Management (SIEM) tool, to facilitate data analysis and interactive visualization of alerts generated from IDSs. The effectiveness of the EDS is evaluated through a payload from a bash script that executes various attacks, including port scanning, privilege escalation, and Denial-of-Service (DoS). The evaluation demonstrates the EDS's ability to detect diverse cyber attacks.
2208.08382
Sreeraj Ramachandran
Sreeraj Ramachandran and Ajita Rattani
Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups
20 pages, 4 figures, 9 tables, ICPR workshop
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Published studies have suggested the bias of automated face-based gender classification algorithms across gender-race groups. Specifically, unequal accuracy rates were obtained for women and dark-skinned people. To mitigate the bias of gender classifiers, the vision community has developed several strategies. However, the efficacy of these mitigation strategies is demonstrated for a limited number of races, mostly Caucasian and African-American. Further, these strategies often offer a trade-off between bias and classification accuracy. To further advance the state-of-the-art, we leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias. We demonstrate the superiority of our bias mitigation strategy in improving classification accuracy and reducing bias across gender-racial groups through extensive experimental validation, resulting in state-of-the-art performance in intra- and cross-dataset evaluations.
[ { "created": "Wed, 17 Aug 2022 16:23:35 GMT", "version": "v1" } ]
2022-08-18
[ [ "Ramachandran", "Sreeraj", "" ], [ "Rattani", "Ajita", "" ] ]
Published studies have suggested the bias of automated face-based gender classification algorithms across gender-race groups. Specifically, unequal accuracy rates were obtained for women and dark-skinned people. To mitigate the bias of gender classifiers, the vision community has developed several strategies. However, the efficacy of these mitigation strategies is demonstrated for a limited number of races, mostly Caucasian and African-American. Further, these strategies often offer a trade-off between bias and classification accuracy. To further advance the state-of-the-art, we leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias. We demonstrate the superiority of our bias mitigation strategy in improving classification accuracy and reducing bias across gender-racial groups through extensive experimental validation, resulting in state-of-the-art performance in intra- and cross-dataset evaluations.
2203.00134
Saba Ahmadi
Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita
Setting Fair Incentives to Maximize Improvement
null
null
null
null
cs.GT cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of helping agents improve by setting short-term goals. Given a set of target skill levels, we assume each agent will try to improve from their initial skill level to the closest target level within reach or do nothing if no target level is within reach. We consider two models: the common improvement capacity model, where agents have the same limit on how much they can improve, and the individualized improvement capacity model, where agents have individualized limits. Our goal is to optimize the target levels for social welfare and fairness objectives, where social welfare is defined as the total amount of improvement, and fairness objectives are considered where the agents belong to different underlying populations. A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement as it may get easier for some agents to improve. This is especially challenging when considering multiple groups because optimizing target levels in isolation for each group and outputting the union may result in arbitrarily low improvement for a group, failing the fairness objective. Considering these properties, we provide algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives. These algorithmic results work for both the common and individualized improvement capacity models. Furthermore, we show a placement of target levels exists that is approximately optimal for the social welfare of each group. Unlike the algorithmic results, this structural statement only holds in the common improvement capacity model, and we show counterexamples in the individualized improvement capacity model. Finally, we extend our algorithms to learning settings where we have only sample access to the initial skill levels of agents.
[ { "created": "Mon, 28 Feb 2022 23:09:40 GMT", "version": "v1" } ]
2022-03-02
[ [ "Ahmadi", "Saba", "" ], [ "Beyhaghi", "Hedyeh", "" ], [ "Blum", "Avrim", "" ], [ "Naggita", "Keziah", "" ] ]
We consider the problem of helping agents improve by setting short-term goals. Given a set of target skill levels, we assume each agent will try to improve from their initial skill level to the closest target level within reach or do nothing if no target level is within reach. We consider two models: the common improvement capacity model, where agents have the same limit on how much they can improve, and the individualized improvement capacity model, where agents have individualized limits. Our goal is to optimize the target levels for social welfare and fairness objectives, where social welfare is defined as the total amount of improvement, and fairness objectives are considered where the agents belong to different underlying populations. A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement as it may get easier for some agents to improve. This is especially challenging when considering multiple groups because optimizing target levels in isolation for each group and outputting the union may result in arbitrarily low improvement for a group, failing the fairness objective. Considering these properties, we provide algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives. These algorithmic results work for both the common and individualized improvement capacity models. Furthermore, we show a placement of target levels exists that is approximately optimal for the social welfare of each group. Unlike the algorithmic results, this structural statement only holds in the common improvement capacity model, and we show counterexamples in the individualized improvement capacity model. Finally, we extend our algorithms to learning settings where we have only sample access to the initial skill levels of agents.
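Reading the agent model in the abstract above as "each agent moves to the closest target level at or above its initial skill that is within its improvement capacity" (my interpretation; the function name and encoding are hypothetical), a few lines suffice to reproduce the non-monotonicity the authors highlight, where adding a target level can lower total improvement:

```python
def total_improvement(skills, capacity, targets):
    """Toy sketch of the common-capacity agent model: each agent moves to the
    closest target level s with skill <= s <= skill + capacity, or stays put
    if no target is within reach. Returns the total amount of improvement."""
    total = 0
    for x in skills:
        reachable = [s for s in targets if x <= s <= x + capacity]
        if reachable:
            total += min(reachable) - x  # "closest target within reach"
    return total
```

With one agent at skill 0 and capacity 5, the target set {5} yields improvement 5, but adding the easier target 1 drops it to 1: exactly the effect that makes per-group optimization followed by taking the union unsafe for fairness.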
cs/0505042
Matthew Earl
Matthew Earl and Raffaello D'Andrea
Iterative MILP Methods for Vehicle Control Problems
22 pages, 9 figures, submitted to IEEE Transactions on Robotics, for associated web page see http://control.mae.cornell.edu/earl/milp2
M. G. Earl and R. D'Andrea, "Iterative MILP Methods for Vehicle Control Problems," IEEE Transactions on Robotics, Volume 21, Issue 6, pages 1158-1167, Dec. 2005.
null
null
cs.RO
null
Mixed integer linear programming (MILP) is a powerful tool for planning and control problems because of its modeling capability and the availability of good solvers. However, for large models, MILP methods suffer computationally. In this paper, we present iterative MILP algorithms that address this issue. We consider trajectory generation problems with obstacle avoidance requirements and minimum time trajectory generation problems. The algorithms use fewer binary variables than standard MILP methods and require less computational effort.
[ { "created": "Mon, 16 May 2005 03:54:08 GMT", "version": "v1" } ]
2007-05-23
[ [ "Earl", "Matthew", "" ], [ "D'Andrea", "Raffaello", "" ] ]
Mixed integer linear programming (MILP) is a powerful tool for planning and control problems because of its modeling capability and the availability of good solvers. However, for large models, MILP methods suffer computationally. In this paper, we present iterative MILP algorithms that address this issue. We consider trajectory generation problems with obstacle avoidance requirements and minimum time trajectory generation problems. The algorithms use fewer binary variables than standard MILP methods and require less computational effort.
1709.04073
Chandrashekar Lakshminarayanan
Chandrashekar Lakshminarayanan and Csaba Szepesv\'ari
Linear Stochastic Approximation: Constant Step-Size and Iterate Averaging
16 pages, 2 figures, was submitted to NIPS 2017
null
null
null
cs.LG cs.SY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider $d$-dimensional linear stochastic approximation algorithms (LSAs) with a constant step-size and the so-called Polyak-Ruppert (PR) averaging of iterates. LSAs are widely applied in machine learning and reinforcement learning (RL), where the aim is to compute an appropriate $\theta_{*} \in \mathbb{R}^d$ (that is an optimum or a fixed point) using noisy data and $O(d)$ updates per iteration. In this paper, we are motivated by the problem (in RL) of policy evaluation from experience replay using the \emph{temporal difference} (TD) class of learning algorithms that are also LSAs. For LSAs with a constant step-size, and PR averaging, we provide bounds for the mean squared error (MSE) after $t$ iterations. We assume that the data are i.i.d. with finite variance (underlying distribution being $P$) and that the expected dynamics is Hurwitz. For a given LSA with PR averaging, and data distribution $P$ satisfying the said assumptions, we show that there exists a range of constant step-sizes such that its MSE decays as $O(\frac{1}{t})$. We examine the conditions under which a constant step-size can be chosen uniformly for a class of data distributions $\mathcal{P}$, and show that not all data distributions `admit' such a uniform constant step-size. We also suggest a heuristic step-size tuning algorithm to choose a constant step-size of a given LSA for a given data distribution $P$. We compare our results with related work and also discuss the implications of our results in the context of TD algorithms that are LSAs.
[ { "created": "Tue, 12 Sep 2017 22:34:09 GMT", "version": "v1" } ]
2017-09-14
[ [ "Lakshminarayanan", "Chandrashekar", "" ], [ "Szepesvári", "Csaba", "" ] ]
We consider $d$-dimensional linear stochastic approximation algorithms (LSAs) with a constant step-size and the so-called Polyak-Ruppert (PR) averaging of iterates. LSAs are widely applied in machine learning and reinforcement learning (RL), where the aim is to compute an appropriate $\theta_{*} \in \mathbb{R}^d$ (that is an optimum or a fixed point) using noisy data and $O(d)$ updates per iteration. In this paper, we are motivated by the problem (in RL) of policy evaluation from experience replay using the \emph{temporal difference} (TD) class of learning algorithms that are also LSAs. For LSAs with a constant step-size, and PR averaging, we provide bounds for the mean squared error (MSE) after $t$ iterations. We assume that the data are i.i.d. with finite variance (underlying distribution being $P$) and that the expected dynamics is Hurwitz. For a given LSA with PR averaging, and data distribution $P$ satisfying the said assumptions, we show that there exists a range of constant step-sizes such that its MSE decays as $O(\frac{1}{t})$. We examine the conditions under which a constant step-size can be chosen uniformly for a class of data distributions $\mathcal{P}$, and show that not all data distributions `admit' such a uniform constant step-size. We also suggest a heuristic step-size tuning algorithm to choose a constant step-size of a given LSA for a given data distribution $P$. We compare our results with related work and also discuss the implications of our results in the context of TD algorithms that are LSAs.
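The setting in the abstract above, constant step-size LSA with Polyak-Ruppert averaging, can be sketched in the scalar case ($d = 1$): iterate a noisy linear update toward the fixed point $\theta_* = b/a$ and return the running average of the iterates. This is an illustrative toy, not the paper's analysis; the function name, constants, and Gaussian noise model are my own choices:

```python
import random

def lsa_pr_average(a, b, alpha, T, noise_std=0.1, seed=0):
    """Scalar constant step-size linear stochastic approximation with
    Polyak-Ruppert averaging. Update:
        theta <- theta + alpha * (b - a * theta + noise),
    whose noiseless fixed point is theta* = b / a. The last iterate hovers
    in a noise ball around theta*; averaging the iterates smooths that
    noise floor away (MSE of the average decays like O(1/T))."""
    rng = random.Random(seed)
    theta, avg = 0.0, 0.0
    for t in range(1, T + 1):
        noise = rng.gauss(0.0, noise_std)
        theta += alpha * (b - a * theta + noise)
        avg += (theta - avg) / t  # running average of iterates
    return avg
```

With a = 2, b = 2, step-size 0.1, and 50,000 iterations, the averaged estimate lands very close to theta* = 1 even though individual iterates keep fluctuating, which is the qualitative effect the paper's bounds quantify.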
2407.04549
Jane Pan
Jane Pan, He He, Samuel R. Bowman, Shi Feng
Spontaneous Reward Hacking in Iterative Self-Refinement
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Language models are capable of iteratively improving their outputs based on natural language feedback, thus enabling in-context optimization of user preference. In place of human users, a second language model can be used as an evaluator, providing feedback along with numerical ratings which the generator attempts to optimize. However, because the evaluator is an imperfect proxy of user preference, this optimization can lead to reward hacking, where the evaluator's ratings improve while the generation quality remains stagnant or even decreases as judged by actual user preference. The concern of reward hacking is heightened in iterative self-refinement where the generator and the evaluator use the same underlying language model, in which case the optimization pressure can drive them to exploit shared vulnerabilities. Using an essay editing task, we show that iterative self-refinement leads to deviation between the language model evaluator and human judgment, demonstrating that reward hacking can occur spontaneously in-context with the use of iterative self-refinement. In addition, we study conditions under which reward hacking occurs and observe two factors that affect reward hacking severity: model size and context sharing between the generator and the evaluator.
[ { "created": "Fri, 5 Jul 2024 14:34:50 GMT", "version": "v1" } ]
2024-07-08
[ [ "Pan", "Jane", "" ], [ "He", "He", "" ], [ "Bowman", "Samuel R.", "" ], [ "Feng", "Shi", "" ] ]
Language models are capable of iteratively improving their outputs based on natural language feedback, thus enabling in-context optimization of user preference. In place of human users, a second language model can be used as an evaluator, providing feedback along with numerical ratings which the generator attempts to optimize. However, because the evaluator is an imperfect proxy of user preference, this optimization can lead to reward hacking, where the evaluator's ratings improve while the generation quality remains stagnant or even decreases as judged by actual user preference. The concern of reward hacking is heightened in iterative self-refinement where the generator and the evaluator use the same underlying language model, in which case the optimization pressure can drive them to exploit shared vulnerabilities. Using an essay editing task, we show that iterative self-refinement leads to deviation between the language model evaluator and human judgment, demonstrating that reward hacking can occur spontaneously in-context with the use of iterative self-refinement. In addition, we study conditions under which reward hacking occurs and observe two factors that affect reward hacking severity: model size and context sharing between the generator and the evaluator.
2007.10200
Ahmed Arafa
Ahmed Arafa, Karim Banawan, Karim G. Seddik, H. Vincent Poor
Sample, Quantize and Encode: Timely Estimation Over Noisy Channels
Accepted for publication in the IEEE Transactions on Communications. arXiv admin note: substantial text overlap with arXiv:2004.12982
null
null
null
cs.IT cs.NI eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effects of quantization and coding on the estimation quality of Gauss-Markov processes are considered, with special attention to the Ornstein-Uhlenbeck process. Samples are acquired from the process, quantized, and then encoded for transmission using either infinite incremental redundancy (IIR) or fixed redundancy (FR) coding schemes. A fixed processing time is consumed at the receiver for decoding and sending feedback to the transmitter. Decoded messages are used to construct a minimum mean square error (MMSE) estimate of the process as a function of time. This is shown to be an increasing functional of the age-of-information (AoI), defined as the time elapsed since the sampling time pertaining to the latest successfully decoded message. This functional depends on the quantization bits, codeword lengths, and receiver processing time. The goal, for each coding scheme, is to optimize sampling times such that the long-term average MMSE is minimized. This is then characterized in the setting of general increasing functionals of AoI, not necessarily corresponding to MMSE, which may be of independent interest in other contexts. We first show that the optimal sampling policy for IIR is such that a new sample is generated only if the AoI exceeds a certain threshold, while for FR it is such that a new sample is delivered just-in-time as the receiver finishes processing the previous one. Enhanced transmission schemes are then developed in order to exploit the processing times to make new data available at the receiver sooner. For both IIR and FR, it is shown that there exists an optimal number of quantization bits that balances AoI and quantization errors, and hence minimizes the MMSE. It is also shown that for longer receiver processing times, the relatively simpler FR scheme outperforms IIR.
[ { "created": "Thu, 16 Jul 2020 17:50:41 GMT", "version": "v1" }, { "created": "Mon, 21 Jun 2021 20:56:34 GMT", "version": "v2" } ]
2021-06-23
[ [ "Arafa", "Ahmed", "" ], [ "Banawan", "Karim", "" ], [ "Seddik", "Karim G.", "" ], [ "Poor", "H. Vincent", "" ] ]
The effects of quantization and coding on the estimation quality of Gauss-Markov processes are considered, with special attention to the Ornstein-Uhlenbeck process. Samples are acquired from the process, quantized, and then encoded for transmission using either infinite incremental redundancy (IIR) or fixed redundancy (FR) coding schemes. A fixed processing time is consumed at the receiver for decoding and sending feedback to the transmitter. Decoded messages are used to construct a minimum mean square error (MMSE) estimate of the process as a function of time. This is shown to be an increasing functional of the age-of-information (AoI), defined as the time elapsed since the sampling time pertaining to the latest successfully decoded message. This functional depends on the quantization bits, codeword lengths, and receiver processing time. The goal, for each coding scheme, is to optimize sampling times such that the long-term average MMSE is minimized. This is then characterized in the setting of general increasing functionals of AoI, not necessarily corresponding to MMSE, which may be of independent interest in other contexts. We first show that the optimal sampling policy for IIR is such that a new sample is generated only if the AoI exceeds a certain threshold, while for FR it is such that a new sample is delivered just-in-time as the receiver finishes processing the previous one. Enhanced transmission schemes are then developed in order to exploit the processing times to make new data available at the receiver sooner. For both IIR and FR, it is shown that there exists an optimal number of quantization bits that balances AoI and quantization errors, and hence minimizes the MMSE. It is also shown that for longer receiver processing times, the relatively simpler FR scheme outperforms IIR.
2305.06615
Nikolay Mikhaylovskiy
Nikolay Mikhaylovskiy and Ilya Churilov
Autocorrelations Decay in Texts and Applicability Limits of Language Models
Accepted to Dialog-2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that the laws of autocorrelations decay in texts are closely related to applicability limits of language models. Using distributional semantics we empirically demonstrate that autocorrelations of words in texts decay according to a power law. We show that distributional semantics provides coherent autocorrelations decay exponents for texts translated to multiple languages. The autocorrelations decay in generated texts is quantitatively and often qualitatively different from the literary texts. We conclude that language models exhibiting Markov behavior, including large autoregressive language models, may have limitations when applied to long texts, whether analysis or generation.
[ { "created": "Thu, 11 May 2023 07:23:01 GMT", "version": "v1" } ]
2023-05-12
[ [ "Mikhaylovskiy", "Nikolay", "" ], [ "Churilov", "Ilya", "" ] ]
We show that the laws of autocorrelations decay in texts are closely related to applicability limits of language models. Using distributional semantics we empirically demonstrate that autocorrelations of words in texts decay according to a power law. We show that distributional semantics provides coherent autocorrelations decay exponents for texts translated to multiple languages. The autocorrelations decay in generated texts is quantitatively and often qualitatively different from the literary texts. We conclude that language models exhibiting Markov behavior, including large autoregressive language models, may have limitations when applied to long texts, whether analysis or generation.
2208.12327
Xiaoyu Lin
Xiaoyu Lin, Baran Ozaydin, Vidit Vidit, Majed El Helou and Sabine S\"usstrunk
DSR: Towards Drone Image Super-Resolution
Accepted at ECCVW 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite achieving remarkable progress in recent years, single-image super-resolution methods are developed with several limitations. Specifically, they are trained on fixed content domains with certain degradations (whether synthetic or real). The priors they learn are prone to overfitting the training configuration. Therefore, the generalization to novel domains such as drone top view data, and across altitudes, is currently unknown. Nonetheless, pairing drones with proper image super-resolution is of great value. It would enable drones to fly higher covering larger fields of view, while maintaining a high image quality. To answer these questions and pave the way towards drone image super-resolution, we explore this application with particular focus on the single-image case. We propose a novel drone image dataset, with scenes captured at low and high resolutions, and across a span of altitudes. Our results show that off-the-shelf state-of-the-art networks witness a significant drop in performance on this different domain. We additionally show that simple fine-tuning, and incorporating altitude awareness into the network's architecture, both improve the reconstruction performance.
[ { "created": "Thu, 25 Aug 2022 19:58:54 GMT", "version": "v1" } ]
2022-08-29
[ [ "Lin", "Xiaoyu", "" ], [ "Ozaydin", "Baran", "" ], [ "Vidit", "Vidit", "" ], [ "Helou", "Majed El", "" ], [ "Süsstrunk", "Sabine", "" ] ]
Despite achieving remarkable progress in recent years, single-image super-resolution methods are developed with several limitations. Specifically, they are trained on fixed content domains with certain degradations (whether synthetic or real). The priors they learn are prone to overfitting the training configuration. Therefore, the generalization to novel domains such as drone top view data, and across altitudes, is currently unknown. Nonetheless, pairing drones with proper image super-resolution is of great value. It would enable drones to fly higher covering larger fields of view, while maintaining a high image quality. To answer these questions and pave the way towards drone image super-resolution, we explore this application with particular focus on the single-image case. We propose a novel drone image dataset, with scenes captured at low and high resolutions, and across a span of altitudes. Our results show that off-the-shelf state-of-the-art networks witness a significant drop in performance on this different domain. We additionally show that simple fine-tuning, and incorporating altitude awareness into the network's architecture, both improve the reconstruction performance.
1703.09928
Qiong Zeng
Qiong Zeng and Baoquan Chen and Yanir Kleiman and Daniel Cohen-Or and Yangyan Li
Bundle Optimization for Multi-aspect Embedding
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding semantic similarity among images is the core of a wide range of computer vision applications. An important step towards this goal is to collect and learn human perceptions. Interestingly, the semantic context of images is often ambiguous as images can be perceived with emphasis on different aspects, which may be contradictory to each other. In this paper, we present a method for learning the semantic similarity among images, inferring their latent aspects and embedding them into multi-spaces corresponding to their semantic aspects. We consider the multi-embedding problem as an optimization function that evaluates the embedded distances with respect to the qualitative clustering queries. The key idea of our approach is to collect and embed qualitative measures that share the same aspects in bundles. To ensure similarity aspect sharing among multiple measures, image classification queries are presented to, and solved by users. The collected image clusters are then converted into bundles of tuples, which are fed into our bundle optimization algorithm that jointly infers the aspect similarity and multi-aspect embedding. Extensive experimental results show that our approach significantly outperforms state-of-the-art multi-embedding approaches on various datasets, and scales well for large multi-aspect similarity measures.
[ { "created": "Wed, 29 Mar 2017 08:29:55 GMT", "version": "v1" }, { "created": "Wed, 5 Apr 2017 06:59:19 GMT", "version": "v2" }, { "created": "Sat, 16 Sep 2017 03:16:06 GMT", "version": "v3" } ]
2017-09-19
[ [ "Zeng", "Qiong", "" ], [ "Chen", "Baoquan", "" ], [ "Kleiman", "Yanir", "" ], [ "Cohen-Or", "Daniel", "" ], [ "Li", "Yangyan", "" ] ]
Understanding semantic similarity among images is the core of a wide range of computer vision applications. An important step towards this goal is to collect and learn human perceptions. Interestingly, the semantic context of images is often ambiguous as images can be perceived with emphasis on different aspects, which may be contradictory to each other. In this paper, we present a method for learning the semantic similarity among images, inferring their latent aspects and embedding them into multi-spaces corresponding to their semantic aspects. We consider the multi-embedding problem as an optimization function that evaluates the embedded distances with respect to the qualitative clustering queries. The key idea of our approach is to collect and embed qualitative measures that share the same aspects in bundles. To ensure similarity aspect sharing among multiple measures, image classification queries are presented to, and solved by users. The collected image clusters are then converted into bundles of tuples, which are fed into our bundle optimization algorithm that jointly infers the aspect similarity and multi-aspect embedding. Extensive experimental results show that our approach significantly outperforms state-of-the-art multi-embedding approaches on various datasets, and scales well for large multi-aspect similarity measures.
2404.05205
Shubhabrata Mukherjee
Babak Poorebrahim Gilkalaye, Shubhabrata Mukherjee, Reza Derakhshani
A secure and private ensemble matcher using multi-vault obfuscated templates
This paper has been accepted in IJCB 2024 Special Session, Generative AI for Futuristic Biometrics
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Generative AI has revolutionized modern machine learning by providing unprecedented realism, diversity, and efficiency in data generation. This technology holds immense potential for biometrics, including for securing sensitive and personally identifiable information. Given the irrevocability of biometric samples and mounting privacy concerns, biometric template security and secure matching are among the most sought-after features of modern biometric systems. This paper proposes a novel obfuscation method using Generative AI to enhance biometric template security. Our approach utilizes synthetic facial images generated by a Generative Adversarial Network (GAN) as "random chaff points" within a secure vault system. Our method creates n sub-templates from the original template, each obfuscated with m GAN chaff points. During verification, the s closest vectors to the biometric query are retrieved from each vault and combined to generate hash values, which are then compared with the stored hash value. Thus, our method safeguards user identities during the training and deployment phases by employing the GAN-generated synthetic images. Our protocol was tested using the AT&T, GT, and LFW face datasets, achieving ROC areas under the curve of 0.99, 0.99, and 0.90, respectively. Our results demonstrate that the proposed method can maintain high accuracy and reasonable computational complexity comparable to those of unprotected template methods while significantly enhancing security and privacy, underscoring the potential of Generative AI in developing proactive defensive strategies for biometric systems.
[ { "created": "Mon, 8 Apr 2024 05:18:39 GMT", "version": "v1" }, { "created": "Mon, 12 Aug 2024 14:42:48 GMT", "version": "v2" } ]
2024-08-13
[ [ "Gilkalaye", "Babak Poorebrahim", "" ], [ "Mukherjee", "Shubhabrata", "" ], [ "Derakhshani", "Reza", "" ] ]
Generative AI has revolutionized modern machine learning by providing unprecedented realism, diversity, and efficiency in data generation. This technology holds immense potential for biometrics, including for securing sensitive and personally identifiable information. Given the irrevocability of biometric samples and mounting privacy concerns, biometric template security and secure matching are among the most sought-after features of modern biometric systems. This paper proposes a novel obfuscation method using Generative AI to enhance biometric template security. Our approach utilizes synthetic facial images generated by a Generative Adversarial Network (GAN) as "random chaff points" within a secure vault system. Our method creates n sub-templates from the original template, each obfuscated with m GAN chaff points. During verification, the s closest vectors to the biometric query are retrieved from each vault and combined to generate hash values, which are then compared with the stored hash value. Thus, our method safeguards user identities during the training and deployment phases by employing the GAN-generated synthetic images. Our protocol was tested using the AT&T, GT, and LFW face datasets, achieving ROC areas under the curve of 0.99, 0.99, and 0.90, respectively. Our results demonstrate that the proposed method can maintain high accuracy and reasonable computational complexity comparable to those of unprotected template methods while significantly enhancing security and privacy, underscoring the potential of Generative AI in developing proactive defensive strategies for biometric systems.
2305.15557
Riccardo Bonalli
Riccardo Bonalli and Alessandro Rudi
Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence
null
null
null
null
cs.LG cs.SY eess.SY math.OC
http://creativecommons.org/licenses/by/4.0/
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of multi-dimensional non-linear stochastic differential equations, which relies upon discrete-time observations of the state. The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations, yielding theoretical estimates of non-asymptotic learning rates which, unlike previous works, become increasingly tighter when the regularity of the unknown drift and diffusion coefficients becomes higher. Since our method is kernel-based, offline pre-processing may be profitably leveraged to enable efficient numerical implementation, offering an excellent balance between precision and computational complexity.
[ { "created": "Wed, 24 May 2023 20:43:47 GMT", "version": "v1" }, { "created": "Tue, 23 Apr 2024 11:34:52 GMT", "version": "v2" } ]
2024-04-24
[ [ "Bonalli", "Riccardo", "" ], [ "Rudi", "Alessandro", "" ] ]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of multi-dimensional non-linear stochastic differential equations, which relies upon discrete-time observations of the state. The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations, yielding theoretical estimates of non-asymptotic learning rates which, unlike previous works, become increasingly tighter when the regularity of the unknown drift and diffusion coefficients becomes higher. Since our method is kernel-based, offline pre-processing may be profitably leveraged to enable efficient numerical implementation, offering an excellent balance between precision and computational complexity.
2109.01156
Linqing Liu
Linqing Liu, Patrick Lewis, Sebastian Riedel, Pontus Stenetorp
Challenges in Generalization in Open Domain Question Answering
NAACL 2022 Findings
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work on Open Domain Question Answering has shown that there is a large discrepancy in model performance between novel test questions and those that largely overlap with training questions. However, it is unclear which aspects of novel questions make them challenging. Drawing upon studies on systematic generalization, we introduce and annotate questions according to three categories that measure different levels and kinds of generalization: training set overlap, compositional generalization (comp-gen), and novel-entity generalization (novel-entity). When evaluating six popular parametric and non-parametric models, we find that for the established Natural Questions and TriviaQA datasets, even the strongest model performance for comp-gen/novel-entity is 13.1/5.4% and 9.6/1.5% lower compared to that for the full test set -- indicating the challenge posed by these types of questions. Furthermore, we show that whilst non-parametric models can handle questions containing novel entities relatively well, they struggle with those requiring compositional generalization. Lastly, we find that key question difficulty factors are: cascading errors from the retrieval component, frequency of question pattern, and frequency of the entity.
[ { "created": "Thu, 2 Sep 2021 18:04:10 GMT", "version": "v1" }, { "created": "Wed, 15 Dec 2021 18:37:48 GMT", "version": "v2" }, { "created": "Sun, 15 May 2022 10:30:54 GMT", "version": "v3" } ]
2022-05-17
[ [ "Liu", "Linqing", "" ], [ "Lewis", "Patrick", "" ], [ "Riedel", "Sebastian", "" ], [ "Stenetorp", "Pontus", "" ] ]
Recent work on Open Domain Question Answering has shown that there is a large discrepancy in model performance between novel test questions and those that largely overlap with training questions. However, it is unclear which aspects of novel questions make them challenging. Drawing upon studies on systematic generalization, we introduce and annotate questions according to three categories that measure different levels and kinds of generalization: training set overlap, compositional generalization (comp-gen), and novel-entity generalization (novel-entity). When evaluating six popular parametric and non-parametric models, we find that for the established Natural Questions and TriviaQA datasets, even the strongest model performance for comp-gen/novel-entity is 13.1/5.4% and 9.6/1.5% lower compared to that for the full test set -- indicating the challenge posed by these types of questions. Furthermore, we show that whilst non-parametric models can handle questions containing novel entities relatively well, they struggle with those requiring compositional generalization. Lastly, we find that key question difficulty factors are: cascading errors from the retrieval component, frequency of question pattern, and frequency of the entity.
1812.03267
Zhe Wang
Zhe Wang, Lingjie Duan, and Rui Zhang
Adaptive Deployment for UAV-Aided Communication Networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned aerial vehicle (UAV) as an aerial base station is a promising technology to rapidly provide wireless connectivity to ground users. Given UAV's agility and mobility, a key question is how to adapt UAV deployment to best cater to the instantaneous wireless traffic in a territory. In this paper, we propose an adaptive deployment scheme for a UAV-aided communication network, where the UAV adapts its displacement direction and distance to serve randomly moving users' instantaneous traffic in the target cell. In our adaptive scheme, the UAV does not need to learn users' exact locations in real time, but chooses its displacement direction based on a simple majority rule by flying to the spatial sector with the greatest number of users in the cell. To balance the service qualities of the users in different sectors, we further optimize the UAV's displacement distance in the chosen sector to maximize the average throughput and the successful transmission probability, respectively. We prove that the optimal displacement distance for average throughput maximization decreases with the user density: the UAV moves to the center of the chosen sector when the user density is small and the UAV displacement becomes mild when the user density is large. In contrast, the optimal displacement distance for success probability maximization does not necessarily decrease with the user density and further depends on the target signal-to-noise ratio (SNR) threshold. Extensive simulations show that the proposed adaptive deployment scheme outperforms the traditional non-adaptive scheme, especially when the user density is not large.
[ { "created": "Sat, 8 Dec 2018 04:32:28 GMT", "version": "v1" } ]
2018-12-11
[ [ "Wang", "Zhe", "" ], [ "Duan", "Lingjie", "" ], [ "Zhang", "Rui", "" ] ]
Unmanned aerial vehicle (UAV) as an aerial base station is a promising technology to rapidly provide wireless connectivity to ground users. Given UAV's agility and mobility, a key question is how to adapt UAV deployment to best cater to the instantaneous wireless traffic in a territory. In this paper, we propose an adaptive deployment scheme for a UAV-aided communication network, where the UAV adapts its displacement direction and distance to serve randomly moving users' instantaneous traffic in the target cell. In our adaptive scheme, the UAV does not need to learn users' exact locations in real time, but chooses its displacement direction based on a simple majority rule by flying to the spatial sector with the greatest number of users in the cell. To balance the service qualities of the users in different sectors, we further optimize the UAV's displacement distance in the chosen sector to maximize the average throughput and the successful transmission probability, respectively. We prove that the optimal displacement distance for average throughput maximization decreases with the user density: the UAV moves to the center of the chosen sector when the user density is small and the UAV displacement becomes mild when the user density is large. In contrast, the optimal displacement distance for success probability maximization does not necessarily decrease with the user density and further depends on the target signal-to-noise ratio (SNR) threshold. Extensive simulations show that the proposed adaptive deployment scheme outperforms the traditional non-adaptive scheme, especially when the user density is not large.
2305.01498
Miao Li
Miao Li, Eduard Hovy, Jey Han Lau
Summarizing Multiple Documents with Conversational Structure for Meta-Review Generation
Long paper; Accepted to EMNLP 2023; Soundness: 3, 3, 4; Excitement: 3, 4, 4
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We present PeerSum, a novel dataset for generating meta-reviews of scientific papers. The meta-reviews can be interpreted as abstractive summaries of reviews, multi-turn discussions and the paper abstract. These source documents have rich inter-document relationships with an explicit hierarchical conversational structure, cross-references and (occasionally) conflicting information. To introduce the structural inductive bias into pre-trained language models, we introduce Rammer (Relationship-aware Multi-task Meta-review Generator), a model that uses sparse attention based on the conversational structure and a multi-task training objective that predicts metadata features (e.g., review ratings). Our experimental results show that Rammer outperforms other strong baseline models in terms of a suite of automatic evaluation metrics. Further analyses, however, reveal that Rammer and other models struggle to handle conflicts in source documents of PeerSum, suggesting meta-review generation is a challenging task and a promising avenue for further research.
[ { "created": "Tue, 2 May 2023 15:18:18 GMT", "version": "v1" }, { "created": "Sat, 7 Oct 2023 22:57:34 GMT", "version": "v2" }, { "created": "Tue, 10 Oct 2023 03:19:16 GMT", "version": "v3" }, { "created": "Mon, 23 Oct 2023 06:18:09 GMT", "version": "v4" } ]
2023-10-24
[ [ "Li", "Miao", "" ], [ "Hovy", "Eduard", "" ], [ "Lau", "Jey Han", "" ] ]
We present PeerSum, a novel dataset for generating meta-reviews of scientific papers. The meta-reviews can be interpreted as abstractive summaries of reviews, multi-turn discussions and the paper abstract. These source documents have rich inter-document relationships with an explicit hierarchical conversational structure, cross-references and (occasionally) conflicting information. To introduce the structural inductive bias into pre-trained language models, we introduce Rammer (Relationship-aware Multi-task Meta-review Generator), a model that uses sparse attention based on the conversational structure and a multi-task training objective that predicts metadata features (e.g., review ratings). Our experimental results show that Rammer outperforms other strong baseline models in terms of a suite of automatic evaluation metrics. Further analyses, however, reveal that Rammer and other models struggle to handle conflicts in source documents of PeerSum, suggesting meta-review generation is a challenging task and a promising avenue for further research.
1801.02120
Somayeh Kafaie
Somayeh Kafaie, Yuanzhu Peter Chen, Octavia A. Dobre, Mohamed Hossam Ahmed
Network Coding Implementation Details: A Guidance Document
5 pages, 5 figures, 22nd Annual Newfoundland Electrical and Computer Engineering Conference (NECEC), 2013
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, network coding has become one of the most interesting fields and has attracted considerable attention from both industry and academia. The idea of network coding is based on the concept of allowing intermediate nodes to encode and combine incoming packets instead of only copying and forwarding them. This approach, by augmenting the multicast and broadcast efficiency of multi-hop wireless networks, increases the capacity of the network and improves its throughput and robustness. While a wide variety of papers described applications of network coding in different types of networks such as delay tolerant networks, peer to peer networks and wireless sensor networks, the detailed practical implementation of network coding has not been noted in most papers. Since applying network coding in real scenarios requires an acceptable understanding of mathematics and algebra, especially linear equations, reduced row echelon matrices, fields and their operations, this paper provides comprehensive guidance on the implementation of almost all required concepts in network coding. The paper explains the implementation details of network coding in real scenarios and describes the effect of the field size on network coding.
[ { "created": "Sun, 7 Jan 2018 03:26:04 GMT", "version": "v1" } ]
2018-01-09
[ [ "Kafaie", "Somayeh", "" ], [ "Chen", "Yuanzhu Peter", "" ], [ "Dobre", "Octavia A.", "" ], [ "Ahmed", "Mohamed Hossam", "" ] ]
In recent years, network coding has become one of the most interesting fields and has attracted considerable attention from both industry and academia. The idea of network coding is based on the concept of allowing intermediate nodes to encode and combine incoming packets instead of only copying and forwarding them. This approach, by augmenting the multicast and broadcast efficiency of multi-hop wireless networks, increases the capacity of the network and improves its throughput and robustness. While a wide variety of papers described applications of network coding in different types of networks such as delay tolerant networks, peer to peer networks and wireless sensor networks, the detailed practical implementation of network coding has not been noted in most papers. Since applying network coding in real scenarios requires an acceptable understanding of mathematics and algebra, especially linear equations, reduced row echelon matrices, fields and their operations, this paper provides comprehensive guidance on the implementation of almost all required concepts in network coding. The paper explains the implementation details of network coding in real scenarios and describes the effect of the field size on network coding.
2011.03779
Paul Hriljac
Paul Hriljac
Constructing Cryptographic Multilinear Maps Using Affine Automorphisms
null
null
null
null
cs.CR math.AG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The point of this paper is to use affine automorphisms from algebraic geometry to build cryptographic multivariate mappings. We construct groups G,H, both isomorphic to the cyclic group with a prime number of elements, and multilinear pairings from the k-fold product of G to H. The construction is reminiscent of techniques in multivariate encryption. We display several different versions of the discrete logarithm problem for these groups. We show that the efficient solution of some of these problems results in efficient algorithms for inverting systems of multivariate polynomials corresponding to affine automorphisms, which implies that such problems are as computationally difficult as breaking multivariate encryption.
[ { "created": "Sat, 7 Nov 2020 14:22:06 GMT", "version": "v1" } ]
2020-11-10
[ [ "Hriljac", "Paul", "" ] ]
The point of this paper is to use affine automorphisms from algebraic geometry to build cryptographic multivariate mappings. We construct groups G,H, both isomorphic to the cyclic group with a prime number of elements, and multilinear pairings from the k-fold product of G to H. The construction is reminiscent of techniques in multivariate encryption. We display several different versions of the discrete logarithm problem for these groups. We show that the efficient solution of some of these problems results in efficient algorithms for inverting systems of multivariate polynomials corresponding to affine automorphisms, which implies that such problems are as computationally difficult as breaking multivariate encryption.
2105.14526
Nikhil Iyer
Nikhil Iyer, V Thejas, Nipun Kwatra, Ramachandran Ramjee, Muthian Sivathanu
LRTuner: A Learning Rate Tuner for Deep Neural Networks
17 pages
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
One very important hyperparameter for training deep neural networks is the learning rate schedule of the optimizer. The choice of learning rate schedule determines the computational cost of getting close to a minimum, how close you actually get to the minimum, and most importantly the kind of local minimum (wide/narrow) attained. The kind of minimum attained has a significant impact on the generalization accuracy of the network. Current systems employ hand-tuned learning rate schedules, which are painstakingly tuned for each network and dataset. Given that the state space of schedules is huge, finding a satisfactory learning rate schedule can be very time consuming. In this paper, we present LRTuner, a method for tuning the learning rate as training proceeds. Our method works with any optimizer, and we demonstrate results on the SGD with Momentum and Adam optimizers. We extensively evaluate LRTuner on multiple datasets and models, and across optimizers. We compare favorably against standard learning rate schedules for the given dataset and models, including ImageNet on ResNet-50, CIFAR-10 on ResNet-18, and SQuAD fine-tuning on BERT. For example, on ImageNet with ResNet-50, LRTuner shows up to 0.2% absolute gains in test accuracy compared to the hand-tuned baseline schedule. Moreover, LRTuner can achieve the same accuracy as the baseline schedule in 29% fewer optimization steps.
[ { "created": "Sun, 30 May 2021 13:06:26 GMT", "version": "v1" } ]
2021-06-01
[ [ "Iyer", "Nikhil", "" ], [ "Thejas", "V", "" ], [ "Kwatra", "Nipun", "" ], [ "Ramjee", "Ramachandran", "" ], [ "Sivathanu", "Muthian", "" ] ]
One very important hyperparameter for training deep neural networks is the learning rate schedule of the optimizer. The choice of learning rate schedule determines the computational cost of getting close to a minimum, how close you actually get to the minimum, and most importantly the kind of local minimum (wide/narrow) attained. The kind of minimum attained has a significant impact on the generalization accuracy of the network. Current systems employ hand-tuned learning rate schedules, which are painstakingly tuned for each network and dataset. Given that the state space of schedules is huge, finding a satisfactory learning rate schedule can be very time consuming. In this paper, we present LRTuner, a method for tuning the learning rate as training proceeds. Our method works with any optimizer, and we demonstrate results on the SGD with Momentum and Adam optimizers. We extensively evaluate LRTuner on multiple datasets and models, and across optimizers. We compare favorably against standard learning rate schedules for the given dataset and models, including ImageNet on ResNet-50, CIFAR-10 on ResNet-18, and SQuAD fine-tuning on BERT. For example, on ImageNet with ResNet-50, LRTuner shows up to 0.2% absolute gains in test accuracy compared to the hand-tuned baseline schedule. Moreover, LRTuner can achieve the same accuracy as the baseline schedule in 29% fewer optimization steps.
2403.10167
Malte Luttermann
Malte Luttermann, Johann Machemer, Marcel Gehrke
Efficient Detection of Exchangeable Factors in Factor Graphs
Extended version of paper accepted to the Proceedings of the 37th International FLAIRS Conference (FLAIRS-24)
null
null
null
cs.AI cs.DS
http://creativecommons.org/licenses/by-nc-sa/4.0/
To allow for tractable probabilistic inference with respect to domain sizes, lifted probabilistic inference exploits symmetries in probabilistic graphical models. However, checking whether two factors encode equivalent semantics and hence are exchangeable is computationally expensive. In this paper, we efficiently solve the problem of detecting exchangeable factors in a factor graph. In particular, we introduce the detection of exchangeable factors (DEFT) algorithm, which allows us to drastically reduce the computational effort for checking whether two factors are exchangeable in practice. While previous approaches iterate over all $O(n!)$ permutations of a factor's argument list in the worst case (where $n$ is the number of arguments of the factor), we prove that DEFT efficiently identifies restrictions to drastically reduce the number of permutations, and we validate the efficiency of DEFT in our empirical evaluation.
[ { "created": "Fri, 15 Mar 2024 10:20:56 GMT", "version": "v1" }, { "created": "Fri, 5 Apr 2024 16:02:40 GMT", "version": "v2" } ]
2024-04-08
[ [ "Luttermann", "Malte", "" ], [ "Machemer", "Johann", "" ], [ "Gehrke", "Marcel", "" ] ]
To allow for tractable probabilistic inference with respect to domain sizes, lifted probabilistic inference exploits symmetries in probabilistic graphical models. However, checking whether two factors encode equivalent semantics and hence are exchangeable is computationally expensive. In this paper, we efficiently solve the problem of detecting exchangeable factors in a factor graph. In particular, we introduce the detection of exchangeable factors (DEFT) algorithm, which allows us to drastically reduce the computational effort for checking whether two factors are exchangeable in practice. While previous approaches iterate over all $O(n!)$ permutations of a factor's argument list in the worst case (where $n$ is the number of arguments of the factor), we prove that DEFT efficiently identifies restrictions to drastically reduce the number of permutations, and we validate the efficiency of DEFT in our empirical evaluation.
2105.13396
Zachary Neal
Zachary P. Neal, Rachel Domagalski, and Bruce Sagan
Comparing Alternatives to the Fixed Degree Sequence Model for Extracting the Backbone of Bipartite Projections
null
Scientific reports, 11(1), 1-13 (2021)
10.1038/s41598-021-03238-3
null
cs.SI stat.AP
http://creativecommons.org/licenses/by-sa/4.0/
Projections of bipartite or two-mode networks capture co-occurrences, and are used in diverse fields (e.g., ecology, economics, bibliometrics, politics) to represent unipartite networks. A key challenge in analyzing such networks is determining whether an observed number of co-occurrences between two nodes is significant, and therefore whether an edge exists between them. One approach, the fixed degree sequence model (FDSM), evaluates the significance of an edge's weight by comparison to a null model in which the degree sequences of the original bipartite network are fixed. Although the FDSM is an intuitive null model, it is computationally expensive because it requires Monte Carlo simulation to estimate each edge's $p$-value, and therefore is impractical for large projections. In this paper, we explore four potential alternatives to FDSM: fixed fill model (FFM), fixed row model (FRM), fixed column model (FCM), and stochastic degree sequence model (SDSM). We compare these models to FDSM in terms of accuracy, speed, statistical power, similarity, and ability to recover known communities. We find that the computationally-fast SDSM offers a statistically conservative but close approximation of the computationally-impractical FDSM under a wide range of conditions, and that it correctly recovers a known community structure even when the signal is weak. Therefore, although each backbone model may have particular applications, we recommend SDSM for extracting the backbone of bipartite projections when FDSM is impractical.
[ { "created": "Thu, 27 May 2021 18:56:04 GMT", "version": "v1" }, { "created": "Mon, 31 May 2021 12:24:53 GMT", "version": "v2" }, { "created": "Fri, 18 Jun 2021 15:02:14 GMT", "version": "v3" }, { "created": "Thu, 7 Oct 2021 17:55:14 GMT", "version": "v4" }, { "created": "Thu, 28 Oct 2021 18:06:03 GMT", "version": "v5" } ]
2022-02-22
[ [ "Neal", "Zachary P.", "" ], [ "Domagalski", "Rachel", "" ], [ "Sagan", "Bruce", "" ] ]
Projections of bipartite or two-mode networks capture co-occurrences, and are used in diverse fields (e.g., ecology, economics, bibliometrics, politics) to represent unipartite networks. A key challenge in analyzing such networks is determining whether an observed number of co-occurrences between two nodes is significant, and therefore whether an edge exists between them. One approach, the fixed degree sequence model (FDSM), evaluates the significance of an edge's weight by comparison to a null model in which the degree sequences of the original bipartite network are fixed. Although the FDSM is an intuitive null model, it is computationally expensive because it requires Monte Carlo simulation to estimate each edge's $p$-value, and therefore is impractical for large projections. In this paper, we explore four potential alternatives to FDSM: fixed fill model (FFM), fixed row model (FRM), fixed column model (FCM), and stochastic degree sequence model (SDSM). We compare these models to FDSM in terms of accuracy, speed, statistical power, similarity, and ability to recover known communities. We find that the computationally-fast SDSM offers a statistically conservative but close approximation of the computationally-impractical FDSM under a wide range of conditions, and that it correctly recovers a known community structure even when the signal is weak. Therefore, although each backbone model may have particular applications, we recommend SDSM for extracting the backbone of bipartite projections when FDSM is impractical.
1709.05510
Sainyam Galhotra Mr
Sainyam Galhotra, Arya Mazumdar, Soumyabrata Pal, Barna Saha
The Geometric Block Model
A shorter version of this paper has appeared in 32nd AAAI Conference on Artificial Intelligence. The AAAI proceedings version as well as the previous version in arxiv contained some errors that have been corrected in this version
null
null
null
cs.SI cs.DS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To capture the inherent geometric features of many community detection problems, we propose to use a new random graph model of communities that we call a Geometric Block Model. The geometric block model generalizes random geometric graphs in the same way that the well-studied stochastic block model generalizes Erdos-Renyi random graphs. It is also a natural extension of random community models inspired by recent theoretical and practical advances in community detection. While being a topic of fundamental theoretical interest, our main contribution is to show that many practical community structures are better explained by the geometric block model. We also show that a simple triangle-counting algorithm to detect communities in the geometric block model is near-optimal. Indeed, even in the regime where the average degree of the graph grows only logarithmically with the number of vertices (sparse graphs), we show that this algorithm performs extremely well, both theoretically and practically. In contrast, the triangle-counting algorithm is far from optimal for the stochastic block model. We simulate our results on both real and synthetic datasets to show the superior performance of both the new model and our algorithm.
[ { "created": "Sat, 16 Sep 2017 13:38:03 GMT", "version": "v1" }, { "created": "Wed, 24 Jan 2018 16:26:40 GMT", "version": "v2" } ]
2018-01-25
[ [ "Galhotra", "Sainyam", "" ], [ "Mazumdar", "Arya", "" ], [ "Pal", "Soumyabrata", "" ], [ "Saha", "Barna", "" ] ]
To capture the inherent geometric features of many community detection problems, we propose to use a new random graph model of communities that we call a Geometric Block Model. The geometric block model generalizes random geometric graphs in the same way that the well-studied stochastic block model generalizes Erdos-Renyi random graphs. It is also a natural extension of random community models inspired by recent theoretical and practical advances in community detection. While being a topic of fundamental theoretical interest, our main contribution is to show that many practical community structures are better explained by the geometric block model. We also show that a simple triangle-counting algorithm to detect communities in the geometric block model is near-optimal. Indeed, even in the regime where the average degree of the graph grows only logarithmically with the number of vertices (sparse graphs), we show that this algorithm performs extremely well, both theoretically and practically. In contrast, the triangle-counting algorithm is far from optimal for the stochastic block model. We simulate our results on both real and synthetic datasets to show the superior performance of both the new model and our algorithm.
2103.16989
Shu Xincheng
Jiawei Shen, Xincheng Shu, Hu Yang
Bidirectional group random walk based network embedding for asymmetric proximity
18 pages, 7 figures
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
Network embedding aims to represent a network in a low-dimensional space where the network's structural information and inherent properties are maximally preserved. Random walk based network embedding methods such as DeepWalk and node2vec have shown outstanding performance in preserving the network topological structure. However, these approaches either predict the distribution of a node's neighbors in both directions jointly, which makes them unable to capture any asymmetric relationship in a network; or preserve the asymmetric relationship in only one direction and hence lose the relationship in the other direction. To address these limitations, we propose a bidirectional group random walk based network embedding method (BiGRW), which treats the distributions of a node's neighbors in the forward and backward directions of random walks as two different kinds of asymmetric network structural information. The basic idea of BiGRW is to learn a representation for each node that is useful for predicting its distribution of neighbors in the forward and backward directions separately. Apart from that, a novel random walk sampling strategy is proposed with a parameter {\alpha} to flexibly control the trade-off between breadth-first sampling (BFS) and depth-first sampling (DFS). To learn representations from node attributes, we design an attributed version of BiGRW (BiGRW-AT). Experimental results on several benchmark datasets demonstrate that the proposed methods significantly outperform state-of-the-art plain and attributed network embedding methods on the tasks of node classification and clustering.
[ { "created": "Wed, 31 Mar 2021 11:11:53 GMT", "version": "v1" } ]
2021-04-01
[ [ "Shen", "Jiawei", "" ], [ "Shu", "Xincheng", "" ], [ "Yang", "Hu", "" ] ]
Network embedding aims to represent a network in a low-dimensional space where the network's structural information and inherent properties are maximally preserved. Random walk based network embedding methods such as DeepWalk and node2vec have shown outstanding performance in preserving the network topological structure. However, these approaches either predict the distribution of a node's neighbors in both directions jointly, which makes them unable to capture any asymmetric relationship in a network; or preserve the asymmetric relationship in only one direction and hence lose the relationship in the other direction. To address these limitations, we propose a bidirectional group random walk based network embedding method (BiGRW), which treats the distributions of a node's neighbors in the forward and backward directions of random walks as two different kinds of asymmetric network structural information. The basic idea of BiGRW is to learn a representation for each node that is useful for predicting its distribution of neighbors in the forward and backward directions separately. Apart from that, a novel random walk sampling strategy is proposed with a parameter {\alpha} to flexibly control the trade-off between breadth-first sampling (BFS) and depth-first sampling (DFS). To learn representations from node attributes, we design an attributed version of BiGRW (BiGRW-AT). Experimental results on several benchmark datasets demonstrate that the proposed methods significantly outperform state-of-the-art plain and attributed network embedding methods on the tasks of node classification and clustering.
2003.12296
Lei Qi
Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Generalizable Model-agnostic Semantic Segmentation via Target-specific Normalization
Accepted by Pattern Recognition (PR)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic segmentation in a supervised learning manner has achieved significant progress in recent years. However, its performance usually drops dramatically due to the data-distribution discrepancy between seen and unseen domains when we directly deploy the trained model to segment images of unseen (or newly arriving) domains. To this end, we propose a novel domain generalization framework for the generalizable semantic segmentation task, which enhances the generalization ability of the model from two different views, including the training paradigm and the test strategy. Concretely, we exploit model-agnostic learning to simulate the domain shift problem, which addresses domain generalization from the training scheme perspective. Besides, considering the data-distribution discrepancy between seen source and unseen target domains, we develop a target-specific normalization scheme to enhance the generalization ability. Furthermore, when images come one by one in the test stage, we design an image-based memory bank (Image Bank for short) with a style-based selection policy to select similar images and obtain more accurate normalization statistics. Extensive experiments highlight that the proposed method produces state-of-the-art performance for the domain generalization of semantic segmentation on multiple benchmark segmentation datasets, i.e., Cityscapes and Mapillary.
[ { "created": "Fri, 27 Mar 2020 09:25:19 GMT", "version": "v1" }, { "created": "Tue, 31 Aug 2021 06:43:50 GMT", "version": "v2" } ]
2021-09-01
[ [ "Zhang", "Jian", "" ], [ "Qi", "Lei", "" ], [ "Shi", "Yinghuan", "" ], [ "Gao", "Yang", "" ] ]
Semantic segmentation in a supervised learning manner has achieved significant progress in recent years. However, its performance usually drops dramatically due to the data-distribution discrepancy between seen and unseen domains when we directly deploy the trained model to segment images of unseen (or newly arriving) domains. To this end, we propose a novel domain generalization framework for the generalizable semantic segmentation task, which enhances the generalization ability of the model from two different views, including the training paradigm and the test strategy. Concretely, we exploit model-agnostic learning to simulate the domain shift problem, which addresses domain generalization from the training scheme perspective. Besides, considering the data-distribution discrepancy between seen source and unseen target domains, we develop a target-specific normalization scheme to enhance the generalization ability. Furthermore, when images come one by one in the test stage, we design an image-based memory bank (Image Bank for short) with a style-based selection policy to select similar images and obtain more accurate normalization statistics. Extensive experiments highlight that the proposed method produces state-of-the-art performance for the domain generalization of semantic segmentation on multiple benchmark segmentation datasets, i.e., Cityscapes and Mapillary.
2203.15121
Mohannad Ismail
Mohannad Ismail, Andrew Quach, Christopher Jelesnianski, Yeongjin Jang, Changwoo Min
Tightly Seal Your Sensitive Pointers with PACTight
Accepted for publication to USENIX Security 2022
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
ARM is becoming more popular in desktops and data centers, opening a new realm in terms of security attacks against ARM. ARM has released Pointer Authentication, a new hardware security feature that is intended to ensure pointer integrity with cryptographic primitives. In this paper, we utilize Pointer Authentication (PA) to build a novel scheme to completely prevent any misuse of security-sensitive pointers. We propose PACTight to tightly seal these pointers. PACTight utilizes a strong and unique modifier that addresses the current issues with the state-of-the-art PA defense mechanisms. We implement four defenses based on the PACTight mechanism. Our security and performance evaluation results show that PACTight defenses are more efficient and secure. Using real PA instructions, we evaluated PACTight on 30 different applications, including NGINX web server, with an average performance overhead of 4.07% even when enforcing our strongest defense. PACTight demonstrates its effectiveness and efficiency with real PA instructions on real hardware.
[ { "created": "Mon, 28 Mar 2022 21:55:51 GMT", "version": "v1" } ]
2022-03-30
[ [ "Ismail", "Mohannad", "" ], [ "Quach", "Andrew", "" ], [ "Jelesnianski", "Christopher", "" ], [ "Jang", "Yeongjin", "" ], [ "Min", "Changwoo", "" ] ]
ARM is becoming more popular in desktops and data centers, opening a new realm in terms of security attacks against ARM. ARM has released Pointer Authentication, a new hardware security feature that is intended to ensure pointer integrity with cryptographic primitives. In this paper, we utilize Pointer Authentication (PA) to build a novel scheme to completely prevent any misuse of security-sensitive pointers. We propose PACTight to tightly seal these pointers. PACTight utilizes a strong and unique modifier that addresses the current issues with the state-of-the-art PA defense mechanisms. We implement four defenses based on the PACTight mechanism. Our security and performance evaluation results show that PACTight defenses are more efficient and secure. Using real PA instructions, we evaluated PACTight on 30 different applications, including NGINX web server, with an average performance overhead of 4.07% even when enforcing our strongest defense. PACTight demonstrates its effectiveness and efficiency with real PA instructions on real hardware.
2302.01706
Zilong Zhao
Zilong Zhao, Han Wu, Aad Van Moorsel and Lydia Y. Chen
GTV: Generating Tabular Data via Vertical Federated Learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative Adversarial Networks (GANs) have achieved state-of-the-art results in tabular data synthesis, under the presumption of directly accessible training data. Vertical Federated Learning (VFL) is a paradigm that allows a machine learning model to be trained in a distributed fashion by clients possessing unique features pertaining to the same individuals, where tabular data learning is the primary use case. However, it is unknown whether tabular GANs can be learned in VFL. The demand for secure data transfer among clients and the GAN during training and data synthesis poses an extra challenge. The conditional vector of tabular GANs is a valuable tool to control specific features of the generated data, but it contains sensitive information from real data, risking privacy guarantees. In this paper, we propose GTV, a VFL framework for tabular GANs whose key components are the generator, the discriminator and the conditional vector. GTV proposes a unique distributed training architecture for the generator and discriminator to access training data in a privacy-preserving manner. To accommodate the conditional vector into training without privacy leakage, GTV designs a training-with-shuffling mechanism to ensure that no party can reconstruct training data from the conditional vector. We evaluate the effectiveness of GTV in terms of synthetic data quality and overall training scalability. Results show that GTV can consistently generate high-fidelity synthetic tabular data of quality comparable to that generated by a centralized GAN algorithm. The difference in machine learning utility can be as low as 2.7%, even under extremely imbalanced data distributions across clients and different numbers of clients.
[ { "created": "Fri, 3 Feb 2023 13:04:12 GMT", "version": "v1" } ]
2023-02-06
[ [ "Zhao", "Zilong", "" ], [ "Wu", "Han", "" ], [ "Van Moorsel", "Aad", "" ], [ "Chen", "Lydia Y.", "" ] ]
Generative Adversarial Networks (GANs) have achieved state-of-the-art results in tabular data synthesis, under the presumption of directly accessible training data. Vertical Federated Learning (VFL) is a paradigm that allows a machine learning model to be trained in a distributed fashion by clients possessing unique features pertaining to the same individuals, where tabular data learning is the primary use case. However, it is unknown whether tabular GANs can be learned in VFL. The demand for secure data transfer among clients and the GAN during training and data synthesis poses an extra challenge. The conditional vector of tabular GANs is a valuable tool to control specific features of the generated data, but it contains sensitive information from real data, risking privacy guarantees. In this paper, we propose GTV, a VFL framework for tabular GANs whose key components are the generator, the discriminator and the conditional vector. GTV proposes a unique distributed training architecture for the generator and discriminator to access training data in a privacy-preserving manner. To accommodate the conditional vector into training without privacy leakage, GTV designs a training-with-shuffling mechanism to ensure that no party can reconstruct training data from the conditional vector. We evaluate the effectiveness of GTV in terms of synthetic data quality and overall training scalability. Results show that GTV can consistently generate high-fidelity synthetic tabular data of quality comparable to that generated by a centralized GAN algorithm. The difference in machine learning utility can be as low as 2.7%, even under extremely imbalanced data distributions across clients and different numbers of clients.
1007.0982
An Liu Dr
An Liu and Youjian (Eugene) Liu and Haige Xiang and Wu Luo
MIMO B-MAC Interference Network Optimization under Rate Constraints by Polite Water-filling and Duality
30 pages, 8 figures, and 5 tables. Submitted to IEEE Transactions on Signal Processing, Jun. 2010
An Liu; Youjian Liu; Haige Xiang; Wu Luo, "MIMO B-MAC Interference Network Optimization Under Rate Constraints by Polite Water-Filling and Duality," IEEE Transactions on Signal Processing, vol.59, no.1, pp.263,276, Jan. 2011
10.1109/TSP.2010.2088394
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We take two new approaches to design efficient algorithms for transmitter optimization under rate constraints, to guarantee the Quality of Service in general MIMO interference networks, which is a combination of multiple interfering broadcast channels (BC) and multiaccess channels (MAC) and is named B-MAC Networks. Two related optimization problems, maximizing the minimum of weighted rates under a sum-power constraint and minimizing the sum-power under rate constraints, are considered. The first approach takes advantage of existing efficient algorithms for SINR problems by building a bridge between rate and SINR through the design of optimal mappings between them. The approach can be applied to other optimization problems as well. The second approach employs polite water-filling, which is the optimal network version of water-filling that we recently found. It replaces most generic optimization algorithms currently used for networks and reduces the complexity while demonstrating superior performance even in non-convex cases. Both centralized and distributed algorithms are designed and the performance is analyzed in addition to numeric examples.
[ { "created": "Tue, 6 Jul 2010 19:31:00 GMT", "version": "v1" } ]
2013-11-26
[ [ "Liu", "An", "" ], [ "Liu", "Youjian", "" ], [ "Xiang", "Haige", "" ], [ "Luo", "Wu", "" ] ]
We take two new approaches to design efficient algorithms for transmitter optimization under rate constraints, to guarantee the Quality of Service in general MIMO interference networks, which is a combination of multiple interfering broadcast channels (BC) and multiaccess channels (MAC) and is named B-MAC Networks. Two related optimization problems, maximizing the minimum of weighted rates under a sum-power constraint and minimizing the sum-power under rate constraints, are considered. The first approach takes advantage of existing efficient algorithms for SINR problems by building a bridge between rate and SINR through the design of optimal mappings between them. The approach can be applied to other optimization problems as well. The second approach employs polite water-filling, which is the optimal network version of water-filling that we recently found. It replaces most generic optimization algorithms currently used for networks and reduces the complexity while demonstrating superior performance even in non-convex cases. Both centralized and distributed algorithms are designed and the performance is analyzed in addition to numeric examples.
1210.1771
A. Emre Cetin
A. Emre Cetin
In-place associative permutation sort
25 pages. arXiv admin note: substantial text overlap with arXiv:1209.0572, arXiv:1209.3668, arXiv:1209.1942, arXiv:1209.4714
null
null
null
cs.DS
http://creativecommons.org/licenses/by-nc-sa/3.0/
The in-place associative integer sorting technique was developed, improved and specialized for distinct integers. The technique is suitable for integer sorting: given a list S of n integers S[0...n-1], the technique sorts the integers in ascending or descending order. It replaces the bucket sort, distribution counting sort and address calculation sort families of algorithms and requires only a constant amount of additional memory for storing counters and indices besides the input list. The technique was inspired by one of the ordinal theories of "serial order in behavior" and is explained by analogy with the three main stages in the formation and retrieval of memory in cognitive neuroscience: (i) practicing, (ii) storing and (iii) retrieval. In this study, the in-place associative permutation technique is introduced for the integer key sorting problem. Given a list S of n elements S[0...n-1], each having an integer key in the range [0,m-1], the technique sorts the elements according to their integer keys in O(n) time using only O(1) additional memory if m<=n. On the other hand, if m>n, it sorts in O(n+m) time in the worst case, O(m) time in the average case (uniformly distributed keys) and O(n) time in the best case, using O(1) extra space.
[ { "created": "Fri, 5 Oct 2012 14:30:37 GMT", "version": "v1" } ]
2012-10-08
[ [ "Cetin", "A. Emre", "" ] ]
The in-place associative integer sorting technique was developed, improved and specialized for distinct integers. The technique is suitable for integer sorting. Hence, given a list S of n integers S[0...n-1], the technique sorts the integers in ascending or descending order. It replaces the bucket sort, distribution counting sort and address calculation sort family of algorithms and requires only a constant amount of additional memory, for storing counters and indices, beside the input list. The technique was inspired by one of the ordinal theories of "serial order in behavior" and is explained by analogy with the three main stages in the formation and retrieval of memory in cognitive neuroscience: (i) practicing, (ii) storing and (iii) retrieval. In this study, the in-place associative permutation technique is introduced for the integer key sorting problem. Given a list S of n elements S[0...n-1], each having an integer key in the range [0,m-1], the technique sorts the elements according to their integer keys in O(n) time using only O(1) additional memory if m<=n. On the other hand, if m>n, it sorts in O(n+m) time in the worst case, O(m) time in the average case (uniformly distributed keys) and O(n) time in the best case, using O(1) extra space.
2403.12965
Xi Chen
Mengting Chen, Xi Chen, Zhonghua Zhai, Chen Ju, Xuewen Hong, Jinsong Lan, Shuai Xiao
Wear-Any-Way: Manipulable Virtual Try-on via Sparse Correspondence Alignment
Project Page: https://mengtingchen.github.io/wear-any-way-page/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a novel framework for virtual try-on, termed Wear-Any-Way. Unlike previous methods, Wear-Any-Way is a customizable solution. Besides generating high-fidelity results, our method allows users to precisely manipulate the wearing style. To achieve this goal, we first construct a strong pipeline for standard virtual try-on, supporting single/multiple garment try-on and model-to-model settings in complicated scenarios. To make it manipulable, we propose sparse correspondence alignment, which involves point-based control to guide the generation at specific locations. With this design, Wear-Any-Way achieves state-of-the-art performance in the standard setting and provides a novel interaction form for customizing the wearing style. For instance, it allows users to drag the sleeve to roll it up, drag the coat to open it, and use clicks to control the style of tuck, etc. Wear-Any-Way enables more liberated and flexible expression of attire, holding profound implications for the fashion industry.
[ { "created": "Tue, 19 Mar 2024 17:59:52 GMT", "version": "v1" } ]
2024-03-20
[ [ "Chen", "Mengting", "" ], [ "Chen", "Xi", "" ], [ "Zhai", "Zhonghua", "" ], [ "Ju", "Chen", "" ], [ "Hong", "Xuewen", "" ], [ "Lan", "Jinsong", "" ], [ "Xiao", "Shuai", "" ] ]
This paper introduces a novel framework for virtual try-on, termed Wear-Any-Way. Unlike previous methods, Wear-Any-Way is a customizable solution. Besides generating high-fidelity results, our method allows users to precisely manipulate the wearing style. To achieve this goal, we first construct a strong pipeline for standard virtual try-on, supporting single/multiple garment try-on and model-to-model settings in complicated scenarios. To make it manipulable, we propose sparse correspondence alignment, which involves point-based control to guide the generation at specific locations. With this design, Wear-Any-Way achieves state-of-the-art performance in the standard setting and provides a novel interaction form for customizing the wearing style. For instance, it allows users to drag the sleeve to roll it up, drag the coat to open it, and use clicks to control the style of tuck, etc. Wear-Any-Way enables more liberated and flexible expression of attire, holding profound implications for the fashion industry.
1701.08474
Prasanna Kansakar
Arslan Munir, Prasanna Kansakar, Samee U. Khan
IFCIoT: Integrated Fog Cloud IoT Architectural Paradigm for Future Internet of Things
9 pages, 3 figures, accepted for publication in IEEE Consumer Electronics Magazine, July 2017 issue
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel integrated fog cloud IoT (IFCIoT) architectural paradigm that promises increased performance, energy efficiency, reduced latency, quicker response time, scalability, and better localized accuracy for future IoT applications. The fog nodes (e.g., edge servers, smart routers, base stations) receive computation offloading requests and sensed data from various IoT devices. To enhance the performance, energy efficiency, and real-time responsiveness of applications, we propose a reconfigurable and layered fog node (edge server) architecture that analyzes the applications' characteristics and reconfigures the architectural resources to better meet peak workload demands. The layers of the proposed fog node architecture include the application layer, analytics layer, virtualization layer, reconfiguration layer, and hardware layer. The layered architecture facilitates abstraction and implementation for the fog computing paradigm, which is distributed in nature and in which multiple vendors (e.g., applications, services, data and content providers) are involved. We also elaborate on potential applications of the IFCIoT architecture, such as smart cities, intelligent transportation systems, localized weather maps and environmental monitoring, and real-time agricultural data analytics and control.
[ { "created": "Mon, 30 Jan 2017 03:50:54 GMT", "version": "v1" } ]
2017-01-31
[ [ "Munir", "Arslan", "" ], [ "Kansakar", "Prasanna", "" ], [ "Khan", "Samee U.", "" ] ]
We propose a novel integrated fog cloud IoT (IFCIoT) architectural paradigm that promises increased performance, energy efficiency, reduced latency, quicker response time, scalability, and better localized accuracy for future IoT applications. The fog nodes (e.g., edge servers, smart routers, base stations) receive computation offloading requests and sensed data from various IoT devices. To enhance the performance, energy efficiency, and real-time responsiveness of applications, we propose a reconfigurable and layered fog node (edge server) architecture that analyzes the applications' characteristics and reconfigures the architectural resources to better meet peak workload demands. The layers of the proposed fog node architecture include the application layer, analytics layer, virtualization layer, reconfiguration layer, and hardware layer. The layered architecture facilitates abstraction and implementation for the fog computing paradigm, which is distributed in nature and in which multiple vendors (e.g., applications, services, data and content providers) are involved. We also elaborate on potential applications of the IFCIoT architecture, such as smart cities, intelligent transportation systems, localized weather maps and environmental monitoring, and real-time agricultural data analytics and control.
2202.08827
Mislav Balunovic
Mislav Balunovi\'c, Dimitar I. Dimitrov, Nikola Jovanovi\'c, Martin Vechev
LAMP: Extracting Text from Gradients with Language Model Priors
null
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
Recent work shows that sensitive user data can be reconstructed from gradient updates, breaking the key privacy promise of federated learning. While success was demonstrated primarily on image data, these methods do not directly transfer to other domains such as text. In this work, we propose LAMP, a novel attack tailored to textual data, that successfully reconstructs original text from gradients. Our attack is based on two key insights: (i) modeling prior text probability with an auxiliary language model, guiding the search towards more natural text, and (ii) alternating continuous and discrete optimization, which minimizes reconstruction loss on embeddings, while avoiding local minima by applying discrete text transformations. Our experiments demonstrate that LAMP is significantly more effective than prior work: it reconstructs 5x more bigrams and 23% longer subsequences on average. Moreover, we are the first to recover inputs from batch sizes larger than 1 for textual models. These findings indicate that gradient updates of models operating on textual data leak more information than previously thought.
[ { "created": "Thu, 17 Feb 2022 18:49:25 GMT", "version": "v1" }, { "created": "Wed, 19 Oct 2022 16:00:23 GMT", "version": "v2" } ]
2022-10-20
[ [ "Balunović", "Mislav", "" ], [ "Dimitrov", "Dimitar I.", "" ], [ "Jovanović", "Nikola", "" ], [ "Vechev", "Martin", "" ] ]
Recent work shows that sensitive user data can be reconstructed from gradient updates, breaking the key privacy promise of federated learning. While success was demonstrated primarily on image data, these methods do not directly transfer to other domains such as text. In this work, we propose LAMP, a novel attack tailored to textual data, that successfully reconstructs original text from gradients. Our attack is based on two key insights: (i) modeling prior text probability with an auxiliary language model, guiding the search towards more natural text, and (ii) alternating continuous and discrete optimization, which minimizes reconstruction loss on embeddings, while avoiding local minima by applying discrete text transformations. Our experiments demonstrate that LAMP is significantly more effective than prior work: it reconstructs 5x more bigrams and 23% longer subsequences on average. Moreover, we are the first to recover inputs from batch sizes larger than 1 for textual models. These findings indicate that gradient updates of models operating on textual data leak more information than previously thought.
2109.13070
Zhengyuan Liu
Zhengyuan Liu, Nancy F. Chen
Controllable Neural Dialogue Summarization with Personal Named Entity Planning
EMNLP 2021 Main Conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a controllable neural generation framework that can flexibly guide dialogue summarization with personal named entity planning. The conditional sequences are modulated to decide what types of information or what perspective to focus on when forming summaries to tackle the under-constrained problem in summarization tasks. This framework supports two types of use cases: (1) Comprehensive Perspective, which is a general-purpose case with no user-preference specified, considering summary points from all conversational interlocutors and all mentioned persons; (2) Focus Perspective, positioning the summary based on a user-specified personal named entity, which could be one of the interlocutors or one of the persons mentioned in the conversation. During training, we exploit occurrence planning of personal named entities and coreference information to improve temporal coherence and to minimize hallucination in neural generation. Experimental results show that our proposed framework generates fluent and factually consistent summaries under various planning controls using both objective metrics and human evaluations.
[ { "created": "Mon, 27 Sep 2021 14:19:32 GMT", "version": "v1" } ]
2021-09-28
[ [ "Liu", "Zhengyuan", "" ], [ "Chen", "Nancy F.", "" ] ]
In this paper, we propose a controllable neural generation framework that can flexibly guide dialogue summarization with personal named entity planning. The conditional sequences are modulated to decide what types of information or what perspective to focus on when forming summaries to tackle the under-constrained problem in summarization tasks. This framework supports two types of use cases: (1) Comprehensive Perspective, which is a general-purpose case with no user-preference specified, considering summary points from all conversational interlocutors and all mentioned persons; (2) Focus Perspective, positioning the summary based on a user-specified personal named entity, which could be one of the interlocutors or one of the persons mentioned in the conversation. During training, we exploit occurrence planning of personal named entities and coreference information to improve temporal coherence and to minimize hallucination in neural generation. Experimental results show that our proposed framework generates fluent and factually consistent summaries under various planning controls using both objective metrics and human evaluations.
1509.00584
Norbert B\'atfai Ph.D.
Norbert B\'atfai
Turing's Imitation Game has been Improved
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using the recently introduced universal computing model called the orchestrated machine, which represents computations in a dissipative environment, we consider a new kind of interpretation of Turing's Imitation Game. In addition, we raise the question of whether intelligence may show fractal properties. Then we sketch a vision of what robotic cars are going to do in the future. Finally, we give the specification of an artificial life game based on the concept of orchestrated machines. The purpose of this paper is to start the search for possible relationships between these different topics.
[ { "created": "Wed, 2 Sep 2015 07:18:20 GMT", "version": "v1" } ]
2015-09-03
[ [ "Bátfai", "Norbert", "" ] ]
Using the recently introduced universal computing model called the orchestrated machine, which represents computations in a dissipative environment, we consider a new kind of interpretation of Turing's Imitation Game. In addition, we raise the question of whether intelligence may show fractal properties. Then we sketch a vision of what robotic cars are going to do in the future. Finally, we give the specification of an artificial life game based on the concept of orchestrated machines. The purpose of this paper is to start the search for possible relationships between these different topics.
1404.6196
Naohi Eguchi
Naohi Eguchi
Proving Termination of Unfolding Graph Rewriting for General Safe Recursion
Technical report
null
null
null
cs.LO cs.CC math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a new termination proof and complexity analysis of unfolding graph rewriting which is a specific kind of infinite graph rewriting expressing the general form of safe recursion. We introduce a termination order over sequences of terms together with an interpretation of term graphs into sequences of terms. Unfolding graph rewrite rules expressing general safe recursion can be successfully embedded into the termination order by the interpretation, yielding the polynomial runtime complexity. Moreover, generalising the definition of unfolding graph rewrite rules for general safe recursion, we propose a new criterion for the polynomial runtime complexity of infinite GRSs and for the polynomial size of normal forms in infinite GRSs.
[ { "created": "Thu, 24 Apr 2014 17:44:57 GMT", "version": "v1" }, { "created": "Wed, 30 Apr 2014 08:03:50 GMT", "version": "v2" }, { "created": "Wed, 28 May 2014 20:35:58 GMT", "version": "v3" }, { "created": "Fri, 20 Jun 2014 16:06:14 GMT", "version": "v4" } ]
2014-06-23
[ [ "Eguchi", "Naohi", "" ] ]
In this paper we present a new termination proof and complexity analysis of unfolding graph rewriting which is a specific kind of infinite graph rewriting expressing the general form of safe recursion. We introduce a termination order over sequences of terms together with an interpretation of term graphs into sequences of terms. Unfolding graph rewrite rules expressing general safe recursion can be successfully embedded into the termination order by the interpretation, yielding the polynomial runtime complexity. Moreover, generalising the definition of unfolding graph rewrite rules for general safe recursion, we propose a new criterion for the polynomial runtime complexity of infinite GRSs and for the polynomial size of normal forms in infinite GRSs.
1112.1045
Xin Li
Xin Li
Non-Malleable Extractors, Two-Source Extractors and Privacy Amplification
null
null
null
null
cs.CR cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dodis and Wichs introduced the notion of a non-malleable extractor to study the problem of privacy amplification with an active adversary. A non-malleable extractor is a much stronger version of a strong extractor. Previously, there were only two known constructions of non-malleable extractors. Both constructions work only for (n, k)-sources with k>n/2. Interestingly, both constructions are also two-source extractors. In this paper, we present a strong connection between non-malleable extractors and two-source extractors. The first part of the connection shows that non-malleable extractors can be used to construct two-source extractors. With appropriate parameters, the resulting two-source extractor beats the best known construction of two-source extractors. This partially explains why previous constructions of non-malleable extractors only work for sources with entropy rate >1/2, and why explicit non-malleable extractors for small min-entropy may be hard to obtain. The second part of the connection shows that certain two-source extractors can be used to construct non-malleable extractors. Using this connection, we obtain the first construction of non-malleable extractors for k < n/2. Specifically, we give an unconditional construction for min-entropy k=(1/2-\delta)n for some constant \delta>0, and a conditional (semi-explicit) construction that can potentially achieve k=\alpha n for any constant \alpha>0. Finally, despite the lack of explicit non-malleable extractors for arbitrary linear entropy, we give the first 2-round privacy amplification protocol with asymptotically optimal entropy loss and communication complexity for (n, k)-sources with k=\alpha n for any constant \alpha>0. This dramatically improves previous results and answers an open problem in \cite{DLWZ11}.
[ { "created": "Mon, 5 Dec 2011 20:07:10 GMT", "version": "v1" }, { "created": "Mon, 9 Apr 2012 06:21:32 GMT", "version": "v2" } ]
2015-03-19
[ [ "Li", "Xin", "" ] ]
Dodis and Wichs introduced the notion of a non-malleable extractor to study the problem of privacy amplification with an active adversary. A non-malleable extractor is a much stronger version of a strong extractor. Previously, there were only two known constructions of non-malleable extractors. Both constructions work only for (n, k)-sources with k>n/2. Interestingly, both constructions are also two-source extractors. In this paper, we present a strong connection between non-malleable extractors and two-source extractors. The first part of the connection shows that non-malleable extractors can be used to construct two-source extractors. With appropriate parameters, the resulting two-source extractor beats the best known construction of two-source extractors. This partially explains why previous constructions of non-malleable extractors only work for sources with entropy rate >1/2, and why explicit non-malleable extractors for small min-entropy may be hard to obtain. The second part of the connection shows that certain two-source extractors can be used to construct non-malleable extractors. Using this connection, we obtain the first construction of non-malleable extractors for k < n/2. Specifically, we give an unconditional construction for min-entropy k=(1/2-\delta)n for some constant \delta>0, and a conditional (semi-explicit) construction that can potentially achieve k=\alpha n for any constant \alpha>0. Finally, despite the lack of explicit non-malleable extractors for arbitrary linear entropy, we give the first 2-round privacy amplification protocol with asymptotically optimal entropy loss and communication complexity for (n, k)-sources with k=\alpha n for any constant \alpha>0. This dramatically improves previous results and answers an open problem in \cite{DLWZ11}.
1003.5517
Lingjie Duan
Lingjie Duan, Jianwei Huang, and Biying Shou
Competition with Dynamic Spectrum Leasing
A shorter version appears in IEEE DySPAN 2010. This version has been submitted to IEEE/ACM Transactions on Networking.
null
10.1109/DYSPAN.2010.5457903
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a comprehensive analytical study of two competitive cognitive operators' spectrum leasing and pricing strategies, taking into account operators' heterogeneity in leasing costs and users' heterogeneity in transmission power and channel conditions. We model the interactions between operators and users as a three-stage dynamic game, where operators make simultaneous spectrum leasing and pricing decisions in Stages I and II, and users make purchase decisions in Stage III. Using backward induction, we are able to completely characterize the game's equilibria. We show that both operators make the equilibrium leasing and pricing decisions based on simple threshold policies. Moreover, two operators always choose the same equilibrium price despite their difference in leasing costs. Each user receives the same signal-to-noise-ratio (SNR) at the equilibrium, and the obtained payoff is linear in its transmission power and channel gain. We also compare the duopoly equilibrium with the coordinated case where two operators cooperate to maximize their total profit. We show that the maximum loss of total profit due to operators' competition is no larger than 25%. The users, however, always benefit from operators' competition in terms of their payoffs. We show that most of these insights are robust in the general SNR regime.
[ { "created": "Mon, 29 Mar 2010 12:09:04 GMT", "version": "v1" } ]
2016-11-17
[ [ "Duan", "Lingjie", "" ], [ "Huang", "Jianwei", "" ], [ "Shou", "Biying", "" ] ]
This paper presents a comprehensive analytical study of two competitive cognitive operators' spectrum leasing and pricing strategies, taking into account operators' heterogeneity in leasing costs and users' heterogeneity in transmission power and channel conditions. We model the interactions between operators and users as a three-stage dynamic game, where operators make simultaneous spectrum leasing and pricing decisions in Stages I and II, and users make purchase decisions in Stage III. Using backward induction, we are able to completely characterize the game's equilibria. We show that both operators make the equilibrium leasing and pricing decisions based on simple threshold policies. Moreover, two operators always choose the same equilibrium price despite their difference in leasing costs. Each user receives the same signal-to-noise-ratio (SNR) at the equilibrium, and the obtained payoff is linear in its transmission power and channel gain. We also compare the duopoly equilibrium with the coordinated case where two operators cooperate to maximize their total profit. We show that the maximum loss of total profit due to operators' competition is no larger than 25%. The users, however, always benefit from operators' competition in terms of their payoffs. We show that most of these insights are robust in the general SNR regime.
2009.05737
Zuchao Li
Zuchao Li, Hai Zhao, Shexia He, Jiaxun Cai
Syntax Role for Neural Semantic Role Labeling
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence. Previous studies with traditional models have shown that syntactic information can make remarkable contributions to SRL performance; however, the necessity of syntactic information was challenged by several recent neural SRL studies that demonstrate impressive performance without syntactic backbones and suggest that syntactic information becomes much less important for neural semantic role labeling, especially when paired with recent deep neural networks and large-scale pre-trained language models. Despite this notion, the neural SRL field still lacks a systematic and full investigation of the relevance of syntactic information in SRL, for dependency SRL in both monolingual and multilingual settings. This paper intends to quantify the importance of syntactic information for neural SRL in the deep learning framework. We introduce three typical SRL frameworks (baselines): sequence-based, tree-based, and graph-based, which are accompanied by two categories of exploiting syntactic information: syntax pruning-based and syntax feature-based. Experiments are conducted on the CoNLL-2005, 2009, and 2012 benchmarks for all available languages, and the results show that neural SRL models can still benefit from syntactic information under certain conditions. Furthermore, we show the quantitative significance of syntax for neural SRL models together with a thorough empirical survey using existing models.
[ { "created": "Sat, 12 Sep 2020 07:01:12 GMT", "version": "v1" } ]
2020-09-15
[ [ "Li", "Zuchao", "" ], [ "Zhao", "Hai", "" ], [ "He", "Shexia", "" ], [ "Cai", "Jiaxun", "" ] ]
Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence. Previous studies with traditional models have shown that syntactic information can make remarkable contributions to SRL performance; however, the necessity of syntactic information was challenged by several recent neural SRL studies that demonstrate impressive performance without syntactic backbones and suggest that syntactic information becomes much less important for neural semantic role labeling, especially when paired with recent deep neural networks and large-scale pre-trained language models. Despite this notion, the neural SRL field still lacks a systematic and full investigation of the relevance of syntactic information in SRL, for dependency SRL in both monolingual and multilingual settings. This paper intends to quantify the importance of syntactic information for neural SRL in the deep learning framework. We introduce three typical SRL frameworks (baselines): sequence-based, tree-based, and graph-based, which are accompanied by two categories of exploiting syntactic information: syntax pruning-based and syntax feature-based. Experiments are conducted on the CoNLL-2005, 2009, and 2012 benchmarks for all available languages, and the results show that neural SRL models can still benefit from syntactic information under certain conditions. Furthermore, we show the quantitative significance of syntax for neural SRL models together with a thorough empirical survey using existing models.
2202.07991
Vikram Gupta
Vikram Gupta, Rini Sharon, Ramit Sawhney, Debdoot Mukherjee
ADIMA: Abuse Detection In Multilingual Audio
null
null
null
null
cs.SD cs.CL eess.AS
http://creativecommons.org/licenses/by/4.0/
Abusive content detection in spoken text can be addressed by performing Automatic Speech Recognition (ASR) and leveraging advancements in natural language processing. However, ASR models introduce latency and often perform sub-optimally on profane words, as these are underrepresented in training corpora and are not spoken clearly or completely. Exploration of this problem entirely in the audio domain has largely been limited by the lack of audio datasets. Motivated by these challenges, we propose ADIMA, a novel, linguistically diverse, ethically sourced, expert-annotated and well-balanced multilingual profanity detection audio dataset comprising 11,775 audio samples in 10 Indic languages, spanning 65 hours and spoken by 6,446 unique users. Through quantitative experiments across monolingual and cross-lingual zero-shot settings, we take the first step toward democratizing audio-based content moderation in Indic languages and put forth our dataset to pave the way for future work.
[ { "created": "Wed, 16 Feb 2022 11:09:50 GMT", "version": "v1" } ]
2022-02-17
[ [ "Gupta", "Vikram", "" ], [ "Sharon", "Rini", "" ], [ "Sawhney", "Ramit", "" ], [ "Mukherjee", "Debdoot", "" ] ]
Abusive content detection in spoken text can be addressed by performing Automatic Speech Recognition (ASR) and leveraging advancements in natural language processing. However, ASR models introduce latency and often perform sub-optimally on profane words, as these are underrepresented in training corpora and are not spoken clearly or completely. Exploration of this problem entirely in the audio domain has largely been limited by the lack of audio datasets. Motivated by these challenges, we propose ADIMA, a novel, linguistically diverse, ethically sourced, expert-annotated and well-balanced multilingual profanity detection audio dataset comprising 11,775 audio samples in 10 Indic languages, spanning 65 hours and spoken by 6,446 unique users. Through quantitative experiments across monolingual and cross-lingual zero-shot settings, we take the first step toward democratizing audio-based content moderation in Indic languages and put forth our dataset to pave the way for future work.
1005.4552
Josef Urban
Josef Urban, Jesse Alama, Piotr Rudnicki, and Herman Geuvers
A Wiki for Mizar: Motivation, Considerations, and Initial Prototype
To appear in The 9th International Conference on Mathematical Knowledge Management: MKM 2010
Intelligent Computer Mathematics 2010, LNCS 6167, pp. 455-469
10.1007/978-3-642-14128-7_38
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Formal mathematics has so far not taken full advantage of ideas from collaborative tools such as wikis and distributed version control systems (DVCS). We argue that the field could profit from such tools, serving both newcomers and experts alike. We describe a preliminary system for such collaborative development based on the Git DVCS. We focus, initially, on the Mizar system and its library of formalized mathematics.
[ { "created": "Tue, 25 May 2010 12:42:29 GMT", "version": "v1" } ]
2011-07-27
[ [ "Urban", "Josef", "" ], [ "Alama", "Jesse", "" ], [ "Rudnicki", "Piotr", "" ], [ "Geuvers", "Herman", "" ] ]
Formal mathematics has so far not taken full advantage of ideas from collaborative tools such as wikis and distributed version control systems (DVCS). We argue that the field could profit from such tools, serving both newcomers and experts alike. We describe a preliminary system for such collaborative development based on the Git DVCS. We focus, initially, on the Mizar system and its library of formalized mathematics.
2404.11358
Jeongtaek Oh
Jeongtaek Oh, Jaeyoung Chung, Dongwoo Lee, Kyoung Mu Lee
DeblurGS: Gaussian Splatting for Camera Motion Blur
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Although significant progress has been made in reconstructing sharp 3D scenes from motion-blurred images, the transition to real-world applications remains challenging. The primary obstacle stems from severe blur, which leads to inaccuracies in the acquisition of initial camera poses through Structure-from-Motion, a critical aspect often overlooked by previous approaches. To address this challenge, we propose DeblurGS, a method to optimize sharp 3D Gaussian Splatting from motion-blurred images, even with noisy camera pose initialization. We restore a fine-grained sharp scene by leveraging the remarkable reconstruction capability of 3D Gaussian Splatting. Our approach estimates the 6-degree-of-freedom camera motion for each blurry observation and synthesizes corresponding blurry renderings for the optimization process. Furthermore, we propose a Gaussian Densification Annealing strategy to prevent the generation of inaccurate Gaussians at erroneous locations during the early training stages, when the camera motion is still imprecise. Comprehensive experiments demonstrate that DeblurGS achieves state-of-the-art performance in deblurring and novel view synthesis on real-world and synthetic benchmark datasets, as well as on field-captured blurry smartphone videos.
[ { "created": "Wed, 17 Apr 2024 13:14:52 GMT", "version": "v1" }, { "created": "Thu, 18 Apr 2024 03:18:36 GMT", "version": "v2" } ]
2024-04-19
[ [ "Oh", "Jeongtaek", "" ], [ "Chung", "Jaeyoung", "" ], [ "Lee", "Dongwoo", "" ], [ "Lee", "Kyoung Mu", "" ] ]
Although significant progress has been made in reconstructing sharp 3D scenes from motion-blurred images, a transition to real-world applications remains challenging. The primary obstacle stems from the severe blur which leads to inaccuracies in the acquisition of initial camera poses through Structure-from-Motion, a critical aspect often overlooked by previous approaches. To address this challenge, we propose DeblurGS, a method to optimize sharp 3D Gaussian Splatting from motion-blurred images, even with the noisy camera pose initialization. We restore a fine-grained sharp scene by leveraging the remarkable reconstruction capability of 3D Gaussian Splatting. Our approach estimates the 6-Degree-of-Freedom camera motion for each blurry observation and synthesizes corresponding blurry renderings for the optimization process. Furthermore, we propose Gaussian Densification Annealing strategy to prevent the generation of inaccurate Gaussians at erroneous locations during the early training stages when camera motion is still imprecise. Comprehensive experiments demonstrate that our DeblurGS achieves state-of-the-art performance in deblurring and novel view synthesis for real-world and synthetic benchmark datasets, as well as field-captured blurry smartphone videos.
2104.07663
Cristina Maria Pacurar
Cristina Maria Pacurar, Ruxandra-Gabriela Albu, Victor-Dan Pacurar
Tourist route optimization in the context of Covid-19 pandemic
null
Sustainability. 2021; 13(10):5492
10.3390/su13105492
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper presents an innovative method for tourist route planning inside a destination. The necessity of reorganizing the tourist routes within a destination comes as an immediate response to the Covid-19 crisis. The implementation of the method inside tourist destinations can be an important advantage in transforming a destination into a safer destination in times of Covid-19 and post-Covid-19. The existing trend of shortening the tourist stay length has been accelerated while the epidemic became a pandemic. Moreover, the wariness for future pandemics has brought to the spotlight the issue of overcrowded attractions inside a destination at certain moments. The method proposed in this paper employs a backtracking algorithm, more precisely an adaptation of the travelling salesman problem. The method presented aims to facilitate the navigation inside a destination and to revive certain less-visited sightseeing spots inside a destination while facilitating the social distancing measures imposed by Covid-19.
[ { "created": "Thu, 15 Apr 2021 17:59:56 GMT", "version": "v1" }, { "created": "Wed, 5 May 2021 18:44:57 GMT", "version": "v2" } ]
2021-07-20
[ [ "Pacurar", "Cristina Maria", "" ], [ "Albu", "Ruxandra-Gabriela", "" ], [ "Pacurar", "Victor-Dan", "" ] ]
The paper presents an innovative method for tourist route planning inside a destination. The necessity of reorganizing the tourist routes within a destination comes as an immediate response to the Covid-19 crisis. The implementation of the method inside tourist destinations can be an important advantage in transforming a destination into a safer destination in times of Covid-19 and post-Covid-19. The existing trend of shortening the tourist stay length has been accelerated while the epidemic became a pandemic. Moreover, the wariness for future pandemics has brought to the spotlight the issue of overcrowded attractions inside a destination at certain moments. The method proposed in this paper employs a backtracking algorithm, more precisely an adaptation of the travelling salesman problem. The method presented aims to facilitate the navigation inside a destination and to revive certain less-visited sightseeing spots inside a destination while facilitating the social distancing measures imposed by Covid-19.
2301.10485
Mrudula Balachander
Mrudula Balachander, Emmanuel Filiot, Jean-Fran\c{c}ois Raskin
LTL Reactive Synthesis with a Few Hints
null
null
null
null
cs.GT cs.FL cs.LO cs.SY eess.SY
http://creativecommons.org/publicdomain/zero/1.0/
We study a variant of the problem of synthesizing Mealy machines that enforce LTL specifications against all possible behaviours of the environment including hostile ones. In the variant studied here, the user provides the high level LTL specification {\phi} of the system to design, and a set E of examples of executions that the solution must produce. Our synthesis algorithm works in two phases. First, it generalizes the decisions taken along the examples E using tailored extensions of automata learning algorithms. This phase generalizes the user-provided examples in E while preserving realizability of {\phi}. Second, the algorithm turns the (usually) incomplete Mealy machine obtained by the learning phase into a complete Mealy machine that realizes {\phi}. The examples are used to guide the synthesis procedure. We provide a completeness result that shows that our procedure can learn any Mealy machine M that realizes {\phi} with a small (polynomial) set of examples. We also show that our problem, that generalizes the classical LTL synthesis problem (i.e. when E = {\emptyset}), matches its worst-case complexity. The additional cost of learning from E is even polynomial in the size of E and in the size of a symbolic representation of solutions that realize {\phi}. This symbolic representation is computed by the synthesis algorithm implemented in Acacia-Bonzai when solving the plain LTL synthesis problem. We illustrate the practical interest of our approach on a set of examples.
[ { "created": "Wed, 25 Jan 2023 09:45:06 GMT", "version": "v1" } ]
2023-02-09
[ [ "Balachander", "Mrudula", "" ], [ "Filiot", "Emmanuel", "" ], [ "Raskin", "Jean-François", "" ] ]
We study a variant of the problem of synthesizing Mealy machines that enforce LTL specifications against all possible behaviours of the environment including hostile ones. In the variant studied here, the user provides the high level LTL specification {\phi} of the system to design, and a set E of examples of executions that the solution must produce. Our synthesis algorithm works in two phases. First, it generalizes the decisions taken along the examples E using tailored extensions of automata learning algorithms. This phase generalizes the user-provided examples in E while preserving realizability of {\phi}. Second, the algorithm turns the (usually) incomplete Mealy machine obtained by the learning phase into a complete Mealy machine that realizes {\phi}. The examples are used to guide the synthesis procedure. We provide a completeness result that shows that our procedure can learn any Mealy machine M that realizes {\phi} with a small (polynomial) set of examples. We also show that our problem, that generalizes the classical LTL synthesis problem (i.e. when E = {\emptyset}), matches its worst-case complexity. The additional cost of learning from E is even polynomial in the size of E and in the size of a symbolic representation of solutions that realize {\phi}. This symbolic representation is computed by the synthesis algorithm implemented in Acacia-Bonzai when solving the plain LTL synthesis problem. We illustrate the practical interest of our approach on a set of examples.
2007.05164
Natalie Collina
Natalie Collina and S. Matthew Weinberg
On the (in)-approximability of Bayesian Revenue Maximization for a Combinatorial Buyer
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a revenue-maximizing single seller with $m$ items for sale to a single buyer whose value $v(\cdot)$ for the items is drawn from a known distribution $D$ of support $k$. A series of works by Cai et al. establishes that when each $v(\cdot)$ in the support of $D$ is additive or unit-demand (or $c$-demand), the revenue-optimal auction can be found in $\operatorname{poly}(m,k)$ time. We show that going barely beyond this, even to matroid-based valuations (a proper subset of Gross Substitutes), results in strong hardness of approximation. Specifically, even on instances with $m$ items and $k \leq m$ valuations in the support of $D$, it is not possible to achieve a $1/m^{1-\varepsilon}$-approximation for any $\varepsilon>0$ to the revenue-optimal mechanism for matroid-based valuations in (randomized) poly-time unless NP $\subseteq$ RP (note that a $1/k$-approximation is trivial). Cai et al.'s main technical contribution is a black-box reduction from revenue maximization for valuations in class $\mathcal{V}$ to optimizing the difference between two values in class $\mathcal{V}$. Our main technical contribution is a black-box reduction in the other direction (for a wide class of valuation classes), establishing that their reduction is essentially tight.
[ { "created": "Fri, 10 Jul 2020 04:58:29 GMT", "version": "v1" } ]
2020-07-13
[ [ "Collina", "Natalie", "" ], [ "Weinberg", "S. Matthew", "" ] ]
We consider a revenue-maximizing single seller with $m$ items for sale to a single buyer whose value $v(\cdot)$ for the items is drawn from a known distribution $D$ of support $k$. A series of works by Cai et al. establishes that when each $v(\cdot)$ in the support of $D$ is additive or unit-demand (or $c$-demand), the revenue-optimal auction can be found in $\operatorname{poly}(m,k)$ time. We show that going barely beyond this, even to matroid-based valuations (a proper subset of Gross Substitutes), results in strong hardness of approximation. Specifically, even on instances with $m$ items and $k \leq m$ valuations in the support of $D$, it is not possible to achieve a $1/m^{1-\varepsilon}$-approximation for any $\varepsilon>0$ to the revenue-optimal mechanism for matroid-based valuations in (randomized) poly-time unless NP $\subseteq$ RP (note that a $1/k$-approximation is trivial). Cai et al.'s main technical contribution is a black-box reduction from revenue maximization for valuations in class $\mathcal{V}$ to optimizing the difference between two values in class $\mathcal{V}$. Our main technical contribution is a black-box reduction in the other direction (for a wide class of valuation classes), establishing that their reduction is essentially tight.
1209.5785
Dmitry Truhachev
Dmitri Truhachev and Christian Schlegel
Coupling Data Transmission for Multiple-Access Communications
IEEE Transactions on Information Theory, under revision
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a signaling format where the information to be communicated from one or multiple transmitters to a receiver is modulated via a superposition of independent data streams. Each data stream is formed by error-correction encoding, constellation mapping, replication and permutation of symbols, and application of signature sequences. The relations between the data bits and modulation symbols transmitted over the channel can be represented by a sparse graph. In the case where the modulated data streams are transmitted with time offsets the receiver observes spatial coupling of the individual graphs into a graph chain enabling efficient demodulation/decoding. We prove that a two-stage demodulation/decoding method, in which iterative demodulation based on symbol estimation and interference cancellation is followed by parallel error correction decoding, achieves capacity on the additive white Gaussian noise (AWGN) channel asymptotically. We compare the performance of the two-stage receiver to the receiver which utilizes hard feedback between the error-correction encoders and the iterative demodulator.
[ { "created": "Tue, 25 Sep 2012 22:25:59 GMT", "version": "v1" }, { "created": "Wed, 5 Dec 2012 05:26:03 GMT", "version": "v2" }, { "created": "Sat, 24 Jun 2017 20:47:20 GMT", "version": "v3" }, { "created": "Fri, 15 Jun 2018 13:12:49 GMT", "version": "v4" } ]
2018-06-18
[ [ "Truhachev", "Dmitri", "" ], [ "Schlegel", "Christian", "" ] ]
We consider a signaling format where the information to be communicated from one or multiple transmitters to a receiver is modulated via a superposition of independent data streams. Each data stream is formed by error-correction encoding, constellation mapping, replication and permutation of symbols, and application of signature sequences. The relations between the data bits and modulation symbols transmitted over the channel can be represented by a sparse graph. In the case where the modulated data streams are transmitted with time offsets the receiver observes spatial coupling of the individual graphs into a graph chain enabling efficient demodulation/decoding. We prove that a two-stage demodulation/decoding method, in which iterative demodulation based on symbol estimation and interference cancellation is followed by parallel error correction decoding, achieves capacity on the additive white Gaussian noise (AWGN) channel asymptotically. We compare the performance of the two-stage receiver to the receiver which utilizes hard feedback between the error-correction encoders and the iterative demodulator.
1609.07035
Siddhartha Banerjee
Siddhartha Banerjee, Prasenjit Mitra and Kazunari Sugiyama
Abstractive Meeting Summarization Using Dependency Graph Fusion
WWW '15 Companion Proceedings of the 24th International Conference on World Wide Web, Pages 5-6. arXiv admin note: substantial text overlap with arXiv:1609.07033
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic summarization techniques on meeting conversations developed so far have been primarily extractive, resulting in poor summaries. To improve this, we propose an approach to generate abstractive summaries by fusing important content from several utterances. Any meeting is generally comprised of several discussion topic segments. For each topic segment within a meeting conversation, we aim to generate a one sentence summary from the most important utterances using an integer linear programming-based sentence fusion approach. Experimental results show that our method can generate more informative summaries than the baselines.
[ { "created": "Thu, 22 Sep 2016 15:53:04 GMT", "version": "v1" } ]
2016-09-25
[ [ "Banerjee", "Siddhartha", "" ], [ "Mitra", "Prasenjit", "" ], [ "Sugiyama", "Kazunari", "" ] ]
Automatic summarization techniques on meeting conversations developed so far have been primarily extractive, resulting in poor summaries. To improve this, we propose an approach to generate abstractive summaries by fusing important content from several utterances. Any meeting is generally comprised of several discussion topic segments. For each topic segment within a meeting conversation, we aim to generate a one sentence summary from the most important utterances using an integer linear programming-based sentence fusion approach. Experimental results show that our method can generate more informative summaries than the baselines.
1912.08124
Luca Manneschi
Luca Manneschi, Andrew C. Lin, Eleni Vasilaki
SpaRCe: Improved Learning of Reservoir Computing Systems through Sparse Representations
null
IEEE Transactions on Neural Networks and Learning Systems, 16 August 2021
10.1109/TNNLS.2021.3102378
null
cs.NE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
"Sparse" neural networks, in which relatively few neurons or connections are active, are common in both machine learning and neuroscience. Whereas in machine learning, "sparsity" is related to a penalty term that leads to some connecting weights becoming small or zero, in biological brains, sparsity is often created when high spiking thresholds prevent neuronal activity. Here we introduce sparsity into a reservoir computing network via neuron-specific learnable thresholds of activity, allowing neurons with low thresholds to contribute to decision-making but suppressing information from neurons with high thresholds. This approach, which we term "SpaRCe", optimises the sparsity level of the reservoir without affecting the reservoir dynamics. The read-out weights and the thresholds are learned by an on-line gradient rule that minimises an error function on the outputs of the network. Threshold learning occurs by the balance of two opposing forces: reducing inter-neuronal correlations in the reservoir by deactivating redundant neurons, while increasing the activity of neurons participating in correct decisions. We test SpaRCe on classification problems and find that threshold learning improves performance compared to standard reservoir computing. SpaRCe alleviates the problem of catastrophic forgetting, a problem most evident in standard echo state networks and recurrent neural networks in general, due to increasing the number of task-specialised neurons that are included in the network decisions.
[ { "created": "Wed, 4 Dec 2019 15:05:26 GMT", "version": "v1" }, { "created": "Mon, 20 Apr 2020 17:51:11 GMT", "version": "v2" }, { "created": "Mon, 11 Jan 2021 16:25:07 GMT", "version": "v3" }, { "created": "Sun, 18 Apr 2021 05:26:54 GMT", "version": "v4" } ]
2021-08-19
[ [ "Manneschi", "Luca", "" ], [ "Lin", "Andrew C.", "" ], [ "Vasilaki", "Eleni", "" ] ]
"Sparse" neural networks, in which relatively few neurons or connections are active, are common in both machine learning and neuroscience. Whereas in machine learning, "sparsity" is related to a penalty term that leads to some connecting weights becoming small or zero, in biological brains, sparsity is often created when high spiking thresholds prevent neuronal activity. Here we introduce sparsity into a reservoir computing network via neuron-specific learnable thresholds of activity, allowing neurons with low thresholds to contribute to decision-making but suppressing information from neurons with high thresholds. This approach, which we term "SpaRCe", optimises the sparsity level of the reservoir without affecting the reservoir dynamics. The read-out weights and the thresholds are learned by an on-line gradient rule that minimises an error function on the outputs of the network. Threshold learning occurs by the balance of two opposing forces: reducing inter-neuronal correlations in the reservoir by deactivating redundant neurons, while increasing the activity of neurons participating in correct decisions. We test SpaRCe on classification problems and find that threshold learning improves performance compared to standard reservoir computing. SpaRCe alleviates the problem of catastrophic forgetting, a problem most evident in standard echo state networks and recurrent neural networks in general, due to increasing the number of task-specialised neurons that are included in the network decisions.
1804.03550
Martin Garbade
Martin Garbade, Yueh-Tung Chen, Johann Sawatzky, Juergen Gall
Two Stream 3D Semantic Scene Completion
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inferring the 3D geometry and the semantic meaning of surfaces, which are occluded, is a very challenging task. Recently, a first end-to-end learning approach has been proposed that completes a scene from a single depth image. The approach voxelizes the scene and predicts for each voxel if it is occupied and, if it is occupied, the semantic class label. In this work, we propose a two stream approach that leverages depth information and semantic information, which is inferred from the RGB image, for this task. The approach constructs an incomplete 3D semantic tensor, which uses a compact three-channel encoding for the inferred semantic information, and uses a 3D CNN to infer the complete 3D semantic tensor. In our experimental evaluation, we show that the proposed two stream approach substantially outperforms the state-of-the-art for semantic scene completion.
[ { "created": "Tue, 10 Apr 2018 14:10:26 GMT", "version": "v1" }, { "created": "Mon, 16 Jul 2018 16:37:53 GMT", "version": "v2" }, { "created": "Wed, 10 Apr 2019 14:35:56 GMT", "version": "v3" }, { "created": "Wed, 15 May 2019 14:36:17 GMT", "version": "v4" } ]
2019-05-16
[ [ "Garbade", "Martin", "" ], [ "Chen", "Yueh-Tung", "" ], [ "Sawatzky", "Johann", "" ], [ "Gall", "Juergen", "" ] ]
Inferring the 3D geometry and the semantic meaning of surfaces, which are occluded, is a very challenging task. Recently, a first end-to-end learning approach has been proposed that completes a scene from a single depth image. The approach voxelizes the scene and predicts for each voxel if it is occupied and, if it is occupied, the semantic class label. In this work, we propose a two stream approach that leverages depth information and semantic information, which is inferred from the RGB image, for this task. The approach constructs an incomplete 3D semantic tensor, which uses a compact three-channel encoding for the inferred semantic information, and uses a 3D CNN to infer the complete 3D semantic tensor. In our experimental evaluation, we show that the proposed two stream approach substantially outperforms the state-of-the-art for semantic scene completion.
2102.08012
Jason Liang
Jason Liang, Keith Kelly
Training Stacked Denoising Autoencoders for Representation Learning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We implement stacked denoising autoencoders, a class of neural networks that are capable of learning powerful representations of high dimensional data. We describe stochastic gradient descent for unsupervised training of autoencoders, as well as a novel genetic algorithm based approach that makes use of gradient information. We analyze the performance of both optimization algorithms and also the representation learning ability of the autoencoder when it is trained on standard image classification datasets.
[ { "created": "Tue, 16 Feb 2021 08:18:22 GMT", "version": "v1" } ]
2021-02-17
[ [ "Liang", "Jason", "" ], [ "Kelly", "Keith", "" ] ]
We implement stacked denoising autoencoders, a class of neural networks that are capable of learning powerful representations of high dimensional data. We describe stochastic gradient descent for unsupervised training of autoencoders, as well as a novel genetic algorithm based approach that makes use of gradient information. We analyze the performance of both optimization algorithms and also the representation learning ability of the autoencoder when it is trained on standard image classification datasets.
2305.05348
Jiacheng Yao
Jiacheng Yao, Jindan Xu, Wei Xu, Derrick Wing Kwan Ng, Chau Yuen, Xiaohu You
Robust Beamforming Design for RIS-aided Cell-free Systems with CSI Uncertainties and Capacity-limited Backhaul
Accepted by IEEE TCOM
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the robust beamforming design in a reconfigurable intelligent surface (RIS)-aided cell-free (CF) system considering the channel state information (CSI) uncertainties of both the direct channels and cascaded channels at the transmitter with capacity-limited backhaul. We jointly optimize the precoding at the access points (APs) and the phase shifts at multiple RISs to maximize the worst-case sum rate of the CF system subject to the constraints of maximum transmit power of APs, unit-modulus phase shifts, limited backhaul capacity, and bounded CSI errors. By applying a series of transformations, the non-smoothness and semi-infinite constraints are tackled in a low-complexity manner that facilitates the design of an alternating optimization (AO)-based iterative algorithm. The proposed algorithm divides the considered problem into two subproblems. For the RIS phase shifts optimization subproblem, we exploit the penalty convex-concave procedure (P-CCP) to obtain a stationary solution and achieve effective initialization. For the precoding optimization subproblem, successive convex approximation (SCA) is adopted with a convergence guarantee to a Karush-Kuhn-Tucker (KKT) solution. Numerical results demonstrate the effectiveness of the proposed robust beamforming design, which achieves superior performance with low complexity. Moreover, the importance of RIS phase shift optimization for robustness and the advantages of distributed RISs in the CF system are further highlighted.
[ { "created": "Tue, 9 May 2023 11:20:08 GMT", "version": "v1" } ]
2023-05-10
[ [ "Yao", "Jiacheng", "" ], [ "Xu", "Jindan", "" ], [ "Xu", "Wei", "" ], [ "Ng", "Derrick Wing Kwan", "" ], [ "Yuen", "Chau", "" ], [ "You", "Xiaohu", "" ] ]
In this paper, we consider the robust beamforming design in a reconfigurable intelligent surface (RIS)-aided cell-free (CF) system considering the channel state information (CSI) uncertainties of both the direct channels and cascaded channels at the transmitter with capacity-limited backhaul. We jointly optimize the precoding at the access points (APs) and the phase shifts at multiple RISs to maximize the worst-case sum rate of the CF system subject to the constraints of maximum transmit power of APs, unit-modulus phase shifts, limited backhaul capacity, and bounded CSI errors. By applying a series of transformations, the non-smoothness and semi-infinite constraints are tackled in a low-complexity manner that facilitates the design of an alternating optimization (AO)-based iterative algorithm. The proposed algorithm divides the considered problem into two subproblems. For the RIS phase shifts optimization subproblem, we exploit the penalty convex-concave procedure (P-CCP) to obtain a stationary solution and achieve effective initialization. For the precoding optimization subproblem, successive convex approximation (SCA) is adopted with a convergence guarantee to a Karush-Kuhn-Tucker (KKT) solution. Numerical results demonstrate the effectiveness of the proposed robust beamforming design, which achieves superior performance with low complexity. Moreover, the importance of RIS phase shift optimization for robustness and the advantages of distributed RISs in the CF system are further highlighted.
2402.10846
Arash Mohammadi
Kawa Atapour, S. Jamal Seyedmohammadi, Jamshid Abouei, Arash Mohammadi, Konstantinos N. Plataniotis
FedD2S: Personalized Data-Free Federated Knowledge Distillation
null
null
null
null
cs.LG cs.AI cs.DC eess.IV
http://creativecommons.org/licenses/by/4.0/
This paper addresses the challenge of mitigating data heterogeneity among clients within a Federated Learning (FL) framework. The model-drift issue, arising from the noniid nature of client data, often results in suboptimal personalization of a global model compared to locally trained models for each client. To tackle this challenge, we propose a novel approach named FedD2S for Personalized Federated Learning (pFL), leveraging knowledge distillation. FedD2S incorporates a deep-to-shallow layer-dropping mechanism in the data-free knowledge distillation process to enhance local model personalization. Through extensive simulations on diverse image datasets-FEMNIST, CIFAR10, CINIC0, and CIFAR100-we compare FedD2S with state-of-the-art FL baselines. The proposed approach demonstrates superior performance, characterized by accelerated convergence and improved fairness among clients. The introduced layer-dropping technique effectively captures personalized knowledge, resulting in enhanced performance compared to alternative FL models. Moreover, we investigate the impact of key hyperparameters, such as the participation ratio and layer-dropping rate, providing valuable insights into the optimal configuration for FedD2S. The findings demonstrate the efficacy of adaptive layer-dropping in the knowledge distillation process to achieve enhanced personalization and performance across diverse datasets and tasks.
[ { "created": "Fri, 16 Feb 2024 17:36:51 GMT", "version": "v1" } ]
2024-02-19
[ [ "Atapour", "Kawa", "" ], [ "Seyedmohammadi", "S. Jamal", "" ], [ "Abouei", "Jamshid", "" ], [ "Mohammadi", "Arash", "" ], [ "Plataniotis", "Konstantinos N.", "" ] ]
This paper addresses the challenge of mitigating data heterogeneity among clients within a Federated Learning (FL) framework. The model-drift issue, arising from the noniid nature of client data, often results in suboptimal personalization of a global model compared to locally trained models for each client. To tackle this challenge, we propose a novel approach named FedD2S for Personalized Federated Learning (pFL), leveraging knowledge distillation. FedD2S incorporates a deep-to-shallow layer-dropping mechanism in the data-free knowledge distillation process to enhance local model personalization. Through extensive simulations on diverse image datasets-FEMNIST, CIFAR10, CINIC0, and CIFAR100-we compare FedD2S with state-of-the-art FL baselines. The proposed approach demonstrates superior performance, characterized by accelerated convergence and improved fairness among clients. The introduced layer-dropping technique effectively captures personalized knowledge, resulting in enhanced performance compared to alternative FL models. Moreover, we investigate the impact of key hyperparameters, such as the participation ratio and layer-dropping rate, providing valuable insights into the optimal configuration for FedD2S. The findings demonstrate the efficacy of adaptive layer-dropping in the knowledge distillation process to achieve enhanced personalization and performance across diverse datasets and tasks.
1906.11156
Jiezhong Qiu
Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Chi Wang, Kuansan Wang, and Jie Tang
NetSMF: Large-Scale Network Embedding as Sparse Matrix Factorization
11 pages, in Proceedings of the Web Conference 2019 (WWW 19)
null
10.1145/3308558.3313446
null
cs.SI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of large-scale network embedding, which aims to learn latent representations for network mining applications. Previous research shows that 1) popular network embedding benchmarks, such as DeepWalk, are in essence implicitly factorizing a matrix with a closed form, and 2) the explicit factorization of such matrix generates more powerful embeddings than existing methods. However, directly constructing and factorizing this matrix---which is dense---is prohibitively expensive in terms of both time and space, making it not scalable for large networks. In this work, we present the algorithm of large-scale network embedding as sparse matrix factorization (NetSMF). NetSMF leverages theories from spectral sparsification to efficiently sparsify the aforementioned dense matrix, enabling significantly improved efficiency in embedding learning. The sparsified matrix is spectrally close to the original dense one with a theoretically bounded approximation error, which helps maintain the representation power of the learned embeddings. We conduct experiments on networks of various scales and types. Results show that among both popular benchmarks and factorization based methods, NetSMF is the only method that achieves both high efficiency and effectiveness. We show that NetSMF requires only 24 hours to generate effective embeddings for a large-scale academic collaboration network with tens of millions of nodes, while it would cost DeepWalk months and is computationally infeasible for the dense matrix factorization solution. The source code of NetSMF is publicly available (https://github.com/xptree/NetSMF).
[ { "created": "Wed, 26 Jun 2019 15:17:29 GMT", "version": "v1" } ]
2019-06-27
[ [ "Qiu", "Jiezhong", "" ], [ "Dong", "Yuxiao", "" ], [ "Ma", "Hao", "" ], [ "Li", "Jian", "" ], [ "Wang", "Chi", "" ], [ "Wang", "Kuansan", "" ], [ "Tang", "Jie", "" ] ]
We study the problem of large-scale network embedding, which aims to learn latent representations for network mining applications. Previous research shows that 1) popular network embedding benchmarks, such as DeepWalk, are in essence implicitly factorizing a matrix with a closed form, and 2) the explicit factorization of such matrix generates more powerful embeddings than existing methods. However, directly constructing and factorizing this matrix---which is dense---is prohibitively expensive in terms of both time and space, making it not scalable for large networks. In this work, we present the algorithm of large-scale network embedding as sparse matrix factorization (NetSMF). NetSMF leverages theories from spectral sparsification to efficiently sparsify the aforementioned dense matrix, enabling significantly improved efficiency in embedding learning. The sparsified matrix is spectrally close to the original dense one with a theoretically bounded approximation error, which helps maintain the representation power of the learned embeddings. We conduct experiments on networks of various scales and types. Results show that among both popular benchmarks and factorization based methods, NetSMF is the only method that achieves both high efficiency and effectiveness. We show that NetSMF requires only 24 hours to generate effective embeddings for a large-scale academic collaboration network with tens of millions of nodes, while it would cost DeepWalk months and is computationally infeasible for the dense matrix factorization solution. The source code of NetSMF is publicly available (https://github.com/xptree/NetSMF).
2308.14030
Chen Shen
Chen Shen and Jun Zhang and Xinggong Liang and Zeyi Hao and Kehan Li and Fan Wang and Zhenyuan Wang and Chunfeng Lian
Forensic Histopathological Recognition via a Context-Aware MIL Network Powered by Self-Supervised Contrastive Learning
11 pages, 2 figures
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Forensic pathology is critical in analyzing the manner and time of death from the microscopic aspect, assisting in the establishment of reliable factual bases for criminal investigation. In practice, even the manual differentiation between different postmortem organ tissues is challenging and relies on expertise, considering that changes like putrefaction and autolysis could significantly alter the typical histopathological appearance. Developing AI-based computational pathology techniques to assist forensic pathologists is practically meaningful, which requires reliable discriminative representation learning to capture tissues' fine-grained postmortem patterns. To this end, we propose a framework called FPath, in which a dedicated self-supervised contrastive learning strategy and a context-aware multiple-instance learning (MIL) block are designed to learn discriminative representations from postmortem histopathological images acquired at varying magnification scales. Our self-supervised learning step leverages multiple complementary contrastive losses and regularization terms to train a double-tier backbone for fine-grained and informative patch/instance embedding. Thereafter, the context-aware MIL adaptively distills from the local instances a holistic bag/image-level representation for the recognition task. On a large-scale database of $19,607$ experimental rat postmortem images and $3,378$ real-world human decedent images, our FPath led to state-of-the-art accuracy and promising cross-domain generalization in recognizing seven different postmortem tissues. The source code will be released on \href{https://github.com/ladderlab-xjtu/forensic_pathology}{https://github.com/ladderlab-xjtu/forensic\_pathology}.
[ { "created": "Sun, 27 Aug 2023 07:47:38 GMT", "version": "v1" } ]
2023-08-29
[ [ "Shen", "Chen", "" ], [ "Zhang", "Jun", "" ], [ "Liang", "Xinggong", "" ], [ "Hao", "Zeyi", "" ], [ "Li", "Kehan", "" ], [ "Wang", "Fan", "" ], [ "Wang", "Zhenyuan", "" ], [ "Lian", "Chunfeng", "" ] ]
Forensic pathology is critical in analyzing the manner and time of death from the microscopic aspect, assisting in the establishment of reliable factual bases for criminal investigation. In practice, even the manual differentiation between different postmortem organ tissues is challenging and relies on expertise, considering that changes like putrefaction and autolysis could significantly alter the typical histopathological appearance. Developing AI-based computational pathology techniques to assist forensic pathologists is practically meaningful, which requires reliable discriminative representation learning to capture tissues' fine-grained postmortem patterns. To this end, we propose a framework called FPath, in which a dedicated self-supervised contrastive learning strategy and a context-aware multiple-instance learning (MIL) block are designed to learn discriminative representations from postmortem histopathological images acquired at varying magnification scales. Our self-supervised learning step leverages multiple complementary contrastive losses and regularization terms to train a double-tier backbone for fine-grained and informative patch/instance embedding. Thereafter, the context-aware MIL adaptively distills from the local instances a holistic bag/image-level representation for the recognition task. On a large-scale database of $19,607$ experimental rat postmortem images and $3,378$ real-world human decedent images, our FPath led to state-of-the-art accuracy and promising cross-domain generalization in recognizing seven different postmortem tissues. The source code will be released on \href{https://github.com/ladderlab-xjtu/forensic_pathology}{https://github.com/ladderlab-xjtu/forensic\_pathology}.
2204.05727
Hui Kong
Banghe Wu, Chengzhong Xu, Hui Kong
LiDAR Road-Atlas: An Efficient Map Representation for General 3D Urban Environment
null
Field Robotics, 2023
10.55417/fr.2023014
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we propose the LiDAR Road-Atlas, a compact and efficient 3D map representation, for autonomous robot or vehicle navigation in general urban environments. The LiDAR Road-Atlas can be generated by an online mapping framework based on incrementally merging local 2D occupancy grid maps (2D-OGM). Specifically, the contributions of our LiDAR Road-Atlas representation are threefold. First, we solve the challenging problem of creating local 2D-OGM in non-structured urban scenes based on a real-time delimitation of traversable and curb regions in LiDAR point clouds. Second, we achieve accurate 3D mapping in multiple-layer urban road scenarios by a probabilistic fusion scheme. Third, we achieve very efficient 3D map representation of general environments thanks to the automatic local-OGM induced traversable-region labeling and a sparse probabilistic local point-cloud encoding. Given the LiDAR Road-Atlas, one can achieve accurate vehicle localization, path planning and other tasks. Our map representation is insensitive to dynamic objects, which can be filtered out in the resulting map based on a probabilistic fusion. Empirically, we compare our map representation with a couple of popular map representation methods in the robotics and autonomous driving communities, and our map representation is more favorable in terms of efficiency, scalability and compactness. In addition, we also evaluate localization accuracy extensively given the created LiDAR Road-Atlas representations on several public benchmark datasets. With a 16-channel LiDAR sensor, our method achieves average global localization errors of 0.26m (translation) and 1.07 degrees (rotation) on the Apollo dataset, and 0.89m (translation) and 1.29 degrees (rotation) on the MulRan dataset, respectively, at 10Hz, which validates the promising performance of our map representation for autonomous driving.
[ { "created": "Tue, 12 Apr 2022 11:46:09 GMT", "version": "v1" }, { "created": "Mon, 13 Mar 2023 07:16:04 GMT", "version": "v2" } ]
2023-05-18
[ [ "Wu", "Banghe", "" ], [ "Xu", "Chengzhong", "" ], [ "Kong", "Hui", "" ] ]
In this work, we propose the LiDAR Road-Atlas, a compact and efficient 3D map representation, for autonomous robot or vehicle navigation in general urban environments. The LiDAR Road-Atlas can be generated by an online mapping framework based on incrementally merging local 2D occupancy grid maps (2D-OGM). Specifically, the contributions of our LiDAR Road-Atlas representation are threefold. First, we solve the challenging problem of creating local 2D-OGM in non-structured urban scenes based on a real-time delimitation of traversable and curb regions in LiDAR point clouds. Second, we achieve accurate 3D mapping in multiple-layer urban road scenarios by a probabilistic fusion scheme. Third, we achieve very efficient 3D map representation of general environments thanks to the automatic local-OGM induced traversable-region labeling and a sparse probabilistic local point-cloud encoding. Given the LiDAR Road-Atlas, one can achieve accurate vehicle localization, path planning and other tasks. Our map representation is insensitive to dynamic objects, which can be filtered out in the resulting map based on a probabilistic fusion. Empirically, we compare our map representation with a couple of popular map representation methods in the robotics and autonomous driving communities, and our map representation is more favorable in terms of efficiency, scalability and compactness. In addition, we also evaluate localization accuracy extensively given the created LiDAR Road-Atlas representations on several public benchmark datasets. With a 16-channel LiDAR sensor, our method achieves average global localization errors of 0.26m (translation) and 1.07 degrees (rotation) on the Apollo dataset, and 0.89m (translation) and 1.29 degrees (rotation) on the MulRan dataset, respectively, at 10Hz, which validates the promising performance of our map representation for autonomous driving.
1511.02307
Mehrdad Tahmasbi
Mehrdad Tahmasbi and Faramarz Fekri
On the Capacity Achieving Probability Measures for Molecular Receivers
6 pages, 1 figure
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, diffusion-based molecular communication with ligand receptor receivers is studied. Information messages are assumed to be encoded via variations of the concentration of molecules. The randomness in the ligand reception process induces uncertainty in the communication, limiting the rate of information decoding. We model the ligand receptor receiver by a set of finite-state Markov channels and study the general capacity of such a receiver. Furthermore, the i.i.d. capacity of the receiver is characterized as a lower bound for the general capacity. It is also proved that a finite support probability measure can achieve the i.i.d. capacity of the receiver. Moreover, a bound on the number of points in the support of the probability measure is obtained.
[ { "created": "Sat, 7 Nov 2015 05:56:57 GMT", "version": "v1" } ]
2015-11-10
[ [ "Tahmasbi", "Mehrdad", "" ], [ "Fekri", "Faramarz", "" ] ]
In this paper, diffusion-based molecular communication with ligand receptor receivers is studied. Information messages are assumed to be encoded via variations of the concentration of molecules. The randomness in the ligand reception process induces uncertainty in the communication, limiting the rate of information decoding. We model the ligand receptor receiver by a set of finite-state Markov channels and study the general capacity of such a receiver. Furthermore, the i.i.d. capacity of the receiver is characterized as a lower bound for the general capacity. It is also proved that a finite support probability measure can achieve the i.i.d. capacity of the receiver. Moreover, a bound on the number of points in the support of the probability measure is obtained.
2206.08514
Ganqu Cui
Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu, Maosong Sun
A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks
NeurIPS 2022 Datasets & Benchmarks; Toolkits avaliable at https://github.com/thunlp/OpenBackdoor
null
null
null
cs.LG cs.CL cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Textual backdoor attacks are a kind of practical threat to NLP systems. By injecting a backdoor in the training phase, the adversary could control model predictions via predefined triggers. As various attack and defense models have been proposed, it is of great significance to perform rigorous evaluations. However, we highlight two issues in previous backdoor learning evaluations: (1) The differences between real-world scenarios (e.g., releasing poisoned datasets or models) are neglected, and we argue that each scenario has its own constraints and concerns, and thus requires specific evaluation protocols; (2) The evaluation metrics only consider whether the attacks could flip the models' predictions on poisoned samples and retain performance on benign samples, but ignore that poisoned samples should also be stealthy and semantic-preserving. To address these issues, we categorize existing works into three practical scenarios in which attackers release datasets, pre-trained models, and fine-tuned models respectively, then discuss their unique evaluation methodologies. On metrics, to completely evaluate poisoned samples, we use grammar error increase and perplexity difference for stealthiness, along with text similarity for validity. After formalizing the frameworks, we develop an open-source toolkit OpenBackdoor to foster the implementations and evaluations of textual backdoor learning. With this toolkit, we perform extensive experiments to benchmark attack and defense models under the suggested paradigm. To facilitate the underexplored defenses against poisoned datasets, we further propose CUBE, a simple yet strong clustering-based defense baseline. We hope that our frameworks and benchmarks could serve as the cornerstones for future model development and evaluations.
[ { "created": "Fri, 17 Jun 2022 02:29:23 GMT", "version": "v1" }, { "created": "Tue, 1 Nov 2022 15:26:31 GMT", "version": "v2" } ]
2022-11-02
[ [ "Cui", "Ganqu", "" ], [ "Yuan", "Lifan", "" ], [ "He", "Bingxiang", "" ], [ "Chen", "Yangyi", "" ], [ "Liu", "Zhiyuan", "" ], [ "Sun", "Maosong", "" ] ]
Textual backdoor attacks are a kind of practical threat to NLP systems. By injecting a backdoor in the training phase, the adversary could control model predictions via predefined triggers. As various attack and defense models have been proposed, it is of great significance to perform rigorous evaluations. However, we highlight two issues in previous backdoor learning evaluations: (1) The differences between real-world scenarios (e.g., releasing poisoned datasets or models) are neglected, and we argue that each scenario has its own constraints and concerns, and thus requires specific evaluation protocols; (2) The evaluation metrics only consider whether the attacks could flip the models' predictions on poisoned samples and retain performance on benign samples, but ignore that poisoned samples should also be stealthy and semantic-preserving. To address these issues, we categorize existing works into three practical scenarios in which attackers release datasets, pre-trained models, and fine-tuned models respectively, then discuss their unique evaluation methodologies. On metrics, to completely evaluate poisoned samples, we use grammar error increase and perplexity difference for stealthiness, along with text similarity for validity. After formalizing the frameworks, we develop an open-source toolkit OpenBackdoor to foster the implementations and evaluations of textual backdoor learning. With this toolkit, we perform extensive experiments to benchmark attack and defense models under the suggested paradigm. To facilitate the underexplored defenses against poisoned datasets, we further propose CUBE, a simple yet strong clustering-based defense baseline. We hope that our frameworks and benchmarks could serve as the cornerstones for future model development and evaluations.
2210.09805
Amr Hendy
Amr Hendy, Mohamed Abdelghaffar, Mohamed Afify and Ahmed Y. Tawfik
Domain Specific Sub-network for Multi-Domain Neural Machine Translation
6 pages, 1 figure, 5 tables, AACL-IJCNLP 2022 conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents Domain-Specific Sub-network (DoSS). It uses a set of masks obtained through pruning to define a sub-network for each domain and finetunes the sub-network parameters on domain data. This performs very close to finetuning the whole network on each domain while drastically reducing the number of parameters. We also propose a method to make masks unique per domain and show that it greatly improves generalization to unseen domains. In our experiments on German to English machine translation, the proposed method outperforms the strong baseline of continued training on multi-domain (medical, tech and religion) data by 1.47 BLEU points. Continued training of DoSS on a new domain (legal) also outperforms the multi-domain (medical, tech, religion, legal) baseline by 1.52 BLEU points.
[ { "created": "Tue, 18 Oct 2022 12:26:49 GMT", "version": "v1" } ]
2022-10-19
[ [ "Hendy", "Amr", "" ], [ "Abdelghaffar", "Mohamed", "" ], [ "Afify", "Mohamed", "" ], [ "Tawfik", "Ahmed Y.", "" ] ]
This paper presents Domain-Specific Sub-network (DoSS). It uses a set of masks obtained through pruning to define a sub-network for each domain and finetunes the sub-network parameters on domain data. This performs very close to finetuning the whole network on each domain while drastically reducing the number of parameters. We also propose a method to make masks unique per domain and show that it greatly improves generalization to unseen domains. In our experiments on German to English machine translation, the proposed method outperforms the strong baseline of continued training on multi-domain (medical, tech and religion) data by 1.47 BLEU points. Continued training of DoSS on a new domain (legal) also outperforms the multi-domain (medical, tech, religion, legal) baseline by 1.52 BLEU points.
2205.00224
Sue Sin Chong
Sue Sin Chong
Loss Function Entropy Regularization for Diverse Decision Boundaries
7 pages
2022 7th International Conference on Big Data Analytics (ICBDA)
10.1109/ICBDA55095.2022.9760312
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Is it possible to train several classifiers to perform meaningful crowd-sourcing to produce a better prediction label set without ground-truth annotation? This paper modifies the contrastive learning objectives to automatically train a self-complementing ensemble that produces a state-of-the-art prediction on the CIFAR10 and CIFAR100-20 tasks. We present a straightforward method to modify a single unsupervised classification pipeline to automatically generate an ensemble of neural networks with varied decision boundaries that learn a more extensive feature set of classes. Loss Function Entropy Regularization (LFER) consists of regularization terms added to the pre-training and contrastive learning loss functions. LFER is a gear to modify the entropy state of the output space of unsupervised learning, thereby diversifying the latent representation of decision boundaries of neural networks. An ensemble trained with LFER has higher successful prediction accuracy for samples near decision boundaries. LFER is an adequate gear to perturb decision boundaries and has produced classifiers that beat the state-of-the-art at the contrastive learning stage. Experiments show that LFER can produce an ensemble with accuracy comparable to the state-of-the-art yet with varied latent decision boundaries. This allows us to perform meaningful verification for samples near decision boundaries, encouraging the correct classification of near-boundary samples. By compounding the probability of correct prediction of a single sample amongst an ensemble of trained neural networks, our method can improve upon a single classifier by denoising and affirming correct feature mappings.
[ { "created": "Sat, 30 Apr 2022 10:16:41 GMT", "version": "v1" }, { "created": "Mon, 23 May 2022 07:18:28 GMT", "version": "v2" } ]
2022-05-24
[ [ "Chong", "Sue Sin", "" ] ]
Is it possible to train several classifiers to perform meaningful crowd-sourcing to produce a better prediction label set without ground-truth annotation? This paper modifies the contrastive learning objectives to automatically train a self-complementing ensemble that produces a state-of-the-art prediction on the CIFAR10 and CIFAR100-20 tasks. We present a straightforward method to modify a single unsupervised classification pipeline to automatically generate an ensemble of neural networks with varied decision boundaries that learn a more extensive feature set of classes. Loss Function Entropy Regularization (LFER) consists of regularization terms added to the pre-training and contrastive learning loss functions. LFER is a gear to modify the entropy state of the output space of unsupervised learning, thereby diversifying the latent representation of decision boundaries of neural networks. An ensemble trained with LFER has higher successful prediction accuracy for samples near decision boundaries. LFER is an adequate gear to perturb decision boundaries and has produced classifiers that beat the state-of-the-art at the contrastive learning stage. Experiments show that LFER can produce an ensemble with accuracy comparable to the state-of-the-art yet with varied latent decision boundaries. This allows us to perform meaningful verification for samples near decision boundaries, encouraging the correct classification of near-boundary samples. By compounding the probability of correct prediction of a single sample amongst an ensemble of trained neural networks, our method can improve upon a single classifier by denoising and affirming correct feature mappings.
1403.6614
Kin Tat Ho
Kin Tat Ho and Lok Ming Lui
QCMC: Quasi-conformal Parameterizations for Multiply-connected domains
26 pages, 23 figures, submitted. arXiv admin note: text overlap with arXiv:1402.6908, arXiv:1307.2679 by other authors
null
null
null
cs.CG cs.CV math.DG
http://creativecommons.org/licenses/by-nc-sa/3.0/
This paper presents a method to compute the {\it quasi-conformal parameterization} (QCMC) for a multiply-connected 2D domain or surface. QCMC computes a quasi-conformal map from a multiply-connected domain $S$ onto a punctured disk $D_S$ associated with a given Beltrami differential. The Beltrami differential, which measures the conformality distortion, is a complex-valued function $\mu:S\to\mathbb{C}$ with supremum norm strictly less than 1. Every Beltrami differential gives a conformal structure of $S$. Hence, the conformal module of $D_S$, which are the radii and centers of the inner circles, can be fully determined by $\mu$, up to a M\"obius transformation. In this paper, we propose an iterative algorithm to simultaneously search for the conformal module and the optimal quasi-conformal parameterization. The key idea is to minimize the Beltrami energy subject to the boundary constraints. The optimal solution is our desired quasi-conformal parameterization onto a punctured disk. The parameterization of the multiply-connected domain simplifies numerical computations and has important applications in various fields, such as in computer graphics and vision. Experiments have been carried out on synthetic data together with real multiply-connected Riemann surfaces. Results show that our proposed method can efficiently compute quasi-conformal parameterizations of multiply-connected domains and outperforms other state-of-the-art algorithms. Applications of the proposed parameterization technique have also been explored.
[ { "created": "Wed, 26 Mar 2014 10:21:03 GMT", "version": "v1" } ]
2014-03-27
[ [ "Ho", "Kin Tat", "" ], [ "Lui", "Lok Ming", "" ] ]
This paper presents a method to compute the {\it quasi-conformal parameterization} (QCMC) for a multiply-connected 2D domain or surface. QCMC computes a quasi-conformal map from a multiply-connected domain $S$ onto a punctured disk $D_S$ associated with a given Beltrami differential. The Beltrami differential, which measures the conformality distortion, is a complex-valued function $\mu:S\to\mathbb{C}$ with supremum norm strictly less than 1. Every Beltrami differential gives a conformal structure of $S$. Hence, the conformal module of $D_S$, which are the radii and centers of the inner circles, can be fully determined by $\mu$, up to a M\"obius transformation. In this paper, we propose an iterative algorithm to simultaneously search for the conformal module and the optimal quasi-conformal parameterization. The key idea is to minimize the Beltrami energy subject to the boundary constraints. The optimal solution is our desired quasi-conformal parameterization onto a punctured disk. The parameterization of the multiply-connected domain simplifies numerical computations and has important applications in various fields, such as in computer graphics and vision. Experiments have been carried out on synthetic data together with real multiply-connected Riemann surfaces. Results show that our proposed method can efficiently compute quasi-conformal parameterizations of multiply-connected domains and outperforms other state-of-the-art algorithms. Applications of the proposed parameterization technique have also been explored.
1810.09142
Dmitry Shkatov
Mikhail Rybakov and Dmitry Shkatov
Complexity and Expressivity of Branching- and Alternating-Time Temporal Logics with Finitely Many Variables
Prefinal version of the published paper
Bernd Fischer and Tarmo Uustalu (eds.) Theoretical Aspects of Computing -- ICTAC 2018. Lecture Notes in Computer Science,Vol. 11187, Springer 2018, pp. 396--414
10.1007/978-3-030-02508-3_21
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that the branching-time temporal logics CTL and CTL*, as well as the alternating-time temporal logics ATL and ATL*, are as semantically expressive in the language with a single propositional variable as they are in the full language, i.e., with an unlimited supply of propositional variables. It follows that satisfiability for CTL, as well as for ATL, with a single variable is EXPTIME-complete, while satisfiability for CTL*, as well as for ATL*, with a single variable is 2EXPTIME-complete; i.e., for these logics, satisfiability for formulas with only one variable is as hard as satisfiability for arbitrary formulas.
[ { "created": "Mon, 22 Oct 2018 08:52:23 GMT", "version": "v1" }, { "created": "Fri, 18 Jan 2019 20:08:42 GMT", "version": "v2" } ]
2019-01-23
[ [ "Rybakov", "Mikhail", "" ], [ "Shkatov", "Dmitry", "" ] ]
We show that the branching-time temporal logics CTL and CTL*, as well as the alternating-time temporal logics ATL and ATL*, are as semantically expressive in the language with a single propositional variable as they are in the full language, i.e., with an unlimited supply of propositional variables. It follows that satisfiability for CTL, as well as for ATL, with a single variable is EXPTIME-complete, while satisfiability for CTL*, as well as for ATL*, with a single variable is 2EXPTIME-complete; i.e., for these logics, satisfiability for formulas with only one variable is as hard as satisfiability for arbitrary formulas.
1604.04827
Hendrik Molter
Ren\'e van Bevern and Christian Komusiewicz and Hendrik Molter and Rolf Niedermeier and Manuel Sorge and Toby Walsh
h-Index Manipulation by Undoing Merges
null
Quantitative Science Studies, 1(4): 1529-1552. 2020
10.1162/qss_a_00093
null
cs.DL cs.DM cs.DS cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The h-index is an important bibliographic measure used to assess the performance of researchers. Dutiful researchers merge different versions of their articles in their Google Scholar profile even though this can decrease their h-index. In this article, we study the manipulation of the h-index by undoing such merges. In contrast to manipulation by merging articles (van Bevern et al. [Artif. Intel. 240:19-35, 2016]) such manipulation is harder to detect. We present numerous results on computational complexity (from linear-time algorithms to parameterized computational hardness results) and empirically indicate that at least small improvements of the h-index by splitting merged articles are unfortunately easily achievable.
[ { "created": "Sun, 17 Apr 2016 04:11:30 GMT", "version": "v1" }, { "created": "Sat, 9 Jul 2016 06:47:32 GMT", "version": "v2" }, { "created": "Tue, 12 Nov 2019 11:28:18 GMT", "version": "v3" } ]
2021-01-14
[ [ "van Bevern", "René", "" ], [ "Komusiewicz", "Christian", "" ], [ "Molter", "Hendrik", "" ], [ "Niedermeier", "Rolf", "" ], [ "Sorge", "Manuel", "" ], [ "Walsh", "Toby", "" ] ]
The h-index is an important bibliographic measure used to assess the performance of researchers. Dutiful researchers merge different versions of their articles in their Google Scholar profile even though this can decrease their h-index. In this article, we study the manipulation of the h-index by undoing such merges. In contrast to manipulation by merging articles (van Bevern et al. [Artif. Intel. 240:19-35, 2016]) such manipulation is harder to detect. We present numerous results on computational complexity (from linear-time algorithms to parameterized computational hardness results) and empirically indicate that at least small improvements of the h-index by splitting merged articles are unfortunately easily achievable.
2106.09170
Heitor Murilo Gomes
Heitor Murilo Gomes, Maciej Grzenda, Rodrigo Mello, Jesse Read, Minh Huong Le Nguyen, Albert Bifet
A Survey on Semi-Supervised Learning for Delayed Partially Labelled Data Streams
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unlabelled data appear in many domains and are particularly relevant to streaming applications, where even though data is abundant, labelled data is rare. To address the learning problems associated with such data, one can ignore the unlabelled data and focus only on the labelled data (supervised learning); use the labelled data and attempt to leverage the unlabelled data (semi-supervised learning); or assume some labels will be available on request (active learning). The first approach is the simplest, yet the amount of labelled data available will limit the predictive performance. The second relies on finding and exploiting the underlying characteristics of the data distribution. The third depends on an external agent to provide the required labels in a timely fashion. This survey pays special attention to methods that leverage unlabelled data in a semi-supervised setting. We also discuss the delayed labelling issue, which impacts both fully supervised and semi-supervised methods. We propose a unified problem setting, discuss the learning guarantees and existing methods, and explain the differences between related problem settings. Finally, we review the current benchmarking practices and propose adaptations to enhance them.
[ { "created": "Wed, 16 Jun 2021 23:14:20 GMT", "version": "v1" } ]
2021-06-18
[ [ "Gomes", "Heitor Murilo", "" ], [ "Grzenda", "Maciej", "" ], [ "Mello", "Rodrigo", "" ], [ "Read", "Jesse", "" ], [ "Nguyen", "Minh Huong Le", "" ], [ "Bifet", "Albert", "" ] ]
Unlabelled data appear in many domains and are particularly relevant to streaming applications, where even though data is abundant, labelled data is rare. To address the learning problems associated with such data, one can ignore the unlabelled data and focus only on the labelled data (supervised learning); use the labelled data and attempt to leverage the unlabelled data (semi-supervised learning); or assume some labels will be available on request (active learning). The first approach is the simplest, yet the amount of labelled data available will limit the predictive performance. The second relies on finding and exploiting the underlying characteristics of the data distribution. The third depends on an external agent to provide the required labels in a timely fashion. This survey pays special attention to methods that leverage unlabelled data in a semi-supervised setting. We also discuss the delayed labelling issue, which impacts both fully supervised and semi-supervised methods. We propose a unified problem setting, discuss the learning guarantees and existing methods, and explain the differences between related problem settings. Finally, we review the current benchmarking practices and propose adaptations to enhance them.
1802.10457
Vincent Divol
Fr\'ed\'eric Chazal, Vincent Divol
The density of expected persistence diagrams and its kernel based estimation
Extended version of a paper published in the proceedings of the Symposium of Computational Geometry 2018
null
null
null
cs.CG
http://creativecommons.org/licenses/by/4.0/
Persistence diagrams play a fundamental role in Topological Data Analysis where they are used as topological descriptors of filtrations built on top of data. They consist of discrete multisets of points in the plane $\mathbb{R}^2$ that can equivalently be seen as discrete measures in $\mathbb{R}^2$. When the data come as a random point cloud, these discrete measures become random measures whose expectation is studied in this paper. First, we show that for a wide class of filtrations, including the \v{C}ech and Rips-Vietoris filtrations, the expected persistence diagram, which is a deterministic measure on $\mathbb{R}^2$, has a density with respect to the Lebesgue measure. Second, building on the previous result we show that the persistence surface recently introduced in [Adams et al., Persistence images: a stable vector representation of persistent homology] can be seen as a kernel estimator of this density. We propose a cross-validation scheme for selecting an optimal bandwidth, which is proven to be a consistent procedure to estimate the density.
[ { "created": "Wed, 28 Feb 2018 14:58:19 GMT", "version": "v1" }, { "created": "Fri, 22 Mar 2019 16:11:31 GMT", "version": "v2" } ]
2019-03-25
[ [ "Chazal", "Frédéric", "" ], [ "Divol", "Vincent", "" ] ]
Persistence diagrams play a fundamental role in Topological Data Analysis, where they are used as topological descriptors of filtrations built on top of data. They consist of discrete multisets of points in the plane $\mathbb{R}^2$ that can equivalently be seen as discrete measures in $\mathbb{R}^2$. When the data come as a random point cloud, these discrete measures become random measures, whose expectation is studied in this paper. First, we show that for a wide class of filtrations, including the \v{C}ech and Rips-Vietoris filtrations, the expected persistence diagram, which is a deterministic measure on $\mathbb{R}^2$, has a density with respect to the Lebesgue measure. Second, building on the previous result, we show that the persistence surface recently introduced in [Adams et al., Persistence images: a stable vector representation of persistent homology] can be seen as a kernel estimator of this density. We propose a cross-validation scheme for selecting an optimal bandwidth, which is proven to be a consistent procedure to estimate the density.
1202.3773
Haohai Yu
Haohai Yu, Robert A. van Engelen
Measuring the Hardness of Stochastic Sampling on Bayesian Networks with Deterministic Causalities: the k-Test
null
null
null
UAI-P-2011-PG-786-795
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Approximate Bayesian inference is NP-hard. Dagum and Luby defined the Local Variance Bound (LVB) to measure the approximation hardness of Bayesian inference on Bayesian networks, assuming the networks model strictly positive joint probability distributions, i.e. zero probabilities are not permitted. This paper introduces the k-test to measure the approximation hardness of inference on Bayesian networks with deterministic causalities in the probability distribution, i.e. when zero conditional probabilities are permitted. Approximation by stochastic sampling is a widely-used inference method that is known to suffer from inefficiencies due to sample rejection. The k-test predicts when the rejection rates of stochastic sampling on a Bayesian network will be low, modest, or high, or when sampling is intractable.
[ { "created": "Tue, 14 Feb 2012 16:41:17 GMT", "version": "v1" } ]
2012-02-20
[ [ "Yu", "Haohai", "" ], [ "van Engelen", "Robert A.", "" ] ]
Approximate Bayesian inference is NP-hard. Dagum and Luby defined the Local Variance Bound (LVB) to measure the approximation hardness of Bayesian inference on Bayesian networks, assuming the networks model strictly positive joint probability distributions, i.e. zero probabilities are not permitted. This paper introduces the k-test to measure the approximation hardness of inference on Bayesian networks with deterministic causalities in the probability distribution, i.e. when zero conditional probabilities are permitted. Approximation by stochastic sampling is a widely-used inference method that is known to suffer from inefficiencies due to sample rejection. The k-test predicts when the rejection rates of stochastic sampling on a Bayesian network will be low, modest, or high, or when sampling is intractable.
1012.5318
Sergei Viznyuk
Sergei Viznyuk
Condensation into ground state in binary string models
6 pages, 3 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ensemble of binary strings defined via the strong-interaction model exhibits enhanced condensation (collapse) into the ground state below a certain temperature. The non-interaction model shows gradual accumulation into the ground state as the temperature approaches zero.
[ { "created": "Thu, 23 Dec 2010 23:35:10 GMT", "version": "v1" } ]
2010-12-27
[ [ "Viznyuk", "Sergei", "" ] ]
The ensemble of binary strings defined via the strong-interaction model exhibits enhanced condensation (collapse) into the ground state below a certain temperature. The non-interaction model shows gradual accumulation into the ground state as the temperature approaches zero.
2203.07961
Hippolyte Verdier
Hippolyte Verdier, Fran\c{c}ois Laurent, Alhassan Cass\'e, Christian Vestergaard, Jean-Baptiste Masson
Variational inference of fractional Brownian motion with linear computational complexity
null
null
10.1103/PhysRevE.106.055311
null
cs.LG physics.bio-ph physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a simulation-based, amortised Bayesian inference scheme to infer the parameters of random walks. Our approach learns the posterior distribution of the walks' parameters with a likelihood-free method. In the first step, a graph neural network is trained on simulated data to learn optimized low-dimensional summary statistics of the random walk. In the second step, an invertible neural network generates the posterior distribution of the parameters from the learnt summary statistics using variational inference. We apply our method to infer the parameters of the fractional Brownian motion model from single trajectories. The computational complexity of the amortised inference procedure scales linearly with trajectory length, and its precision scales similarly to the Cram{\'e}r-Rao bound over a wide range of lengths. The approach is robust to positional noise, and generalizes well to trajectories longer than those seen during training. Finally, we adapt this scheme to show that a finite decorrelation time in the environment can furthermore be inferred from individual trajectories.
[ { "created": "Tue, 15 Mar 2022 14:43:16 GMT", "version": "v1" }, { "created": "Mon, 21 Mar 2022 09:27:31 GMT", "version": "v2" }, { "created": "Thu, 22 Sep 2022 16:40:02 GMT", "version": "v3" }, { "created": "Fri, 23 Sep 2022 06:30:36 GMT", "version": "v4" } ]
2022-12-07
[ [ "Verdier", "Hippolyte", "" ], [ "Laurent", "François", "" ], [ "Cassé", "Alhassan", "" ], [ "Vestergaard", "Christian", "" ], [ "Masson", "Jean-Baptiste", "" ] ]
We introduce a simulation-based, amortised Bayesian inference scheme to infer the parameters of random walks. Our approach learns the posterior distribution of the walks' parameters with a likelihood-free method. In the first step, a graph neural network is trained on simulated data to learn optimized low-dimensional summary statistics of the random walk. In the second step, an invertible neural network generates the posterior distribution of the parameters from the learnt summary statistics using variational inference. We apply our method to infer the parameters of the fractional Brownian motion model from single trajectories. The computational complexity of the amortised inference procedure scales linearly with trajectory length, and its precision scales similarly to the Cram{\'e}r-Rao bound over a wide range of lengths. The approach is robust to positional noise, and generalizes well to trajectories longer than those seen during training. Finally, we adapt this scheme to show that a finite decorrelation time in the environment can furthermore be inferred from individual trajectories.
2109.05843
Ritu Kapur
Ritu Kapur and Balwinder Sodhi
OSS effort estimation using software features similarity and developer activity-based metrics
45 pages, 10 figures, 11 tables, 3 algorithms, Accepted in ACM TOSEM
null
10.1145/3485819
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Software development effort estimation (SDEE) generally involves leveraging information about the effort spent in developing similar software in the past. Most organizations do not have access to sufficient and reliable forms of such data from past projects. As such, the existing SDEE methods suffer from low usage and accuracy. We propose an efficient SDEE method for open source software, which provides accurate and fast effort estimates. The significant contributions of our paper are: i) novel SDEE software metrics derived from developer activity information of various software repositories; ii) an SDEE dataset comprising the SDEE metrics' values derived from $\approx13,000$ GitHub repositories from 150 different software categories; iii) an effort estimation tool based on the SDEE metrics and a software description similarity model. Our software description similarity model is a machine learning model trained using the Paragraph Vectors algorithm on the software product descriptions of GitHub repositories. Given the software description of a newly-envisioned software, our tool yields an effort estimate for developing it. Our method achieves the highest Standard Accuracy score of 87.26% (with Cliff's $\delta$=0.88 at 99.999% confidence level) and 42.7% with the Automatic Transformed Linear Baseline model. Our software artifacts are available at https://doi.org/10.5281/zenodo.5095723.
[ { "created": "Mon, 13 Sep 2021 10:16:39 GMT", "version": "v1" } ]
2021-09-14
[ [ "Kapur", "Ritu", "" ], [ "Sodhi", "Balwinder", "" ] ]
Software development effort estimation (SDEE) generally involves leveraging information about the effort spent in developing similar software in the past. Most organizations do not have access to sufficient and reliable forms of such data from past projects. As such, the existing SDEE methods suffer from low usage and accuracy. We propose an efficient SDEE method for open source software, which provides accurate and fast effort estimates. The significant contributions of our paper are: i) novel SDEE software metrics derived from developer activity information of various software repositories; ii) an SDEE dataset comprising the SDEE metrics' values derived from $\approx13,000$ GitHub repositories from 150 different software categories; iii) an effort estimation tool based on the SDEE metrics and a software description similarity model. Our software description similarity model is a machine learning model trained using the Paragraph Vectors algorithm on the software product descriptions of GitHub repositories. Given the software description of a newly-envisioned software, our tool yields an effort estimate for developing it. Our method achieves the highest Standard Accuracy score of 87.26% (with Cliff's $\delta$=0.88 at 99.999% confidence level) and 42.7% with the Automatic Transformed Linear Baseline model. Our software artifacts are available at https://doi.org/10.5281/zenodo.5095723.
2311.05237
Shuhei Tarashima
Shuhei Tarashima, Muhammad Abdul Haq, Yushan Wang, Norio Tagawa
Widely Applicable Strong Baseline for Sports Ball Detection and Tracking
BMVC2023. Code & dataset : https://github.com/nttcom/WASB-SBDT
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present a novel Sports Ball Detection and Tracking (SBDT) method that can be applied to various sports categories. Our approach is composed of (1) high-resolution feature extraction, (2) position-aware model training, and (3) inference considering temporal consistency, all of which are put together as a new SBDT baseline. Besides, to validate the wide applicability of our approach, we compare our baseline with 6 state-of-the-art SBDT methods on 5 datasets from different sports categories. We achieve this by newly introducing two SBDT datasets, providing new ball annotations for two datasets, and re-implementing all the methods to ease extensive comparison. Experimental results demonstrate that our approach is substantially superior to existing methods on all the sports categories covered by the datasets. We believe our proposed method can serve as a Widely Applicable Strong Baseline (WASB) for SBDT, and our datasets and codebase will promote future SBDT research. Datasets and code are available at https://github.com/nttcom/WASB-SBDT .
[ { "created": "Thu, 9 Nov 2023 09:39:12 GMT", "version": "v1" }, { "created": "Thu, 16 Nov 2023 17:46:58 GMT", "version": "v2" } ]
2023-11-17
[ [ "Tarashima", "Shuhei", "" ], [ "Haq", "Muhammad Abdul", "" ], [ "Wang", "Yushan", "" ], [ "Tagawa", "Norio", "" ] ]
In this work, we present a novel Sports Ball Detection and Tracking (SBDT) method that can be applied to various sports categories. Our approach is composed of (1) high-resolution feature extraction, (2) position-aware model training, and (3) inference considering temporal consistency, all of which are put together as a new SBDT baseline. Besides, to validate the wide applicability of our approach, we compare our baseline with 6 state-of-the-art SBDT methods on 5 datasets from different sports categories. We achieve this by newly introducing two SBDT datasets, providing new ball annotations for two datasets, and re-implementing all the methods to ease extensive comparison. Experimental results demonstrate that our approach is substantially superior to existing methods on all the sports categories covered by the datasets. We believe our proposed method can serve as a Widely Applicable Strong Baseline (WASB) for SBDT, and our datasets and codebase will promote future SBDT research. Datasets and code are available at https://github.com/nttcom/WASB-SBDT .
2207.07591
Hasna Bouraoui
Hasna Bouraoui, Chadlia Jerad, Omar Romdhani, Jeronimo Castrillon
mAPN: Modeling, Analysis, and Exploration of Algorithmic and Parallelism Adaptivity
26 pages, journal paper
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
The use of parallel embedded systems is increasing. These systems are becoming more complex due to integrating multiple functionalities in one application or running numerous applications concurrently. This concerns a wide range of applications, including streaming applications, which are commonly used in embedded systems. These applications must implement adaptable and reliable algorithms to deliver the required performance under varying circumstances (e.g., running applications on the platform, input data, platform variety, etc.). Given the complexity of streaming applications, target systems, and adaptivity requirements, designing such systems with traditional programming models is daunting. This is why model-based strategies with an appropriate Model of Computation (MoC) have long been studied for embedded system design. This work provides algorithmic adaptivity on top of parallelism for dynamic dataflow to express larger sets of variants. We present the multi-Alternative Process Network (mAPN), a high-level abstract representation in which several variants of the same application coexist in the same graph, expressing different implementations. We introduce mAPN properties and its formalism to describe various local implementation alternatives. Furthermore, mAPNs are enriched with metadata to provide the alternatives with quantitative annotations in terms of a specific metric. To help the user analyze the rich space of variants, we propose a methodology to extract feasible variants under user and hardware constraints. At the core of the methodology is an algorithm for computing global metrics of an execution of different alternatives from a compact mAPN specification. We validate our approach by exploring several possible variants created for the Automatic Subtitling Application (ASA) on two hardware platforms.
[ { "created": "Fri, 15 Jul 2022 16:39:41 GMT", "version": "v1" } ]
2022-07-18
[ [ "Bouraoui", "Hasna", "" ], [ "Jerad", "Chadlia", "" ], [ "Romdhani", "Omar", "" ], [ "Castrillon", "Jeronimo", "" ] ]
The use of parallel embedded systems is increasing. These systems are becoming more complex due to integrating multiple functionalities in one application or running numerous applications concurrently. This concerns a wide range of applications, including streaming applications, which are commonly used in embedded systems. These applications must implement adaptable and reliable algorithms to deliver the required performance under varying circumstances (e.g., running applications on the platform, input data, platform variety, etc.). Given the complexity of streaming applications, target systems, and adaptivity requirements, designing such systems with traditional programming models is daunting. This is why model-based strategies with an appropriate Model of Computation (MoC) have long been studied for embedded system design. This work provides algorithmic adaptivity on top of parallelism for dynamic dataflow to express larger sets of variants. We present the multi-Alternative Process Network (mAPN), a high-level abstract representation in which several variants of the same application coexist in the same graph, expressing different implementations. We introduce mAPN properties and its formalism to describe various local implementation alternatives. Furthermore, mAPNs are enriched with metadata to provide the alternatives with quantitative annotations in terms of a specific metric. To help the user analyze the rich space of variants, we propose a methodology to extract feasible variants under user and hardware constraints. At the core of the methodology is an algorithm for computing global metrics of an execution of different alternatives from a compact mAPN specification. We validate our approach by exploring several possible variants created for the Automatic Subtitling Application (ASA) on two hardware platforms.
2207.14243
Nikita Gabdullin
Nikita Gabdullin
Combining human parsing with analytical feature extraction and ranking schemes for high-generalization person reidentification
20 pages, 7 figures, 6 tables, 15 equations
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Person reidentification (re-ID) has been receiving increasing attention in recent years due to its importance for both science and society. Machine learning, and particularly Deep Learning (DL), has become the main re-ID tool that has allowed researchers to achieve unprecedented accuracy levels on benchmark datasets. However, there is a known problem of poor generalization of DL models: models trained to achieve high accuracy on one dataset perform poorly on other ones and require re-training. To address this issue, we present a model without trainable parameters which shows great potential for high generalization. It combines a fully analytical feature extraction and similarity ranking scheme with DL-based human parsing used to obtain the initial subregion classification. We show that such a combination to a large extent eliminates the drawbacks of existing analytical methods. We use interpretable color and texture features which have human-readable similarity measures associated with them. To verify the proposed method, we conduct experiments on the Market1501 and CUHK03 datasets, achieving competitive rank-1 accuracy comparable with that of DL models. Most importantly, we show that our method achieves 63.9% and 93.5% rank-1 cross-domain accuracy when applied to transfer learning tasks, significantly higher than the previously reported 30-50% transfer accuracy. We discuss potential ways of adding new features to further improve the model. We also show the advantage of interpretable features for constructing human-generated queries from verbal descriptions to conduct search without a query image.
[ { "created": "Thu, 28 Jul 2022 17:22:48 GMT", "version": "v1" } ]
2022-07-29
[ [ "Gabdullin", "Nikita", "" ] ]
Person reidentification (re-ID) has been receiving increasing attention in recent years due to its importance for both science and society. Machine learning, and particularly Deep Learning (DL), has become the main re-ID tool that has allowed researchers to achieve unprecedented accuracy levels on benchmark datasets. However, there is a known problem of poor generalization of DL models: models trained to achieve high accuracy on one dataset perform poorly on other ones and require re-training. To address this issue, we present a model without trainable parameters which shows great potential for high generalization. It combines a fully analytical feature extraction and similarity ranking scheme with DL-based human parsing used to obtain the initial subregion classification. We show that such a combination to a large extent eliminates the drawbacks of existing analytical methods. We use interpretable color and texture features which have human-readable similarity measures associated with them. To verify the proposed method, we conduct experiments on the Market1501 and CUHK03 datasets, achieving competitive rank-1 accuracy comparable with that of DL models. Most importantly, we show that our method achieves 63.9% and 93.5% rank-1 cross-domain accuracy when applied to transfer learning tasks, significantly higher than the previously reported 30-50% transfer accuracy. We discuss potential ways of adding new features to further improve the model. We also show the advantage of interpretable features for constructing human-generated queries from verbal descriptions to conduct search without a query image.
cs/0611075
Ying Jun Zhang Ph.D.
Soung Chang Liew and Ying Jun Zhang
Proportional Fairness in Multi-channel Multi-rate Wireless Networks-Part I: The Case of Deterministic Channels
null
null
null
null
cs.NI cs.IT cs.PF math.IT
null
This is Part I of a two-part paper series that studies the use of the proportional fairness (PF) utility function as the basis for capacity allocation and scheduling in multi-channel multi-rate wireless networks. The contributions of Part I are threefold. (i) First, we lay down the theoretical foundation for PF. Specifically, we present the fundamental properties and physical/economic interpretation of PF. We show by general mathematical arguments that PF leads to equal airtime allocation to users for the single-channel case; and equal equivalent airtime allocation to users for the multi-channel case, where the equivalent airtime enjoyed by a user is a weighted sum of the airtimes enjoyed by the user on all channels, with the weight of a channel being the price or value of that channel. We also establish the Pareto efficiency of PF solutions. (ii) Second, we derive characteristics of PF solutions that are useful for the construction of PF-optimization algorithms. We present several PF-optimization algorithms, including a fast algorithm that is amenable to parallel implementation. (iii) Third, we study the use of the PF utility for capacity allocation in large-scale WiFi networks consisting of many adjacent wireless LANs. We find that the PF solution simultaneously achieves higher system throughput, better fairness, and lower outage probability with respect to the default solution given by today's 802.11 commercial products. Part II of this paper series extends our investigation to the time-varying-channel case, in which the data rates enjoyed by users over the channels vary dynamically over time.
[ { "created": "Thu, 16 Nov 2006 03:08:36 GMT", "version": "v1" }, { "created": "Fri, 22 Feb 2008 14:27:39 GMT", "version": "v2" } ]
2008-02-22
[ [ "Liew", "Soung Chang", "" ], [ "Zhang", "Ying Jun", "" ] ]
This is Part I of a two-part paper series that studies the use of the proportional fairness (PF) utility function as the basis for capacity allocation and scheduling in multi-channel multi-rate wireless networks. The contributions of Part I are threefold. (i) First, we lay down the theoretical foundation for PF. Specifically, we present the fundamental properties and physical/economic interpretation of PF. We show by general mathematical arguments that PF leads to equal airtime allocation to users for the single-channel case; and equal equivalent airtime allocation to users for the multi-channel case, where the equivalent airtime enjoyed by a user is a weighted sum of the airtimes enjoyed by the user on all channels, with the weight of a channel being the price or value of that channel. We also establish the Pareto efficiency of PF solutions. (ii) Second, we derive characteristics of PF solutions that are useful for the construction of PF-optimization algorithms. We present several PF-optimization algorithms, including a fast algorithm that is amenable to parallel implementation. (iii) Third, we study the use of the PF utility for capacity allocation in large-scale WiFi networks consisting of many adjacent wireless LANs. We find that the PF solution simultaneously achieves higher system throughput, better fairness, and lower outage probability with respect to the default solution given by today's 802.11 commercial products. Part II of this paper series extends our investigation to the time-varying-channel case, in which the data rates enjoyed by users over the channels vary dynamically over time.
2306.16632
Ha Thanh Nguyen
Ha-Thanh Nguyen, Francesca Toni, Kostas Stathis, Ken Satoh
Beyond Logic Programming for Legal Reasoning
Workshop on Logic Programming and Legal Reasoning, @ICLP 2023
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logic programming has long been advocated for legal reasoning, and several approaches have been put forward relying upon explicit representation of the law in logic programming terms. In this position paper we focus on the PROLEG logic-programming-based framework for formalizing and reasoning with the Japanese presupposed ultimate fact theory. Specifically, we examine challenges and opportunities in leveraging deep learning techniques for improving legal reasoning using PROLEG, identifying four distinct options ranging from enhancing fact extraction using deep learning to end-to-end solutions for reasoning with textual legal descriptions. We assess the advantages and limitations of each option, considering their technical feasibility, interpretability, and alignment with the needs of legal practitioners and decision-makers. We believe that our analysis can serve as a guideline for developers aiming to build effective decision-support systems for the legal domain, while fostering a deeper understanding of the challenges and potential advancements offered by neuro-symbolic approaches in legal applications.
[ { "created": "Thu, 29 Jun 2023 02:12:18 GMT", "version": "v1" } ]
2023-06-30
[ [ "Nguyen", "Ha-Thanh", "" ], [ "Toni", "Francesca", "" ], [ "Stathis", "Kostas", "" ], [ "Satoh", "Ken", "" ] ]
Logic programming has long been advocated for legal reasoning, and several approaches have been put forward relying upon explicit representation of the law in logic programming terms. In this position paper we focus on the PROLEG logic-programming-based framework for formalizing and reasoning with the Japanese presupposed ultimate fact theory. Specifically, we examine challenges and opportunities in leveraging deep learning techniques for improving legal reasoning using PROLEG, identifying four distinct options ranging from enhancing fact extraction using deep learning to end-to-end solutions for reasoning with textual legal descriptions. We assess the advantages and limitations of each option, considering their technical feasibility, interpretability, and alignment with the needs of legal practitioners and decision-makers. We believe that our analysis can serve as a guideline for developers aiming to build effective decision-support systems for the legal domain, while fostering a deeper understanding of the challenges and potential advancements offered by neuro-symbolic approaches in legal applications.