column          type           range
id              stringlengths  9–10
submitter       stringlengths  1–64
authors         stringlengths  4–20.7k
title           stringlengths  4–246
comments        stringlengths  1–523
journal-ref     stringlengths  4–404
doi             stringlengths  11–153
report-no       stringlengths  2–254
categories      stringlengths  5–98
license         stringclasses  9 values
orig_abstract   stringlengths  14–3.35k
versions        listlengths    1–60
update_date     stringlengths  10–10
authors_parsed  listlengths    1–1.35k
abstract        stringlengths  11–3.34k
1403.7698
Ramani Duraiswami
Nail A. Gumerov and Ramani Duraiswami
Recursive computation of spherical harmonic rotation coefficients of large degree
null
null
null
University of Maryland Institute for Advanced Computer Studies, UMIACS-TR-2014-4; Department of Computer Science CS-TR-5037
cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computation of the spherical harmonic rotation coefficients, or elements of Wigner's d-matrix, is important in a number of quantum mechanics and mathematical physics applications. In particular, it is important for the Fast Multipole Methods in three dimensions for the Helmholtz, Laplace and related equations, if rotation-based decomposition of the translation operators is used. In these and related problems involving the representation of functions on a sphere via spherical harmonic expansions, computation of the rotation coefficients of large degree $n$ (of the order of thousands and more) may be necessary. Existing algorithms for their computation, based on recursions, are usually unstable and do not extend to large $n$. We develop a new recursion and study its behavior for large degrees via computational and asymptotic analyses. Stability of this recursion is studied based on a novel application of the Courant-Friedrichs-Lewy condition and the von Neumann method for stability of finite-difference schemes for the solution of PDEs. A recursive algorithm of minimal complexity $O\left(n^{2}\right)$ for degree $n$ and FFT-based algorithms of complexity $O\left(n^{2}\log n\right)$ suitable for computation of rotation coefficients of large degrees are proposed, studied numerically, and cross-validated. It is shown that the latter algorithm can be used for $n\lesssim 10^{3}$ in double precision, while the former algorithm was tested for large $n$ (up to $10^{4}$ in our experiments) and demonstrated better performance and accuracy compared to the FFT-based algorithm.
[ { "created": "Sun, 30 Mar 2014 04:13:46 GMT", "version": "v1" } ]
2014-04-01
[ [ "Gumerov", "Nail A.", "" ], [ "Duraiswami", "Ramani", "" ] ]
Computation of the spherical harmonic rotation coefficients, or elements of Wigner's d-matrix, is important in a number of quantum mechanics and mathematical physics applications. In particular, it is important for the Fast Multipole Methods in three dimensions for the Helmholtz, Laplace and related equations, if rotation-based decomposition of the translation operators is used. In these and related problems involving the representation of functions on a sphere via spherical harmonic expansions, computation of the rotation coefficients of large degree $n$ (of the order of thousands and more) may be necessary. Existing algorithms for their computation, based on recursions, are usually unstable and do not extend to large $n$. We develop a new recursion and study its behavior for large degrees via computational and asymptotic analyses. Stability of this recursion is studied based on a novel application of the Courant-Friedrichs-Lewy condition and the von Neumann method for stability of finite-difference schemes for the solution of PDEs. A recursive algorithm of minimal complexity $O\left(n^{2}\right)$ for degree $n$ and FFT-based algorithms of complexity $O\left(n^{2}\log n\right)$ suitable for computation of rotation coefficients of large degrees are proposed, studied numerically, and cross-validated. It is shown that the latter algorithm can be used for $n\lesssim 10^{3}$ in double precision, while the former algorithm was tested for large $n$ (up to $10^{4}$ in our experiments) and demonstrated better performance and accuracy compared to the FFT-based algorithm.
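The abstract's new recursion is not spelled out here, but the quantity it computes can be pinned down with a direct-summation reference implementation. The sketch below evaluates a Wigner small-d element from the standard explicit factorial sum (a hedged illustration useful only for cross-validation at small degree; the function name `wigner_d` is ours, and this direct sum is exactly the kind of computation that becomes unstable and expensive at the large $n$ the paper targets):

```python
# Reference evaluation of a Wigner small-d element d^n_{m',m}(beta) via the
# standard explicit factorial sum. Reliable only for small degree n: the
# factorial prefactor overflows floats quickly, which is one reason stable
# recursions are needed for n in the thousands.
from math import cos, sin, factorial

def wigner_d(n, mp, m, beta):
    pref = (factorial(n + mp) * factorial(n - mp)
            * factorial(n + m) * factorial(n - m)) ** 0.5
    total = 0.0
    for s in range(max(0, m - mp), min(n + m, n - mp) + 1):
        den = (factorial(n + m - s) * factorial(s)
               * factorial(mp - m + s) * factorial(n - mp - s))
        total += ((-1) ** (mp - m + s) / den
                  * cos(beta / 2) ** (2 * n + m - mp - 2 * s)
                  * sin(beta / 2) ** (mp - m + 2 * s))
    return pref * total

# sanity check against the closed form d^1_{0,0}(beta) = cos(beta)
assert abs(wigner_d(1, 0, 0, 0.7) - cos(0.7)) < 1e-12
```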
1503.03653
Chang Yao
Chang Yao, Divyakant Agrawal, Gang Chen, Beng Chin Ooi, Sai Wu
Adaptive Logging for Distributed In-memory Databases
13 pages
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new type of log, the command log, is being employed to replace the traditional data log (e.g., the ARIES log) in in-memory databases. Instead of recording how tuples are updated, a command log only tracks the transactions being executed, thereby effectively reducing the size of the log and improving performance. Command logging, on the other hand, increases the cost of recovery, because all the transactions in the log after the last checkpoint must be completely redone in case of a failure. In this paper, we first extend the command logging technique to a distributed environment, where all the nodes can perform recovery in parallel. We then propose an adaptive logging approach that combines data logging and command logging. The percentage of data logging versus command logging becomes an optimization between the performance of transaction processing and recovery, to suit different OLTP applications. Our experimental study compares the performance of our proposed adaptive logging, ARIES-style data logging and command logging on top of H-Store. The results show that adaptive logging can achieve a 10x boost for recovery and a transaction throughput that is comparable to that of command logging.
[ { "created": "Thu, 12 Mar 2015 09:53:17 GMT", "version": "v1" }, { "created": "Mon, 27 Apr 2015 05:30:24 GMT", "version": "v2" } ]
2015-04-28
[ [ "Yao", "Chang", "" ], [ "Agrawal", "Divyakant", "" ], [ "Chen", "Gang", "" ], [ "Ooi", "Beng Chin", "" ], [ "Wu", "Sai", "" ] ]
A new type of log, the command log, is being employed to replace the traditional data log (e.g., the ARIES log) in in-memory databases. Instead of recording how tuples are updated, a command log only tracks the transactions being executed, thereby effectively reducing the size of the log and improving performance. Command logging, on the other hand, increases the cost of recovery, because all the transactions in the log after the last checkpoint must be completely redone in case of a failure. In this paper, we first extend the command logging technique to a distributed environment, where all the nodes can perform recovery in parallel. We then propose an adaptive logging approach that combines data logging and command logging. The percentage of data logging versus command logging becomes an optimization between the performance of transaction processing and recovery, to suit different OLTP applications. Our experimental study compares the performance of our proposed adaptive logging, ARIES-style data logging and command logging on top of H-Store. The results show that adaptive logging can achieve a 10x boost for recovery and a transaction throughput that is comparable to that of command logging.
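As a rough illustration of the trade-off described above (our own sketch, not H-Store's actual log format), a data-log record carries per-tuple before/after images while a command-log record carries only what is needed to re-execute the transaction; adaptive logging picks between them per transaction:

```python
# Minimal sketch contrasting the two record types the abstract combines:
# a data-log record stores per-tuple images and can be replayed tuple by
# tuple, while a command-log record is tiny but forces recovery to redo
# the whole transaction. All names and the policy are hypothetical.
from dataclasses import dataclass
from typing import Any

@dataclass
class DataLogRecord:            # ARIES-style: large, replayable per tuple
    txn_id: int
    table: str
    key: Any
    before_image: dict
    after_image: dict

@dataclass
class CommandLogRecord:         # command logging: small, redo-by-re-execution
    txn_id: int
    procedure: str              # stored procedure to re-execute on recovery
    params: tuple

def choose_record(txn_id, data_log_fraction=0.2):
    # hypothetical policy: hash-based sampling at the target data-logging ratio,
    # trading runtime logging cost against recovery cost
    return "data" if (txn_id % 100) < data_log_fraction * 100 else "command"
```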
1412.7063
Kartik Audhkhasi
Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran
Diverse Embedding Neural Network Language Models
Under review as a workshop contribution at ICLR 2015
null
null
null
cs.CL cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose Diverse Embedding Neural Network (DENN), a novel architecture for language models (LMs). A DENNLM projects the input word history vector onto multiple diverse low-dimensional sub-spaces instead of a single higher-dimensional sub-space as in conventional feed-forward neural network LMs. We encourage these sub-spaces to be diverse during network training through an augmented loss function. Our language modeling experiments on the Penn Treebank data set show the performance benefit of using a DENNLM.
[ { "created": "Mon, 22 Dec 2014 17:19:56 GMT", "version": "v1" }, { "created": "Wed, 7 Jan 2015 21:33:46 GMT", "version": "v2" }, { "created": "Mon, 19 Jan 2015 19:53:21 GMT", "version": "v3" }, { "created": "Wed, 25 Feb 2015 21:55:15 GMT", "version": "v4" }, { "created": "Wed, 15 Apr 2015 20:07:50 GMT", "version": "v5" } ]
2015-04-17
[ [ "Audhkhasi", "Kartik", "" ], [ "Sethy", "Abhinav", "" ], [ "Ramabhadran", "Bhuvana", "" ] ]
We propose Diverse Embedding Neural Network (DENN), a novel architecture for language models (LMs). A DENNLM projects the input word history vector onto multiple diverse low-dimensional sub-spaces instead of a single higher-dimensional sub-space as in conventional feed-forward neural network LMs. We encourage these sub-spaces to be diverse during network training through an augmented loss function. Our language modeling experiments on the Penn Treebank data set show the performance benefit of using a DENNLM.
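The abstract does not give the augmented loss; one plausible instantiation (a hedged sketch, not the paper's exact objective) projects the history vector through several small matrices and penalizes overlap between the projection matrices so the sub-spaces stay diverse:

```python
# Sketch of the idea behind DENN's diverse sub-space projections. The
# orthogonality penalty below is one plausible choice of diversity term,
# assumed for illustration; dimensions and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
d, k, K = 128, 16, 4                        # input dim, subspace dim, num subspaces
P = [rng.standard_normal((k, d)) / np.sqrt(d) for _ in range(K)]

def project(h):
    return [W @ h for W in P]               # K diverse low-dimensional views of h

def diversity_penalty():
    # encourage the K sub-spaces to differ: penalize pairwise overlap of rows
    loss = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            loss += np.linalg.norm(P[i] @ P[j].T, "fro") ** 2
    return loss

h = rng.standard_normal(d)                  # stand-in for a word-history vector
subs = project(h)
total_loss = 0.0                            # the LM's NLL term would go here
total_loss += 1e-3 * diversity_penalty()    # augmented diversity term
```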
1710.06559
Asahi Takaoka
Asahi Takaoka
A recognition algorithm for simple-triangle graphs
Revised; results unchanged, reference changed. 12 pages, 12pt, 1 figure
null
null
null
cs.DM cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simple-triangle graph is the intersection graph of triangles that are defined by a point on a horizontal line and an interval on another horizontal line. The time complexity of the recognition problem for simple-triangle graphs was a longstanding open problem, which was recently settled. This paper provides a new recognition algorithm for simple-triangle graphs to improve the time bound from $O(n^2 \overline{m})$ to $O(nm)$, where $n$, $m$, and $\overline{m}$ are the number of vertices, edges, and non-edges of the graph, respectively. The algorithm uses the vertex ordering characterization that a graph is a simple-triangle graph if and only if there is a linear ordering of the vertices containing both an alternating orientation of the graph and a transitive orientation of the complement of the graph. We also show, as a byproduct, that an alternating orientation can be obtained in $O(nm)$ time for cocomparability graphs, and it is NP-complete to decide whether a graph has an orientation that is alternating and acyclic.
[ { "created": "Wed, 18 Oct 2017 02:30:04 GMT", "version": "v1" }, { "created": "Wed, 19 Sep 2018 16:04:14 GMT", "version": "v2" } ]
2018-09-20
[ [ "Takaoka", "Asahi", "" ] ]
A simple-triangle graph is the intersection graph of triangles that are defined by a point on a horizontal line and an interval on another horizontal line. The time complexity of the recognition problem for simple-triangle graphs was a longstanding open problem, which was recently settled. This paper provides a new recognition algorithm for simple-triangle graphs to improve the time bound from $O(n^2 \overline{m})$ to $O(nm)$, where $n$, $m$, and $\overline{m}$ are the number of vertices, edges, and non-edges of the graph, respectively. The algorithm uses the vertex ordering characterization that a graph is a simple-triangle graph if and only if there is a linear ordering of the vertices containing both an alternating orientation of the graph and a transitive orientation of the complement of the graph. We also show, as a byproduct, that an alternating orientation can be obtained in $O(nm)$ time for cocomparability graphs, and it is NP-complete to decide whether a graph has an orientation that is alternating and acyclic.
2202.05664
Ante Sikirica
Ivana Lu\v{c}in, Sini\v{s}a Dru\v{z}eta, Goran Mau\v{s}a, Marta Alvir, Luka Grb\v{c}i\'c, Darija Vuki\'c Lu\v{s}i\'c, Ante Sikirica, Lado Kranj\v{c}evi\'c
Predictive modeling of microbiological seawater quality classification in karst region using cascade model
Submitted to Marine Pollution Bulletin
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, an in-depth analysis of Escherichia coli seawater measurements during the bathing season in the city of Rijeka, Croatia, was conducted. Submerged sources of groundwater were observed at several measurement locations, which could be the cause of increased E. coli values. This specificity of karst terrain is usually not considered during the monitoring process, thus a novel measurement methodology is proposed. A cascade machine learning model is used to predict coastal water quality based on meteorological data; this improves accuracy in the presence of the data imbalance that results from the rarity of measurements with reduced water quality. Currently, the cascade model is employed as a filter method, where measurements not classified as excellent quality need to be further analyzed. However, with the improvements proposed in the paper, the cascade model could ultimately be used as a standalone method.
[ { "created": "Fri, 11 Feb 2022 15:03:31 GMT", "version": "v1" } ]
2022-02-14
[ [ "Lučin", "Ivana", "" ], [ "Družeta", "Siniša", "" ], [ "Mauša", "Goran", "" ], [ "Alvir", "Marta", "" ], [ "Grbčić", "Luka", "" ], [ "Lušić", "Darija Vukić", "" ], [ "Sikirica", "Ante", "" ], [ "Kranjčević", "Lado", "" ] ]
In this paper, an in-depth analysis of Escherichia coli seawater measurements during the bathing season in the city of Rijeka, Croatia, was conducted. Submerged sources of groundwater were observed at several measurement locations, which could be the cause of increased E. coli values. This specificity of karst terrain is usually not considered during the monitoring process, thus a novel measurement methodology is proposed. A cascade machine learning model is used to predict coastal water quality based on meteorological data; this improves accuracy in the presence of the data imbalance that results from the rarity of measurements with reduced water quality. Currently, the cascade model is employed as a filter method, where measurements not classified as excellent quality need to be further analyzed. However, with the improvements proposed in the paper, the cascade model could ultimately be used as a standalone method.
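A minimal sketch of the cascade-as-filter idea (our illustration; the models, features, and class coding are hypothetical): a first stage flags everything that is not predicted "excellent", and only flagged samples reach the second, finer-grained stage:

```python
# Two-stage cascade of the kind the abstract describes. Inputs are numpy
# arrays: X holds meteorological features, quality holds class labels with
# 0 = excellent and higher values = progressively worse quality (assumed
# coding for illustration).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_cascade(X, quality):
    stage1 = RandomForestClassifier().fit(X, (quality > 0).astype(int))
    mask = quality > 0                       # train stage 2 on non-excellent only
    stage2 = RandomForestClassifier().fit(X[mask], quality[mask])
    return stage1, stage2

def predict_cascade(stage1, stage2, X):
    flagged = stage1.predict(X).astype(bool) # True -> needs further analysis
    out = np.zeros(len(X), dtype=int)        # default prediction: excellent
    if flagged.any():
        out[flagged] = stage2.predict(X[flagged])
    return out
```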
1103.0171
Amir Ingber
Amir Ingber, Ram Zamir, Meir Feder
Finite Dimensional Infinite Constellations
54 pages, 13 figures. Submitted to IEEE Transactions on Information Theory
IEEE Trans. on Information Theory, Vol. 59 ,Issue 3, pp. 1630-1656, 2013
10.1109/TIT.2012.2224145
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the setting of a Gaussian channel without power constraints, proposed by Poltyrev, the codewords are points in an n-dimensional Euclidean space (an infinite constellation) and the tradeoff between their density and the error probability is considered. The capacity in this setting is the highest achievable normalized log density (NLD) with vanishing error probability. This capacity, as well as error exponent bounds for this setting, are known. In this work we consider the optimal performance achievable in the fixed blocklength (dimension) regime. We provide two new achievability bounds, and extend the validity of the sphere bound to finite dimensional infinite constellations. We also provide asymptotic analysis of the bounds: when the NLD is fixed, we provide asymptotic expansions for the bounds that are significantly tighter than the previously known error exponent results. When the error probability is fixed, we show that as n grows, the gap to capacity is inversely proportional (up to the first order) to the square root of n, where the proportionality constant is given by the inverse Q-function of the allowed error probability times the square root of 1/2. In analogy to a similar result in channel coding, the dispersion of infinite constellations is 1/2 nat^2 per channel use. All our achievability results use lattices and therefore hold for the maximal error probability as well. Connections to the error exponent of the power constrained Gaussian channel and to the volume-to-noise ratio as a figure of merit are discussed. In addition, we demonstrate the tightness of the results numerically and compare to state-of-the-art coding schemes.
[ { "created": "Tue, 1 Mar 2011 14:05:30 GMT", "version": "v1" }, { "created": "Wed, 27 Jul 2011 19:38:35 GMT", "version": "v2" }, { "created": "Mon, 5 Sep 2011 12:18:33 GMT", "version": "v3" } ]
2013-02-28
[ [ "Ingber", "Amir", "" ], [ "Zamir", "Ram", "" ], [ "Feder", "Meir", "" ] ]
In the setting of a Gaussian channel without power constraints, proposed by Poltyrev, the codewords are points in an n-dimensional Euclidean space (an infinite constellation) and the tradeoff between their density and the error probability is considered. The capacity in this setting is the highest achievable normalized log density (NLD) with vanishing error probability. This capacity, as well as error exponent bounds for this setting, are known. In this work we consider the optimal performance achievable in the fixed blocklength (dimension) regime. We provide two new achievability bounds, and extend the validity of the sphere bound to finite dimensional infinite constellations. We also provide asymptotic analysis of the bounds: when the NLD is fixed, we provide asymptotic expansions for the bounds that are significantly tighter than the previously known error exponent results. When the error probability is fixed, we show that as n grows, the gap to capacity is inversely proportional (up to the first order) to the square root of n, where the proportionality constant is given by the inverse Q-function of the allowed error probability times the square root of 1/2. In analogy to a similar result in channel coding, the dispersion of infinite constellations is 1/2 nat^2 per channel use. All our achievability results use lattices and therefore hold for the maximal error probability as well. Connections to the error exponent of the power constrained Gaussian channel and to the volume-to-noise ratio as a figure of merit are discussed. In addition, we demonstrate the tightness of the results numerically and compare to state-of-the-art coding schemes.
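The fixed-error-probability result above is directly computable. Assuming the stated first-order expression $\delta_n \approx Q^{-1}(\epsilon)\sqrt{1/(2n)}$ nats (dispersion 1/2 nat^2 per channel use), a few lines evaluate the gap to capacity:

```python
# First-order gap to capacity from the abstract: for error probability eps
# and dimension n, the NLD gap is approximately Qinv(eps) * sqrt(1/(2n))
# nats, where Qinv is the inverse Gaussian tail function.
from scipy.stats import norm

def nld_gap(n, eps):
    return norm.isf(eps) * (1.0 / (2.0 * n)) ** 0.5

for n in (100, 1000, 10000):
    print(n, nld_gap(n, 1e-3))   # the gap shrinks like 1/sqrt(n)
```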
2301.06083
Yuntian Chen
Qian Li, Yuxiao Hu, Ye Liu, Dongxiao Zhang, Xin Jin, Yuntian Chen
Discrete Point-wise Attack Is Not Enough: Generalized Manifold Adversarial Attack for Face Recognition
Accepted by CVPR2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical adversarial attacks for Face Recognition (FR) models typically generate discrete examples for a target identity with a single state image. However, such a point-wise attack paradigm exhibits poor generalization against the numerous unknown states of an identity and can be easily defended. In this paper, by rethinking the inherent relationship between the face of a target identity and its variants, we introduce a new pipeline of Generalized Manifold Adversarial Attack (GMAA) that achieves better attack performance by expanding the attack range. Specifically, this expansion lies in two aspects: GMAA not only expands the target to be attacked from one state to many, to encourage good generalization ability of the generated adversarial examples, but also expands the latter from discrete points to a manifold by leveraging the domain knowledge that facial expression change can be continuous, which enhances the attack effect as a data augmentation mechanism would. Moreover, we further design a dual supervision with local and global constraints as a minor contribution to improve the visual quality of the generated adversarial examples. We demonstrate the effectiveness of our method with extensive experiments, and reveal that GMAA promises a semantically continuous adversarial space with higher generalization ability and visual quality.
[ { "created": "Mon, 19 Dec 2022 02:57:55 GMT", "version": "v1" }, { "created": "Sat, 8 Apr 2023 02:47:42 GMT", "version": "v2" } ]
2023-04-11
[ [ "Li", "Qian", "" ], [ "Hu", "Yuxiao", "" ], [ "Liu", "Ye", "" ], [ "Zhang", "Dongxiao", "" ], [ "Jin", "Xin", "" ], [ "Chen", "Yuntian", "" ] ]
Classical adversarial attacks for Face Recognition (FR) models typically generate discrete examples for a target identity with a single state image. However, such a point-wise attack paradigm exhibits poor generalization against the numerous unknown states of an identity and can be easily defended. In this paper, by rethinking the inherent relationship between the face of a target identity and its variants, we introduce a new pipeline of Generalized Manifold Adversarial Attack (GMAA) that achieves better attack performance by expanding the attack range. Specifically, this expansion lies in two aspects: GMAA not only expands the target to be attacked from one state to many, to encourage good generalization ability of the generated adversarial examples, but also expands the latter from discrete points to a manifold by leveraging the domain knowledge that facial expression change can be continuous, which enhances the attack effect as a data augmentation mechanism would. Moreover, we further design a dual supervision with local and global constraints as a minor contribution to improve the visual quality of the generated adversarial examples. We demonstrate the effectiveness of our method with extensive experiments, and reveal that GMAA promises a semantically continuous adversarial space with higher generalization ability and visual quality.
1410.5993
Henning Schnoor
Henning Schnoor
The Relative Succinctness and Expressiveness of Modal Logics Can Be Arbitrarily Complex
29 pages
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the relative succinctness and expressiveness of modal logics, and prove that these relationships can be as complex as any countable partial order. For this, we use two uniform formalisms to define modal operators, and obtain results on succinctness and expressiveness in both settings. Our proofs are based on formula size games, introduced by Adler and Immerman, and on bisimulations.
[ { "created": "Wed, 22 Oct 2014 10:59:54 GMT", "version": "v1" } ]
2014-10-23
[ [ "Schnoor", "Henning", "" ] ]
We study the relative succinctness and expressiveness of modal logics, and prove that these relationships can be as complex as any countable partial order. For this, we use two uniform formalisms to define modal operators, and obtain results on succinctness and expressiveness in both settings. Our proofs are based on formula size games, introduced by Adler and Immerman, and on bisimulations.
1906.07063
Linh Nguyen PhD
Linh Nguyen and Hoc T. Nguyen
Mobility based network lifetime in wireless sensor networks: A review
null
null
null
null
cs.NI cs.CY cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emerging technologies in micro-electromechanical systems and wireless communications increasingly allow mobile wireless sensor networks (MWSNs) to become a more and more powerful means in many applications such as habitat and environmental monitoring, traffic observation, battlefield surveillance, smart homes and smart cities. Nevertheless, due to sensor battery constraints, operating an MWSN energy-efficiently is of paramount importance in those applications, and a plethora of approaches have been proposed to extend network lifetime as much as possible. This paper therefore provides a comprehensive review of the methods that exploit the mobility of sensor nodes and/or sink(s) to effectively maximize the lifetime of an MWSN. The survey systematically classifies the algorithms into categories in which the MWSN is equipped with mobile sensor nodes, one mobile sink, or multiple mobile sinks. How to drive the mobile sink(s) for energy efficiency in the network is also fully reviewed and reported.
[ { "created": "Tue, 4 Jun 2019 00:48:37 GMT", "version": "v1" }, { "created": "Mon, 12 Aug 2019 01:47:13 GMT", "version": "v2" } ]
2019-08-13
[ [ "Nguyen", "Linh", "" ], [ "Nguyen", "Hoc T.", "" ] ]
Emerging technologies in micro-electromechanical systems and wireless communications increasingly allow mobile wireless sensor networks (MWSNs) to become a more and more powerful means in many applications such as habitat and environmental monitoring, traffic observation, battlefield surveillance, smart homes and smart cities. Nevertheless, due to sensor battery constraints, operating an MWSN energy-efficiently is of paramount importance in those applications, and a plethora of approaches have been proposed to extend network lifetime as much as possible. This paper therefore provides a comprehensive review of the methods that exploit the mobility of sensor nodes and/or sink(s) to effectively maximize the lifetime of an MWSN. The survey systematically classifies the algorithms into categories in which the MWSN is equipped with mobile sensor nodes, one mobile sink, or multiple mobile sinks. How to drive the mobile sink(s) for energy efficiency in the network is also fully reviewed and reported.
2402.05131
Antonio Jose Jimeno Yepes
Antonio Jimeno Yepes, Yao You, Jan Milczek, Sebastian Laverde, and Renyu Li
Financial Report Chunking for Effective Retrieval Augmented Generation
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Chunking information is a key step in Retrieval Augmented Generation (RAG). Current research primarily centers on paragraph-level chunking. This approach treats all texts as equal and neglects the information contained in the structure of documents. We propose an expanded approach that moves beyond mere paragraph-level chunking to chunk documents primarily by their structural element components. Dissecting documents into these constituent elements creates a new way to chunk documents that yields the best chunk size without tuning. We introduce a novel framework that evaluates how chunking based on element types, annotated by document understanding models, contributes to the overall context and accuracy of the information retrieved. We also demonstrate how this approach impacts the performance of RAG-assisted Question & Answer tasks. Our research includes a comprehensive analysis of various element types, their role in effective information retrieval, and the impact they have on the quality of RAG outputs. Our findings support that element-type-based chunking largely improves RAG results on financial reporting. Through this research, we are also able to answer how to achieve highly accurate RAG.
[ { "created": "Mon, 5 Feb 2024 22:35:42 GMT", "version": "v1" }, { "created": "Sat, 10 Feb 2024 10:55:15 GMT", "version": "v2" }, { "created": "Sat, 16 Mar 2024 09:08:26 GMT", "version": "v3" } ]
2024-03-19
[ [ "Yepes", "Antonio Jimeno", "" ], [ "You", "Yao", "" ], [ "Milczek", "Jan", "" ], [ "Laverde", "Sebastian", "" ], [ "Li", "Renyu", "" ] ]
Chunking information is a key step in Retrieval Augmented Generation (RAG). Current research primarily centers on paragraph-level chunking. This approach treats all texts as equal and neglects the information contained in the structure of documents. We propose an expanded approach that moves beyond mere paragraph-level chunking to chunk documents primarily by their structural element components. Dissecting documents into these constituent elements creates a new way to chunk documents that yields the best chunk size without tuning. We introduce a novel framework that evaluates how chunking based on element types, annotated by document understanding models, contributes to the overall context and accuracy of the information retrieved. We also demonstrate how this approach impacts the performance of RAG-assisted Question & Answer tasks. Our research includes a comprehensive analysis of various element types, their role in effective information retrieval, and the impact they have on the quality of RAG outputs. Our findings support that element-type-based chunking largely improves RAG results on financial reporting. Through this research, we are also able to answer how to achieve highly accurate RAG.
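A minimal sketch of element-based chunking (our illustration; the element taxonomy is hypothetical): instead of fixed-size paragraph splits, a new chunk starts whenever the structural element type changes, so titles, tables, and narrative text are never mixed inside one retrieval unit:

```python
# Group the output of a document-understanding parser into chunks at
# element-type boundaries. Element type names here are assumptions, not
# the paper's taxonomy.
def chunk_by_element(elements):
    """elements: list of (element_type, text) pairs in reading order."""
    chunks, current, current_type = [], [], None
    for etype, text in elements:
        if etype != current_type and current:
            chunks.append((current_type, " ".join(current)))
            current = []
        current.append(text)
        current_type = etype
    if current:
        chunks.append((current_type, " ".join(current)))
    return chunks

doc = [("title", "Item 7. Management's Discussion"),
       ("narrative", "Revenue increased ..."),
       ("narrative", "driven by ..."),
       ("table", "Year | Revenue | ...")]
print(chunk_by_element(doc))   # three chunks: title, narrative, table
```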
1807.07156
Fahad Panolan
Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh
Approximation Schemes for Low-Rank Binary Matrix Approximation Problems
38 pages
null
null
null
cs.DS cs.CC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a randomized linear time approximation scheme for a generic problem about clustering of binary vectors subject to additional constraints. The new constrained clustering problem encompasses a number of problems; by solving it, we obtain the first linear-time approximation schemes for a number of well-studied fundamental problems concerning clustering of binary vectors and low-rank approximation of binary matrices. Among the problems solvable by our approach are \textsc{Low GF(2)-Rank Approximation}, \textsc{Low Boolean-Rank Approximation}, and various versions of \textsc{Binary Clustering}. For example, for the \textsc{Low GF(2)-Rank Approximation} problem, where for an $m\times n$ binary matrix $A$ and integer $r>0$ we seek a binary matrix $B$ of $GF_2$ rank at most $r$ such that the $\ell_0$ norm of the matrix $A-B$ is minimized, our algorithm, for any $\epsilon>0$, in time $f(r,\epsilon)\cdot n\cdot m$, where $f$ is some computable function, outputs a $(1+\epsilon)$-approximate solution with probability at least $(1-\frac{1}{e})$. Our approximation algorithms substantially improve the running times and approximation factors of previous works. We also give (deterministic) PTASes for these problems running in time $n^{f(r)\frac{1}{\epsilon^2}\log \frac{1}{\epsilon}}$, where $f$ is some function depending on the problem. Our algorithm for the constrained clustering problem is based on a novel sampling lemma, which is interesting in its own right.
[ { "created": "Wed, 18 Jul 2018 21:11:35 GMT", "version": "v1" } ]
2018-07-20
[ [ "Fomin", "Fedor V.", "" ], [ "Golovach", "Petr A.", "" ], [ "Lokshtanov", "Daniel", "" ], [ "Panolan", "Fahad", "" ], [ "Saurabh", "Saket", "" ] ]
We provide a randomized linear time approximation scheme for a generic problem about clustering of binary vectors subject to additional constraints. The new constrained clustering problem encompasses a number of problems; by solving it, we obtain the first linear-time approximation schemes for a number of well-studied fundamental problems concerning clustering of binary vectors and low-rank approximation of binary matrices. Among the problems solvable by our approach are \textsc{Low GF(2)-Rank Approximation}, \textsc{Low Boolean-Rank Approximation}, and various versions of \textsc{Binary Clustering}. For example, for the \textsc{Low GF(2)-Rank Approximation} problem, where for an $m\times n$ binary matrix $A$ and integer $r>0$ we seek a binary matrix $B$ of $GF_2$ rank at most $r$ such that the $\ell_0$ norm of the matrix $A-B$ is minimized, our algorithm, for any $\epsilon>0$, in time $f(r,\epsilon)\cdot n\cdot m$, where $f$ is some computable function, outputs a $(1+\epsilon)$-approximate solution with probability at least $(1-\frac{1}{e})$. Our approximation algorithms substantially improve the running times and approximation factors of previous works. We also give (deterministic) PTASes for these problems running in time $n^{f(r)\frac{1}{\epsilon^2}\log \frac{1}{\epsilon}}$, where $f$ is some function depending on the problem. Our algorithm for the constrained clustering problem is based on a novel sampling lemma, which is interesting in its own right.
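To make the \textsc{Low GF(2)-Rank Approximation} objective concrete, the sketch below (our illustration, not the paper's algorithm) computes the GF(2) rank of a binary matrix by Gaussian elimination over $\mathbb{F}_2$ and the $\ell_0$ cost of a candidate approximation:

```python
# GF(2) rank via Gaussian elimination over F_2, plus the entrywise l0 cost
# |A - B| that Low GF(2)-Rank Approximation minimizes over rank-<=r matrices B.
import numpy as np

def gf2_rank(M):
    M = M.copy() % 2
    rank, rows, cols = 0, *M.shape
    for c in range(cols):
        pivots = np.nonzero(M[rank:, c])[0]
        if len(pivots) == 0:
            continue
        p = rank + pivots[0]
        M[[rank, p]] = M[[p, rank]]                  # move pivot row up
        below = np.nonzero(M[rank + 1:, c])[0] + rank + 1
        M[below] ^= M[rank]                          # eliminate below the pivot
        rank += 1
        if rank == rows:
            break
    return rank

def l0_cost(A, B):
    return int(np.sum(A != B))

A = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
B = np.array([[1, 0, 1], [1, 0, 1], [1, 0, 1]])      # a rank-1 candidate over GF(2)
print(gf2_rank(A), gf2_rank(B), l0_cost(A, B))       # 2 1 4
```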
2009.06586
Yunhao Ge
Yunhao Ge, Sami Abu-El-Haija, Gan Xin and Laurent Itti
Zero-shot Synthesis with Group-Supervised Learning
Published at ICLR 2021 (16 pages including appendix)
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual cognition of primates is superior to that of artificial neural networks in its ability to 'envision' a visual object, even a newly-introduced one, in different attributes including pose, position, color, texture, etc. To aid neural networks in envisioning objects with different attributes, we propose a family of objective functions, expressed on groups of examples, as a novel learning framework that we term Group-Supervised Learning (GSL). GSL allows us to decompose inputs into a disentangled representation with swappable components that can be recombined to synthesize new samples. For instance, images of red boats & blue cars can be decomposed and recombined to synthesize novel images of red cars. We propose an implementation based on an auto-encoder, termed group-supervised zero-shot synthesis network (GZS-Net), trained with our learning framework, that can produce a high-quality red car even if no such example is witnessed during training. We test our model and learning framework on existing benchmarks, in addition to a new dataset that we open-source. We qualitatively and quantitatively demonstrate that GZS-Net trained with GSL outperforms state-of-the-art methods.
[ { "created": "Mon, 14 Sep 2020 17:17:49 GMT", "version": "v1" }, { "created": "Thu, 10 Dec 2020 19:43:03 GMT", "version": "v2" }, { "created": "Tue, 16 Feb 2021 21:19:12 GMT", "version": "v3" } ]
2021-02-18
[ [ "Ge", "Yunhao", "" ], [ "Abu-El-Haija", "Sami", "" ], [ "Xin", "Gan", "" ], [ "Itti", "Laurent", "" ] ]
Visual cognition of primates is superior to that of artificial neural networks in its ability to 'envision' a visual object, even a newly-introduced one, in different attributes including pose, position, color, texture, etc. To aid neural networks in envisioning objects with different attributes, we propose a family of objective functions, expressed on groups of examples, as a novel learning framework that we term Group-Supervised Learning (GSL). GSL allows us to decompose inputs into a disentangled representation with swappable components that can be recombined to synthesize new samples. For instance, images of red boats & blue cars can be decomposed and recombined to synthesize novel images of red cars. We propose an implementation based on an auto-encoder, termed group-supervised zero-shot synthesis network (GZS-Net), trained with our learning framework, that can produce a high-quality red car even if no such example is witnessed during training. We test our model and learning framework on existing benchmarks, in addition to a new dataset that we open-source. We qualitatively and quantitatively demonstrate that GZS-Net trained with GSL outperforms state-of-the-art methods.
1001.5073
Hosein Mohimani
Hosein Mohimani, Massoud Babaie-Zadeh, Irina Gorodnitsky, and Christian Jutten
Sparse Recovery using Smoothed $\ell^0$ (SL0): Convergence Analysis
Journal
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding the sparse solution of an underdetermined system of linear equations has many applications; in particular, it is used in Compressed Sensing (CS), Sparse Component Analysis (SCA), and sparse decomposition of signals on overcomplete dictionaries. We have recently proposed a fast algorithm, called Smoothed $\ell^0$ (SL0), for this task. Contrary to many other sparse recovery algorithms, SL0 is not based on minimizing the $\ell^1$ norm, but it tries to directly minimize the $\ell^0$ norm of the solution. The basic idea of SL0 is optimizing a sequence of certain (continuous) cost functions approximating the $\ell^0$ norm of a vector. However, in previous papers, we did not provide a complete convergence proof for SL0. In this paper, we study the convergence properties of SL0, and show that under a certain sparsity constraint in terms of the Asymmetric Restricted Isometry Property (ARIP), and with a certain choice of parameters, the convergence of SL0 to the sparsest solution is guaranteed. Moreover, we study the complexity of SL0, and we show that as the dimension of the dictionary grows, the complexity of SL0 increases with the same order as Matching Pursuit (MP), which is one of the fastest existing sparse recovery methods, while contrary to MP, its convergence to the sparsest solution is guaranteed under certain conditions which are satisfied through the choice of parameters.
[ { "created": "Thu, 28 Jan 2010 00:25:56 GMT", "version": "v1" } ]
2010-01-29
[ [ "Mohimani", "Hosein", "" ], [ "Babaie-Zadeh", "Massoud", "" ], [ "Gorodnitsky", "Irina", "" ], [ "Jutten", "Christian", "" ] ]
Finding the sparse solution of an underdetermined system of linear equations has many applications; in particular, it is used in Compressed Sensing (CS), Sparse Component Analysis (SCA), and sparse decomposition of signals on overcomplete dictionaries. We have recently proposed a fast algorithm, called Smoothed $\ell^0$ (SL0), for this task. Contrary to many other sparse recovery algorithms, SL0 is not based on minimizing the $\ell^1$ norm, but it tries to directly minimize the $\ell^0$ norm of the solution. The basic idea of SL0 is optimizing a sequence of certain (continuous) cost functions approximating the $\ell^0$ norm of a vector. However, in previous papers, we did not provide a complete convergence proof for SL0. In this paper, we study the convergence properties of SL0, and show that under a certain sparsity constraint in terms of the Asymmetric Restricted Isometry Property (ARIP), and with a certain choice of parameters, the convergence of SL0 to the sparsest solution is guaranteed. Moreover, we study the complexity of SL0, and we show that as the dimension of the dictionary grows, the complexity of SL0 increases with the same order as Matching Pursuit (MP), which is one of the fastest existing sparse recovery methods, while contrary to MP, its convergence to the sparsest solution is guaranteed under certain conditions which are satisfied through the choice of parameters.
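The SL0 iteration itself is compact enough to sketch. Following the published algorithm, it maximizes $F_\sigma(s)=\sum_i \exp(-s_i^2/2\sigma^2)$ (a smooth proxy for $-\|s\|_0$) by small gradient steps projected back onto $\{s : As = x\}$, while gradually shrinking $\sigma$; the parameter values below are typical defaults, not tuned:

```python
# Compact SL0 sketch: ascend the smooth l0 surrogate F_sigma on the affine
# solution set of A s = x, annealing sigma from coarse to fine.
import numpy as np

def sl0(A, x, sigma_decrease=0.7, mu=2.0, inner_iters=3, sigma_min=1e-4):
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                          # minimum-l2-norm starting point
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            grad = s * np.exp(-s**2 / (2 * sigma**2))
            s = s - mu * grad               # step toward smaller smoothed l0
            s = s - A_pinv @ (A @ s - x)    # project back onto {s : A s = x}
        sigma *= sigma_decrease
    return s

# tiny demo: a 2-sparse vector from 8 random measurements is typically recovered
rng = np.random.default_rng(1)
n, m = 20, 8
A = rng.standard_normal((m, n))
s_true = np.zeros(n); s_true[[3, 11]] = [1.5, -2.0]
print(np.round(sl0(A, A @ s_true), 2))
```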
1708.02813
Nikita Dvornik
Nikita Dvornik, Konstantin Shmelkov, Julien Mairal, Cordelia Schmid
BlitzNet: A Real-Time Deep Network for Scene Understanding
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time scene understanding has become crucial in many applications such as autonomous driving. In this paper, we propose a deep architecture, called BlitzNet, that jointly performs object detection and semantic segmentation in one forward pass, allowing real-time computations. Besides the computational gain of having a single network to perform several tasks, we show that object detection and semantic segmentation benefit from each other in terms of accuracy. Experimental results for VOC and COCO datasets show state-of-the-art performance for object detection and segmentation among real time systems.
[ { "created": "Wed, 9 Aug 2017 12:36:17 GMT", "version": "v1" } ]
2017-08-10
[ [ "Dvornik", "Nikita", "" ], [ "Shmelkov", "Konstantin", "" ], [ "Mairal", "Julien", "" ], [ "Schmid", "Cordelia", "" ] ]
Real-time scene understanding has become crucial in many applications such as autonomous driving. In this paper, we propose a deep architecture, called BlitzNet, that jointly performs object detection and semantic segmentation in one forward pass, allowing real-time computations. Besides the computational gain of having a single network to perform several tasks, we show that object detection and semantic segmentation benefit from each other in terms of accuracy. Experimental results for VOC and COCO datasets show state-of-the-art performance for object detection and segmentation among real time systems.
2203.10673
Nardine Basta
Nardine Basta, Ming Ding, Muhammad Ikram, Mohamed Ali Kaafar
5G-Enabled Pseudonymity for Cooperative Intelligent Transportation System
null
null
null
null
cs.NI cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Cooperative Intelligent Transportation Systems (C-ITS) enable communications between vehicles, road-side infrastructure, and road users to improve user safety and to efficiently manage traffic. Most, if not all, intelligent vehicle-to-everything (V2X) applications rely on continuous collection and sharing of sensitive information, such as detailed location information, which raises privacy concerns. In this light, a common approach to concealing the long-term identity of C-ITS vehicles is to use multiple temporary identifiers, called pseudonyms. However, the legacy pseudonym management approach is prone to linking attacks. The introduction of 5G networks to V2X offers enhanced location accuracy, better clock synchronisation, an improved modular service-based architecture, and enhanced security and privacy preservation controls. Motivated by the above enhancements, we study 5G-enabled pseudonyms for protecting vehicle identity privacy in C-ITS. We highlight the gaps in the current standards for pseudonym management. We further provide recommendations regarding the pseudonym management life-cycle.
[ { "created": "Sun, 20 Mar 2022 23:10:37 GMT", "version": "v1" } ]
2022-03-22
[ [ "Basta", "Nardine", "" ], [ "Ding", "Ming", "" ], [ "Ikram", "Muhammad", "" ], [ "Kaafar", "Mohamed Ali", "" ] ]
Cooperative Intelligent Transportation Systems (C-ITS) enable communications between vehicles, road-side infrastructure, and road users to improve user safety and to efficiently manage traffic. Most, if not all, intelligent vehicle-to-everything (V2X) applications rely on continuous collection and sharing of sensitive information, such as detailed location information, which raises privacy concerns. In this light, a common approach to concealing the long-term identity of C-ITS vehicles is to use multiple temporary identifiers, called pseudonyms. However, the legacy pseudonym management approach is prone to linking attacks. The introduction of 5G networks to V2X offers enhanced location accuracy, better clock synchronisation, an improved modular service-based architecture, and enhanced security and privacy preservation controls. Motivated by the above enhancements, we study 5G-enabled pseudonyms for protecting vehicle identity privacy in C-ITS. We highlight the gaps in the current standards for pseudonym management. We further provide recommendations regarding the pseudonym management life-cycle.
2405.03950
Qi Zou
Qi Zou, Na Yu, Daoliang Zhang, Wei Zhang, Rui Gao
Relating-Up: Advancing Graph Neural Networks through Inter-Graph Relationships
16 pages, 6 figures, 9 tables
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have excelled in learning from graph-structured data, especially in understanding the relationships within a single graph, i.e., intra-graph relationships. Despite their successes, GNNs are limited by neglecting the context of relationships across graphs, i.e., inter-graph relationships. Recognizing the potential to extend this capability, we introduce Relating-Up, a plug-and-play module that enhances GNNs by exploiting inter-graph relationships. This module incorporates a relation-aware encoder and a feedback training strategy. The former enables GNNs to capture relationships across graphs, enriching relation-aware graph representations through collective context. The latter utilizes a feedback loop mechanism for the recursive refinement of these representations, leveraging insights from inter-graph dynamics to guide the feedback loop. The synergy between these two innovations results in a robust and versatile module. Relating-Up enhances the expressiveness of GNNs, enabling them to encapsulate a wider spectrum of graph relationships with greater precision. Our evaluations across 16 benchmark datasets demonstrate that integrating Relating-Up into GNN architectures substantially improves performance, positioning Relating-Up as a formidable choice for a broad spectrum of graph representation learning tasks.
[ { "created": "Tue, 7 May 2024 02:16:54 GMT", "version": "v1" } ]
2024-05-08
[ [ "Zou", "Qi", "" ], [ "Yu", "Na", "" ], [ "Zhang", "Daoliang", "" ], [ "Zhang", "Wei", "" ], [ "Gao", "Rui", "" ] ]
Graph Neural Networks (GNNs) have excelled in learning from graph-structured data, especially in understanding the relationships within a single graph, i.e., intra-graph relationships. Despite their successes, GNNs are limited by neglecting the context of relationships across graphs, i.e., inter-graph relationships. Recognizing the potential to extend this capability, we introduce Relating-Up, a plug-and-play module that enhances GNNs by exploiting inter-graph relationships. This module incorporates a relation-aware encoder and a feedback training strategy. The former enables GNNs to capture relationships across graphs, enriching relation-aware graph representations through collective context. The latter utilizes a feedback loop mechanism for the recursive refinement of these representations, leveraging insights from inter-graph dynamics to guide the feedback loop. The synergy between these two innovations results in a robust and versatile module. Relating-Up enhances the expressiveness of GNNs, enabling them to encapsulate a wider spectrum of graph relationships with greater precision. Our evaluations across 16 benchmark datasets demonstrate that integrating Relating-Up into GNN architectures substantially improves performance, positioning Relating-Up as a formidable choice for a broad spectrum of graph representation learning tasks.
2407.15857
Simen Eide
Simen Eide, Arnoldo Frigessi
BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-task Large Language Models
13 pages, 5 figures
null
null
null
cs.LG cs.CL stat.ML
http://creativecommons.org/licenses/by/4.0/
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs). Current finetuning approaches, such as Low-Rank Adaption (LoRA), perform exceptionally well in reducing training parameters and memory usage but face limitations when applied to multiple similar tasks. Practitioners usually have to choose between training separate models for each task or a single model for all tasks, both of which come with trade-offs in specialization and data utilization. BoRA addresses these trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors. This enables tasks with limited data to benefit from the overall structure derived from related tasks while allowing tasks with more data to specialize. Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks. This method provides a scalable and efficient solution for multi-task LLM finetuning, with significant practical implications for diverse applications.
[ { "created": "Mon, 8 Jul 2024 06:38:50 GMT", "version": "v1" } ]
2024-07-24
[ [ "Eide", "Simen", "" ], [ "Frigessi", "Arnoldo", "" ] ]
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs). Current finetuning approaches, such as Low-Rank Adaption (LoRA), perform exceptionally well in reducing training parameters and memory usage but face limitations when applied to multiple similar tasks. Practitioners usually have to choose between training separate models for each task or a single model for all tasks, both of which come with trade-offs in specialization and data utilization. BoRA addresses these trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors. This enables tasks with limited data to benefit from the overall structure derived from related tasks while allowing tasks with more data to specialize. Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks. This method provides a scalable and efficient solution for multi-task LLM finetuning, with significant practical implications for diverse applications.
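One MAP-style reading of the hierarchical prior (a hedged sketch, not the paper's exact objective): each task's adapter $W_t$ is modeled as drawn around a shared global adapter, which in training amounts to a penalty pulling every $W_t$ toward their mean, weighted by the prior precision $1/\tau^2$:

```python
# Sketch of a hierarchical-prior penalty over per-task low-rank adapters.
# Shapes and the Gaussian form are assumptions made for illustration.
import numpy as np

def bora_penalty(task_adapters, tau):
    W_bar = np.mean(task_adapters, axis=0)          # shared global mean adapter
    return sum(np.sum((W - W_bar) ** 2) for W in task_adapters) / (2 * tau**2)

# tasks with little data are pulled toward W_bar; a large tau (weak prior)
# lets data-rich tasks specialize
adapters = np.stack([np.random.default_rng(t).standard_normal((4, 4))
                     for t in range(3)])
print(bora_penalty(adapters, tau=1.0), bora_penalty(adapters, tau=10.0))
```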
2311.09969
Roberto Ulloa Dr.
Celina Kacperski, Mona Bielig, Mykola Makhortykh, Maryna Sydorova, Roberto Ulloa
Examining bias perpetuation in academic search engines: an algorithm audit of Google and Semantic Scholar
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Researchers rely on academic web search engines to find scientific sources, but search engine mechanisms may selectively present content that aligns with biases embedded in the queries. This study examines whether confirmation-biased queries prompted into Google Scholar and Semantic Scholar will yield skewed results. Six queries (topics across health and technology domains such as "vaccines" or "internet use") were analyzed for disparities in search results. We confirm that biased queries (targeting "benefits" or "risks") affect search results in line with the bias, with technology-related queries displaying more significant disparities. Overall, Semantic Scholar exhibited fewer disparities than Google Scholar. Topics rated as more polarizing did not consistently show more skewed results. Academic search results that perpetuate confirmation bias have strong implications for both researchers and citizens searching for evidence. More research is needed to explore how scientific inquiry and academic search engines interact.
[ { "created": "Thu, 16 Nov 2023 15:43:31 GMT", "version": "v1" }, { "created": "Tue, 21 Nov 2023 09:49:25 GMT", "version": "v2" } ]
2023-11-22
[ [ "Kacperski", "Celina", "" ], [ "Bielig", "Mona", "" ], [ "Makhortykh", "Mykola", "" ], [ "Sydorova", "Maryna", "" ], [ "Ulloa", "Roberto", "" ] ]
Researchers rely on academic web search engines to find scientific sources, but search engine mechanisms may selectively present content that aligns with biases embedded in the queries. This study examines whether confirmation-biased queries prompted into Google Scholar and Semantic Scholar will yield skewed results. Six queries (topics across health and technology domains such as "vaccines" or "internet use") were analyzed for disparities in search results. We confirm that biased queries (targeting "benefits" or "risks") affect search results in line with the bias, with technology-related queries displaying more significant disparities. Overall, Semantic Scholar exhibited fewer disparities than Google Scholar. Topics rated as more polarizing did not consistently show more skewed results. Academic search results that perpetuate confirmation bias have strong implications for both researchers and citizens searching for evidence. More research is needed to explore how scientific inquiry and academic search engines interact.
2001.05864
Yiyan Chen
Yiyan Chen, Li Tao, Xueting Wang and Toshihiko Yamasaki
Weakly Supervised Video Summarization by Hierarchical Reinforcement Learning
Accepted at MMAsia 2019
null
null
null
cs.CV cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional video summarization approaches based on reinforcement learning have the problem that the reward can only be received after the whole summary is generated. Such a reward is sparse and makes reinforcement learning hard to converge. Another problem is that labelling each frame is tedious and costly, which usually prohibits the construction of large-scale datasets. To solve these problems, we propose a weakly supervised hierarchical reinforcement learning framework, which decomposes the whole task into several subtasks to enhance the summarization quality. This framework consists of a manager network and a worker network. For each subtask, the manager is trained to set a subgoal using only a task-level binary label, which requires far fewer labels than conventional approaches. Guided by the subgoal, the worker predicts importance scores for the video frames in the subtask by policy gradient, according to both the global reward and newly defined sub-rewards that overcome the sparse-reward problem. Experiments on two benchmark datasets show that our proposal achieves the best performance, even better than supervised approaches.
[ { "created": "Sun, 12 Jan 2020 07:47:02 GMT", "version": "v1" }, { "created": "Sat, 29 Feb 2020 15:31:24 GMT", "version": "v2" } ]
2020-03-03
[ [ "Chen", "Yiyan", "" ], [ "Tao", "Li", "" ], [ "Wang", "Xueting", "" ], [ "Yamasaki", "Toshihiko", "" ] ]
Conventional video summarization approaches based on reinforcement learning have the problem that the reward can only be received after the whole summary is generated. Such a reward is sparse and makes reinforcement learning hard to converge. Another problem is that labelling each frame is tedious and costly, which usually prohibits the construction of large-scale datasets. To solve these problems, we propose a weakly supervised hierarchical reinforcement learning framework, which decomposes the whole task into several subtasks to enhance the summarization quality. This framework consists of a manager network and a worker network. For each subtask, the manager is trained to set a subgoal using only a task-level binary label, which requires far fewer labels than conventional approaches. Guided by the subgoal, the worker predicts importance scores for the video frames in the subtask by policy gradient, according to both the global reward and newly defined sub-rewards that overcome the sparse-reward problem. Experiments on two benchmark datasets show that our proposal achieves the best performance, even better than supervised approaches.
2210.13376
William Buchanan Prof
Simon R Davies, Richard Macfarlane, William J. Buchanan
Comparison of Entropy Calculation Methods for Ransomware Encrypted File Identification
null
Entropy. 2022; 24(10):1503
10.3390/e24101503
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Ransomware is a malicious class of software that utilises encryption to implement an attack on system availability. The target's data remains encrypted and is held captive by the attacker until a ransom demand is met. A common approach used by many crypto-ransomware detection techniques is to monitor file system activity and attempt to identify encrypted files being written to disk, often using a file's entropy as an indicator of encryption. However, the descriptions of these techniques often give little or no discussion of why a particular entropy calculation method was selected, or any justification for choosing it over the alternatives. The Shannon method of entropy calculation is the most commonly used technique for file encryption identification in crypto-ransomware detection. Overall, correctly encrypted data should be indistinguishable from random data, so apart from the standard mathematical entropy calculations such as Chi-Square, Shannon Entropy and Serial Correlation, the test suites used to validate the output of pseudo-random number generators are also suited to this analysis. The hypothesis is that there is a fundamental difference between entropy methods and that the best methods may be used to better detect ransomware-encrypted files. The paper compares the accuracy of 53 distinct tests in differentiating between encrypted data and other file types. The testing is broken down into two phases: the first identifies potential candidate tests, and the second thoroughly evaluates these candidates. To ensure that the tests are sufficiently robust, the NapierOne dataset is used. This dataset contains thousands of examples of the most commonly used file types, as well as examples of files encrypted by crypto-ransomware.
[ { "created": "Mon, 24 Oct 2022 16:19:54 GMT", "version": "v1" } ]
2022-10-25
[ [ "Davies", "Simon R", "" ], [ "Macfarlane", "Richard", "" ], [ "Buchanan", "William J.", "" ] ]
Ransomware is a malicious class of software that utilises encryption to implement an attack on system availability. The target's data remains encrypted and is held captive by the attacker until a ransom demand is met. A common approach used by many crypto-ransomware detection techniques is to monitor file system activity and attempt to identify encrypted files being written to disk, often using a file's entropy as an indicator of encryption. However, the descriptions of these techniques often give little or no discussion of why a particular entropy calculation method was selected, or any justification for choosing it over the alternatives. The Shannon method of entropy calculation is the most commonly used technique for file encryption identification in crypto-ransomware detection. Overall, correctly encrypted data should be indistinguishable from random data, so apart from the standard mathematical entropy calculations such as Chi-Square, Shannon Entropy and Serial Correlation, the test suites used to validate the output of pseudo-random number generators are also suited to this analysis. The hypothesis is that there is a fundamental difference between entropy methods and that the best methods may be used to better detect ransomware-encrypted files. The paper compares the accuracy of 53 distinct tests in differentiating between encrypted data and other file types. The testing is broken down into two phases: the first identifies potential candidate tests, and the second thoroughly evaluates these candidates. To ensure that the tests are sufficiently robust, the NapierOne dataset is used. This dataset contains thousands of examples of the most commonly used file types, as well as examples of files encrypted by crypto-ransomware.
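Two of the statistics named above are easy to state precisely. The sketch below computes the Shannon entropy of a byte stream (bits per byte; 8.0 is the ideal for random or correctly encrypted data) and a chi-square statistic against the uniform byte distribution:

```python
# Shannon entropy and chi-square over a file's byte histogram, two of the
# entropy-style tests the abstract compares for encrypted-file identification.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def chi_square_uniform(data: bytes) -> float:
    counts = Counter(data)
    expected = len(data) / 256
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))

blob = os.urandom(1 << 16)                  # stand-in for an encrypted file
print(round(shannon_entropy(blob), 3),      # close to 8.0 bits/byte
      round(chi_square_uniform(blob), 1))   # small for near-uniform bytes
```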
1908.10554
Zhenghao Liu PhD.
Zhenghao Liu, Chenyan Xiong, Maosong Sun, Zhiyuan Liu
Explore Entity Embedding Effectiveness in Entity Retrieval
12 pages, 2 figures
CCL 2019
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores entity embedding effectiveness in ad-hoc entity retrieval, which introduces distributed representations of entities into entity retrieval. The knowledge graph contains abundant knowledge and models entity semantic relations with a well-formed structural representation. Entity embeddings learn rich semantic information from the knowledge graph and represent entities with low-dimensional representations, which provides an opportunity to establish interactions between query-related entities and candidate entities for entity retrieval. Our experiments demonstrate the effectiveness of the entity embedding based model, which achieves more than a 5\% improvement over the previous state-of-the-art learning to rank based entity retrieval model. Our further analysis reveals that the entity semantic match feature is effective, especially for scenarios that require deeper semantic understanding.
[ { "created": "Wed, 28 Aug 2019 05:27:19 GMT", "version": "v1" } ]
2019-08-29
[ [ "Liu", "Zhenghao", "" ], [ "Xiong", "Chenyan", "" ], [ "Sun", "Maosong", "" ], [ "Liu", "Zhiyuan", "" ] ]
This paper explores entity embedding effectiveness in ad-hoc entity retrieval, which introduces distributed representations of entities into entity retrieval. The knowledge graph contains abundant knowledge and models entity semantic relations with a well-formed structural representation. Entity embeddings learn rich semantic information from the knowledge graph and represent entities with low-dimensional representations, which provides an opportunity to establish interactions between query-related entities and candidate entities for entity retrieval. Our experiments demonstrate the effectiveness of the entity embedding based model, which achieves more than a 5\% improvement over the previous state-of-the-art learning to rank based entity retrieval model. Our further analysis reveals that the entity semantic match feature is effective, especially for scenarios that require deeper semantic understanding.
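The abstract does not spell out the interaction design, so the sketch below is only one plausible form of an entity semantic-match feature built on such embeddings: the maximum cosine similarity between query-linked entities and a candidate entity. The function name and array shapes are illustrative assumptions.

```python
import numpy as np

def entity_match_score(query_entity_vecs: np.ndarray, cand_vec: np.ndarray) -> float:
    """Max cosine similarity between a candidate entity embedding and the
    embeddings of the query-related entities (one possible match feature)."""
    q = query_entity_vecs / np.linalg.norm(query_entity_vecs, axis=1, keepdims=True)
    c = cand_vec / np.linalg.norm(cand_vec)
    return float(np.max(q @ c))

rng = np.random.default_rng(0)
query_entities = rng.standard_normal((4, 100))  # entities linked to the query
candidate = rng.standard_normal(100)            # one candidate entity
print(entity_match_score(query_entities, candidate))
```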
2305.18655
Youngseog Chung
Youngseog Chung, Aaron Rumack, Chirag Gupta
Parity Calibration
To appear at UAI 2023 (Oral); 19 pages and 10 figures
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a sequential regression setting, a decision-maker may be primarily concerned with whether the future observation will increase or decrease compared to the current one, rather than the actual value of the future observation. In this context, we introduce the notion of parity calibration, which captures the goal of calibrated forecasting for the increase-decrease (or "parity") event in a time series. Parity probabilities can be extracted from a forecasted distribution for the output, but we show that such a strategy leads to theoretical unpredictability and poor practical performance. We then observe that although the original task was regression, parity calibration can be expressed as binary calibration. Drawing on this connection, we use an online binary calibration method to achieve parity calibration. We demonstrate the effectiveness of our approach on real-world case studies in epidemiology, weather forecasting, and model-based control in nuclear fusion.
[ { "created": "Mon, 29 May 2023 23:27:42 GMT", "version": "v1" }, { "created": "Wed, 7 Jun 2023 23:14:34 GMT", "version": "v2" } ]
2023-06-09
[ [ "Chung", "Youngseog", "" ], [ "Rumack", "Aaron", "" ], [ "Gupta", "Chirag", "" ] ]
In a sequential regression setting, a decision-maker may be primarily concerned with whether the future observation will increase or decrease compared to the current one, rather than the actual value of the future observation. In this context, we introduce the notion of parity calibration, which captures the goal of calibrated forecasting for the increase-decrease (or "parity") event in a time series. Parity probabilities can be extracted from a forecasted distribution for the output, but we show that such a strategy leads to theoretical unpredictability and poor practical performance. We then observe that although the original task was regression, parity calibration can be expressed as binary calibration. Drawing on this connection, we use an online binary calibration method to achieve parity calibration. We demonstrate the effectiveness of our approach on real-world case studies in epidemiology, weather forecasting, and model-based control in nuclear fusion.
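The extraction strategy the paper critiques, reading the parity probability off a forecasted distribution, can be sketched as follows for a Gaussian forecast; the numbers are purely illustrative, and the paper's point is that this alone is not enough, motivating its online binary calibration approach.

```python
from scipy.stats import norm

def parity_probability(mu: float, sigma: float, y_current: float) -> float:
    """P(next observation increases relative to the current one) when the
    forecast for the next value is N(mu, sigma^2)."""
    return float(1.0 - norm.cdf(y_current, loc=mu, scale=sigma))

# Forecast N(1.2, 0.5^2) for the next value while the current value is 1.0.
print(f"P(increase) = {parity_probability(1.2, 0.5, 1.0):.3f}")
```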
2112.09964
Ke Alexander Wang
Ke Alexander Wang, Danielle Maddix, Yuyang Wang
GOPHER: Categorical probabilistic forecasting with graph structure via local continuous-time dynamics
NeurIPS 2021 Workshop ICBINB Spotlight
null
null
null
cs.LG math.DS
http://creativecommons.org/licenses/by/4.0/
We consider the problem of probabilistic forecasting over categories with graph structure, where the dynamics at a vertex depend on its local connectivity structure. We present GOPHER, a method that combines the inductive bias of graph neural networks with neural ODEs to capture the intrinsic local continuous-time dynamics of our probabilistic forecasts. We study the benefits of these two inductive biases by comparing against baseline models that help disentangle the benefits of each. We find that capturing the graph structure is crucial for accurate in-domain probabilistic predictions and more sample-efficient models. Surprisingly, our experiments demonstrate that the continuous time evolution inductive bias brings little to no benefit despite reflecting the true probability dynamics.
[ { "created": "Sat, 18 Dec 2021 16:51:53 GMT", "version": "v1" } ]
2021-12-21
[ [ "Wang", "Ke Alexander", "" ], [ "Maddix", "Danielle", "" ], [ "Wang", "Yuyang", "" ] ]
We consider the problem of probabilistic forecasting over categories with graph structure, where the dynamics at a vertex depend on its local connectivity structure. We present GOPHER, a method that combines the inductive bias of graph neural networks with neural ODEs to capture the intrinsic local continuous-time dynamics of our probabilistic forecasts. We study the benefits of these two inductive biases by comparing against baseline models that help disentangle the benefits of each. We find that capturing the graph structure is crucial for accurate in-domain probabilistic predictions and more sample-efficient models. Surprisingly, our experiments demonstrate that the continuous time evolution inductive bias brings little to no benefit despite reflecting the true probability dynamics.
0812.3259
Valmir Barbosa
Rodrigo S. C. Leao, Valmir C. Barbosa
Approximate conditional distributions of distances between nodes in a two-dimensional sensor network
null
Lecture Notes in Computer Science 5513 (2009), 324-338
10.1007/978-3-642-02205-0_23
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When we represent a network of sensors in Euclidean space by a graph, there are two distances between any two nodes that we may consider. One of them is the Euclidean distance. The other is the distance between the two nodes in the graph, defined to be the number of edges on a shortest path between them. In this paper, we consider a network of sensors placed uniformly at random in a two-dimensional region and study two conditional distributions related to these distances. The first is the probability distribution of distances in the graph, conditioned on Euclidean distances; the other is the probability density function associated with Euclidean distances, conditioned on distances in the graph. We study these distributions both analytically (when feasible) and by means of simulations. To the best of our knowledge, our results constitute the first of their kind and open up the possibility of discovering improved solutions to certain sensor-network problems, as for example sensor localization.
[ { "created": "Wed, 17 Dec 2008 11:23:59 GMT", "version": "v1" } ]
2009-06-10
[ [ "Leao", "Rodrigo S. C.", "" ], [ "Barbosa", "Valmir C.", "" ] ]
When we represent a network of sensors in Euclidean space by a graph, there are two distances between any two nodes that we may consider. One of them is the Euclidean distance. The other is the distance between the two nodes in the graph, defined to be the number of edges on a shortest path between them. In this paper, we consider a network of sensors placed uniformly at random in a two-dimensional region and study two conditional distributions related to these distances. The first is the probability distribution of distances in the graph, conditioned on Euclidean distances; the other is the probability density function associated with Euclidean distances, conditioned on distances in the graph. We study these distributions both analytically (when feasible) and by means of simulations. To the best of our knowledge, our results constitute the first of their kind and open up the possibility of discovering improved solutions to certain sensor-network problems, as for example sensor localization.
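A small simulation in the spirit of this setup, placing sensors uniformly at random, connecting those within a communication radius, and tabulating Euclidean distance conditioned on hop count, might look like the following; the node count and radius are arbitrary choices, not values from the paper.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
n, r = 300, 0.12                        # sensors, communication radius
pts = rng.random((n, 2))                # uniform placement in the unit square

# Adjacency: an edge whenever two sensors are within radius r of each other.
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
adj = [np.flatnonzero((d[i] <= r) & (d[i] > 0)) for i in range(n)]

def hop_distances(src: int) -> np.ndarray:
    """BFS hop counts from src; -1 marks unreachable nodes."""
    hops = np.full(n, -1)
    hops[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if hops[v] < 0:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

# Empirical conditional statistic: mean Euclidean distance given hop count.
hops = hop_distances(0)
for k in range(1, 6):
    mask = hops == k
    if mask.any():
        print(k, d[0, mask].mean().round(3))
```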
2301.11467
Chuang Zhao
Chuang Zhao, Hongke Zhao, Ming He, Jian Zhang and Jianping Fan
Cross-domain recommendation via user interest alignment
TheWebConf2023
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-domain recommendation aims to leverage knowledge from multiple domains to alleviate the data sparsity and cold-start problems in traditional recommender systems. One popular paradigm is to employ overlapping user representations to establish domain connections, thereby improving recommendation performance in all scenarios. Nevertheless, the general practice of this approach is to train user embeddings in each domain separately and then aggregate them in a plain manner, often ignoring potential cross-domain similarities between users and items. Furthermore, because their training objective is recommendation task-oriented without specific regularizations, the optimized embeddings disregard the interest alignment among a user's views, and may even violate the user's original interest distribution. To address these challenges, we propose a novel cross-domain recommendation framework, namely COAST, to improve recommendation performance on dual domains by perceiving the cross-domain similarity between entities and aligning user interests. Specifically, we first construct a unified cross-domain heterogeneous graph and redefine the message passing mechanism of graph convolutional networks to capture high-order similarity of users and items across domains. Targeting user interest alignment, we develop deep insights from two finer-grained perspectives, user-user and user-item interest invariance across domains, by virtue of abundant unsupervised and semantic signals. We conduct intensive experiments on multiple tasks constructed from two large recommendation datasets. Extensive results show COAST consistently and significantly outperforms state-of-the-art cross-domain recommendation algorithms as well as classic single-domain recommendation methods.
[ { "created": "Thu, 26 Jan 2023 23:54:41 GMT", "version": "v1" } ]
2023-01-30
[ [ "Zhao", "Chuang", "" ], [ "Zhao", "Hongke", "" ], [ "He", "Ming", "" ], [ "Zhang", "Jian", "" ], [ "Fan", "Jianping", "" ] ]
Cross-domain recommendation aims to leverage knowledge from multiple domains to alleviate the data sparsity and cold-start problems in traditional recommender systems. One popular paradigm is to employ overlapping user representations to establish domain connections, thereby improving recommendation performance in all scenarios. Nevertheless, the general practice of this approach is to train user embeddings in each domain separately and then aggregate them in a plain manner, often ignoring potential cross-domain similarities between users and items. Furthermore, because their training objective is recommendation task-oriented without specific regularizations, the optimized embeddings disregard the interest alignment among a user's views, and may even violate the user's original interest distribution. To address these challenges, we propose a novel cross-domain recommendation framework, namely COAST, to improve recommendation performance on dual domains by perceiving the cross-domain similarity between entities and aligning user interests. Specifically, we first construct a unified cross-domain heterogeneous graph and redefine the message passing mechanism of graph convolutional networks to capture high-order similarity of users and items across domains. Targeting user interest alignment, we develop deep insights from two finer-grained perspectives, user-user and user-item interest invariance across domains, by virtue of abundant unsupervised and semantic signals. We conduct intensive experiments on multiple tasks constructed from two large recommendation datasets. Extensive results show COAST consistently and significantly outperforms state-of-the-art cross-domain recommendation algorithms as well as classic single-domain recommendation methods.
2109.12599
Che Liu
Che Liu, Rui Wang, Jinghua Liu, Jian Sun, Fei Huang, Luo Si
DialogueCSE: Dialogue-based Contrastive Learning of Sentence Embeddings
Accepted as Long Paper at "EMNLP,2021"
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning sentence embeddings from dialogues has drawn increasing attention due to its low annotation cost and high domain adaptability. Conventional approaches employ the siamese network for this task, obtaining sentence embeddings by modeling the context-response semantic relevance with a feed-forward network on top of the sentence encoders. However, as semantic textual similarity is commonly measured with element-wise distance metrics (e.g. cosine and L2 distance), such an architecture yields a large gap between training and evaluation. In this paper, we propose DialogueCSE, a dialogue-based contrastive learning approach to tackle this issue. DialogueCSE first introduces a novel matching-guided embedding (MGE) mechanism, which generates a context-aware embedding for each candidate response embedding (i.e. the context-free embedding) according to the guidance of the multi-turn context-response matching matrices. Then it pairs each context-aware embedding with its corresponding context-free embedding and finally minimizes the contrastive loss across all pairs. We evaluate our model on three multi-turn dialogue datasets: the Microsoft Dialogue Corpus, the Jing Dong Dialogue Corpus, and the E-commerce Dialogue Corpus. Evaluation results show that our approach significantly outperforms the baselines across all three datasets in terms of MAP and Spearman's correlation measures, demonstrating its effectiveness. Further quantitative experiments show that our approach achieves better performance when leveraging more dialogue context and remains robust when less training data is provided.
[ { "created": "Sun, 26 Sep 2021 13:25:41 GMT", "version": "v1" } ]
2021-09-28
[ [ "Liu", "Che", "" ], [ "Wang", "Rui", "" ], [ "Liu", "Jinghua", "" ], [ "Sun", "Jian", "" ], [ "Huang", "Fei", "" ], [ "Si", "Luo", "" ] ]
Learning sentence embeddings from dialogues has drawn increasing attention due to its low annotation cost and high domain adaptability. Conventional approaches employ the siamese network for this task, obtaining sentence embeddings by modeling the context-response semantic relevance with a feed-forward network on top of the sentence encoders. However, as semantic textual similarity is commonly measured with element-wise distance metrics (e.g. cosine and L2 distance), such an architecture yields a large gap between training and evaluation. In this paper, we propose DialogueCSE, a dialogue-based contrastive learning approach to tackle this issue. DialogueCSE first introduces a novel matching-guided embedding (MGE) mechanism, which generates a context-aware embedding for each candidate response embedding (i.e. the context-free embedding) according to the guidance of the multi-turn context-response matching matrices. Then it pairs each context-aware embedding with its corresponding context-free embedding and finally minimizes the contrastive loss across all pairs. We evaluate our model on three multi-turn dialogue datasets: the Microsoft Dialogue Corpus, the Jing Dong Dialogue Corpus, and the E-commerce Dialogue Corpus. Evaluation results show that our approach significantly outperforms the baselines across all three datasets in terms of MAP and Spearman's correlation measures, demonstrating its effectiveness. Further quantitative experiments show that our approach achieves better performance when leveraging more dialogue context and remains robust when less training data is provided.
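The abstract does not give the exact loss, so the sketch below uses a generic InfoNCE-style contrastive objective over (context-aware, context-free) embedding pairs with in-batch negatives; the temperature and batch values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def contrastive_loss(ctx_aware: np.ndarray, ctx_free: np.ndarray, tau: float = 0.05) -> float:
    """InfoNCE-style loss: each context-aware embedding is pulled toward its
    own context-free embedding; other rows in the batch act as negatives."""
    a = ctx_aware / np.linalg.norm(ctx_aware, axis=1, keepdims=True)
    b = ctx_free / np.linalg.norm(ctx_free, axis=1, keepdims=True)
    sims = a @ b.T / tau                   # scaled cosine similarity matrix
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
ctx_free = rng.standard_normal((8, 128))
ctx_aware = ctx_free + 0.1 * rng.standard_normal((8, 128))  # matched pairs
print(contrastive_loss(ctx_aware, ctx_free))
```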
2201.12482
Feng Li
Feng Li, Xuyang Yuan, Lina Wang, Huan Yang, Dongxiao Yu, Weifeng Lv, Xiuzhen Cheng
Collaborative Learning in General Graphs with Limited Memorization: Complexity, Learnability, and Reliability
null
null
null
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a K-armed bandit problem in general graphs where agents are arbitrarily connected and each of them has limited memorizing capabilities and communication bandwidth. The goal is to let each of the agents eventually learn the best arm. Prior studies usually assume that the communication graph is complete or well-structured, whereas such an assumption is not always valid in practice. Furthermore, limited memorization and communication bandwidth also restrict the collaborations of the agents, since the agents memorize and communicate very few experiences. Additionally, an agent may be corrupted and share falsified experiences with its peers, while the resource limits on memorization and communication may considerably restrict the reliability of the learning process. To address the above issues, we propose a three-staged collaborative learning algorithm. In each step, the agents share their latest experiences with each other through light-weight random walks in a general communication graph, and then make decisions on which arms to pull according to the recommendations received from their peers. The agents finally update their adoptions (i.e., preferences for the arms) based on the rewards obtained by pulling the arms. Our theoretical analysis shows that, when there are a sufficient number of agents participating in the collaborative learning process, all the agents eventually learn the best arm with high probability, even with limited memorizing capabilities and light-weight communications. We also reveal in our theoretical analysis the upper bound on the number of corrupted agents our algorithm can tolerate. The efficacy of our proposed three-staged collaborative learning algorithm is finally verified by extensive experiments on both synthetic and real datasets.
[ { "created": "Sat, 29 Jan 2022 02:42:25 GMT", "version": "v1" }, { "created": "Wed, 3 May 2023 12:03:20 GMT", "version": "v2" }, { "created": "Sun, 7 May 2023 01:32:59 GMT", "version": "v3" } ]
2023-05-09
[ [ "Li", "Feng", "" ], [ "Yuan", "Xuyang", "" ], [ "Wang", "Lina", "" ], [ "Yang", "Huan", "" ], [ "Yu", "Dongxiao", "" ], [ "Lv", "Weifeng", "" ], [ "Cheng", "Xiuzhen", "" ] ]
We consider a K-armed bandit problem in general graphs where agents are arbitrarily connected and each of them has limited memorizing capabilities and communication bandwidth. The goal is to let each of the agents eventually learn the best arm. Prior studies usually assume that the communication graph is complete or well-structured, whereas such an assumption is not always valid in practice. Furthermore, limited memorization and communication bandwidth also restrict the collaborations of the agents, since the agents memorize and communicate very few experiences. Additionally, an agent may be corrupted and share falsified experiences with its peers, while the resource limits on memorization and communication may considerably restrict the reliability of the learning process. To address the above issues, we propose a three-staged collaborative learning algorithm. In each step, the agents share their latest experiences with each other through light-weight random walks in a general communication graph, and then make decisions on which arms to pull according to the recommendations received from their peers. The agents finally update their adoptions (i.e., preferences for the arms) based on the rewards obtained by pulling the arms. Our theoretical analysis shows that, when there are a sufficient number of agents participating in the collaborative learning process, all the agents eventually learn the best arm with high probability, even with limited memorizing capabilities and light-weight communications. We also reveal in our theoretical analysis the upper bound on the number of corrupted agents our algorithm can tolerate. The efficacy of our proposed three-staged collaborative learning algorithm is finally verified by extensive experiments on both synthetic and real datasets.
2401.04343
Xinyu Tang
Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal
Private Fine-tuning of Large Language Models with Zeroth-order Optimization
null
null
null
null
cs.LG cs.CL cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
Differentially private stochastic gradient descent (DP-SGD) allows models to be trained in a privacy-preserving manner, but has proven difficult to scale to the era of foundation models. We introduce DP-ZO, a private fine-tuning framework for large language models that privatizes zeroth-order optimization methods. A key insight in the design of our method is that the direction of the gradient in the zeroth-order optimization we use is random, and the only information derived from the training data is the step size, i.e., a scalar. Therefore, we only need to privatize the scalar step size, which is memory-efficient. DP-ZO provides a strong privacy-utility trade-off across different tasks and model sizes, comparable to DP-SGD in $(\varepsilon,\delta)$-DP. Notably, DP-ZO possesses significant advantages over DP-SGD in memory efficiency, and obtains higher utility in $\varepsilon$-DP when using the Laplace mechanism.
[ { "created": "Tue, 9 Jan 2024 03:53:59 GMT", "version": "v1" }, { "created": "Mon, 12 Aug 2024 15:07:50 GMT", "version": "v2" } ]
2024-08-13
[ [ "Tang", "Xinyu", "" ], [ "Panda", "Ashwinee", "" ], [ "Nasr", "Milad", "" ], [ "Mahloujifar", "Saeed", "" ], [ "Mittal", "Prateek", "" ] ]
Differentially private stochastic gradient descent (DP-SGD) allows models to be trained in a privacy-preserving manner, but has proven difficult to scale to the era of foundation models. We introduce DP-ZO, a private fine-tuning framework for large language models that privatizes zeroth-order optimization methods. A key insight in the design of our method is that the direction of the gradient in the zeroth-order optimization we use is random, and the only information derived from the training data is the step size, i.e., a scalar. Therefore, we only need to privatize the scalar step size, which is memory-efficient. DP-ZO provides a strong privacy-utility trade-off across different tasks and model sizes, comparable to DP-SGD in $(\varepsilon,\delta)$-DP. Notably, DP-ZO possesses significant advantages over DP-SGD in memory efficiency, and obtains higher utility in $\varepsilon$-DP when using the Laplace mechanism.
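A toy sketch of the core DP-ZO idea follows: only the scalar finite-difference estimate is clipped and privatized (here with Laplace noise), while the random direction stays public. The clipping bound, noise scale, and learning rate are illustrative assumptions, and a real implementation would also track the privacy budget across steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_zo_step(theta, loss_fn, lr=0.1, eps=1e-3, clip=1.0, laplace_scale=0.5):
    """One zeroth-order step in the DP-ZO spirit: estimate the directional
    derivative by a two-point finite difference, clip the scalar to bound
    its sensitivity, add Laplace noise, then move along the public direction."""
    z = rng.standard_normal(theta.shape)          # public random direction
    g = (loss_fn(theta + eps * z) - loss_fn(theta - eps * z)) / (2 * eps)
    g = float(np.clip(g, -clip, clip))            # bound sensitivity
    g += rng.laplace(scale=laplace_scale)         # privatize the scalar only
    return theta - lr * g * z

theta = np.array([3.0, -2.0])
quadratic = lambda w: float(np.sum(w ** 2))
for _ in range(200):
    theta = dp_zo_step(theta, quadratic)
print(theta)   # drifts toward the minimum at the origin, up to the DP noise
```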
0901.0222
Damien Chablat
Liang Ma (IRCCyN), Damien Chablat (IRCCyN), Fouad Bennis (IRCCyN), Wei Zhang (DIE)
Dynamic Muscle Fatigue Evaluation in Virtual Working Environment
null
International Journal of Industrial Ergonomics 39, 1 (2009) 211-220
10.1016/j.ergon.2008.04.004
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Musculoskeletal disorder (MSD) is one of the major health problems in mechanical work, especially in manual handling jobs. Muscle fatigue is believed to be the main cause of MSD. Posture analysis techniques have been used to expose the MSD risks of a job, but most conventional methods are suitable only for static posture analysis. Meanwhile, subjective influences from the inspectors can result in differences in the risk assessment. Another disadvantage is that the evaluation has to take place in the workshop, so design defects cannot be avoided before data collection in the field environment, and the process is time consuming. In order to enhance the efficiency of ergonomic MSD risk evaluation and avoid subjective influences, we develop a new muscle fatigue model and a new fatigue index to evaluate human muscle fatigue during manual handling jobs. Our new fatigue model is closely related to the muscle load during the working procedure, so it can be used to evaluate a dynamic working process. The muscle fatigue model is mathematically validated; it remains to be experimentally validated and integrated into a virtual working environment to evaluate muscle fatigue and predict MSD risks quickly and objectively.
[ { "created": "Fri, 2 Jan 2009 08:49:04 GMT", "version": "v1" } ]
2009-01-05
[ [ "Ma", "Liang", "", "IRCCyN" ], [ "Chablat", "Damien", "", "IRCCyN" ], [ "Bennis", "Fouad", "", "IRCCyN" ], [ "Zhang", "Wei", "", "DIE" ] ]
Musculoskeletal disorder (MSD) is one of the major health problems in mechanical work, especially in manual handling jobs. Muscle fatigue is believed to be the main cause of MSD. Posture analysis techniques have been used to expose the MSD risks of a job, but most conventional methods are suitable only for static posture analysis. Meanwhile, subjective influences from the inspectors can result in differences in the risk assessment. Another disadvantage is that the evaluation has to take place in the workshop, so design defects cannot be avoided before data collection in the field environment, and the process is time consuming. In order to enhance the efficiency of ergonomic MSD risk evaluation and avoid subjective influences, we develop a new muscle fatigue model and a new fatigue index to evaluate human muscle fatigue during manual handling jobs. Our new fatigue model is closely related to the muscle load during the working procedure, so it can be used to evaluate a dynamic working process. The muscle fatigue model is mathematically validated; it remains to be experimentally validated and integrated into a virtual working environment to evaluate muscle fatigue and predict MSD risks quickly and objectively.
2005.01539
Spyridon Samothrakis
Spyridon Samothrakis
Open Loop In Natura Economic Planning
10 pages, 3 Figures
null
null
null
cs.CY cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The debate over the optimal way of allocating societal surplus (i.e. products and services) has been raging, in one form or another, practically forever; following the collapse of the Soviet Union in 1991, the market became the only legitimate form of organisation -- there was no other alternative. Working within the tradition of Marx, Leontief, Kantorovich, Beer and Cockshott, we propose what we deem an automated planning system that aims to operate at the unit level (e.g., factories and citizens), rather than on aggregate demand and sectors. We explain why it is both a viable and desirable alternative to current market conditions and position our solution within current societal structures. Our experiments show that it would be trivial to plan for up to 50K industrial goods and 5K final goods on commodity hardware.
[ { "created": "Mon, 4 May 2020 14:50:01 GMT", "version": "v1" }, { "created": "Thu, 14 May 2020 14:28:48 GMT", "version": "v2" } ]
2020-05-15
[ [ "Samothrakis", "Spyridon", "" ] ]
The debate over the optimal way of allocating societal surplus (i.e. products and services) has been raging, in one form or another, practically forever; following the collapse of the Soviet Union in 1991, the market became the only legitimate form of organisation -- there was no other alternative. Working within the tradition of Marx, Leontief, Kantorovich, Beer and Cockshott, we propose what we deem an automated planning system that aims to operate at the unit level (e.g., factories and citizens), rather than on aggregate demand and sectors. We explain why it is both a viable and desirable alternative to current market conditions and position our solution within current societal structures. Our experiments show that it would be trivial to plan for up to 50K industrial goods and 5K final goods on commodity hardware.
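The abstract does not detail the planning computation, but in the Leontief tradition it cites, the core step is solving the input-output balance $(I - A)x = d$ for gross outputs $x$ given final demand $d$. A small dense sketch follows; the coefficient matrix, demand vector, and toy scale are all illustrative, and the paper's 50K-good scale would call for sparse solvers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                    # number of goods (toy scale)

# Technical coefficients A[i, j]: units of good i consumed per unit of
# good j produced. Rescaling so every column sum stays below 1 keeps the
# economy productive, which guarantees (I - A) is invertible and x >= 0.
A = rng.random((n, n)) * (rng.random((n, n)) < 0.002)
A *= 0.8 / A.sum(axis=0).max()
d = rng.random(n) * 100.0                   # final (household) demand

x = np.linalg.solve(np.eye(n) - A, d)       # gross output plan
print(x[:5])                                # output targets for 5 goods
```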
1906.00327
Pareesa Golnari
Pareesa Ameneh Golnari and Sharad Malik
Sparse Matrix to Matrix Multiplication: A Representation and Architecture for Acceleration (long version)
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accelerators for sparse matrix multiplication are important components in emerging systems. In this paper, we study the main challenges of accelerating Sparse Matrix Multiplication (SpMM). For situations in which data is not stored in the desired order (row/column order), we propose a compact, high-performance sparse format that allows random access to a dataset with low memory-access overhead. We show that using this format results in a 14-49 times speedup for SpMM. Next, we propose a high-performance systolic architecture for SpMM, which uses a mesh of comparators to locate the useful (non-zero) computation. This design maximizes data reuse by sharing the input data among a row/column of the mesh. We also show that, under similar memory access assumptions, the proposed architecture results in a 9-30 times speedup in comparison with the state of the art.
[ { "created": "Sun, 2 Jun 2019 02:26:30 GMT", "version": "v1" } ]
2019-06-04
[ [ "Golnari", "Pareesa Ameneh", "" ], [ "Malik", "Sharad", "" ] ]
Accelerators for sparse matrix multiplication are important components in emerging systems. In this paper, we study the main challenges of accelerating Sparse Matrix Multiplication (SpMM). For situations in which data is not stored in the desired order (row/column order), we propose a compact, high-performance sparse format that allows random access to a dataset with low memory-access overhead. We show that using this format results in a 14-49 times speedup for SpMM. Next, we propose a high-performance systolic architecture for SpMM, which uses a mesh of comparators to locate the useful (non-zero) computation. This design maximizes data reuse by sharing the input data among a row/column of the mesh. We also show that, under similar memory access assumptions, the proposed architecture results in a 9-30 times speedup in comparison with the state of the art.
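The paper's compact format itself is not reproduced in the abstract; as a baseline for what an SpMM kernel touches, a plain CSR (compressed sparse row) sparse-times-dense multiply is sketched below, visiting only the useful (non-zero) entries. This is a generic reference scheme, not the paper's proposed format.

```python
import numpy as np

def dense_to_csr(M):
    """Compressed Sparse Row: non-zero values, their column indices, and
    row pointers delimiting each row's slice of the value array."""
    vals, cols, rowptr = [], [], [0]
    for row in M:
        nz = np.flatnonzero(row)
        vals.extend(row[nz])
        cols.extend(nz)
        rowptr.append(len(vals))
    return np.array(vals), np.array(cols), np.array(rowptr)

def csr_spmm(vals, cols, rowptr, B):
    """Sparse A (in CSR) times dense B, touching only non-zero entries of A."""
    C = np.zeros((len(rowptr) - 1, B.shape[1]))
    for i in range(len(rowptr) - 1):
        for k in range(rowptr[i], rowptr[i + 1]):
            C[i] += vals[k] * B[cols[k]]
    return C

rng = np.random.default_rng(0)
A = np.where(rng.random((6, 8)) < 0.2, rng.standard_normal((6, 8)), 0.0)
B = rng.standard_normal((8, 4))
print(np.allclose(csr_spmm(*dense_to_csr(A), B), A @ B))  # True
```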
1607.07939
Ali Ghadirzadeh
Ali Ghadirzadeh, Judith B\"utepage, Atsuto Maki, Danica Kragic and M{\aa}rten Bj\"orkman
A Sensorimotor Reinforcement Learning Framework for Physical Human-Robot Interaction
The paper is accepted for publication at the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016)
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling of physical human-robot collaborations is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty of the human actions is modeled using Gaussian processes (GP) to implement action-value functions. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the ball position on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty and equal role sharing between the partners.
[ { "created": "Wed, 27 Jul 2016 02:29:52 GMT", "version": "v1" } ]
2016-07-28
[ [ "Ghadirzadeh", "Ali", "" ], [ "Bütepage", "Judith", "" ], [ "Maki", "Atsuto", "" ], [ "Kragic", "Danica", "" ], [ "Björkman", "Mårten", "" ] ]
Modeling of physical human-robot collaborations is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty of the human actions is modeled using Gaussian processes (GP) to implement action-value functions. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the ball position on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty and equal role sharing between the partners.
1905.02682
Alessio Caminata
Alessio Caminata, Elisa Gorla
The complexity of MinRank
Final version. Theorem numbering adjusted to match the published version
Women in Numbers Europe III. Association for Women in Mathematics Series, vol 24, pp. 163-169, Springer, Cham, 2021
null
null
cs.SC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note, we leverage some of our results from arXiv:1706.06319 to produce a concise and rigorous proof for the complexity of the generalized MinRank Problem in the under-defined and well-defined case. Our main theorem recovers and extends previous results by Faug\`ere, Safey El Din, Spaenlehauer (arXiv:1112.4411).
[ { "created": "Mon, 6 May 2019 16:34:00 GMT", "version": "v1" }, { "created": "Mon, 3 Jun 2019 13:29:02 GMT", "version": "v2" }, { "created": "Thu, 10 Mar 2022 16:24:38 GMT", "version": "v3" } ]
2022-03-11
[ [ "Caminata", "Alessio", "" ], [ "Gorla", "Elisa", "" ] ]
In this note, we leverage some of our results from arXiv:1706.06319 to produce a concise and rigorous proof for the complexity of the generalized MinRank Problem in the under-defined and well-defined case. Our main theorem recovers and extends previous results by Faug\`ere, Safey El Din, Spaenlehauer (arXiv:1112.4411).
1810.03155
JuanLuis GonzalezBello
Juan Luis Gonzalez, Muhammad Sarmad, Hyunjoo J. Lee, Munchurl Kim
Finding Correspondences for Optical Flow and Disparity Estimations using a Sub-pixel Convolution-based Encoder-Decoder Network
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional neural networks (DCNN) have recently shown promising results in low-level computer vision problems such as optical flow and disparity estimation, but still have much room to further improve their performance. In this paper, we propose a novel sub-pixel convolution-based encoder-decoder network for optical flow and disparity estimations, which extends FlowNetS and DispNet by replacing the deconvolution layers with sub-pixel convolution blocks. By using sub-pixel refinement and estimation on the decoder stages instead of deconvolution, we can significantly improve the estimation accuracy for optical flow and disparity, even with a reduced number of parameters. We show a supervised end-to-end training of our proposed networks for optical flow and disparity estimations, and an unsupervised end-to-end training for monocular depth and pose estimations. In order to verify the effectiveness of our proposed networks, we perform intensive experiments on (i) optical flow and disparity estimations, and (ii) monocular depth and pose estimations. Throughout the extensive experiments, our proposed networks outperform the baselines such as FlowNetS and DispNet in terms of estimation accuracy and training times.
[ { "created": "Sun, 7 Oct 2018 14:41:37 GMT", "version": "v1" } ]
2018-10-12
[ [ "Gonzalez", "Juan Luis", "" ], [ "Sarmad", "Muhammad", "" ], [ "Lee", "Hyunjoo J.", "" ], [ "Kim", "Munchurl", "" ] ]
Deep convolutional neural networks (DCNN) have recently shown promising results in low-level computer vision problems such as optical flow and disparity estimation, but still have much room to further improve their performance. In this paper, we propose a novel sub-pixel convolution-based encoder-decoder network for optical flow and disparity estimations, which extends FlowNetS and DispNet by replacing the deconvolution layers with sub-pixel convolution blocks. By using sub-pixel refinement and estimation on the decoder stages instead of deconvolution, we can significantly improve the estimation accuracy for optical flow and disparity, even with a reduced number of parameters. We show a supervised end-to-end training of our proposed networks for optical flow and disparity estimations, and an unsupervised end-to-end training for monocular depth and pose estimations. In order to verify the effectiveness of our proposed networks, we perform intensive experiments on (i) optical flow and disparity estimations, and (ii) monocular depth and pose estimations. Throughout the extensive experiments, our proposed networks outperform the baselines such as FlowNetS and DispNet in terms of estimation accuracy and training times.
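The upsampling step of a sub-pixel convolution block reduces, after an ordinary convolution that produces $C r^2$ channels, to a depth-to-space rearrangement; a NumPy sketch of just that rearrangement is shown below (the preceding convolution is omitted, and the shapes are illustrative).

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Depth-to-space rearrangement used in sub-pixel convolution:
    (N, C*r*r, H, W) -> (N, C, H*r, W*r)."""
    n, crr, h, w = x.shape
    c = crr // (r * r)
    x = x.reshape(n, c, r, r, h, w)
    x = x.transpose(0, 1, 4, 2, 5, 3)      # interleave the r x r sub-blocks
    return x.reshape(n, c, h * r, w * r)

x = np.arange(2 * 8 * 3 * 3, dtype=float).reshape(2, 8, 3, 3)
print(pixel_shuffle(x, 2).shape)           # (2, 2, 6, 6): 2x upsampled
```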
2001.06420
Enrico Cambiaso
Maurizio Aiello, Enrico Cambiaso, Roberto Canonico, Leonardo Maccari, Marco Mellia, Antonio Pescap\`e, Ivan Vaccari
IPPO: A Privacy-Aware Architecture for Decentralized Data-sharing
null
null
null
null
cs.NI cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Online trackers personalize ad campaigns, exponentially increasing their efficacy compared to traditional channels. The downside of this is that thousands of mostly unknown systems own our profiles and violate our privacy without our awareness. IPPO turns the tables and re-empowers users over their data, through anonymised data publishing via a Blockchain-based Decentralized Data Marketplace. We also propose a service based on machine learning and big data analytics to automatically identify web trackers and build Privacy Labels (PLs), based on the nutrition labels concept. This paper describes the motivation, the vision, the architecture and the research challenges related to IPPO.
[ { "created": "Fri, 17 Jan 2020 16:34:31 GMT", "version": "v1" } ]
2020-01-20
[ [ "Aiello", "Maurizio", "" ], [ "Cambiaso", "Enrico", "" ], [ "Canonico", "Roberto", "" ], [ "Maccari", "Leonardo", "" ], [ "Mellia", "Marco", "" ], [ "Pescapè", "Antonio", "" ], [ "Vaccari", "Ivan", "" ] ]
Online trackers personalize ad campaigns, exponentially increasing their efficacy compared to traditional channels. The downside of this is that thousands of mostly unknown systems own our profiles and violate our privacy without our awareness. IPPO turns the tables and re-empowers users over their data, through anonymised data publishing via a Blockchain-based Decentralized Data Marketplace. We also propose a service based on machine learning and big data analytics to automatically identify web trackers and build Privacy Labels (PLs), based on the nutrition labels concept. This paper describes the motivation, the vision, the architecture and the research challenges related to IPPO.
2210.14190
Bashar Alhafni
Hossein Rajaby Faghihi, Bashar Alhafni, Ke Zhang, Shihao Ran, Joel Tetreault, Alejandro Jaimes
CrisisLTLSum: A Benchmark for Local Crisis Event Timeline Extraction and Summarization
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Social media has increasingly played a key role in emergency response: first responders can use public posts to better react to ongoing crisis events and deploy the necessary resources where they are most needed. Timeline extraction and abstractive summarization are critical technical tasks to leverage large numbers of social media posts about events. Unfortunately, there are few datasets for benchmarking technical approaches for those tasks. This paper presents CrisisLTLSum, the largest dataset of local crisis event timelines available to date. CrisisLTLSum contains 1,000 crisis event timelines across four domains: wildfires, local fires, traffic, and storms. We built CrisisLTLSum using a semi-automated cluster-then-refine approach to collect data from the public Twitter stream. Our initial experiments indicate a significant gap between the performance of strong baselines compared to the human performance on both tasks. Our dataset, code, and models are publicly available.
[ { "created": "Tue, 25 Oct 2022 17:32:40 GMT", "version": "v1" } ]
2022-10-26
[ [ "Faghihi", "Hossein Rajaby", "" ], [ "Alhafni", "Bashar", "" ], [ "Zhang", "Ke", "" ], [ "Ran", "Shihao", "" ], [ "Tetreault", "Joel", "" ], [ "Jaimes", "Alejandro", "" ] ]
Social media has increasingly played a key role in emergency response: first responders can use public posts to better react to ongoing crisis events and deploy the necessary resources where they are most needed. Timeline extraction and abstractive summarization are critical technical tasks to leverage large numbers of social media posts about events. Unfortunately, there are few datasets for benchmarking technical approaches for those tasks. This paper presents CrisisLTLSum, the largest dataset of local crisis event timelines available to date. CrisisLTLSum contains 1,000 crisis event timelines across four domains: wildfires, local fires, traffic, and storms. We built CrisisLTLSum using a semi-automated cluster-then-refine approach to collect data from the public Twitter stream. Our initial experiments indicate a significant gap between the performance of strong baselines compared to the human performance on both tasks. Our dataset, code, and models are publicly available.
2307.04702
David S\"udholt
David S\"udholt, Mateo C\'amara, Zhiyuan Xu, Joshua D. Reiss
Vocal Tract Area Estimation by Gradient Descent
Accepted to DAFx 2023
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Articulatory features can provide interpretable and flexible controls for the synthesis of human vocalizations by allowing the user to directly modify parameters like vocal strain or lip position. To make this manipulation through resynthesis possible, we need to estimate the features that result in a desired vocalization directly from audio recordings. In this work, we propose a white-box optimization technique for estimating glottal source parameters and vocal tract shapes from audio recordings of human vowels. The approach is based on inverse filtering and optimizing the frequency response of a waveguide model of the vocal tract with gradient descent, propagating error gradients through the mapping of articulatory features to the vocal tract area function. We apply this method to the task of matching the sound of the Pink Trombone, an interactive articulatory synthesizer, to a given vocalization. We find that our method accurately recovers control functions for audio generated by the Pink Trombone itself. We then compare our technique against evolutionary optimization algorithms and a neural network trained to predict control parameters from audio. A subjective evaluation finds that our approach outperforms these black-box optimization baselines on the task of reproducing human vocalizations.
[ { "created": "Mon, 10 Jul 2023 16:59:49 GMT", "version": "v1" } ]
2023-07-11
[ [ "Südholt", "David", "" ], [ "Cámara", "Mateo", "" ], [ "Xu", "Zhiyuan", "" ], [ "Reiss", "Joshua D.", "" ] ]
Articulatory features can provide interpretable and flexible controls for the synthesis of human vocalizations by allowing the user to directly modify parameters like vocal strain or lip position. To make this manipulation through resynthesis possible, we need to estimate the features that result in a desired vocalization directly from audio recordings. In this work, we propose a white-box optimization technique for estimating glottal source parameters and vocal tract shapes from audio recordings of human vowels. The approach is based on inverse filtering and optimizing the frequency response of a waveguide model of the vocal tract with gradient descent, propagating error gradients through the mapping of articulatory features to the vocal tract area function. We apply this method to the task of matching the sound of the Pink Trombone, an interactive articulatory synthesizer, to a given vocalization. We find that our method accurately recovers control functions for audio generated by the Pink Trombone itself. We then compare our technique against evolutionary optimization algorithms and a neural network trained to predict control parameters from audio. A subjective evaluation finds that our approach outperforms these black-box optimization baselines on the task of reproducing human vocalizations.
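The paper's waveguide vocal tract model is far more involved; as a stand-in, the following sketch shows the same gradient-descent idea of fitting filter parameters to a target magnitude response, here for a simple one-pole low-pass with finite-difference gradients. All parameter values and step sizes are illustrative choices.

```python
import numpy as np

# Fit the parameters of a one-pole low-pass magnitude response
# |H(f)| = g / sqrt(1 + (f / fc)^2) to a target response by gradient
# descent on a log-spectral loss, using finite-difference gradients.
freqs = np.linspace(10.0, 4000.0, 200)
mag = lambda p: p[0] / np.sqrt(1.0 + (freqs / p[1]) ** 2)

target = mag(np.array([2.0, 700.0]))          # stand-in "recorded" response
loss = lambda p: float(np.mean((np.log(mag(p)) - np.log(target)) ** 2))

p = np.array([1.0, 300.0])                    # initial guess (gain, cutoff)
lr = np.array([0.5, 5e4])                     # per-parameter step sizes
for _ in range(2000):
    grad = np.array([
        (loss(p + h) - loss(p - h)) / (2.0 * h[i])
        for i, h in enumerate(np.diag([1e-4, 1e-2]))
    ])
    p -= lr * grad
print(p)   # approaches the target parameters (2.0, 700.0)
```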
1512.08325
Shimin Cai Dr
Yao-Dong Zhao, Shi-Min Cai, Ming Tang, Ming-Sheng Shang
A Fast Recommendation Algorithm for Social Tagging Systems : A Delicious Case
20 pages, 7 figures
Physica A 483, 209 (2017)
10.1016/j.physa.2017.04.131
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The tripartite graph is one of the most common topological structures in social tagging systems such as Delicious; it has three types of nodes (i.e., users, URLs and tags). Traditional recommender systems developed based on collaborative filtering for social tagging systems impose very high demands on CPU time. In this paper, to overcome this drawback, we propose a novel approach that extracts non-overlapping user clusters and corresponding overlapping item clusters simultaneously through coarse clustering to accelerate user-based collaborative filtering, and we develop a fast recommendation algorithm for social tagging systems. The experimental results show that the proposed approach is able to dramatically reduce the processing time by more than $90\%$ while relatively enhancing the accuracy in comparison with the ordinary user-based collaborative filtering algorithm.
[ { "created": "Mon, 28 Dec 2015 06:23:37 GMT", "version": "v1" } ]
2019-08-17
[ [ "Zhao", "Yao-Dong", "" ], [ "Cai", "Shi-Min", "" ], [ "Tang", "Ming", "" ], [ "Shang", "Ming-Sheng", "" ] ]
The tripartite graph is one of the most common topological structures in social tagging systems such as Delicious; it has three types of nodes (i.e., users, URLs and tags). Traditional recommender systems developed based on collaborative filtering for social tagging systems impose very high demands on CPU time. In this paper, to overcome this drawback, we propose a novel approach that extracts non-overlapping user clusters and corresponding overlapping item clusters simultaneously through coarse clustering to accelerate user-based collaborative filtering, and we develop a fast recommendation algorithm for social tagging systems. The experimental results show that the proposed approach is able to dramatically reduce the processing time by more than $90\%$ while relatively enhancing the accuracy in comparison with the ordinary user-based collaborative filtering algorithm.
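A minimal sketch of the cluster-then-filter idea, coarse non-overlapping user clusters followed by user-based collaborative filtering restricted to each cluster, is given below. It omits the paper's overlapping item clusters, and all sizes and the clustering details are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
R = (rng.random((200, 50)) < 0.1).astype(float)   # toy user-item interactions

# Coarse, non-overlapping user clusters via a few rounds of k-means.
k = 5
centers = R[rng.choice(len(R), k, replace=False)]
for _ in range(10):
    labels = np.argmin(((R[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for c in range(k):
        if (labels == c).any():
            centers[c] = R[labels == c].mean(0)

def recommend(u: int, top_n: int = 5) -> np.ndarray:
    """User-based CF restricted to u's cluster; searching for neighbours
    only among cluster members is what cuts the CPU time cost."""
    members = np.flatnonzero(labels == labels[u])
    sims = R[members] @ R[u]                   # co-occurrence similarity
    scores = sims @ R[members]                 # similarity-weighted votes
    scores[R[u] > 0] = -np.inf                 # drop already-seen items
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0))
```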
2408.05101
Junhao Xu
Junhao Xu, Zhenlin Liang, Yi Liu, Yichao Hu, Jian Li, Yajun Zheng, Meng Cai, Hua Wang
MooER: LLM-based Speech Recognition and Translation Models from Moore Threads
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present MooER, an LLM-based large-scale automatic speech recognition (ASR) / automatic speech translation (AST) model from Moore Threads. A 5000-hour pseudo-labeled dataset containing open-source and self-collected speech data is used for training. We achieve performance comparable to other open-source models trained with up to hundreds of thousands of hours of labeled speech data. Meanwhile, experiments conducted on the Covost2 Zh2en test set suggest that our model outperforms other open-source speech LLMs, obtaining a BLEU score of 25.2. The main contributions of this paper are summarized as follows. First, this paper presents a training strategy for encoders and LLMs on speech-related tasks (including ASR and AST) using a small amount of pseudo-labeled data without any extra manual annotation and selection. Second, we release our ASR and AST models and plan to open-source our training code and strategy in the near future. Moreover, a model trained on 8wh scale training data is planned for release later on.
[ { "created": "Fri, 9 Aug 2024 14:43:56 GMT", "version": "v1" } ]
2024-08-12
[ [ "Xu", "Junhao", "" ], [ "Liang", "Zhenlin", "" ], [ "Liu", "Yi", "" ], [ "Hu", "Yichao", "" ], [ "Li", "Jian", "" ], [ "Zheng", "Yajun", "" ], [ "Cai", "Meng", "" ], [ "Wang", "Hua", "" ] ]
In this paper, we present MooER, an LLM-based large-scale automatic speech recognition (ASR) / automatic speech translation (AST) model from Moore Threads. A 5000-hour pseudo-labeled dataset containing open-source and self-collected speech data is used for training. We achieve performance comparable to other open-source models trained with up to hundreds of thousands of hours of labeled speech data. Meanwhile, experiments conducted on the Covost2 Zh2en test set suggest that our model outperforms other open-source speech LLMs, obtaining a BLEU score of 25.2. The main contributions of this paper are summarized as follows. First, this paper presents a training strategy for encoders and LLMs on speech-related tasks (including ASR and AST) using a small amount of pseudo-labeled data without any extra manual annotation and selection. Second, we release our ASR and AST models and plan to open-source our training code and strategy in the near future. Moreover, a model trained on 8wh scale training data is planned for release later on.
2010.01737
Yinghao Li
Yinghao Li (Georgia Institute of Technology), Rui Feng (Georgia Institute of Technology), Isaac Rehg (Georgia Institute of Technology), Chao Zhang (Georgia Institute of Technology)
Transformer-Based Neural Text Generation with Syntactic Guidance
11 pages, 4 figures and 5 tables
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We study the problem of using (partial) constituency parse trees as syntactic guidance for controlled text generation. Existing approaches to this problem use recurrent structures, which not only suffer from the long-term dependency problem but also fall short in modeling the tree structure of the syntactic guidance. We propose to leverage the parallelism of the Transformer to better incorporate parse trees. Our method first expands a partial template constituency parse tree to a full-fledged parse tree tailored for the input source text, and then uses the expanded tree to guide text generation. The effectiveness of our model in this process hinges upon two new attention mechanisms: 1) a path attention mechanism that forces one node to attend to only other nodes located in its path in the syntax tree to better incorporate syntax guidance; 2) a multi-encoder attention mechanism that allows the decoder to dynamically attend to information from multiple encoders. Our experiments in the controlled paraphrasing task show that our method outperforms SOTA models both semantically and syntactically, improving the best baseline's BLEU score from 11.83 to 26.27.
[ { "created": "Mon, 5 Oct 2020 01:33:58 GMT", "version": "v1" } ]
2020-10-06
[ [ "Li", "Yinghao", "", "Georgia Institute of Technology" ], [ "Feng", "Rui", "", "Georgia\n Institute of Technology" ], [ "Rehg", "Isaac", "", "Georgia Institute of Technology" ], [ "Zhang", "Chao", "", "Georgia Institute of Technology" ] ]
We study the problem of using (partial) constituency parse trees as syntactic guidance for controlled text generation. Existing approaches to this problem use recurrent structures, which not only suffer from the long-term dependency problem but also fall short in modeling the tree structure of the syntactic guidance. We propose to leverage the parallelism of the Transformer to better incorporate parse trees. Our method first expands a partial template constituency parse tree to a full-fledged parse tree tailored for the input source text, and then uses the expanded tree to guide text generation. The effectiveness of our model in this process hinges upon two new attention mechanisms: 1) a path attention mechanism that forces one node to attend to only other nodes located in its path in the syntax tree to better incorporate syntax guidance; 2) a multi-encoder attention mechanism that allows the decoder to dynamically attend to information from multiple encoders. Our experiments in the controlled paraphrasing task show that our method outperforms SOTA models both semantically and syntactically, improving the best baseline's BLEU score from 11.83 to 26.27.
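One way to realize a path attention mechanism is as an attention mask that lets each tree node attend only to nodes on its root path. Whether the paper's notion of "path" also includes descendants is not stated in the abstract, so ancestors-plus-self is an assumption in the sketch below, and the toy tree is illustrative.

```python
import numpy as np

# Toy parse tree given as parent pointers; node 0 is the root.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
n = len(parent)

def root_path(v):
    """All nodes on v's path to the root, including v itself."""
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path

# mask[i, j] = 1 allows node i to attend to node j; restricting attention
# to i's root path (an assumption here) injects the tree structure.
mask = np.zeros((n, n))
for i in range(n):
    for j in root_path(i):
        mask[i, j] = 1.0
print(mask)
```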
2004.08499
Shenli Yuan
Shenli Yuan, Lin Shao, Connor L. Yako, Alex Gruebele, and J. Kenneth Salisbury
Design and Control of Roller Grasper V2 for In-Hand Manipulation
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) October 25-29, 2020, Las Vegas, NV, USA (Virtual)
null
null
null
cs.RO cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to perform in-hand manipulation still remains an unsolved problem; having this capability would allow robots to perform sophisticated tasks requiring repositioning and reorienting of grasped objects. In this work, we present a novel non-anthropomorphic robot grasper with the ability to manipulate objects by means of active surfaces at the fingertips. Active surfaces are achieved by spherical rolling fingertips with two degrees of freedom (DoF) -- a pivoting motion for surface reorientation -- and a continuous rolling motion for moving the object. A further DoF is in the base of each finger, allowing the fingers to grasp objects over a range of sizes and shapes. Instantaneous kinematics were derived, and objects were successfully manipulated both with a custom handcrafted control scheme as well as one learned through imitation learning, in simulation and experimentally on the hardware.
[ { "created": "Sat, 18 Apr 2020 00:54:09 GMT", "version": "v1" }, { "created": "Tue, 17 Nov 2020 23:16:45 GMT", "version": "v2" } ]
2020-11-19
[ [ "Yuan", "Shenli", "" ], [ "Shao", "Lin", "" ], [ "Yako", "Connor L.", "" ], [ "Gruebele", "Alex", "" ], [ "Salisbury", "J. Kenneth", "" ] ]
The ability to perform in-hand manipulation still remains an unsolved problem; having this capability would allow robots to perform sophisticated tasks requiring repositioning and reorienting of grasped objects. In this work, we present a novel non-anthropomorphic robot grasper with the ability to manipulate objects by means of active surfaces at the fingertips. Active surfaces are achieved by spherical rolling fingertips with two degrees of freedom (DoF) -- a pivoting motion for surface reorientation -- and a continuous rolling motion for moving the object. A further DoF is in the base of each finger, allowing the fingers to grasp objects over a range of sizes and shapes. Instantaneous kinematics were derived, and objects were successfully manipulated both with a custom handcrafted control scheme as well as one learned through imitation learning, in simulation and experimentally on the hardware.
2103.13061
Amaia Salvador
Amaia Salvador, Erhan Gundogdu, Loris Bazzani, Michael Donoser
Revamping Cross-Modal Recipe Retrieval with Hierarchical Transformers and Self-supervised Learning
CVPR 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-modal recipe retrieval has recently gained substantial attention due to the importance of food in people's lives, as well as the availability of vast amounts of digital cooking recipes and food images to train machine learning models. In this work, we revisit existing approaches for cross-modal recipe retrieval and propose a simplified end-to-end model based on well established and high performing encoders for text and images. We introduce a hierarchical recipe Transformer which attentively encodes individual recipe components (titles, ingredients and instructions). Further, we propose a self-supervised loss function computed on top of pairs of individual recipe components, which is able to leverage semantic relationships within recipes, and enables training using both image-recipe and recipe-only samples. We conduct a thorough analysis and ablation studies to validate our design choices. As a result, our proposed method achieves state-of-the-art performance in the cross-modal recipe retrieval task on the Recipe1M dataset. We make code and models publicly available.
[ { "created": "Wed, 24 Mar 2021 10:17:09 GMT", "version": "v1" } ]
2021-03-25
[ [ "Salvador", "Amaia", "" ], [ "Gundogdu", "Erhan", "" ], [ "Bazzani", "Loris", "" ], [ "Donoser", "Michael", "" ] ]
Cross-modal recipe retrieval has recently gained substantial attention due to the importance of food in people's lives, as well as the availability of vast amounts of digital cooking recipes and food images to train machine learning models. In this work, we revisit existing approaches for cross-modal recipe retrieval and propose a simplified end-to-end model based on well-established and high-performing encoders for text and images. We introduce a hierarchical recipe Transformer which attentively encodes individual recipe components (titles, ingredients and instructions). Further, we propose a self-supervised loss function computed on top of pairs of individual recipe components, which is able to leverage semantic relationships within recipes, and enables training using both image-recipe and recipe-only samples. We conduct a thorough analysis and ablation studies to validate our design choices. As a result, our proposed method achieves state-of-the-art performance in the cross-modal recipe retrieval task on the Recipe1M dataset. We make code and models publicly available.
1809.08027
Carme \`Alvarez
C. \`Alvarez and A. Messegu\'e
On the Constant Price of Anarchy Conjecture
19 pages, 7 figures
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study Nash equilibria and the price of anarchy in the classic model of Network Creation Games introduced by Fabrikant et al. In this model every agent (node) buys links at a prefixed price $\alpha > 0$ in order to get connected to the network formed by all the $n$ agents. In this setting, the reformulated tree conjecture states that for $\alpha > n$, every Nash equilibrium network is a tree. Moreover, Demaine et al. conjectured that the price of anarchy for this model is constant. Since it was shown that the price of anarchy for trees is constant, if the tree conjecture were true, then the price of anarchy would be constant for $\alpha > n$. Up to now it has been proved that the price of anarchy is constant $(i)$ in the \emph{lower range}, for $\alpha = O(n^{1-\delta})$ with $\delta \geq \frac{1}{\log n}$ and $(ii)$ in the \emph{upper range}, for $\alpha > 4n-13$. In contrast, the best upper bound known for the price of anarchy for the remaining range is $2^{O(\sqrt{\log n})}$. In this paper we give new insights into the structure of the Nash equilibria for $\alpha > n$ and we enlarge the range of the parameter $\alpha$ for which the price of anarchy is constant. Specifically, we prove that the price of anarchy is constant for $\alpha > n(1+\epsilon)$ by showing that every equilibrium of diameter greater than some prefixed constant is a tree.
[ { "created": "Fri, 21 Sep 2018 10:37:51 GMT", "version": "v1" } ]
2018-09-24
[ [ "Àlvarez", "C.", "" ], [ "Messegué", "A.", "" ] ]
We study Nash equilibria and the price of anarchy in the classic model of Network Creation Games introduced by Fabrikant et al. In this model every agent (node) buys links at a prefixed price $\alpha > 0$ in order to get connected to the network formed by all the $n$ agents. In this setting, the reformulated tree conjecture states that for $\alpha > n$, every Nash equilibrium network is a tree. Moreover, Demaine et al. conjectured that the price of anarchy for this model is constant. Since it was shown that the price of anarchy for trees is constant, if the tree conjecture were true, then the price of anarchy would be constant for $\alpha > n$. Up to now it has been proved that the price of anarchy is constant $(i)$ in the \emph{lower range}, for $\alpha = O(n^{1-\delta})$ with $\delta \geq \frac{1}{\log n}$ and $(ii)$ in the \emph{upper range}, for $\alpha > 4n-13$. In contrast, the best upper bound known for the price of anarchy for the remaining range is $2^{O(\sqrt{\log n})}$. In this paper we give new insights into the structure of the Nash equilibria for $\alpha > n$ and we enlarge the range of the parameter $\alpha$ for which the price of anarchy is constant. Specifically, we prove that the price of anarchy is constant for $\alpha > n(1+\epsilon)$ by showing that every equilibrium of diameter greater than some prefixed constant is a tree.
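For readers less familiar with the model, the following is a minimal statement, in our own notation, of the quantities referred to above, following the standard Fabrikant et al. formulation: each agent $u$ buys $n_u$ links at price $\alpha$ and additionally pays its total distance to all other nodes.

```latex
% Cost of agent u in graph G (n_u = number of links u buys at price \alpha)
c_u(G) = \alpha \, n_u + \sum_{v \neq u} d_G(u, v)
% Social cost, and the price of anarchy over the set NE of Nash equilibria
SC(G) = \sum_{u} c_u(G), \qquad
\mathrm{PoA} = \frac{\max_{G \in \mathrm{NE}} SC(G)}{\min_{G'} SC(G')}
```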
2010.13386
Daizong Liu
Daizong Liu, Hongting Zhang, Pan Zhou
Video-based Facial Expression Recognition using Graph Convolutional Networks
Accepted by ICPR2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Facial expression recognition (FER), which aims to classify the expression present in a facial image or video, has attracted considerable research interest in the fields of artificial intelligence and multimedia. For video-based FER, it is sensible to capture the dynamic expression variation across frames in order to recognize the facial expression. However, existing methods directly utilize CNN-RNN or 3D CNN models to extract spatial-temporal features from different facial units, rather than concentrating on a certain region while capturing the expression variation, which leads to limited FER performance. In this paper, we introduce a Graph Convolutional Network (GCN) layer into a common CNN-RNN based model for video-based FER. First, the GCN layer is utilized to learn more significant facial expression features that concentrate on certain regions after sharing information between the extracted CNN features of the nodes. Then, an LSTM layer is applied to learn long-term dependencies among the GCN-learned features to model the variation. In addition, a weight assignment mechanism is designed to weight the outputs of different nodes for the final classification by characterizing the expression intensity in each frame. To the best of our knowledge, this is the first use of a GCN for the FER task. We evaluate our method on three widely used datasets, CK+, Oulu-CASIA and MMI, as well as on the challenging in-the-wild dataset AFEW8.0, and the experimental results demonstrate that our method outperforms existing methods.
[ { "created": "Mon, 26 Oct 2020 07:31:51 GMT", "version": "v1" } ]
2020-10-27
[ [ "Liu", "Daizong", "" ], [ "Zhang", "Hongting", "" ], [ "Zhou", "Pan", "" ] ]
Facial expression recognition (FER), which aims to classify the expression present in a facial image or video, has attracted considerable research interest in the fields of artificial intelligence and multimedia. For video-based FER, it is sensible to capture the dynamic expression variation across frames in order to recognize the facial expression. However, existing methods directly utilize CNN-RNN or 3D CNN models to extract spatial-temporal features from different facial units, rather than concentrating on a certain region while capturing the expression variation, which leads to limited FER performance. In this paper, we introduce a Graph Convolutional Network (GCN) layer into a common CNN-RNN based model for video-based FER. First, the GCN layer is utilized to learn more significant facial expression features that concentrate on certain regions after sharing information between the extracted CNN features of the nodes. Then, an LSTM layer is applied to learn long-term dependencies among the GCN-learned features to model the variation. In addition, a weight assignment mechanism is designed to weight the outputs of different nodes for the final classification by characterizing the expression intensity in each frame. To the best of our knowledge, this is the first use of a GCN for the FER task. We evaluate our method on three widely used datasets, CK+, Oulu-CASIA and MMI, as well as on the challenging in-the-wild dataset AFEW8.0, and the experimental results demonstrate that our method outperforms existing methods.
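To make the described pipeline concrete, here is a minimal sketch of a GCN-then-LSTM classifier of the kind outlined above. Layer sizes, the normalized adjacency `adj`, and the mean pooling over nodes (a simplification of the weighted node aggregation) are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch: a GCN layer shares information across facial-region nodes
# per frame, then an LSTM models the temporal expression variation.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # x: (batch, nodes, in_dim); adj: (nodes, nodes), row-normalized
        return torch.relu(adj @ self.weight(x))

class GCNLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, num_classes=7):
        super().__init__()
        self.gcn = GCNLayer(feat_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frame_feats, adj):
        # frame_feats: (batch, time, nodes, feat_dim) per-frame CNN features
        b, t, n, d = frame_feats.shape
        h = self.gcn(frame_feats.reshape(b * t, n, d), adj)  # share across nodes
        h = h.mean(dim=1).reshape(b, t, -1)                  # pool nodes per frame
        out, _ = self.lstm(h)                                # temporal dependencies
        return self.head(out[:, -1])                         # classify expression
```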
2401.03153
Bo Zhang
Bo Zhang, Yuqi Han, Jinli Suo, Qionghai Dai
An Event-Oriented Diffusion-Refinement Method for Sparse Events Completion
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Event cameras or dynamic vision sensors (DVS) record asynchronous responses to brightness changes instead of conventional intensity frames, and feature ultra-high sensitivity at low bandwidth. This new mechanism demonstrates great advantages in challenging scenarios with fast motion and large dynamic range. However, the recorded events might be highly sparse due to either limited hardware bandwidth or extreme photon starvation in harsh environments. To unlock the full potential of event cameras, we propose an inventive event sequence completion approach conforming to the unique characteristics of event data in both the processing stage and the output form. Specifically, we treat event streams as 3D event clouds in the spatiotemporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and successfully recover exact timestamps to maintain the temporal resolution of the raw data. To validate the effectiveness of our method comprehensively, we perform extensive experiments on three widely used public datasets with different spatial resolutions, and additionally collect a novel event dataset covering diverse scenarios with highly dynamic motions and under harsh illumination. Besides generating high-quality dense events, our method can benefit downstream applications such as object classification and intensity frame reconstruction.
[ { "created": "Sat, 6 Jan 2024 08:09:54 GMT", "version": "v1" } ]
2024-01-09
[ [ "Zhang", "Bo", "" ], [ "Han", "Yuqi", "" ], [ "Suo", "Jinli", "" ], [ "Dai", "Qionghai", "" ] ]
Event cameras or dynamic vision sensors (DVS) record asynchronous responses to brightness changes instead of conventional intensity frames, and feature ultra-high sensitivity at low bandwidth. This new mechanism demonstrates great advantages in challenging scenarios with fast motion and large dynamic range. However, the recorded events might be highly sparse due to either limited hardware bandwidth or extreme photon starvation in harsh environments. To unlock the full potential of event cameras, we propose an inventive event sequence completion approach conforming to the unique characteristics of event data in both the processing stage and the output form. Specifically, we treat event streams as 3D event clouds in the spatiotemporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and successfully recover exact timestamps to maintain the temporal resolution of the raw data. To validate the effectiveness of our method comprehensively, we perform extensive experiments on three widely used public datasets with different spatial resolutions, and additionally collect a novel event dataset covering diverse scenarios with highly dynamic motions and under harsh illumination. Besides generating high-quality dense events, our method can benefit downstream applications such as object classification and intensity frame reconstruction.
2205.01887
Anshul Nayak
Anshul Nayak, Azim Eskandarian and Zachary Doerzaph
Uncertainty estimation of pedestrian future trajectory using Bayesian approximation
12 pages, 17 figures, 1 table
null
10.1109/OJITS.2022.3205504
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Past research on pedestrian trajectory forecasting has mainly focused on deterministic predictions, which provide only point estimates of future states. These future estimates can help an autonomous vehicle plan its trajectory and avoid collisions. However, under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy. Rather, estimating the uncertainty associated with the predicted states with a certain level of confidence can lead to robust path planning. Hence, the authors propose to quantify, via stochastic approximation, the forecasting uncertainty that deterministic approaches fail to capture. The method is simple: it applies Bayesian approximation during inference to standard neural network architectures in order to estimate uncertainty. The authors compared the predictions of the probabilistic neural network (NN) models with those of the standard deterministic models. The results indicate that the mean predicted path of the probabilistic models was closer to the ground truth than the deterministic prediction. Further, the effects of stochastic dropout of weights and of long-term prediction on future state uncertainty have been studied. It was found that the probabilistic models produced better values of performance metrics such as average displacement error (ADE) and final displacement error (FDE). Finally, the study has been extended to multiple datasets, providing a comprehensive comparison for each model.
[ { "created": "Wed, 4 May 2022 04:23:38 GMT", "version": "v1" } ]
2023-01-16
[ [ "Nayak", "Anshul", "" ], [ "Eskandarian", "Azim", "" ], [ "Doerzaph", "Zachary", "" ] ]
Past research on pedestrian trajectory forecasting has mainly focused on deterministic predictions, which provide only point estimates of future states. These future estimates can help an autonomous vehicle plan its trajectory and avoid collisions. However, under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy. Rather, estimating the uncertainty associated with the predicted states with a certain level of confidence can lead to robust path planning. Hence, the authors propose to quantify, via stochastic approximation, the forecasting uncertainty that deterministic approaches fail to capture. The method is simple: it applies Bayesian approximation during inference to standard neural network architectures in order to estimate uncertainty. The authors compared the predictions of the probabilistic neural network (NN) models with those of the standard deterministic models. The results indicate that the mean predicted path of the probabilistic models was closer to the ground truth than the deterministic prediction. Further, the effects of stochastic dropout of weights and of long-term prediction on future state uncertainty have been studied. It was found that the probabilistic models produced better values of performance metrics such as average displacement error (ADE) and final displacement error (FDE). Finally, the study has been extended to multiple datasets, providing a comprehensive comparison for each model.
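The Bayesian approximation mentioned above is commonly realized as Monte-Carlo dropout: keep dropout active at inference and aggregate multiple stochastic forward passes to obtain a mean trajectory and a per-step spread. A minimal sketch, with an invented toy network and sample count:

```python
# Monte-Carlo dropout sketch for trajectory uncertainty (illustrative only).
import torch
import torch.nn as nn

class TrajectoryNet(nn.Module):
    def __init__(self, horizon=12, hidden=64, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * 8, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 2 * horizon),  # (x, y) for each future step
        )

    def forward(self, past):  # past: (batch, 16) flattened observed history
        return self.net(past)

def mc_dropout_predict(model, past, n_samples=50):
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(past) for _ in range(n_samples)])
    # Mean path approximates the prediction; std quantifies uncertainty.
    return samples.mean(0), samples.std(0)
```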
0907.1054
Kaushik Sinha
Mikhail Belkin and Kaushik Sinha
Learning Gaussian Mixtures with Arbitrary Separation
null
null
null
null
cs.LG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a method for learning the parameters of a mixture of $k$ identical spherical Gaussians in $n$-dimensional space with an arbitrarily small separation between the components. Our algorithm is polynomial in all parameters other than $k$. The algorithm is based on an appropriate grid search over the space of parameters. The theoretical analysis of the algorithm hinges on a reduction of the problem to 1 dimension and showing that two 1-dimensional mixtures whose densities are close in the $L^2$ norm must have similar means and mixing coefficients. To produce such a lower bound for the $L^2$ norm in terms of the distances between the corresponding means, we analyze the behavior of the Fourier transform of a mixture of Gaussians in 1 dimension around the origin, which turns out to be closely related to the properties of the Vandermonde matrix obtained from the component means. Analysis of this matrix together with basic function approximation results allows us to provide a lower bound for the norm of the mixture in the Fourier domain. In recent years much research has been aimed at understanding the computational aspects of learning the parameters of Gaussian mixture distributions in high dimension. To the best of our knowledge all existing work on learning parameters of Gaussian mixtures assumes a minimum separation between components of the mixture which is an increasing function of either the dimension of the space $n$ or the number of components $k$. In our paper we prove the first result showing that the parameters of an $n$-dimensional Gaussian mixture model with arbitrarily small component separation can be learned in time polynomial in $n$.
[ { "created": "Mon, 6 Jul 2009 17:41:57 GMT", "version": "v1" }, { "created": "Thu, 13 May 2010 19:20:36 GMT", "version": "v2" } ]
2010-05-14
[ [ "Belkin", "Mikhail", "" ], [ "Sinha", "Kaushik", "" ] ]
In this paper we present a method for learning the parameters of a mixture of $k$ identical spherical Gaussians in $n$-dimensional space with an arbitrarily small separation between the components. Our algorithm is polynomial in all parameters other than $k$. The algorithm is based on an appropriate grid search over the space of parameters. The theoretical analysis of the algorithm hinges on a reduction of the problem to 1 dimension and showing that two 1-dimensional mixtures whose densities are close in the $L^2$ norm must have similar means and mixing coefficients. To produce such a lower bound for the $L^2$ norm in terms of the distances between the corresponding means, we analyze the behavior of the Fourier transform of a mixture of Gaussians in 1 dimension around the origin, which turns out to be closely related to the properties of the Vandermonde matrix obtained from the component means. Analysis of this matrix together with basic function approximation results allows us to provide a lower bound for the norm of the mixture in the Fourier domain. In recent years much research has been aimed at understanding the computational aspects of learning the parameters of Gaussian mixture distributions in high dimension. To the best of our knowledge all existing work on learning parameters of Gaussian mixtures assumes a minimum separation between components of the mixture which is an increasing function of either the dimension of the space $n$ or the number of components $k$. In our paper we prove the first result showing that the parameters of an $n$-dimensional Gaussian mixture model with arbitrarily small component separation can be learned in time polynomial in $n$.
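For intuition, the 1-dimensional object at the heart of the analysis is the Fourier transform of a mixture of $k$ Gaussians with common variance $\sigma^2$ and weights $w_j$ (our notation, standard up to sign convention):

```latex
\widehat{f}(t) \;=\; \sum_{j=1}^{k} w_j \, e^{\, i \mu_j t} \, e^{-\sigma^2 t^2 / 2}
% Expanding e^{i \mu_j t} around t = 0 brings in the moments \sum_j w_j \mu_j^m,
% whose coefficient structure is the Vandermonde matrix in the means \mu_j.
```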
2405.08909
Shuxiao Ding
Shuxiao Ding, Lukas Schneider, Marius Cordts, Juergen Gall
ADA-Track: End-to-End Multi-Camera 3D Multi-Object Tracking with Alternating Detection and Association
14 pages, 3 figures, accepted by CVPR 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Many query-based approaches for 3D Multi-Object Tracking (MOT) adopt the tracking-by-attention paradigm, utilizing track queries for identity-consistent detection and object queries for identity-agnostic track spawning. Tracking-by-attention, however, entangles detection and tracking queries in one embedding for both the detection and tracking task, which is sub-optimal. Other approaches resemble the tracking-by-detection paradigm, detecting objects using decoupled track and detection queries followed by a subsequent association. These methods, however, do not leverage synergies between the detection and association task. Combining the strengths of both paradigms, we introduce ADA-Track, a novel end-to-end framework for 3D MOT from multi-view cameras. We introduce a learnable data association module based on edge-augmented cross-attention, leveraging appearance and geometric features. Furthermore, we integrate this association module into the decoder layer of a DETR-based 3D detector, enabling simultaneous DETR-like query-to-image cross-attention for detection and query-to-query cross-attention for data association. By stacking these decoder layers, queries are refined for the detection and association task alternately, effectively harnessing the task dependencies. We evaluate our method on the nuScenes dataset and demonstrate the advantage of our approach compared to the two previous paradigms. Code is available at https://github.com/dsx0511/ADA-Track.
[ { "created": "Tue, 14 May 2024 19:02:33 GMT", "version": "v1" } ]
2024-05-16
[ [ "Ding", "Shuxiao", "" ], [ "Schneider", "Lukas", "" ], [ "Cordts", "Marius", "" ], [ "Gall", "Juergen", "" ] ]
Many query-based approaches for 3D Multi-Object Tracking (MOT) adopt the tracking-by-attention paradigm, utilizing track queries for identity-consistent detection and object queries for identity-agnostic track spawning. Tracking-by-attention, however, entangles detection and tracking queries in one embedding for both the detection and tracking task, which is sub-optimal. Other approaches resemble the tracking-by-detection paradigm, detecting objects using decoupled track and detection queries followed by a subsequent association. These methods, however, do not leverage synergies between the detection and association task. Combining the strengths of both paradigms, we introduce ADA-Track, a novel end-to-end framework for 3D MOT from multi-view cameras. We introduce a learnable data association module based on edge-augmented cross-attention, leveraging appearance and geometric features. Furthermore, we integrate this association module into the decoder layer of a DETR-based 3D detector, enabling simultaneous DETR-like query-to-image cross-attention for detection and query-to-query cross-attention for data association. By stacking these decoder layers, queries are refined for the detection and association task alternately, effectively harnessing the task dependencies. We evaluate our method on the nuScenes dataset and demonstrate the advantage of our approach compared to the two previous paradigms. Code is available at https://github.com/dsx0511/ADA-Track.
2301.08018
Ahmad Sheikh
Ahmad T Sheikh, Ali Shoker, and Paulo Esteves-Verissimo
System on Chip Rejuvenation in the Wake of Persistent Attacks
null
null
null
null
cs.CR cs.AR
http://creativecommons.org/licenses/by/4.0/
To cope with the ever-increasing threats of dynamic and adaptive persistent attacks, Fault and Intrusion Tolerance (FIT) is being studied at the hardware level to increase the resilience of critical systems. Based on state-machine replication, FIT is known to be effective provided that replicas are compromised and fail independently. This requires different ways of diversification at the software and hardware levels. In this paper, we introduce Samsara, the first hardware-based rejuvenation framework, which allows for creating new computing cores (on which FIT replicas run) with diverse architectures. This is made possible by taking advantage of the programmable and reconfigurable features of an MPSoC with an FPGA. A persistent attack that analyzes and exploits a vulnerability of one core will not be able to exploit it again, as rejuvenation to a different core architecture is made fast enough. We discuss the feasibility of this design and leave the empirical evaluations for future work.
[ { "created": "Thu, 19 Jan 2023 11:41:28 GMT", "version": "v1" } ]
2023-01-20
[ [ "Sheikh", "Ahmad T", "" ], [ "Shoker", "Ali", "" ], [ "Esteves-Verissimo", "Paulo", "" ] ]
To cope with the ever-increasing threats of dynamic and adaptive persistent attacks, Fault and Intrusion Tolerance (FIT) is being studied at the hardware level to increase the resilience of critical systems. Based on state-machine replication, FIT is known to be effective provided that replicas are compromised and fail independently. This requires different ways of diversification at the software and hardware levels. In this paper, we introduce Samsara, the first hardware-based rejuvenation framework, which allows for creating new computing cores (on which FIT replicas run) with diverse architectures. This is made possible by taking advantage of the programmable and reconfigurable features of an MPSoC with an FPGA. A persistent attack that analyzes and exploits a vulnerability of one core will not be able to exploit it again, as rejuvenation to a different core architecture is made fast enough. We discuss the feasibility of this design and leave the empirical evaluations for future work.
0809.4794
Adam Smith
Adam Smith
Efficient, Differentially Private Point Estimators
9 pages
null
null
null
cs.CR cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (further) compelling evidence that rigorous notions of privacy in statistical databases can be consistent with statistically valid inference.
[ { "created": "Sat, 27 Sep 2008 19:57:34 GMT", "version": "v1" } ]
2008-09-30
[ [ "Smith", "Adam", "" ] ]
Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (further) compelling evidence that rigorous notions of privacy in statistical databases can be consistent with statistically valid inference.
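As a concrete (textbook) instance of the flavor of estimator discussed, the following sketch releases a differentially private mean via the Laplace mechanism; it is for intuition only and is not the paper's specific construction.

```python
# epsilon-DP mean via the Laplace mechanism (illustrative sketch).
import numpy as np

def private_mean(data, epsilon, lo, hi):
    """epsilon-DP mean of values known to lie in [lo, hi]."""
    n = len(data)
    clipped = np.clip(data, lo, hi)
    sensitivity = (hi - lo) / n  # changing one record moves the mean by <= this
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Example: the noisy estimate concentrates around the true mean as n grows,
# echoing the convergence-to-the-MLE message above.
data = np.random.normal(0.3, 0.1, size=10_000)
print(private_mean(data, epsilon=0.5, lo=0.0, hi=1.0))
```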
2401.03205
Junyi Li
Junyi Li, Jie Chen, Ruiyang Ren, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie and Ji-Rong Wen
The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models
24 pages, 8 figures, 13 tables
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the era of large language models (LLMs), hallucination (i.e., the tendency to generate factually incorrect content) poses a great challenge to the trustworthy and reliable deployment of LLMs in real-world applications. To tackle LLM hallucination, three key questions should be well studied: how to detect hallucinations (detection), why LLMs hallucinate (source), and what can be done to mitigate them (mitigation). To address these challenges, this work presents a systematic empirical study on LLM hallucination, focused on the three aspects of hallucination detection, source, and mitigation. Specifically, we construct a new hallucination benchmark, HaluEval 2.0, and design a simple yet effective detection method for LLM hallucination. Furthermore, we zoom into the different training and utilization stages of LLMs and extensively analyze the potential factors that lead to LLM hallucination. Finally, we implement and examine a series of widely used techniques to mitigate hallucinations in LLMs. Our work leads to several important findings for understanding the origin of hallucinations and mitigating them in LLMs. Our code and data can be accessed at https://github.com/RUCAIBox/HaluEval-2.0.
[ { "created": "Sat, 6 Jan 2024 12:40:45 GMT", "version": "v1" } ]
2024-01-09
[ [ "Li", "Junyi", "" ], [ "Chen", "Jie", "" ], [ "Ren", "Ruiyang", "" ], [ "Cheng", "Xiaoxue", "" ], [ "Zhao", "Wayne Xin", "" ], [ "Nie", "Jian-Yun", "" ], [ "Wen", "Ji-Rong", "" ] ]
In the era of large language models (LLMs), hallucination (i.e., the tendency to generate factually incorrect content) poses a great challenge to the trustworthy and reliable deployment of LLMs in real-world applications. To tackle LLM hallucination, three key questions should be well studied: how to detect hallucinations (detection), why LLMs hallucinate (source), and what can be done to mitigate them (mitigation). To address these challenges, this work presents a systematic empirical study on LLM hallucination, focused on the three aspects of hallucination detection, source, and mitigation. Specifically, we construct a new hallucination benchmark, HaluEval 2.0, and design a simple yet effective detection method for LLM hallucination. Furthermore, we zoom into the different training and utilization stages of LLMs and extensively analyze the potential factors that lead to LLM hallucination. Finally, we implement and examine a series of widely used techniques to mitigate hallucinations in LLMs. Our work leads to several important findings for understanding the origin of hallucinations and mitigating them in LLMs. Our code and data can be accessed at https://github.com/RUCAIBox/HaluEval-2.0.
1811.07376
Shanxin Yuan
Shanxin Yuan, Bjorn Stenger, Tae-Kyun Kim
RGB-based 3D Hand Pose Estimation via Privileged Learning with Depth Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves the performance of pose estimation from RGB images during testing. We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depth-based network on the paired depth images to constrain mid-level RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn mid-level features that mimic those of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms the state-of-the-art methods for hand pose estimation using RGB image input.
[ { "created": "Sun, 18 Nov 2018 18:52:08 GMT", "version": "v1" } ]
2018-11-20
[ [ "Yuan", "Shanxin", "" ], [ "Stenger", "Bjorn", "" ], [ "Kim", "Tae-Kyun", "" ] ]
This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves the performance of pose estimation from RGB images during testing. We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depth-based network on the paired depth images to constrain mid-level RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn mid-level features that mimic those of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms the state-of-the-art methods for hand pose estimation using RGB image input.
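A minimal sketch of how such a privileged mimicry objective is typically written, assuming hypothetical feature tensors from the RGB branch and a frozen depth-trained branch; the function names and the weighting factor are our own illustration, not the paper's exact loss:

```python
# Privileged-information training sketch: pull mid-level RGB features
# toward features from a frozen depth network on the paired depth image.
import torch
import torch.nn.functional as F

def privileged_loss(pose_pred, pose_gt, feat_rgb, feat_depth, lam=0.1):
    # Supervised pose regression loss on the RGB branch
    pose_loss = F.mse_loss(pose_pred, pose_gt)
    # Mimicry term: depth features carry privileged geometry; detach so
    # gradients do not flow into the frozen depth-trained network
    mimic_loss = F.mse_loss(feat_rgb, feat_depth.detach())
    return pose_loss + lam * mimic_loss
```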
1711.07970
Arthur Pajot
Emmanuel de Bezenac, Arthur Pajot, Patrick Gallinari
Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge
null
null
null
null
cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the use of Deep Learning methods for modeling complex phenomena like those occurring in natural physical processes. With the large amount of data gathered on these phenomena, the data-intensive paradigm could begin to challenge more traditional approaches elaborated over the years in fields such as mathematics and physics. However, despite considerable successes in a variety of application domains, the machine learning field is not yet ready to handle the level of complexity required by such problems. Using an example application, namely Sea Surface Temperature Prediction, we show how general background knowledge gained from physics could be used as a guideline for designing efficient Deep Learning models. In order to motivate the approach and to assess its generality, we demonstrate a formal link between the solution of a class of differential equations underlying a large family of physical phenomena and the proposed model. Experiments and a comparison with a series of baselines, including a state-of-the-art numerical approach, are then provided.
[ { "created": "Tue, 21 Nov 2017 18:49:47 GMT", "version": "v1" }, { "created": "Tue, 9 Jan 2018 16:43:39 GMT", "version": "v2" } ]
2018-01-10
[ [ "de Bezenac", "Emmanuel", "" ], [ "Pajot", "Arthur", "" ], [ "Gallinari", "Patrick", "" ] ]
We consider the use of Deep Learning methods for modeling complex phenomena like those occurring in natural physical processes. With the large amount of data gathered on these phenomena, the data-intensive paradigm could begin to challenge more traditional approaches elaborated over the years in fields such as mathematics and physics. However, despite considerable successes in a variety of application domains, the machine learning field is not yet ready to handle the level of complexity required by such problems. Using an example application, namely Sea Surface Temperature Prediction, we show how general background knowledge gained from physics could be used as a guideline for designing efficient Deep Learning models. In order to motivate the approach and to assess its generality, we demonstrate a formal link between the solution of a class of differential equations underlying a large family of physical phenomena and the proposed model. Experiments and a comparison with a series of baselines, including a state-of-the-art numerical approach, are then provided.
2304.13854
Mingchen Li
Mingchen Li and Lifu Huang
Understand the Dynamic World: An End-to-End Knowledge Informed Framework for Open Domain Entity State Tracking
Published as a conference paper at SIGIR 2023
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Open domain entity state tracking aims to predict reasonable state changes of entities (i.e., [attribute] of [entity] was [before_state] and [after_state] afterwards) given action descriptions. It is important for many reasoning tasks that support everyday human activities. However, it is challenging because the model needs to predict an arbitrary number of entity state changes caused by the action, while most of the entities are only implicitly relevant to the action and their attributes as well as states come from open vocabularies. To tackle these challenges, we propose a novel end-to-end Knowledge Informed framework for open domain Entity State Tracking, namely KIEST, which explicitly retrieves the relevant entities and attributes from an external knowledge graph (i.e., ConceptNet) and incorporates them to autoregressively generate all the entity state changes with a novel dynamic knowledge grained encoder-decoder framework. To enforce logical coherence among the predicted entities, attributes, and states, we design a new constrained decoding strategy and employ a coherence reward to improve the decoding process. Experimental results show that our proposed KIEST framework significantly outperforms strong baselines on the public benchmark dataset OpenPI.
[ { "created": "Wed, 26 Apr 2023 22:45:30 GMT", "version": "v1" } ]
2023-04-28
[ [ "Li", "Mingchen", "" ], [ "Huang", "Lifu", "" ] ]
Open domain entity state tracking aims to predict reasonable state changes of entities (i.e., [attribute] of [entity] was [before_state] and [after_state] afterwards) given action descriptions. It is important for many reasoning tasks that support everyday human activities. However, it is challenging because the model needs to predict an arbitrary number of entity state changes caused by the action, while most of the entities are only implicitly relevant to the action and their attributes as well as states come from open vocabularies. To tackle these challenges, we propose a novel end-to-end Knowledge Informed framework for open domain Entity State Tracking, namely KIEST, which explicitly retrieves the relevant entities and attributes from an external knowledge graph (i.e., ConceptNet) and incorporates them to autoregressively generate all the entity state changes with a novel dynamic knowledge grained encoder-decoder framework. To enforce logical coherence among the predicted entities, attributes, and states, we design a new constrained decoding strategy and employ a coherence reward to improve the decoding process. Experimental results show that our proposed KIEST framework significantly outperforms strong baselines on the public benchmark dataset OpenPI.
1702.04562
Shujie Chen
Shu-Jie Chen and Hui-Liang Shen
Normalized Total Gradient: A New Measure for Multispectral Image Registration
12 pages, 11 figures
null
10.1109/TIP.2017.2776753
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image registration is a fundamental issue in multispectral image processing. In filter-wheel-based multispectral imaging systems, the non-coplanar placement of the filters always causes misalignment among the channel images. The selective characteristic of the spectral response in multispectral imaging raises two challenges for image registration. First, the intensity levels of a local region may differ across individual channel images. Second, the local intensity may vary rapidly in some channel images while remaining stationary in others. Conventional multimodal measures, such as mutual information, correlation coefficient, and correlation ratio, can register images with different regional intensity levels, but fail under severe local intensity variation. In this paper, a new measure, namely the normalized total gradient (NTG), is proposed for multispectral image registration. The NTG is applied to the difference between two channel images. This measure is based on the key assumption (observation) that the gradient of the difference image between two aligned channel images is sparser than that between two misaligned ones. A registration framework, which incorporates an image pyramid and global/local optimization, is further introduced for rigid transforms. Experimental results validate that the proposed method is effective for multispectral image registration and performs better than conventional methods.
[ { "created": "Wed, 15 Feb 2017 11:52:38 GMT", "version": "v1" } ]
2018-02-14
[ [ "Chen", "Shu-Jie", "" ], [ "Shen", "Hui-Liang", "" ] ]
Image registration is a fundamental issue in multispectral image processing. In filter-wheel-based multispectral imaging systems, the non-coplanar placement of the filters always causes misalignment among the channel images. The selective characteristic of the spectral response in multispectral imaging raises two challenges for image registration. First, the intensity levels of a local region may differ across individual channel images. Second, the local intensity may vary rapidly in some channel images while remaining stationary in others. Conventional multimodal measures, such as mutual information, correlation coefficient, and correlation ratio, can register images with different regional intensity levels, but fail under severe local intensity variation. In this paper, a new measure, namely the normalized total gradient (NTG), is proposed for multispectral image registration. The NTG is applied to the difference between two channel images. This measure is based on the key assumption (observation) that the gradient of the difference image between two aligned channel images is sparser than that between two misaligned ones. A registration framework, which incorporates an image pyramid and global/local optimization, is further introduced for rigid transforms. Experimental results validate that the proposed method is effective for multispectral image registration and performs better than conventional methods.
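One plausible NumPy reading of the measure, following the description above (the paper's exact normalization may differ): the total gradient of the difference image, normalized by the total gradients of the inputs, is small when the images are aligned.

```python
# Sketch of a normalized-total-gradient style measure for registration.
import numpy as np

def total_gradient(img):
    gy, gx = np.gradient(img.astype(np.float64))
    return np.abs(gx).sum() + np.abs(gy).sum()

def ntg(img1, img2):
    # Sparse (small) when img1 and img2 are aligned, larger otherwise.
    return total_gradient(img1 - img2) / (
        total_gradient(img1) + total_gradient(img2) + 1e-12
    )

# Registration then amounts to searching for the rigid transform of img2
# that minimizes ntg(img1, warp(img2)), e.g. within a pyramid scheme.
```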
2203.04803
Roy Friedman
Roy Friedman and Or Goaz and Dor Hovav
Limited Associativity Caching in the Data Plane
null
null
null
null
cs.NI cs.OS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-network caching promises to improve the performance of networked and edge applications, as it shortens the paths data need to travel. It does so by storing so-called hot items in the network switches en route between the clients who access the data and the storage servers that maintain it. Since the data flows through those switches in any case, it is natural to cache hot items there. Most software-managed caches treat the cache as a fully associative region. Alas, a fully associative design seems to be at odds with programmable switches' goal of handling packets in a short, bounded amount of time, as well as with their restricted programming model. In this work, we present PKache, a generic limited-associativity cache implementation in the programmable switches' domain-specific P4 language, and demonstrate its utility by realizing multiple popular cache management schemes.
[ { "created": "Wed, 9 Mar 2022 15:32:40 GMT", "version": "v1" } ]
2022-03-10
[ [ "Friedman", "Roy", "" ], [ "Goaz", "Or", "" ], [ "Hovav", "Dor", "" ] ]
In-network caching promises to improve the performance of networked and edge applications, as it shortens the paths data need to travel. It does so by storing so-called hot items in the network switches en route between the clients who access the data and the storage servers that maintain it. Since the data flows through those switches in any case, it is natural to cache hot items there. Most software-managed caches treat the cache as a fully associative region. Alas, a fully associative design seems to be at odds with programmable switches' goal of handling packets in a short, bounded amount of time, as well as with their restricted programming model. In this work, we present PKache, a generic limited-associativity cache implementation in the programmable switches' domain-specific P4 language, and demonstrate its utility by realizing multiple popular cache management schemes.
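Outside the P4 data plane, the limited-associativity structure itself is easy to model. A small Python sketch of a set-associative cache with per-set LRU eviction (set count, associativity, and the hash choice are illustrative, not PKache's layout):

```python
# Set-associative cache with per-set LRU: each key maps to one small set,
# and eviction only ever searches within that set's few ways.
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets=256, ways=4):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def _set_for(self, key):
        return self.sets[hash(key) % self.num_sets]

    def get(self, key):
        s = self._set_for(key)
        if key in s:
            s.move_to_end(key)      # refresh LRU position on hit
            return s[key]
        return None

    def put(self, key, value):
        s = self._set_for(key)
        if key in s:
            s.move_to_end(key)
        elif len(s) >= self.ways:
            s.popitem(last=False)   # evict the LRU way within the set
        s[key] = value
```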
1412.6618
Martin Kiefel
Martin Kiefel, Varun Jampani and Peter V. Gehler
Permutohedral Lattice CNNs
null
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a convolutional layer that is able to process sparse input features. As an example, for image recognition problems this allows an efficient filtering of signals that do not lie on a dense grid (like pixel position), but of more general features (such as color values). The presented algorithm makes use of the permutohedral lattice data structure. The permutohedral lattice was introduced to efficiently implement a bilateral filter, a commonly used image processing operation. Its use allows for a generalization of the convolution type found in current (spatial) convolutional network architectures.
[ { "created": "Sat, 20 Dec 2014 07:08:54 GMT", "version": "v1" }, { "created": "Thu, 26 Feb 2015 14:16:58 GMT", "version": "v2" }, { "created": "Sun, 3 May 2015 11:26:34 GMT", "version": "v3" } ]
2015-05-05
[ [ "Kiefel", "Martin", "" ], [ "Jampani", "Varun", "" ], [ "Gehler", "Peter V.", "" ] ]
This paper presents a convolutional layer that is able to process sparse input features. As an example, for image recognition problems this allows an efficient filtering of signals that do not lie on a dense grid (like pixel position), but of more general features (such as color values). The presented algorithm makes use of the permutohedral lattice data structure. The permutohedral lattice was introduced to efficiently implement a bilateral filter, a commonly used image processing operation. Its use allows for a generalization of the convolution type found in current (spatial) convolutional network architectures.
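For context, the operation the permutohedral lattice was introduced to accelerate is the bilateral filter. A deliberately naive grayscale reference implementation, for intuition only (quadratic in the window size; default parameters are made up):

```python
# Naive bilateral filter: each output pixel is a spatial-and-range weighted
# average of its neighborhood; the lattice makes this fast in high dimensions.
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    img = img.astype(np.float64)
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            ys = slice(max(y - radius, 0), min(y + radius + 1, h))
            xs = slice(max(x - radius, 0), min(x + radius + 1, w))
            patch = img[ys, xs]
            yy, xx = np.mgrid[ys, xs]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            rng = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out
```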
1709.06661
Murat Arcak
Murat Arcak and John Maidens
Simulation-based reachability analysis for nonlinear systems using componentwise contraction properties
null
null
null
null
cs.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
A shortcoming of existing reachability approaches for nonlinear systems is the poor scalability with the number of continuous state variables. To mitigate this problem we present a simulation-based approach where we first sample a number of trajectories of the system and next establish bounds on the convergence or divergence between the samples and neighboring trajectories. We compute these bounds using contraction theory and reduce the conservatism by partitioning the state vector into several components and analyzing contraction properties separately in each direction. Among other benefits this allows us to analyze the effect of constant but uncertain parameters by treating them as state variables and partitioning them into a separate direction. We next present a numerical procedure to search for weighted norms that yield a prescribed contraction rate, which can be incorporated in the reachability algorithm to adjust the weights to minimize the growth of the reachable set.
[ { "created": "Tue, 19 Sep 2017 22:04:58 GMT", "version": "v1" } ]
2017-09-21
[ [ "Arcak", "Murat", "" ], [ "Maidens", "John", "" ] ]
A shortcoming of existing reachability approaches for nonlinear systems is the poor scalability with the number of continuous state variables. To mitigate this problem we present a simulation-based approach where we first sample a number of trajectories of the system and next establish bounds on the convergence or divergence between the samples and neighboring trajectories. We compute these bounds using contraction theory and reduce the conservatism by partitioning the state vector into several components and analyzing contraction properties separately in each direction. Among other benefits this allows us to analyze the effect of constant but uncertain parameters by treating them as state variables and partitioning them into a separate direction. We next present a numerical procedure to search for weighted norms that yield a prescribed contraction rate, which can be incorporated in the reachability algorithm to adjust the weights to minimize the growth of the reachable set.
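The contraction bound underlying the approach can be stated compactly: if the matrix measure $\mu$ of the Jacobian is uniformly negative in some norm, neighboring trajectories converge exponentially, which is what lets finitely many sampled trajectories be inflated into reachable-set over-approximations (a standard statement of contraction theory, in our notation):

```latex
\mu\!\left(\frac{\partial f}{\partial x}(x,t)\right) \le -c < 0
\quad \Longrightarrow \quad
\|x_1(t) - x_2(t)\| \le e^{-c t}\, \|x_1(0) - x_2(0)\|
```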
2207.12515
Yongfeng Zhang
Yingqiang Ge, Shuchang Liu, Zuohui Fu, Juntao Tan, Zelong Li, Shuyuan Xu, Yunqi Li, Yikun Xian, Yongfeng Zhang
A Survey on Trustworthy Recommender Systems
Accepted by ACM Transactions on Recommender Systems (TORS)
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommender systems (RS), serving at the forefront of human-centered AI, are widely deployed in almost every corner of the web and facilitate the human decision-making process. However, despite their enormous capabilities and potential, RS may also lead to undesired effects on users, items, producers, platforms, or even society at large, such as compromised user trust due to non-transparency, unfair treatment of different consumers or producers, and privacy concerns due to the extensive use of users' private data for personalization, just to name a few. All of these create an urgent need for Trustworthy Recommender Systems (TRS) so as to mitigate or avoid such adverse impacts and risks. In this survey, we introduce techniques related to trustworthy recommendation, including but not limited to explainable recommendation, fairness in recommendation, privacy-aware recommendation, robustness in recommendation, and user-controllable recommendation, as well as the relationships between these different perspectives of trustworthy recommendation. Through this survey, we hope to give readers a comprehensive view of the research area and to draw the community's attention to the importance of, existing research achievements in, and future research directions for trustworthy recommendation.
[ { "created": "Mon, 25 Jul 2022 20:23:25 GMT", "version": "v1" }, { "created": "Thu, 22 Feb 2024 02:14:55 GMT", "version": "v2" } ]
2024-02-23
[ [ "Ge", "Yingqiang", "" ], [ "Liu", "Shuchang", "" ], [ "Fu", "Zuohui", "" ], [ "Tan", "Juntao", "" ], [ "Li", "Zelong", "" ], [ "Xu", "Shuyuan", "" ], [ "Li", "Yunqi", "" ], [ "Xian", "Yikun", "" ], [ "Zhang", "Yongfeng", "" ] ]
Recommender systems (RS), serving at the forefront of human-centered AI, are widely deployed in almost every corner of the web and facilitate the human decision-making process. However, despite their enormous capabilities and potential, RS may also lead to undesired effects on users, items, producers, platforms, or even society at large, such as compromised user trust due to non-transparency, unfair treatment of different consumers or producers, and privacy concerns due to the extensive use of users' private data for personalization, just to name a few. All of these create an urgent need for Trustworthy Recommender Systems (TRS) so as to mitigate or avoid such adverse impacts and risks. In this survey, we introduce techniques related to trustworthy recommendation, including but not limited to explainable recommendation, fairness in recommendation, privacy-aware recommendation, robustness in recommendation, and user-controllable recommendation, as well as the relationships between these different perspectives of trustworthy recommendation. Through this survey, we hope to give readers a comprehensive view of the research area and to draw the community's attention to the importance of, existing research achievements in, and future research directions for trustworthy recommendation.
2202.02390
Shinjiro Sueda
Nicholas J. Weidner, Theodore Kim, Shinjiro Sueda
Condensation Jacobian with Adaptivity
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new approach that allows large time steps in dynamic simulations. Our approach, ConJac, is based on condensation, a technique for eliminating many degrees of freedom (DOFs) by expressing them in terms of the remaining degrees of freedom. In this work, we choose a subset of nodes to be dynamic nodes, and apply condensation at the velocity level by defining a linear mapping from the velocities of these chosen dynamic DOFs to the velocities of the remaining quasistatic DOFs. We then use this mapping to derive reduced equations of motion involving only the dynamic DOFs. We also derive a novel stabilization term that enables us to use complex nonlinear material models. ConJac remains stable at large time steps, exhibits highly dynamic motion, and displays minimal numerical damping. In marked contrast to subspace approaches, ConJac gives exactly the same configuration as the full space approach once the static state is reached. Furthermore, ConJac can automatically choose which parts of the object are to be simulated dynamically or quasistatically. Finally, ConJac works with a wide range of moderate to stiff materials, supports anisotropy and heterogeneity, handles topology changes, and can be combined with existing solvers including rigid body dynamics.
[ { "created": "Fri, 4 Feb 2022 21:03:44 GMT", "version": "v1" } ]
2022-02-08
[ [ "Weidner", "Nicholas J.", "" ], [ "Kim", "Theodore", "" ], [ "Sueda", "Shinjiro", "" ] ]
We present a new approach that allows large time steps in dynamic simulations. Our approach, ConJac, is based on condensation, a technique for eliminating many degrees of freedom (DOFs) by expressing them in terms of the remaining degrees of freedom. In this work, we choose a subset of nodes to be dynamic nodes, and apply condensation at the velocity level by defining a linear mapping from the velocities of these chosen dynamic DOFs to the velocities of the remaining quasistatic DOFs. We then use this mapping to derive reduced equations of motion involving only the dynamic DOFs. We also derive a novel stabilization term that enables us to use complex nonlinear material models. ConJac remains stable at large time steps, exhibits highly dynamic motion, and displays minimal numerical damping. In marked contrast to subspace approaches, ConJac gives exactly the same configuration as the full space approach once the static state is reached. Furthermore, ConJac can automatically choose which parts of the object are to be simulated dynamically or quasistatically. Finally, ConJac works with a wide range of moderate to stiff materials, supports anisotropy and heterogeneity, handles topology changes, and can be combined with existing solvers including rigid body dynamics.
1801.08351
Paolo Baracca
Paolo Baracca, Andreas Weber, Thorsten Wild, Christophe Grangeat
A Statistical Approach for RF Exposure Compliance Boundary Assessment in Massive MIMO Systems
Accepted at the International Workshop on Smart Antennas (WSA), Bochum (Germany), Mar. 2018
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Massive multiple-input multiple-output (MIMO) is a fundamental enabler to provide high data throughput in next generation cellular networks. By equipping the base stations (BSs) with tens or hundreds of antenna elements, narrow and high gain beams can be used to spatially multiplex several user equipment (UE) devices. While increasing the achievable performance, focusing the transmit power into specific UE directions also poses new issues when performing the radio frequency (RF) exposure assessment. In fact, the spatial distribution of the actual BS transmit power strongly depends on the deployment scenario and on the position of the UEs. Traditional methods for assessing the RF exposure compliance boundaries around BS sites are generally based on maximum transmit power and static beams. In massive MIMO systems, these approaches tend to be very conservative, in particular when time averaging is properly considered. In this work, we propose to leverage the three-dimensional spatial channel model standardized by the Third Generation Partnership Project in order to assess reasonably foreseeable compliance boundaries of massive MIMO BSs. The analysis is performed by considering fully loaded BSs and different configurations of active UEs per cell. Numerical results show that the statistical approach developed in this paper allows the compliance distance to be nearly halved when compared to the traditional method.
[ { "created": "Thu, 25 Jan 2018 11:11:59 GMT", "version": "v1" } ]
2018-01-26
[ [ "Baracca", "Paolo", "" ], [ "Weber", "Andreas", "" ], [ "Wild", "Thorsten", "" ], [ "Grangeat", "Christophe", "" ] ]
Massive multiple-input multiple-output (MIMO) is a fundamental enabler to provide high data throughput in next generation cellular networks. By equipping the base stations (BSs) with tens or hundreds of antenna elements, narrow and high gain beams can be used to spatially multiplex several user equipment (UE) devices. While increasing the achievable performance, focusing the transmit power into specific UE directions also poses new issues when performing the radio frequency (RF) exposure assessment. In fact, the spatial distribution of the actual BS transmit power strongly depends on the deployment scenario and on the position of the UEs. Traditional methods for assessing the RF exposure compliance boundaries around BS sites are generally based on maximum transmit power and static beams. In massive MIMO systems, these approaches tend to be very conservative, in particular when time averaging is properly considered. In this work, we propose to leverage the three-dimensional spatial channel model standardized by the Third Generation Partnership Project in order to assess reasonably foreseeable compliance boundaries of massive MIMO BSs. The analysis is performed by considering fully loaded BSs and different configurations of active UEs per cell. Numerical results show that the statistical approach developed in this paper allows the compliance distance to be nearly halved when compared to the traditional method.
1904.01160
Yucheng Shi
Yucheng Shi, Siyu Wang, Yahong Han
Curls & Whey: Boosting Black-Box Adversarial Attacks
CVPR 2019 Oral
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image classifiers based on deep neural networks are vulnerable to adversarial examples. Two defects exist in black-box iterative attacks that generate adversarial examples by incrementally adjusting the noise-adding direction at each step. On the one hand, existing iterative attacks add noise monotonically along the direction of gradient ascent, resulting in a lack of diversity and adaptability in the generated iterative trajectories. On the other hand, it is trivial to perform an adversarial attack by adding excessive noise, but currently there is no refinement mechanism to squeeze out redundant noise. In this work, we propose the Curls & Whey black-box attack to fix the above two defects. During the Curls iteration, by combining gradient ascent and descent, we `curl' up iterative trajectories to integrate more diversity and transferability into the adversarial examples. The Curls iteration also alleviates the diminishing marginal effect present in existing iterative attacks. The Whey optimization further squeezes out the `whey' of noise by exploiting the robustness of the adversarial perturbation. Extensive experiments on ImageNet and Tiny-ImageNet demonstrate that our approach achieves an impressive decrease in noise magnitude in the l2 norm. The Curls & Whey attack also shows promising transferability against ensemble models as well as adversarially trained models. In addition, we extend our attack to targeted misclassification, effectively reducing the difficulty of targeted attacks under the black-box condition.
[ { "created": "Tue, 2 Apr 2019 01:16:01 GMT", "version": "v1" } ]
2019-04-03
[ [ "Shi", "Yucheng", "" ], [ "Wang", "Siyu", "" ], [ "Han", "Yahong", "" ] ]
Image classifiers based on deep neural networks are vulnerable to adversarial examples. Two defects exist in black-box iterative attacks that generate adversarial examples by incrementally adjusting the noise-adding direction at each step. On the one hand, existing iterative attacks add noise monotonically along the direction of gradient ascent, resulting in a lack of diversity and adaptability in the generated iterative trajectories. On the other hand, it is trivial to perform an adversarial attack by adding excessive noise, but currently there is no refinement mechanism to squeeze out redundant noise. In this work, we propose the Curls & Whey black-box attack to fix these two defects. During the Curls iteration, by combining gradient ascent and descent, we `curl' up iterative trajectories to integrate more diversity and transferability into adversarial examples. The Curls iteration also alleviates the diminishing marginal effect in existing iterative attacks. The Whey optimization further squeezes the `whey' of noise by exploiting the robustness of the adversarial perturbation. Extensive experiments on ImageNet and Tiny-ImageNet demonstrate that our approach achieves an impressive decrease in noise magnitude in the l2 norm. The Curls & Whey attack also shows promising transferability against ensemble models as well as adversarially trained models. In addition, we extend our attack to targeted misclassification, effectively reducing the difficulty of targeted attacks under black-box conditions.
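As a rough illustration of the two ingredients named in the abstract, the sketch below combines a short gradient-descent `curl' with standard ascent steps in an iterative L-inf attack, followed by a naive noise-squeezing pass. This is a white-box toy on a random linear model, not the authors' black-box Curls & Whey procedure, and every hyperparameter is made up.

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

def curl_attack(x, y, eps=0.3, step=0.05, iters=10, curl_steps=2):
    # First 'curl' with gradient-descent steps on the loss, then ascend as
    # in standard iterative attacks; keep the perturbation in an L-inf ball.
    x_adv = x.clone().detach()
    for i in range(iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        direction = -1.0 if i < curl_steps else 1.0  # descend, then ascend
        x_adv = x_adv.detach() + direction * step * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)     # project to the ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def whey_squeeze(x, x_adv, y, shrink=0.9, rounds=10):
    # Naive squeezing: shrink the noise while the prediction stays wrong
    # (a stand-in for, not a copy of, the paper's Whey optimization).
    for _ in range(rounds):
        candidate = x + shrink * (x_adv - x)
        still_adv = model(candidate).argmax(1) != y
        x_adv = torch.where(still_adv[:, None, None, None], candidate, x_adv)
    return x_adv

x = torch.rand(4, 1, 28, 28)    # dummy images
y = torch.randint(0, 10, (4,))  # dummy labels
x_adv = whey_squeeze(x, curl_attack(x, y), y)
print("max perturbation:", (x_adv - x).abs().max().item())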
1110.3089
Nigel Collier
Nigel Collier, Nguyen Truong Son, Ngoc Mai Nguyen
OMG U got flu? Analysis of shared health messages for bio-surveillance
null
Journal of Biomedical Semantics 2011, 2(Suppl 5):S9
10.1186/2041-1480-2-S5-S9
null
cs.CL cs.IR cs.SI
http://creativecommons.org/licenses/by/3.0/
Background: Micro-blogging services such as Twitter offer the potential to crowdsource epidemics in real-time. However, Twitter posts ('tweets') are often ambiguous and reactive to media trends. In order to ground user messages in epidemic response we focused on tracking reports of self-protective behaviour such as avoiding public gatherings or increased sanitation as the basis for further risk analysis. Results: We created guidelines for tagging self protective behaviour based on Jones and Salath\'e (2009)'s behaviour response survey. Applying the guidelines to a corpus of 5283 Twitter messages related to influenza like illness showed a high level of inter-annotator agreement (kappa 0.86). We employed supervised learning using unigrams, bigrams and regular expressions as features with two supervised classifiers (SVM and Naive Bayes) to classify tweets into 4 self-reported protective behaviour categories plus a self-reported diagnosis. In addition to classification performance we report moderately strong Spearman's Rho correlation by comparing classifier output against WHO/NREVSS laboratory data for A(H1N1) in the USA during the 2009-2010 influenza season. Conclusions: The study adds to evidence supporting a high degree of correlation between pre-diagnostic social media signals and diagnostic influenza case data, pointing the way towards low cost sensor networks. We believe that the signals we have modelled may be applicable to a wide range of diseases.
[ { "created": "Thu, 13 Oct 2011 23:15:44 GMT", "version": "v1" } ]
2011-10-17
[ [ "Collier", "Nigel", "" ], [ "Son", "Nguyen Truong", "" ], [ "Nguyen", "Ngoc Mai", "" ] ]
Background: Micro-blogging services such as Twitter offer the potential to crowdsource epidemics in real-time. However, Twitter posts ('tweets') are often ambiguous and reactive to media trends. In order to ground user messages in epidemic response we focused on tracking reports of self-protective behaviour such as avoiding public gatherings or increased sanitation as the basis for further risk analysis. Results: We created guidelines for tagging self protective behaviour based on Jones and Salath\'e (2009)'s behaviour response survey. Applying the guidelines to a corpus of 5283 Twitter messages related to influenza like illness showed a high level of inter-annotator agreement (kappa 0.86). We employed supervised learning using unigrams, bigrams and regular expressions as features with two supervised classifiers (SVM and Naive Bayes) to classify tweets into 4 self-reported protective behaviour categories plus a self-reported diagnosis. In addition to classification performance we report moderately strong Spearman's Rho correlation by comparing classifier output against WHO/NREVSS laboratory data for A(H1N1) in the USA during the 2009-2010 influenza season. Conclusions: The study adds to evidence supporting a high degree of correlation between pre-diagnostic social media signals and diagnostic influenza case data, pointing the way towards low cost sensor networks. We believe that the signals we have modelled may be applicable to a wide range of diseases.
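A minimal version of the classification setup described above can be reproduced with scikit-learn: unigram and bigram counts feeding either a Naive Bayes or a linear SVM classifier. The tweets and behaviour labels below are invented stand-ins for the paper's 5283-message annotated corpus and its five categories.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

tweets = [
    "staying home today, everyone at work has the flu",
    "washing my hands constantly this week #h1n1",
    "OMG u got flu? i tested positive yesterday",
    "skipping the concert, too many sick people around",
]
labels = ["avoidance", "sanitation", "diagnosis", "avoidance"]

for clf in (MultinomialNB(), LinearSVC()):
    pipe = Pipeline([
        ("ngrams", CountVectorizer(ngram_range=(1, 2))),  # unigrams + bigrams
        ("clf", clf),
    ])
    pipe.fit(tweets, labels)
    print(type(clf).__name__, pipe.predict(["i think i caught the flu"]))

The paper additionally uses hand-crafted regular expressions as features and validates classifier output against laboratory surveillance data, neither of which is shown here.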
1905.08289
Roman Lukyanenko
Roman Lukyanenko, Andrea Wiggins, Holly K. Rosser
Citizen Science: An Information Quality Research Frontier
null
2019, Information Systems Frontiers
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
The rapid proliferation of online content production and sharing technologies has resulted in an explosion of user-generated content (UGC), which now extends to scientific data. Citizen science, in which ordinary people contribute information for scientific research, epitomizes UGC. Citizen science projects are typically open to everyone, engage diverse audiences, and challenge ordinary people to produce data of the highest quality to be usable in science. This also makes citizen science a very exciting area for studying both traditional and innovative approaches to information quality management. With this paper we position citizen science as a leading information quality research frontier. We also show how citizen science opens a unique opportunity for the information systems community to contribute to a broad range of disciplines in the natural and social sciences and the humanities.
[ { "created": "Mon, 20 May 2019 18:39:57 GMT", "version": "v1" } ]
2019-05-22
[ [ "Lukyanenko", "Roman", "" ], [ "Wiggins", "Andrea", "" ], [ "Rosser", "Holly K.", "" ] ]
The rapid proliferation of online content production and sharing technologies has resulted in an explosion of user-generated content (UGC), which now extends to scientific data. Citizen science, in which ordinary people contribute information for scientific research, epitomizes UGC. Citizen science projects are typically open to everyone, engage diverse audiences, and challenge ordinary people to produce data of the highest quality to be usable in science. This also makes citizen science a very exciting area for studying both traditional and innovative approaches to information quality management. With this paper we position citizen science as a leading information quality research frontier. We also show how citizen science opens a unique opportunity for the information systems community to contribute to a broad range of disciplines in the natural and social sciences and the humanities.
1104.3103
Yevgeniy Vorobeychik
Yevgeniy Vorobeychik, Jackson Mayo, Robert Armstrong, Joseph Ruthruff
Noncooperatively Optimized Tolerance: Decentralized Strategic Optimization in Complex Systems
null
Physical Review Letters, 107:108702, 2011
10.1103/PhysRevLett.107.108702
null
cs.GT cond-mat.dis-nn physics.soc-ph
http://creativecommons.org/licenses/publicdomain/
We introduce noncooperatively optimized tolerance (NOT), a generalization of highly optimized tolerance (HOT) that involves strategic (game theoretic) interactions between parties in a complex system. We illustrate our model in the forest fire (percolation) framework. As the number of players increases, our model retains features of HOT, such as robustness, high yield combined with high density, and self-dissimilar landscapes, but also develops features of self-organized criticality (SOC) when the number of players is large enough. For example, the forest landscape becomes increasingly homogeneous and protection from adverse events (lightning strikes) becomes less closely correlated with the spatial distribution of these events. While HOT is a special case of our model, the resemblance to SOC is only partial; for example, the distribution of cascades, while becoming increasingly heavy-tailed as the number of players increases, also deviates more significantly from a power law in this regime. Surprisingly, the system retains considerable robustness even as it becomes fractured, due in part to emergent cooperation between neighboring players. At the same time, increasing homogeneity promotes resilience against changes in the lightning distribution, giving rise to intermediate regimes where the system is robust to a particular distribution of adverse events, yet not very fragile to changes.
[ { "created": "Fri, 15 Apr 2011 16:26:08 GMT", "version": "v1" } ]
2016-08-30
[ [ "Vorobeychik", "Yevgeniy", "" ], [ "Mayo", "Jackson", "" ], [ "Armstrong", "Robert", "" ], [ "Ruthruff", "Joseph", "" ] ]
We introduce noncooperatively optimized tolerance (NOT), a generalization of highly optimized tolerance (HOT) that involves strategic (game theoretic) interactions between parties in a complex system. We illustrate our model in the forest fire (percolation) framework. As the number of players increases, our model retains features of HOT, such as robustness, high yield combined with high density, and self-dissimilar landscapes, but also develops features of self-organized criticality (SOC) when the number of players is large enough. For example, the forest landscape becomes increasingly homogeneous and protection from adverse events (lightning strikes) becomes less closely correlated with the spatial distribution of these events. While HOT is a special case of our model, the resemblance to SOC is only partial; for example, the distribution of cascades, while becoming increasingly heavy-tailed as the number of players increases, also deviates more significantly from a power law in this regime. Surprisingly, the system retains considerable robustness even as it becomes fractured, due in part to emergent cooperation between neighboring players. At the same time, increasing homogeneity promotes resilience against changes in the lightning distribution, giving rise to intermediate regimes where the system is robust to a particular distribution of adverse events, yet not very fragile to changes.
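For readers unfamiliar with the underlying forest-fire (percolation) setting, the sketch below computes the expected yield of a randomly planted, unoptimized lattice forest under a single uniform lightning strike: a strike on a tree burns its whole 4-connected cluster. HOT would instead optimize the layout, and the NOT model introduced here puts several strategic players in control of different regions; the lattice size and densities are illustrative.

import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(1)
N = 128  # lattice side

def expected_yield(density):
    # Surviving tree fraction, in expectation over a uniformly random
    # strike: a cluster of size s is hit with probability s/N^2 and then
    # loses all s trees, so E[loss] = sum(s^2)/N^2 trees.
    forest = rng.random((N, N)) < density
    clusters, _ = label(forest)                 # 4-connectivity by default
    sizes = np.bincount(clusters.ravel())[1:]   # skip the empty background
    expected_loss = (sizes.astype(float) ** 2).sum() / N**2
    return forest.mean() - expected_loss / N**2

for rho in (0.3, 0.5, 0.59, 0.7):
    print(f"density {rho:.2f} -> expected yield {expected_yield(rho):.3f}")

Near the site-percolation threshold (about 0.59 on the square lattice) large clusters appear and the expected yield of the random forest drops, which is exactly the trade-off that optimized (HOT) and strategic (NOT) tolerance manage.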
2305.15641
Huy Mai
Huy Mai, Wen Huang, Wei Du, Xintao Wu
A Robust Classifier Under Missing-Not-At-Random Sample Selection Bias
12 pages
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The shift between the training and testing distributions is commonly due to sample selection bias, a type of bias caused by non-random sampling of examples to be included in the training set. Although many approaches have been proposed to learn a classifier under sample selection bias, few address the case where a subset of labels in the training set is missing-not-at-random (MNAR) as a result of the selection process. In statistics, Greene's method formulates this type of sample selection with logistic regression as the prediction model. However, we find that simply integrating this method into a robust classification framework is not effective for this bias setting. In this paper, we propose BiasCorr, an algorithm that improves on Greene's method by modifying the original training set in order for a classifier to learn under MNAR sample selection bias. We provide a theoretical guarantee for the improvement of BiasCorr over Greene's method by analyzing its bias. Experimental results on real-world datasets demonstrate that BiasCorr produces robust classifiers and can be extended to outperform state-of-the-art classifiers that have been proposed to train under sample selection bias.
[ { "created": "Thu, 25 May 2023 01:39:51 GMT", "version": "v1" } ]
2023-05-26
[ [ "Mai", "Huy", "" ], [ "Huang", "Wen", "" ], [ "Du", "Wei", "" ], [ "Wu", "Xintao", "" ] ]
The shift between the training and testing distributions is commonly due to sample selection bias, a type of bias caused by non-random sampling of examples to be included in the training set. Although many approaches have been proposed to learn a classifier under sample selection bias, few address the case where a subset of labels in the training set is missing-not-at-random (MNAR) as a result of the selection process. In statistics, Greene's method formulates this type of sample selection with logistic regression as the prediction model. However, we find that simply integrating this method into a robust classification framework is not effective for this bias setting. In this paper, we propose BiasCorr, an algorithm that improves on Greene's method by modifying the original training set in order for a classifier to learn under MNAR sample selection bias. We provide a theoretical guarantee for the improvement of BiasCorr over Greene's method by analyzing its bias. Experimental results on real-world datasets demonstrate that BiasCorr produces robust classifiers and can be extended to outperform state-of-the-art classifiers that have been proposed to train under sample selection bias.
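The bias setting itself is easy to reproduce. In the hedged sketch below, training examples enter the sample with a probability that depends on their label, hence missing-not-at-random, and a classifier fit on the selected subset is compared with one fit on the full sample. BiasCorr's actual correction, which modifies the training set using a Greene-style selection model, is not shown; the data and selection probabilities are synthetic.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=20000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# MNAR selection: the chance that an example is observed in training
# depends on its label, not only on its features.
p_select = np.where(y_tr == 1, 0.9, 0.2)
selected = rng.random(len(y_tr)) < p_select

biased = LogisticRegression().fit(X_tr[selected], y_tr[selected])
oracle = LogisticRegression().fit(X_tr, y_tr)  # no selection bias

print("trained on biased sample  :", biased.score(X_te, y_te))
print("trained on unbiased sample:", oracle.score(X_te, y_te))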
2109.00217
Hao Tang
Hao Tang, Guoshuai Zhao, Yuxia Wu, Xueming Qian
Multi-Sample based Contrastive Loss for Top-k Recommendation
12 pages,7 figures,6 tables
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by/4.0/
Top-k recommendation is a fundamental task in recommender systems and is generally learned by comparing positive and negative pairs. The contrastive loss (CL) is the key component of contrastive learning, which has recently received increasing attention, and we find it is well suited to top-k recommendation. However, CL treats positive and negative samples as equally important, which is problematic. On the one hand, CL faces the imbalance between one positive sample and many negative samples. On the other hand, positive items are so few in sparser datasets that their importance should be emphasized. Moreover, another important issue is that the sparse positive items are still not sufficiently utilized in recommendation. We therefore propose a new data augmentation method that uses multiple positive items (or samples) simultaneously with the CL loss function. Building on this, we propose a Multi-Sample based Contrastive Loss (MSCL) function that solves the two problems by balancing the importance of positive and negative samples and by data augmentation. Based on the graph convolutional network (GCN) method, experimental results demonstrate the state-of-the-art performance of MSCL. The proposed MSCL is simple and can be applied in many methods. We will release our code on GitHub upon acceptance.
[ { "created": "Wed, 1 Sep 2021 07:32:13 GMT", "version": "v1" } ]
2021-09-02
[ [ "Tang", "Hao", "" ], [ "Zhao", "Guoshuai", "" ], [ "Wu", "Yuxia", "" ], [ "Qian", "Xueming", "" ] ]
Top-k recommendation is a fundamental task in recommender systems and is generally learned by comparing positive and negative pairs. The contrastive loss (CL) is the key component of contrastive learning, which has recently received increasing attention, and we find it is well suited to top-k recommendation. However, CL treats positive and negative samples as equally important, which is problematic. On the one hand, CL faces the imbalance between one positive sample and many negative samples. On the other hand, positive items are so few in sparser datasets that their importance should be emphasized. Moreover, another important issue is that the sparse positive items are still not sufficiently utilized in recommendation. We therefore propose a new data augmentation method that uses multiple positive items (or samples) simultaneously with the CL loss function. Building on this, we propose a Multi-Sample based Contrastive Loss (MSCL) function that solves the two problems by balancing the importance of positive and negative samples and by data augmentation. Based on the graph convolutional network (GCN) method, experimental results demonstrate the state-of-the-art performance of MSCL. The proposed MSCL is simple and can be applied in many methods. We will release our code on GitHub upon acceptance.
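As a sketch of what a multi-positive contrastive objective can look like (not necessarily the paper's exact MSCL formulation), the PyTorch function below averages an InfoNCE-style term over several positive items per user, so the few positives in sparse data contribute more learning signal. All shapes and the temperature are illustrative.

import torch
import torch.nn.functional as F

def multi_sample_contrastive_loss(user_emb, pos_emb, neg_emb, tau=0.1):
    # user_emb: (B, d), pos_emb: (B, P, d), neg_emb: (B, N, d).
    # Per positive p: -log exp(s_p/tau) / (exp(s_p/tau) + sum_j exp(s_j/tau)).
    u = F.normalize(user_emb, dim=-1).unsqueeze(1)          # (B, 1, d)
    pos = (u * F.normalize(pos_emb, dim=-1)).sum(-1) / tau  # (B, P)
    neg = (u * F.normalize(neg_emb, dim=-1)).sum(-1) / tau  # (B, N)
    neg_rep = neg.unsqueeze(1).expand(-1, pos.size(1), -1)  # (B, P, N)
    denom = torch.logsumexp(torch.cat([pos.unsqueeze(-1), neg_rep], dim=-1),
                            dim=-1)                         # (B, P)
    return (denom - pos).mean()

B, P, N, d = 8, 3, 20, 64
loss = multi_sample_contrastive_loss(
    torch.randn(B, d), torch.randn(B, P, d), torch.randn(B, N, d))
print(loss.item())

Averaging over P positives instead of sampling a single one is one simple way to rebalance positives against the many negatives; the paper combines this idea with GCN-based recommenders.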
1901.10262
Harrie Oosterhuis
Harrie Oosterhuis, Maarten de Rijke
Optimizing Ranking Models in an Online Setting
European Conference on Information Retrieval (ECIR) 2019
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online Learning to Rank (OLTR) methods optimize ranking models by directly interacting with users, which allows them to be very efficient and responsive. All OLTR methods introduced during the past decade have built on the original OLTR method: Dueling Bandit Gradient Descent (DBGD). Recently, a fundamentally different approach was introduced with the Pairwise Differentiable Gradient Descent (PDGD) algorithm. To date the only comparisons of the two approaches are limited to simulations with cascading click models and low levels of noise. The main outcome so far is that PDGD converges at higher levels of performance and learns considerably faster than DBGD-based methods. However, the PDGD algorithm assumes cascading user behavior, potentially giving it an unfair advantage. Furthermore, the robustness of both methods to high levels of noise has not been investigated. Therefore, it is unclear whether the reported advantages of PDGD over DBGD generalize to different experimental conditions. In this paper, we investigate whether the previous conclusions about the PDGD and DBGD comparison generalize from ideal to worst-case circumstances. We do so in two ways. First, we compare the theoretical properties of PDGD and DBGD, by taking a critical look at previously proven properties in the context of ranking. Second, we estimate an upper and lower bound on the performance of methods by simulating both ideal user behavior and extremely difficult behavior, i.e., almost-random non-cascading user models. Our findings show that the theoretical bounds of DBGD do not apply to any common ranking model and, furthermore, that the performance of DBGD is substantially worse than PDGD in both ideal and worst-case circumstances. These results reproduce previously published findings about the relative performance of PDGD vs. DBGD and generalize them to extremely noisy and non-cascading circumstances.
[ { "created": "Tue, 29 Jan 2019 13:04:54 GMT", "version": "v1" } ]
2019-01-30
[ [ "Oosterhuis", "Harrie", "" ], [ "de Rijke", "Maarten", "" ] ]
Online Learning to Rank (OLTR) methods optimize ranking models by directly interacting with users, which allows them to be very efficient and responsive. All OLTR methods introduced during the past decade have built on the original OLTR method: Dueling Bandit Gradient Descent (DBGD). Recently, a fundamentally different approach was introduced with the Pairwise Differentiable Gradient Descent (PDGD) algorithm. To date the only comparisons of the two approaches are limited to simulations with cascading click models and low levels of noise. The main outcome so far is that PDGD converges at higher levels of performance and learns considerably faster than DBGD-based methods. However, the PDGD algorithm assumes cascading user behavior, potentially giving it an unfair advantage. Furthermore, the robustness of both methods to high levels of noise has not been investigated. Therefore, it is unclear whether the reported advantages of PDGD over DBGD generalize to different experimental conditions. In this paper, we investigate whether the previous conclusions about the PDGD and DBGD comparison generalize from ideal to worst-case circumstances. We do so in two ways. First, we compare the theoretical properties of PDGD and DBGD, by taking a critical look at previously proven properties in the context of ranking. Second, we estimate an upper and lower bound on the performance of methods by simulating both ideal user behavior and extremely difficult behavior, i.e., almost-random non-cascading user models. Our findings show that the theoretical bounds of DBGD do not apply to any common ranking model and, furthermore, that the performance of DBGD is substantially worse than PDGD in both ideal and worst-case circumstances. These results reproduce previously published findings about the relative performance of PDGD vs. DBGD and generalize them to extremely noisy and non-cascading circumstances.
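For context, a minimal DBGD-style update loop looks like the sketch below: perturb the current linear ranker along a random unit direction, let a user comparison pick the winner, and step toward it. The hidden `true ranker' utility is a deterministic, noise-free stand-in for interleaved click feedback, purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
d = 10              # number of ranking features
w = np.zeros(d)     # current linear ranking model
w_true = np.ones(d) # hidden preference model (assumed, for simulation only)

def user_prefers_candidate(w_cur, w_cand):
    # Simulated interleaving outcome: the ranker scoring higher under the
    # hidden 'true' ranker wins the comparison.
    return np.dot(w_cand, w_true) > np.dot(w_cur, w_true)

eta, delta = 0.01, 1.0
for _ in range(1000):
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)       # random unit exploration direction
    if user_prefers_candidate(w, w + delta * u):
        w = w + eta * u          # step toward the winning direction

cos = w @ w_true / ((np.linalg.norm(w) + 1e-12) * np.linalg.norm(w_true))
print("cosine similarity to the true ranker:", round(cos, 3))

PDGD differs fundamentally: it infers pairwise document preferences from clicks and performs gradient updates on a probabilistic ranking model, rather than comparing whole perturbed models.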
2407.13592
Florian Hofherr
Mihir Mahajan and Florian Hofherr and Daniel Cremers
MeshFeat: Multi-Resolution Features for Neural Fields on Meshes
To appear at European Conference on Computer Vision (ECCV), 2024
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Parametric feature grid encodings have gained significant attention as an encoding approach for neural fields since they allow for much smaller MLPs, which significantly decreases the inference time of the models. In this work, we propose MeshFeat, a parametric feature encoding tailored to meshes, for which we adapt the idea of multi-resolution feature grids from Euclidean space. We start from the structure provided by the given vertex topology and use a mesh simplification algorithm to construct a multi-resolution feature representation directly on the mesh. The approach allows the usage of small MLPs for neural fields on meshes, and we show a significant speed-up compared to previous representations while maintaining comparable reconstruction quality for texture reconstruction and BRDF representation. Given its intrinsic coupling to the vertices, the method is particularly well-suited for representations on deforming meshes, making it a good fit for object animation.
[ { "created": "Thu, 18 Jul 2024 15:29:48 GMT", "version": "v1" } ]
2024-07-19
[ [ "Mahajan", "Mihir", "" ], [ "Hofherr", "Florian", "" ], [ "Cremers", "Daniel", "" ] ]
Parametric feature grid encodings have gained significant attention as an encoding approach for neural fields since they allow for much smaller MLPs, which significantly decreases the inference time of the models. In this work, we propose MeshFeat, a parametric feature encoding tailored to meshes, for which we adapt the idea of multi-resolution feature grids from Euclidean space. We start from the structure provided by the given vertex topology and use a mesh simplification algorithm to construct a multi-resolution feature representation directly on the mesh. The approach allows the usage of small MLPs for neural fields on meshes, and we show a significant speed-up compared to previous representations while maintaining comparable reconstruction quality for texture reconstruction and BRDF representation. Given its intrinsic coupling to the vertices, the method is particularly well-suited for representations on deforming meshes, making it a good fit for object animation.
1408.4468
David Toman
David Toman and Grant Weddell
Undecidability of Finite Model Reasoning in DLFD
null
null
null
null
cs.DB cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We resolve an open problem concerning finite logical implication for path functional dependencies (PFDs).
[ { "created": "Tue, 19 Aug 2014 20:14:34 GMT", "version": "v1" } ]
2014-08-21
[ [ "Toman", "David", "" ], [ "Weddell", "Grant", "" ] ]
We resolve an open problem concerning finite logical implication for path functional dependencies (PFDs).
1402.1587
Paul Bonsma
Paul Bonsma
Independent Set Reconfiguration in Cographs
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the following independent set reconfiguration problem, called TAR-Reachability: given two independent sets $I$ and $J$ of a graph $G$, both of size at least $k$, is it possible to transform $I$ into $J$ by adding and removing vertices one-by-one, while maintaining an independent set of size at least $k$ throughout? This problem is known to be PSPACE-hard in general. For the case that $G$ is a cograph (i.e. $P_4$-free graph) on $n$ vertices, we show that it can be solved in time $O(n^2)$, and that the length of a shortest reconfiguration sequence from $I$ to $J$ is bounded by $4n-2k$, if such a sequence exists. More generally, we show that if $X$ is a graph class for which (i) TAR-Reachability can be solved efficiently, (ii) maximum independent sets can be computed efficiently, and which satisfies a certain additional property, then the problem can be solved efficiently for any graph that can be obtained from a collection of graphs in $X$ using disjoint union and complete join operations. Chordal graphs are given as an example of such a class $X$.
[ { "created": "Fri, 7 Feb 2014 10:12:05 GMT", "version": "v1" } ]
2014-02-10
[ [ "Bonsma", "Paul", "" ] ]
We study the following independent set reconfiguration problem, called TAR-Reachability: given two independent sets $I$ and $J$ of a graph $G$, both of size at least $k$, is it possible to transform $I$ into $J$ by adding and removing vertices one-by-one, while maintaining an independent set of size at least $k$ throughout? This problem is known to be PSPACE-hard in general. For the case that $G$ is a cograph (i.e. $P_4$-free graph) on $n$ vertices, we show that it can be solved in time $O(n^2)$, and that the length of a shortest reconfiguration sequence from $I$ to $J$ is bounded by $4n-2k$, if such a sequence exists. More generally, we show that if $X$ is a graph class for which (i) TAR-Reachability can be solved efficiently, (ii) maximum independent sets can be computed efficiently, and which satisfies a certain additional property, then the problem can be solved efficiently for any graph that can be obtained from a collection of graphs in $X$ using disjoint union and complete join operations. Chordal graphs are given as an example of such a class $X$.
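The problem statement is easy to make concrete. The brute-force checker below explores all independent sets of size at least $k$ by breadth-first search, moving one vertex at a time; it is exponential and only feasible on tiny graphs, in contrast to the $O(n^2)$ cograph algorithm of this paper. The example graph, a disjoint union of two edges, is itself a cograph.

from itertools import combinations
from collections import deque

def is_independent(vertices, edges):
    return not any((u, v) in edges or (v, u) in edges
                   for u, v in combinations(vertices, 2))

def tar_reachable(n, edges, I, J, k):
    # BFS over independent sets of size >= k under TAR moves
    # (add or remove a single vertex per step).
    start, goal = frozenset(I), frozenset(J)
    queue, seen = deque([start]), {start}
    while queue:
        S = queue.popleft()
        if S == goal:
            return True
        for v in range(n):
            T = S - {v} if v in S else S | {v}
            if len(T) >= k and T not in seen and is_independent(T, edges):
                seen.add(T)
                queue.append(T)
    return False

edges = {(0, 1), (2, 3)}  # disjoint union of two K2's: a cograph
print(tar_reachable(4, edges, {0, 2}, {1, 3}, k=1))  # True, via size-1 sets
print(tar_reachable(4, edges, {0, 2}, {1, 3}, k=2))  # False, stuck at size 2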
1502.05149
Remy Cazabet
Remy Cazabet, Rathachai Chawuthai, and Hideaki Takeda
Using multiple-criteria methods to evaluate community partitions
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Community detection is one of the most studied problems on complex networks. Although hundreds of methods have been proposed so far, there is still no universally accepted formal definition of what is a good community. As a consequence, the problem of the evaluation and the comparison of the quality of the solutions produced by these algorithms is still an open question, despite constant progress on the topic. In this article, we investigate how using a multi-criteria evaluation can solve some of the existing problems of community evaluation, in particular the question of multiple equally-relevant solutions of different granularity. After exploring several approaches, we introduce a new quality function, called MDensity, and propose a method that can be related both to a widely used community detection metric, the Modularity, and to the Precision/Recall approach, ubiquitous in information retrieval.
[ { "created": "Wed, 18 Feb 2015 08:01:29 GMT", "version": "v1" } ]
2015-02-19
[ [ "Cazabet", "Remy", "" ], [ "Chawuthai", "Rathachai", "" ], [ "Takeda", "Hideaki", "" ] ]
Community detection is one of the most studied problems on complex networks. Although hundreds of methods have been proposed so far, there is still no universally accepted formal definition of what is a good community. As a consequence, the problem of the evaluation and the comparison of the quality of the solutions produced by these algorithms is still an open question, despite constant progress on the topic. In this article, we investigate how using a multi-criteria evaluation can solve some of the existing problems of community evaluation, in particular the question of multiple equally-relevant solutions of different granularity. After exploring several approaches, we introduce a new quality function, called MDensity, and propose a method that can be related both to a widely used community detection metric, the Modularity, and to the Precision/Recall approach, ubiquitous in information retrieval.
2111.01431
Seokjun Kim
Seokjun Kim, Jaeeun Jang, Hyeoncheol Kim
Deductive Association Networks
A simple experiment was conducted as a series of artificial association networks
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce deductive association networks (DANs), networks that perform deductive reasoning. High-dimensional thinking requires combining various axioms and feeding the results back into other axioms to produce new relationships and results. For example, given the two propositions "Socrates is a man." and "All men are mortal.", the new proposition "Therefore, Socrates is mortal." can be inferred. For evaluation, we applied the approach to group theory using the MNIST dataset of handwritten digit images and show the results of deductive learning.
[ { "created": "Tue, 2 Nov 2021 08:47:04 GMT", "version": "v1" }, { "created": "Wed, 17 Nov 2021 16:54:10 GMT", "version": "v2" }, { "created": "Mon, 27 Dec 2021 17:41:53 GMT", "version": "v3" } ]
2021-12-28
[ [ "Kim", "Seokjun", "" ], [ "Jang", "Jaeeun", "" ], [ "Kim", "Hyeoncheol", "" ] ]
We introduce deductive association networks (DANs), networks that perform deductive reasoning. High-dimensional thinking requires combining various axioms and feeding the results back into other axioms to produce new relationships and results. For example, given the two propositions "Socrates is a man." and "All men are mortal.", the new proposition "Therefore, Socrates is mortal." can be inferred. For evaluation, we applied the approach to group theory using the MNIST dataset of handwritten digit images and show the results of deductive learning.
2307.06033
Amy Smith Miss
Amy Smith and Michael Cook
AI-Generated Imagery: A New Era for the `Readymade'
5 pages, 1 figure
null
null
null
cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
While the term `art' defies any concrete definition, this paper aims to examine how digital images produced by generative AI systems, such as Midjourney, have come to be so regularly referred to as such. The discourse around the classification of AI-generated imagery as art is currently somewhat homogeneous, lacking the more nuanced aspects that would apply to more traditional modes of artistic media production. This paper aims to bring important philosophical considerations to the surface of the discussion around AI-generated imagery in the context of art. We employ existing philosophical frameworks and theories of language to suggest that some AI-generated imagery, by virtue of its visual properties within these frameworks, can be presented as `readymades' for consideration as art.
[ { "created": "Wed, 12 Jul 2023 09:25:56 GMT", "version": "v1" } ]
2023-07-13
[ [ "Smith", "Amy", "" ], [ "Cook", "Michael", "" ] ]
While the term `art' defies any concrete definition, this paper aims to examine how digital images produced by generative AI systems, such as Midjourney, have come to be so regularly referred to as such. The discourse around the classification of AI-generated imagery as art is currently somewhat homogeneous, lacking the more nuanced aspects that would apply to more traditional modes of artistic media production. This paper aims to bring important philosophical considerations to the surface of the discussion around AI-generated imagery in the context of art. We employ existing philosophical frameworks and theories of language to suggest that some AI-generated imagery, by virtue of its visual properties within these frameworks, can be presented as `readymades' for consideration as art.
2302.11747
Pengrun Jia
Yaoming Zhuang, Pengrun Jia, Zheng Liu, Li Li, Chengdong Wu, Wei cui, Zhanlin Liu
Amos-SLAM: An Anti-Dynamics Two-stage SLAM Approach
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional Simultaneous Localization And Mapping (SLAM) systems rely on the assumption of a static environment and fail to accurately estimate the system's location when dynamic objects are present in the background. While learning-based dynamic SLAM systems have difficulties in handling unknown moving objects, geometry-based methods have limited success in addressing the residual effects of unidentified dynamic objects on location estimation. To address these issues, we propose an anti-dynamics two-stage SLAM approach. Firstly, the potential motion regions of both prior and non-prior dynamic objects are extracted, and pose estimates for dynamic discrimination are quickly obtained using optical flow tracking and model generation methods. Secondly, dynamic points in each frame are removed through dynamic judgment. For non-prior dynamic objects, we present an approach that uses super-pixel extraction and geometric clustering to determine the potential motion regions based on color and geometric information in the image. Evaluations on multiple low and high dynamic sequences in a public RGB-D dataset show that our proposed method outperforms state-of-the-art dynamic SLAM methods.
[ { "created": "Thu, 23 Feb 2023 02:27:42 GMT", "version": "v1" } ]
2023-02-24
[ [ "Zhuang", "Yaoming", "" ], [ "Jia", "Pengrun", "" ], [ "Liu", "Zheng", "" ], [ "Li", "Li", "" ], [ "Wu", "Chengdong", "" ], [ "cui", "Wei", "" ], [ "Liu", "Zhanlin", "" ] ]
Traditional Simultaneous Localization And Mapping (SLAM) systems rely on the assumption of a static environment and fail to accurately estimate the system's location when dynamic objects are present in the background. While learning-based dynamic SLAM systems have difficulties in handling unknown moving objects, geometry-based methods have limited success in addressing the residual effects of unidentified dynamic objects on location estimation. To address these issues, we propose an anti-dynamics two-stage SLAM approach. Firstly, the potential motion regions of both prior and non-prior dynamic objects are extracted, and pose estimates for dynamic discrimination are quickly obtained using optical flow tracking and model generation methods. Secondly, dynamic points in each frame are removed through dynamic judgment. For non-prior dynamic objects, we present an approach that uses super-pixel extraction and geometric clustering to determine the potential motion regions based on color and geometric information in the image. Evaluations on multiple low and high dynamic sequences in a public RGB-D dataset show that our proposed method outperforms state-of-the-art dynamic SLAM methods.
2304.05498
Daniel Manu
Daniel Manu, Jingjing Yao, Wuji Liu, and Xiang Sun
GraphGANFed: A Federated Generative Framework for Graph-Structured Molecules Towards Efficient Drug Discovery
13 pages, 9 figures
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent advances in deep learning have accelerated its use in various applications, such as cellular image analysis and molecular discovery. In molecular discovery, a generative adversarial network (GAN), which comprises a discriminator to distinguish generated molecules from existing molecules and a generator to generate new molecules, is one of the premier technologies due to its ability to learn from a large molecular data set efficiently and generate novel molecules that preserve similar properties. However, different pharmaceutical companies may be unwilling or unable to share their local data sets due to the geo-distributed and sensitive nature of molecular data sets, making it impossible to train GANs in a centralized manner. In this paper, we propose a Graph convolutional network in Generative Adversarial Networks via Federated learning (GraphGANFed) framework, which integrates a graph convolutional network (GCN), a GAN, and federated learning (FL) as a whole system to generate novel molecules without sharing local data sets. In GraphGANFed, the discriminator is implemented as a GCN to better capture features from molecules represented as molecular graphs, and FL is used to train both the discriminator and generator in a distributed manner to preserve data privacy. Extensive simulations are conducted on three benchmark data sets to demonstrate the feasibility and effectiveness of GraphGANFed. The molecules generated by GraphGANFed can achieve high novelty (=100) and diversity (> 0.9). The simulation results also indicate that 1) a lower complexity discriminator model can better avoid mode collapse for a smaller data set, 2) there is a tradeoff among different evaluation metrics, and 3) having the right dropout ratio of the generator and discriminator can avoid mode collapse.
[ { "created": "Tue, 11 Apr 2023 21:15:28 GMT", "version": "v1" } ]
2023-04-13
[ [ "Manu", "Daniel", "" ], [ "Yao", "Jingjing", "" ], [ "Liu", "Wuji", "" ], [ "Sun", "Xiang", "" ] ]
Recent advances in deep learning have accelerated its use in various applications, such as cellular image analysis and molecular discovery. In molecular discovery, a generative adversarial network (GAN), which comprises a discriminator to distinguish generated molecules from existing molecules and a generator to generate new molecules, is one of the premier technologies due to its ability to learn from a large molecular data set efficiently and generate novel molecules that preserve similar properties. However, different pharmaceutical companies may be unwilling or unable to share their local data sets due to the geo-distributed and sensitive nature of molecular data sets, making it impossible to train GANs in a centralized manner. In this paper, we propose a Graph convolutional network in Generative Adversarial Networks via Federated learning (GraphGANFed) framework, which integrates a graph convolutional network (GCN), a GAN, and federated learning (FL) as a whole system to generate novel molecules without sharing local data sets. In GraphGANFed, the discriminator is implemented as a GCN to better capture features from molecules represented as molecular graphs, and FL is used to train both the discriminator and generator in a distributed manner to preserve data privacy. Extensive simulations are conducted on three benchmark data sets to demonstrate the feasibility and effectiveness of GraphGANFed. The molecules generated by GraphGANFed can achieve high novelty (=100) and diversity (> 0.9). The simulation results also indicate that 1) a lower complexity discriminator model can better avoid mode collapse for a smaller data set, 2) there is a tradeoff among different evaluation metrics, and 3) having the right dropout ratio of the generator and discriminator can avoid mode collapse.
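The FL step assumed by such a framework is ordinary federated averaging of client parameters. The sketch below shows only that step, on toy linear modules standing in for each company's locally trained generator; the GCN discriminator, molecular graph data, and local GAN training loops are elided.

import copy
import torch
import torch.nn as nn

def fed_avg(models, weights=None):
    # Weighted average of client state_dicts into one global state_dict.
    weights = weights or [1.0 / len(models)] * len(models)
    global_state = copy.deepcopy(models[0].state_dict())
    for key in global_state:
        global_state[key] = sum(w * m.state_dict()[key]
                                for w, m in zip(weights, models))
    return global_state

clients = [nn.Linear(16, 32) for _ in range(3)]  # one toy model per company
with torch.no_grad():                            # pretend local GAN training
    for m in clients:
        for p in m.parameters():
            p.add_(torch.randn_like(p) * 0.1)

global_state = fed_avg(clients)
for m in clients:                                # broadcast the global model
    m.load_state_dict(global_state)

Only parameters travel between clients and the server, which is how the framework keeps the molecular data sets private.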
1505.02322
Tuomo Lempi\"ainen
Tuomo Lempi\"ainen
Ability to Count Is Worth $\Theta(\Delta)$ Rounds
23 pages, 6 figures
null
null
null
cs.DC cs.CC cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hella et al. (PODC 2012, Distributed Computing 2015) identified seven different models of distributed computing - one of which is the port-numbering model - and provided a complete classification of their computational power relative to each other. However, one of their simulation results involves an additive overhead of $2\Delta-2$ communication rounds, and it was not clear whether this is actually optimal. In this paper we give a positive answer: there is a matching linear-in-$\Delta$ lower bound. This closes the final gap in our understanding of the models, with respect to the number of communication rounds.
[ { "created": "Sat, 9 May 2015 22:03:32 GMT", "version": "v1" } ]
2015-05-12
[ [ "Lempiäinen", "Tuomo", "" ] ]
Hella et al. (PODC 2012, Distributed Computing 2015) identified seven different models of distributed computing - one of which is the port-numbering model - and provided a complete classification of their computational power relative to each other. However, one of their simulation results involves an additive overhead of $2\Delta-2$ communication rounds, and it was not clear whether this is actually optimal. In this paper we give a positive answer: there is a matching linear-in-$\Delta$ lower bound. This closes the final gap in our understanding of the models, with respect to the number of communication rounds.
2301.00036
Altan Cakir
Altan Cakir and Mert Gurkan
Modified Query Expansion Through Generative Adversarial Networks for Information Extraction in E-Commerce
Submitted to Expert Systems with Applications
null
null
null
cs.LG cs.CR cs.IR
http://creativecommons.org/licenses/by-nc-nd/4.0/
This work addresses an alternative approach for query expansion (QE) using a generative adversarial network (GAN) to enhance the effectiveness of information search in e-commerce. We propose a modified QE conditional GAN (mQE-CGAN) framework, which resolves keywords by expanding the query with a synthetically generated query that proposes semantic information from text input. We train a sequence-to-sequence transformer model as the generator to produce keywords and use a recurrent neural network model as the discriminator to classify an adversarial output with the generator. With the modified CGAN framework, various forms of semantic insights gathered from the query document corpus are introduced to the generation process. We leverage these insights as conditions for the generator model and discuss their effectiveness for the query expansion task. Our experiments demonstrate that the utilization of condition structures within the mQE-CGAN framework can increase the semantic similarity between generated sequences and reference documents by up to nearly 10% compared to baseline models.
[ { "created": "Fri, 30 Dec 2022 19:21:44 GMT", "version": "v1" } ]
2023-01-03
[ [ "Cakir", "Altan", "" ], [ "Gurkan", "Mert", "" ] ]
This work addresses an alternative approach for query expansion (QE) using a generative adversarial network (GAN) to enhance the effectiveness of information search in e-commerce. We propose a modified QE conditional GAN (mQE-CGAN) framework, which resolves keywords by expanding the query with a synthetically generated query that proposes semantic information from text input. We train a sequence-to-sequence transformer model as the generator to produce keywords and use a recurrent neural network model as the discriminator to classify an adversarial output with the generator. With the modified CGAN framework, various forms of semantic insights gathered from the query document corpus are introduced to the generation process. We leverage these insights as conditions for the generator model and discuss their effectiveness for the query expansion task. Our experiments demonstrate that the utilization of condition structures within the mQE-CGAN framework can increase the semantic similarity between generated sequences and reference documents by up to nearly 10% compared to baseline models.
2307.13782
Anusha Srikanthan
Anusha Srikanthan, Fengjun Yang, Igor Spasojevic, Dinesh Thakur, Vijay Kumar, Nikolai Matni
A Data-Driven Approach to Synthesizing Dynamics-Aware Trajectories for Underactuated Robotic Systems
8 pages, 6 figures, accepted and will appear in the proceedings at IROS 2023
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider joint trajectory generation and tracking control for under-actuated robotic systems. A common solution is to use a layered control architecture, where the top layer uses a simplified model of system dynamics for trajectory generation, and the low layer ensures approximate tracking of this trajectory via feedback control. While such layered control architectures are standard and work well in practice, selecting the simplified model used for trajectory generation typically relies on engineering intuition and experience. In this paper, we propose an alternative data-driven approach to dynamics-aware trajectory generation. We show that a suitable augmented Lagrangian reformulation of a global nonlinear optimal control problem results in a layered decomposition of the overall problem into trajectory planning and feedback control layers. Crucially, the resulting trajectory optimization is dynamics-aware, in that, it is modified with a tracking penalty regularizer encoding the dynamic feasibility of the generated trajectory. We show that this tracking penalty regularizer can be learned from system rollouts for independently-designed low layer feedback control policies, and instantiate our framework in the context of a unicycle and a quadrotor control problem in simulation. Further, we show that our approach handles the sim-to-real gap through experiments on the quadrotor hardware platform without any additional training. For both the synthetic unicycle example and the quadrotor system, our framework shows significant improvements in both computation time and dynamic feasibility in simulation and hardware experiments.
[ { "created": "Tue, 25 Jul 2023 19:38:26 GMT", "version": "v1" } ]
2023-07-27
[ [ "Srikanthan", "Anusha", "" ], [ "Yang", "Fengjun", "" ], [ "Spasojevic", "Igor", "" ], [ "Thakur", "Dinesh", "" ], [ "Kumar", "Vijay", "" ], [ "Matni", "Nikolai", "" ] ]
We consider joint trajectory generation and tracking control for under-actuated robotic systems. A common solution is to use a layered control architecture, where the top layer uses a simplified model of system dynamics for trajectory generation, and the low layer ensures approximate tracking of this trajectory via feedback control. While such layered control architectures are standard and work well in practice, selecting the simplified model used for trajectory generation typically relies on engineering intuition and experience. In this paper, we propose an alternative data-driven approach to dynamics-aware trajectory generation. We show that a suitable augmented Lagrangian reformulation of a global nonlinear optimal control problem results in a layered decomposition of the overall problem into trajectory planning and feedback control layers. Crucially, the resulting trajectory optimization is dynamics-aware, in that, it is modified with a tracking penalty regularizer encoding the dynamic feasibility of the generated trajectory. We show that this tracking penalty regularizer can be learned from system rollouts for independently-designed low layer feedback control policies, and instantiate our framework in the context of a unicycle and a quadrotor control problem in simulation. Further, we show that our approach handles the sim-to-real gap through experiments on the quadrotor hardware platform without any additional training. For both the synthetic unicycle example and the quadrotor system, our framework shows significant improvements in both computation time and dynamic feasibility in simulation and hardware experiments.
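One generic way to obtain such a layered decomposition (the paper's exact reformulation may differ in its details) is to duplicate the state trajectory into a reference $r$ and relax the coupling constraint $r = x$ with an augmented Lagrangian: $\min_{x,u,r}\; c(r) + \lambda^{\top}(r - x) + \frac{\rho}{2}\lVert r - x \rVert_{2}^{2}$ subject to $x_{t+1} = f(x_{t}, u_{t})$. Alternating minimization then separates into a planning layer over $r$, which is ordinary trajectory optimization plus a quadratic tracking penalty, and a tracking layer over $(x, u)$, which is feedback control toward $r$; replacing the quadratic penalty with a tracking cost learned from rollouts of the low-level controller is what makes the trajectory generation dynamics-aware in the sense described above.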
1811.09789
Omid Mohamad Nezami
Omid Mohamad Nezami, Mark Dras, Stephen Wan, Cecile Paris
Senti-Attend: Image Captioning using Sentiment and Attention
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been much recent work on image captioning models that describe the factual aspects of an image. Recently, some models have incorporated non-factual aspects into the captions, such as sentiment or style. However, such models typically have difficulty in balancing the semantic aspects of the image and the non-factual dimensions of the caption; in addition, it can be observed that humans may focus on different aspects of an image depending on the chosen sentiment or style of the caption. To address this, we design an attention-based model to better add sentiment to image captions. The model embeds and learns sentiment with respect to image-caption data, and uses both high-level and word-level sentiment information during the learning process. The model outperforms the state-of-the-art work in image captioning with sentiment using standard evaluation metrics. An analysis of generated captions also shows that our model does this by a better selection of the sentiment-bearing adjectives and adjective-noun pairs.
[ { "created": "Sat, 24 Nov 2018 08:47:16 GMT", "version": "v1" } ]
2018-11-27
[ [ "Nezami", "Omid Mohamad", "" ], [ "Dras", "Mark", "" ], [ "Wan", "Stephen", "" ], [ "Paris", "Cecile", "" ] ]
There has been much recent work on image captioning models that describe the factual aspects of an image. Recently, some models have incorporated non-factual aspects into the captions, such as sentiment or style. However, such models typically have difficulty in balancing the semantic aspects of the image and the non-factual dimensions of the caption; in addition, it can be observed that humans may focus on different aspects of an image depending on the chosen sentiment or style of the caption. To address this, we design an attention-based model to better add sentiment to image captions. The model embeds and learns sentiment with respect to image-caption data, and uses both high-level and word-level sentiment information during the learning process. The model outperforms the state-of-the-art work in image captioning with sentiment using standard evaluation metrics. An analysis of generated captions also shows that our model does this by a better selection of the sentiment-bearing adjectives and adjective-noun pairs.
1811.07536
Constance Thierry
Constance Thierry (DRUID), Jean-Christophe Dubois (DRUID), Yolande Le Gall (DRUID), Arnaud Martin (DRUID)
Contributors profile modelization in crowdsourcing platforms
in French, 27\`emes rencontres francophones sur la logique floue et ses applications, Nov 2018, Arras, France
null
null
null
cs.AI cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crowdsourcing consists in outsourcing tasks to a crowd of people who are paid to execute them. The crowd, usually diverse, can include users without qualifications and/or motivation for the tasks. In this paper we introduce a new method for modelling user expertise on crowdsourcing platforms, based on the theory of belief functions, in order to identify serious and qualified users.
[ { "created": "Mon, 19 Nov 2018 07:51:25 GMT", "version": "v1" } ]
2018-11-20
[ [ "Thierry", "Constance", "", "DRUID" ], [ "Dubois", "Jean-Christophe", "", "DRUID" ], [ "Gall", "Yolande Le", "", "DRUID" ], [ "Martin", "Arnaud", "", "DRUID" ] ]
Crowdsourcing consists in outsourcing tasks to a crowd of people who are paid to execute them. The crowd, usually diverse, can include users without qualifications and/or motivation for the tasks. In this paper we introduce a new method for modelling user expertise on crowdsourcing platforms, based on the theory of belief functions, in order to identify serious and qualified users.
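The belief-function machinery such a method builds on fits in a few lines. The sketch below implements Dempster's rule of combination and fuses two made-up mass functions about whether a contributor is an expert or a spammer; the frame, evidence sources, and numbers are illustrative, not the authors' contributor-profile model.

from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule for two mass functions over the same frame of
    # discernment; masses are dicts mapping frozensets to weights.
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            conflict += a * b  # mass assigned to the empty intersection
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

E, S = frozenset({"expert"}), frozenset({"spammer"})
ES = E | S                          # ignorance: could be either
answers = {E: 0.6, ES: 0.4}         # evidence from answer quality (made up)
timing = {S: 0.3, ES: 0.7}          # evidence from response time (made up)
print(dempster_combine(answers, timing))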
2205.08809
Ezekiel Soremekun
Ezekiel Soremekun, Mike Papadakis, Maxime Cordy, and Yves Le Traon
Software Fairness: An Analysis and Survey
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last decade, researchers have studied fairness as a software property. In particular, how to engineer fair software systems? This includes specifying, designing, and validating fairness properties. However, the landscape of works addressing bias as a software engineering concern is unclear, i.e., techniques and studies that analyze the fairness properties of learning-based software. In this work, we provide a clear view of the state-of-the-art in software fairness analysis. To this end, we collect, categorize and conduct an in-depth analysis of 164 publications investigating the fairness of learning-based software systems. Specifically, we study the evaluated fairness measure, the studied tasks, the type of fairness analysis, the main idea of the proposed approaches, and the access level (e.g., black, white, or grey box). Our findings include the following: (1) Fairness concerns (such as fairness specification and requirements engineering) are under-studied; (2) Fairness measures such as conditional, sequential, and intersectional fairness are under-explored; (3) Unstructured datasets (e.g., audio, image, and text) are barely studied for fairness analysis; and (4) Software fairness analysis techniques hardly employ white-box, in-processing machine learning (ML) analysis methods. In summary, we observed several open challenges including the need to study intersectional/sequential bias, policy-based bias handling, and human-in-the-loop, socio-technical bias mitigation.
[ { "created": "Wed, 18 May 2022 09:18:08 GMT", "version": "v1" } ]
2022-05-19
[ [ "Soremekun", "Ezekiel", "" ], [ "Papadakis", "Mike", "" ], [ "Cordy", "Maxime", "" ], [ "Traon", "Yves Le", "" ] ]
In the last decade, researchers have studied fairness as a software property. In particular, how to engineer fair software systems? This includes specifying, designing, and validating fairness properties. However, the landscape of works addressing bias as a software engineering concern is unclear, i.e., techniques and studies that analyze the fairness properties of learning-based software. In this work, we provide a clear view of the state-of-the-art in software fairness analysis. To this end, we collect, categorize and conduct an in-depth analysis of 164 publications investigating the fairness of learning-based software systems. Specifically, we study the evaluated fairness measure, the studied tasks, the type of fairness analysis, the main idea of the proposed approaches, and the access level (e.g., black, white, or grey box). Our findings include the following: (1) Fairness concerns (such as fairness specification and requirements engineering) are under-studied; (2) Fairness measures such as conditional, sequential, and intersectional fairness are under-explored; (3) Unstructured datasets (e.g., audio, image, and text) are barely studied for fairness analysis; and (4) Software fairness analysis techniques hardly employ white-box, in-processing machine learning (ML) analysis methods. In summary, we observed several open challenges including the need to study intersectional/sequential bias, policy-based bias handling, and human-in-the-loop, socio-technical bias mitigation.
2403.07865
Qibing Ren
Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Wai Lam, Lizhuang Ma
CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
ACL Findings 2024, Code is available at https://github.com/renqibing/CodeAttack
null
null
null
cs.CL cs.AI cs.CR cs.LG cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid advancement of Large Language Models (LLMs) has brought about remarkable generative capabilities but also raised concerns about their potential misuse. While strategies like supervised fine-tuning and reinforcement learning from human feedback have enhanced their safety, these methods primarily focus on natural languages, which may not generalize to other domains. This paper introduces CodeAttack, a framework that transforms natural language inputs into code inputs, presenting a novel environment for testing the safety generalization of LLMs. Our comprehensive studies on state-of-the-art LLMs including GPT-4, Claude-2, and Llama-2 series reveal a new and universal safety vulnerability of these models against code input: CodeAttack bypasses the safety guardrails of all models more than 80\% of the time. We find that a larger distribution gap between CodeAttack and natural language leads to weaker safety generalization, such as encoding natural language input with data structures. Furthermore, we give our hypotheses about the success of CodeAttack: the misaligned bias acquired by LLMs during code training, prioritizing code completion over avoiding the potential safety risk. Finally, we analyze potential mitigation measures. These findings highlight new safety risks in the code domain and the need for more robust safety alignment algorithms to match the code capabilities of LLMs.
[ { "created": "Tue, 12 Mar 2024 17:55:38 GMT", "version": "v1" }, { "created": "Thu, 14 Mar 2024 16:57:37 GMT", "version": "v2" }, { "created": "Sun, 7 Apr 2024 15:39:24 GMT", "version": "v3" }, { "created": "Sun, 9 Jun 2024 15:04:34 GMT", "version": "v4" } ]
2024-06-11
[ [ "Ren", "Qibing", "" ], [ "Gao", "Chang", "" ], [ "Shao", "Jing", "" ], [ "Yan", "Junchi", "" ], [ "Tan", "Xin", "" ], [ "Lam", "Wai", "" ], [ "Ma", "Lizhuang", "" ] ]
The rapid advancement of Large Language Models (LLMs) has brought about remarkable generative capabilities but also raised concerns about their potential misuse. While strategies like supervised fine-tuning and reinforcement learning from human feedback have enhanced their safety, these methods primarily focus on natural languages, which may not generalize to other domains. This paper introduces CodeAttack, a framework that transforms natural language inputs into code inputs, presenting a novel environment for testing the safety generalization of LLMs. Our comprehensive studies on state-of-the-art LLMs including GPT-4, Claude-2, and Llama-2 series reveal a new and universal safety vulnerability of these models against code input: CodeAttack bypasses the safety guardrails of all models more than 80\% of the time. We find that a larger distribution gap between CodeAttack and natural language leads to weaker safety generalization, such as encoding natural language input with data structures. Furthermore, we give our hypotheses about the success of CodeAttack: the misaligned bias acquired by LLMs during code training, prioritizing code completion over avoiding the potential safety risk. Finally, we analyze potential mitigation measures. These findings highlight new safety risks in the code domain and the need for more robust safety alignment algorithms to match the code capabilities of LLMs.
2210.02161
Nafiseh Soveizi Soveizi
Nafiseh Soveizi, Fatih Turkmen, Dimka Karastoyanova
Security and Privacy Concerns in Cloud-based Scientific and Business Workflows: A Systematic Review
16 pages, 8 figures, 5 tables
null
null
null
cs.CR cs.SE
http://creativecommons.org/licenses/by/4.0/
Today, the number of data-intensive and compute-intensive applications such as business and scientific workflows has increased dramatically, which has made cloud computing a popular means of delivering large amounts of computing resources on demand. On the other hand, security is a critical issue affecting the wide adoption of cloud technologies, especially for workflows that mostly deal with sensitive data and tasks. In this paper, we review the state of the art on how security and privacy concerns in scientific and business workflows in cloud environments are being addressed, and identify the limitations and gaps in the current body of knowledge in this area. In this extensive literature review, we first present a classification of the state-of-the-art security solutions organized according to the phases of the workflow life cycle they target. Based on our findings, we provide a detailed review and classification of the most relevant available literature focusing on the execution, monitoring, and adaptation phases of workflows. Finally, we present and discuss a list of open research issues related to the security of cloud-based workflows.
[ { "created": "Wed, 5 Oct 2022 11:42:39 GMT", "version": "v1" } ]
2022-10-06
[ [ "Soveizi", "Nafiseh", "" ], [ "Turkmen", "Fatih", "" ], [ "Karastoyanova", "Dimka", "" ] ]
Today, the number of data-intensive and compute-intensive applications such as business and scientific workflows has increased dramatically, which has made cloud computing a popular means of delivering large amounts of computing resources on demand. On the other hand, security is a critical issue affecting the wide adoption of cloud technologies, especially for workflows that mostly deal with sensitive data and tasks. In this paper, we review the state of the art on how security and privacy concerns in scientific and business workflows in cloud environments are being addressed, and identify the limitations and gaps in the current body of knowledge in this area. In this extensive literature review, we first present a classification of the state-of-the-art security solutions organized according to the phases of the workflow life cycle they target. Based on our findings, we provide a detailed review and classification of the most relevant available literature focusing on the execution, monitoring, and adaptation phases of workflows. Finally, we present and discuss a list of open research issues related to the security of cloud-based workflows.
2407.13419
Ruchira Dhar
Ruchira Dhar and Anders S{\o}gaard
From Words to Worlds: Compositionality for Cognitive Architectures
Accepted to ICML 2024 Workshop on LLMs & Cognition
null
null
null
cs.CL cs.AI cs.LG cs.SC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large language models (LLMs) are very performant connectionist systems, but do they exhibit more compositionality? More importantly, is that part of why they perform so well? We present empirical analyses across four LLM families (12 models) and three task categories, including a novel task introduced below. Our findings reveal a nuanced relationship in LLMs' learning of compositional strategies -- while scaling enhances compositional abilities, instruction tuning often has the reverse effect. Such disparity raises open issues regarding the development and improvement of large language models in alignment with human cognitive capacities.
[ { "created": "Thu, 18 Jul 2024 11:42:13 GMT", "version": "v1" } ]
2024-07-19
[ [ "Dhar", "Ruchira", "" ], [ "Søgaard", "Anders", "" ] ]
Large language models (LLMs) are very performant connectionist systems, but do they exhibit more compositionality? More importantly, is that part of why they perform so well? We present empirical analyses across four LLM families (12 models) and three task categories, including a novel task introduced below. Our findings reveal a nuanced relationship in LLMs' learning of compositional strategies -- while scaling enhances compositional abilities, instruction tuning often has the reverse effect. Such disparity raises open issues regarding the development and improvement of large language models in alignment with human cognitive capacities.
1802.10566
Zeyu Zhang
Amy Babay, Michael Dinitz, Zeyu Zhang
Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the Shallow-Light Steiner Network problem from a fixed-parameter perspective. Given a graph $G$, a distance bound $L$, and $p$ pairs of vertices $(s_1,t_1),\cdots,(s_p,t_p)$, the objective is to find a minimum-cost subgraph $G'$ such that $s_i$ and $t_i$ have distance at most $L$ in $G'$ (for every $i \in [p]$). Our main result is on the fixed-parameter tractability of this problem with parameter $p$. We exactly characterize the demand structures that make the problem "easy", and give FPT algorithms for those cases. In all other cases, we show that the problem is W$[1]$-hard. We also extend our results to handle general edge lengths and costs, precisely characterizing which demands allow for good FPT approximation algorithms and which demands remain W$[1]$-hard even to approximate.
[ { "created": "Wed, 28 Feb 2018 18:13:23 GMT", "version": "v1" } ]
2018-03-01
[ [ "Babay", "Amy", "" ], [ "Dinitz", "Michael", "" ], [ "Zhang", "Zeyu", "" ] ]
We consider the Shallow-Light Steiner Network problem from a fixed-parameter perspective. Given a graph $G$, a distance bound $L$, and $p$ pairs of vertices $(s_1,t_1),\cdots,(s_p,t_p)$, the objective is to find a minimum-cost subgraph $G'$ such that $s_i$ and $t_i$ have distance at most $L$ in $G'$ (for every $i \in [p]$). Our main result is on the fixed-parameter tractability of this problem with parameter $p$. We exactly characterize the demand structures that make the problem "easy", and give FPT algorithms for those cases. In all other cases, we show that the problem is W$[1]$-hard. We also extend our results to handle general edge lengths and costs, precisely characterizing which demands allow for good FPT approximation algorithms and which demands remain W$[1]$-hard even to approximate.
2307.14480
Chen Chen
Chen Chen, Vasudev Gohil, Rahul Kande, Ahmad-Reza Sadeghi, Jeyavijayan Rajendran
PSOFuzz: Fuzzing Processors with Particle Swarm Optimization
To be published in the proceedings of the ICCAD, 2023
null
null
null
cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
Hardware security vulnerabilities in computing systems compromise the security defenses of not only the hardware but also the software running on it. Recent research has shown that hardware fuzzing is a promising technique to efficiently detect such vulnerabilities in large-scale designs such as modern processors. However, current fuzzing techniques do not adjust their strategies dynamically toward faster and broader design-space exploration, resulting in slow vulnerability detection, evident through their low design coverage. To address this problem, we propose PSOFuzz, which uses particle swarm optimization (PSO) to schedule the mutation operators and to generate initial input programs dynamically, with the objective of detecting vulnerabilities quickly. Unlike traditional PSO, which finds a single optimal solution, we use a modified PSO that dynamically computes the optimal solution for selecting the mutation operators required to explore new design regions in hardware. We also address the challenge of inefficient initial seed generation by employing PSO-based seed generation. With these optimizations, our final formulation outperforms fuzzers without PSO. Experiments show that PSOFuzz achieves up to 15.25$\times$ speedup for vulnerability detection and up to 2.22$\times$ speedup for coverage compared to the state-of-the-art simulation-based hardware fuzzer.
[ { "created": "Wed, 26 Jul 2023 20:08:01 GMT", "version": "v1" }, { "created": "Fri, 18 Aug 2023 18:16:32 GMT", "version": "v2" } ]
2023-08-22
[ [ "Chen", "Chen", "" ], [ "Gohil", "Vasudev", "" ], [ "Kande", "Rahul", "" ], [ "Sadeghi", "Ahmad-Reza", "" ], [ "Rajendran", "Jeyavijayan", "" ] ]
Hardware security vulnerabilities in computing systems compromise the security defenses of not only the hardware but also the software running on it. Recent research has shown that hardware fuzzing is a promising technique to efficiently detect such vulnerabilities in large-scale designs such as modern processors. However, current fuzzing techniques do not adjust their strategies dynamically toward faster and broader design-space exploration, resulting in slow vulnerability detection, evident through their low design coverage. To address this problem, we propose PSOFuzz, which uses particle swarm optimization (PSO) to schedule the mutation operators and to generate initial input programs dynamically, with the objective of detecting vulnerabilities quickly. Unlike traditional PSO, which finds a single optimal solution, we use a modified PSO that dynamically computes the optimal solution for selecting the mutation operators required to explore new design regions in hardware. We also address the challenge of inefficient initial seed generation by employing PSO-based seed generation. With these optimizations, our final formulation outperforms fuzzers without PSO. Experiments show that PSOFuzz achieves up to 15.25$\times$ speedup for vulnerability detection and up to 2.22$\times$ speedup for coverage compared to the state-of-the-art simulation-based hardware fuzzer.
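A minimal sketch of the scheduling idea, under stated assumptions rather than the paper's exact formulation: each particle is a probability vector over mutation operators, and the `coverage_gain` fitness is a placeholder for running mutated seeds on the design under test and counting new coverage points.

```python
# PSO over mutation-operator schedules: particles are operator distributions,
# fitness is a (placeholder) coverage-gain measure.
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES, N_OPS, W, C1, C2 = 8, 4, 0.7, 1.5, 1.5

def coverage_gain(op_probs: np.ndarray) -> float:
    """Placeholder: a real fuzzer would execute mutated seeds on the DUT."""
    target = np.array([0.5, 0.2, 0.2, 0.1])  # pretend this mix covers best
    return -np.sum((op_probs - target) ** 2)

pos = rng.dirichlet(np.ones(N_OPS), size=N_PARTICLES)   # operator distributions
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([coverage_gain(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(50):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1e-6, None)
    pos /= pos.sum(axis=1, keepdims=True)                # keep rows valid distributions
    f = np.array([coverage_gain(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best operator schedule:", np.round(gbest, 3))
```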
1903.10657
Chao Zhang
Takumi Nakane, Takuya Akashi, Xuequan Lu, Chao Zhang
A Probabilistic Bitwise Genetic Algorithm for B-Spline based Image Deformation Estimation
GECCO2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel genetic algorithm to solve the image deformation estimation problem while preserving genetic diversity. As a classical problem, there is always a trade-off between the complexity of deformation models and the difficulty of parameter search in image deformation. A 2D cubic B-spline surface is a highly free-form deformation model and is able to handle complex deformations such as fluid image distortions. However, it is challenging to estimate an apposite global solution. To tackle this problem, we develop a genetic operation named probabilistic bitwise operation (PBO) to replace the crossover and mutation operations, which preserves diversity across generations and achieves a better coverage ratio of the solution space. Furthermore, a selection strategy named annealing selection is proposed to control convergence. Qualitative and quantitative results on synthetic data show the effectiveness of our method.
[ { "created": "Tue, 26 Mar 2019 02:24:07 GMT", "version": "v1" } ]
2019-03-27
[ [ "Nakane", "Takumi", "" ], [ "Akashi", "Takuya", "" ], [ "Lu", "Xuequan", "" ], [ "Zhang", "Chao", "" ] ]
We propose a novel genetic algorithm to solve the image deformation estimation problem while preserving genetic diversity. As a classical problem, there is always a trade-off between the complexity of deformation models and the difficulty of parameter search in image deformation. A 2D cubic B-spline surface is a highly free-form deformation model and is able to handle complex deformations such as fluid image distortions. However, it is challenging to estimate an apposite global solution. To tackle this problem, we develop a genetic operation named probabilistic bitwise operation (PBO) to replace the crossover and mutation operations, which preserves diversity across generations and achieves a better coverage ratio of the solution space. Furthermore, a selection strategy named annealing selection is proposed to control convergence. Qualitative and quantitative results on synthetic data show the effectiveness of our method.
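The abstract does not spell PBO out; as a rough, PBIL-style stand-in (an assumption, not the paper's exact operator), the sketch below samples each offspring bit from the population's per-bit statistics instead of applying crossover and mutation.

```python
# PBIL-style probabilistic bitwise offspring generation: each bit is sampled
# from the population's per-bit frequency of ones.
import numpy as np

rng = np.random.default_rng(1)

def pbo_offspring(population: np.ndarray, n_children: int) -> np.ndarray:
    """population: (pop_size, n_bits) array of 0/1 genomes."""
    p_one = population.mean(axis=0)                 # per-bit probability of a 1
    return (rng.random((n_children, population.shape[1])) < p_one).astype(np.uint8)

pop = rng.integers(0, 2, size=(20, 16), dtype=np.uint8)
children = pbo_offspring(pop, n_children=20)
print(children[:3])
```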
2109.04096
Shilei Liu
Shilei Liu, Xiaofeng Zhao, Bochao Li, Feiliang Ren, Longhui Zhang, Shujuan Yin
A Three-Stage Learning Framework for Low-Resource Knowledge-Grounded Dialogue Generation
Accepted by EMNLP 2021 main conference
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Neural conversation models have shown great potential for generating fluent and informative responses by introducing external background knowledge. Nevertheless, it is laborious to construct such knowledge-grounded dialogues, and existing models usually perform poorly when transferred to new domains with limited training samples. Therefore, building a knowledge-grounded dialogue system under the low-resource setting is still a crucial issue. In this paper, we propose a novel three-stage learning framework based on weakly supervised learning which benefits from large-scale ungrounded dialogues and an unstructured knowledge base. To better cooperate with this framework, we devise a variant of the Transformer with a decoupled decoder which facilitates the disentangled learning of response generation and knowledge incorporation. Evaluation results on two benchmarks indicate that our approach can outperform other state-of-the-art methods with less training data, and even in the zero-resource scenario, our approach still performs well.
[ { "created": "Thu, 9 Sep 2021 08:32:02 GMT", "version": "v1" } ]
2021-09-10
[ [ "Liu", "Shilei", "" ], [ "Zhao", "Xiaofeng", "" ], [ "Li", "Bochao", "" ], [ "Ren", "Feiliang", "" ], [ "Zhang", "Longhui", "" ], [ "Yin", "Shujuan", "" ] ]
Neural conversation models have shown great potential for generating fluent and informative responses by introducing external background knowledge. Nevertheless, it is laborious to construct such knowledge-grounded dialogues, and existing models usually perform poorly when transferred to new domains with limited training samples. Therefore, building a knowledge-grounded dialogue system under the low-resource setting is still a crucial issue. In this paper, we propose a novel three-stage learning framework based on weakly supervised learning which benefits from large-scale ungrounded dialogues and an unstructured knowledge base. To better cooperate with this framework, we devise a variant of the Transformer with a decoupled decoder which facilitates the disentangled learning of response generation and knowledge incorporation. Evaluation results on two benchmarks indicate that our approach can outperform other state-of-the-art methods with less training data, and even in the zero-resource scenario, our approach still performs well.
1408.3810
Hossein Rahmani
Hossein Rahmani, Arif Mahmood, Du Huynh, Ajmal Mian
Action Classification with Locality-constrained Linear Coding
ICPR 2014
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an action classification algorithm which uses Locality-constrained Linear Coding (LLC) to capture discriminative information of human body variations in each spatiotemporal subsequence of a video sequence. Our proposed method divides the input video into equally spaced overlapping spatiotemporal subsequences, each of which is decomposed into blocks and then cells. We use the Histogram of Oriented Gradient (HOG3D) feature to encode the information in each cell. We justify the use of LLC for encoding the block descriptor by demonstrating its superiority over Sparse Coding (SC). Our sequence descriptor is obtained via a logistic regression classifier with L2 regularization. We evaluate and compare our algorithm with ten state-of-the-art algorithms on five benchmark datasets. Experimental results show that, on average, our algorithm gives better accuracy than these ten algorithms.
[ { "created": "Sun, 17 Aug 2014 10:46:45 GMT", "version": "v1" }, { "created": "Mon, 22 Sep 2014 06:54:34 GMT", "version": "v2" } ]
2014-09-23
[ [ "Rahmani", "Hossein", "" ], [ "Mahmood", "Arif", "" ], [ "Huynh", "Du", "" ], [ "Mian", "Ajmal", "" ] ]
We propose an action classification algorithm which uses Locality-constrained Linear Coding (LLC) to capture discriminative information of human body variations in each spatiotemporal subsequence of a video sequence. Our proposed method divides the input video into equally spaced overlapping spatiotemporal subsequences, each of which is decomposed into blocks and then cells. We use the Histogram of Oriented Gradient (HOG3D) feature to encode the information in each cell. We justify the use of LLC for encoding the block descriptor by demonstrating its superiority over Sparse Coding (SC). Our sequence descriptor is obtained via a logistic regression classifier with L2 regularization. We evaluate and compare our algorithm with ten state-of-the-art algorithms on five benchmark datasets. Experimental results show that, on average, our algorithm gives better accuracy than these ten algorithms.
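LLC is a published encoder (Wang et al., CVPR 2010); below is a minimal sketch of its common k-nearest-neighbour approximate solution, which this abstract applies to HOG3D block descriptors. The codebook and sizes here are illustrative, not the paper's settings.

```python
# Locality-constrained Linear Coding, approximate form: reconstruct a descriptor
# from its k nearest codebook atoms under a sum-to-one constraint.
import numpy as np

def llc_encode(x: np.ndarray, codebook: np.ndarray, k: int = 5, beta: float = 1e-4):
    """x: (d,) descriptor; codebook: (M, d); returns a sparse (M,) LLC code."""
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]                      # k nearest codebook atoms
    z = codebook[idx] - x                            # shift atoms to the descriptor
    C = z @ z.T                                      # local covariance (k, k)
    C += beta * np.trace(C) * np.eye(k)              # regularise for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                     # enforce the sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w
    return code

rng = np.random.default_rng(2)
code = llc_encode(rng.normal(size=64), rng.normal(size=(256, 64)))
print("non-zero entries:", np.flatnonzero(code))
```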
2202.05659
Yabin Zhu
Yabin Zhu, Chenglong Li, Yao Liu, Xiao Wang, Jin Tang, Bin Luo, Zhixiang Huang
Tiny Object Tracking: A Large-scale Dataset and A Baseline
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tiny objects, which frequently appear in practical applications, have weak appearance and features, and are receiving increasing interest in many vision tasks, such as object detection and segmentation. To promote the research and development of tiny object tracking, we create a large-scale video dataset, which contains 434 sequences with a total of more than 217K frames. Each frame is carefully annotated with a high-quality bounding box. In data creation, we take 12 challenge attributes into account to cover a broad range of viewpoints and scene complexities, and annotate these attributes to facilitate attribute-based performance analysis. To provide a strong baseline in tiny object tracking, we propose a novel Multilevel Knowledge Distillation Network (MKDNet), which pursues three levels of knowledge distillation in a unified framework to effectively enhance the feature representation, discrimination, and localization abilities in tracking tiny objects. Extensive experiments are performed on the proposed dataset, and the results prove the superiority and effectiveness of MKDNet compared with state-of-the-art methods. The dataset, the algorithm code, and the evaluation code are available at https://github.com/mmic-lcl/Datasets-and-benchmark-code.
[ { "created": "Fri, 11 Feb 2022 15:00:32 GMT", "version": "v1" } ]
2022-02-14
[ [ "Zhu", "Yabin", "" ], [ "Li", "Chenglong", "" ], [ "Liu", "Yao", "" ], [ "Wang", "Xiao", "" ], [ "Tang", "Jin", "" ], [ "Luo", "Bin", "" ], [ "Huang", "Zhixiang", "" ] ]
Tiny objects, which frequently appear in practical applications, have weak appearance and features, and are receiving increasing interest in many vision tasks, such as object detection and segmentation. To promote the research and development of tiny object tracking, we create a large-scale video dataset, which contains 434 sequences with a total of more than 217K frames. Each frame is carefully annotated with a high-quality bounding box. In data creation, we take 12 challenge attributes into account to cover a broad range of viewpoints and scene complexities, and annotate these attributes to facilitate attribute-based performance analysis. To provide a strong baseline in tiny object tracking, we propose a novel Multilevel Knowledge Distillation Network (MKDNet), which pursues three levels of knowledge distillation in a unified framework to effectively enhance the feature representation, discrimination, and localization abilities in tracking tiny objects. Extensive experiments are performed on the proposed dataset, and the results prove the superiority and effectiveness of MKDNet compared with state-of-the-art methods. The dataset, the algorithm code, and the evaluation code are available at https://github.com/mmic-lcl/Datasets-and-benchmark-code.
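MKDNet's three distillation levels are not detailed in the abstract; as a generic illustration of the kind of objective such a framework might use at each level, the sketch below matches student feature maps to a teacher's. The adapter layers and shapes are assumptions, not the paper's design.

```python
# Generic feature-level knowledge distillation: project student features to the
# teacher's channel width and minimise the MSE to the (detached) teacher maps.
import torch
import torch.nn.functional as F

def feature_distill_loss(student_feats, teacher_feats, adapt_layers):
    """student_feats / teacher_feats: lists of (B, C_i, H_i, W_i) tensors;
    adapt_layers: 1x1 convs mapping student channels to teacher channels."""
    loss = 0.0
    for s, t, adapt in zip(student_feats, teacher_feats, adapt_layers):
        loss = loss + F.mse_loss(adapt(s), t.detach())
    return loss / len(student_feats)

# Toy usage with random "features" at two levels.
s = [torch.randn(2, 32, 16, 16), torch.randn(2, 64, 8, 8)]
t = [torch.randn(2, 64, 16, 16), torch.randn(2, 128, 8, 8)]
adapters = [torch.nn.Conv2d(32, 64, 1), torch.nn.Conv2d(64, 128, 1)]
print(feature_distill_loss(s, t, adapters))
```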
2308.03921
Hariharan Subramonyam
Tyler Angert, Miroslav Ivan Suzara, Jenny Han, Christopher Lawrence Pondoc, Hariharan Subramonyam
Spellburst: A Node-based Interface for Exploratory Creative Coding with Natural Language Prompts
null
null
10.1145/3586183.3606719
null
cs.SE cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creative coding tasks are often exploratory in nature. When producing digital artwork, artists usually begin with a high-level semantic construct such as a "stained glass filter" and programmatically implement it by varying code parameters such as shape, color, lines, and opacity to produce visually appealing results. Based on interviews with artists, translating semantic constructs into program syntax can be effortful, and current programming tools don't lend themselves well to rapid creative exploration. To address these challenges, we introduce Spellburst, a large language model (LLM) powered creative-coding environment. Spellburst provides (1) a node-based interface that allows artists to create generative art and explore variations through branching and merging operations, (2) expressive prompt-based interactions to engage in semantic programming, and (3) dynamic prompt-driven interfaces and direct code editing to seamlessly switch between semantic and syntactic exploration. Our evaluation with artists demonstrates Spellburst's potential to enhance creative coding practices and inform the design of computational creativity tools that bridge semantic and syntactic spaces.
[ { "created": "Mon, 7 Aug 2023 21:54:58 GMT", "version": "v1" } ]
2023-08-14
[ [ "Angert", "Tyler", "" ], [ "Suzara", "Miroslav Ivan", "" ], [ "Han", "Jenny", "" ], [ "Pondoc", "Christopher Lawrence", "" ], [ "Subramonyam", "Hariharan", "" ] ]
Creative coding tasks are often exploratory in nature. When producing digital artwork, artists usually begin with a high-level semantic construct such as a "stained glass filter" and programmatically implement it by varying code parameters such as shape, color, lines, and opacity to produce visually appealing results. Based on interviews with artists, translating semantic constructs into program syntax can be effortful, and current programming tools don't lend themselves well to rapid creative exploration. To address these challenges, we introduce Spellburst, a large language model (LLM) powered creative-coding environment. Spellburst provides (1) a node-based interface that allows artists to create generative art and explore variations through branching and merging operations, (2) expressive prompt-based interactions to engage in semantic programming, and (3) dynamic prompt-driven interfaces and direct code editing to seamlessly switch between semantic and syntactic exploration. Our evaluation with artists demonstrates Spellburst's potential to enhance creative coding practices and inform the design of computational creativity tools that bridge semantic and syntactic spaces.
2012.01067
Egor Namakonov
Ori Lahav (1), Egor Namakonov (2 and 3), Jonas Oberhauser (4 and 5), Anton Podkopaev (3 and 6), Viktor Vafeiadis (7) ((1) Tel Aviv University, (2) St Petersburg University, (3) JetBrains Research, (4) Huawei Dresden Research Center, (5) Huawei OS Kernel Lab, (6) NRU HSE, (7) MPI-SWS)
Making Weak Memory Models Fair
43 pages, 2 figures
null
null
null
cs.PL
http://creativecommons.org/licenses/by/4.0/
Liveness properties, such as termination, of even the simplest shared-memory concurrent programs under sequential consistency typically require some fairness assumptions about the scheduler. Under weak memory models, we observe that the standard notions of thread fairness are insufficient, and an additional fairness property, which we call memory fairness, is needed. In this paper, we propose a uniform definition for memory fairness that can be integrated into any declarative memory model enforcing acyclicity of the union of the program order and the reads-from relation. For the well-known models, SC, x86-TSO, RA, and StrongCOH, that have equivalent operational and declarative presentations, we show that our declarative memory fairness condition is equivalent to an intuitive model-specific operational notion of memory fairness, which requires the memory system to fairly execute its internal propagation steps. Our fairness condition preserves the correctness of local transformations and the compilation scheme from RC11 to x86-TSO, and also enables the first formal proofs of termination of mutual exclusion lock implementations under declarative weak memory models.
[ { "created": "Wed, 2 Dec 2020 10:22:23 GMT", "version": "v1" }, { "created": "Thu, 9 Sep 2021 10:43:13 GMT", "version": "v2" } ]
2021-09-10
[ [ "Lahav", "Ori", "", "2 and 3" ], [ "Namakonov", "Egor", "", "2 and 3" ], [ "Oberhauser", "Jonas", "", "4 and 5" ], [ "Podkopaev", "Anton", "", "3 and 6" ], [ "Vafeiadis", "Viktor", "" ] ]
Liveness properties, such as termination, of even the simplest shared-memory concurrent programs under sequential consistency typically require some fairness assumptions about the scheduler. Under weak memory models, we observe that the standard notions of thread fairness are insufficient, and an additional fairness property, which we call memory fairness, is needed. In this paper, we propose a uniform definition for memory fairness that can be integrated into any declarative memory model enforcing acyclicity of the union of the program order and the reads-from relation. For the well-known models, SC, x86-TSO, RA, and StrongCOH, that have equivalent operational and declarative presentations, we show that our declarative memory fairness condition is equivalent to an intuitive model-specific operational notion of memory fairness, which requires the memory system to fairly execute its internal propagation steps. Our fairness condition preserves the correctness of local transformations and the compilation scheme from RC11 to x86-TSO, and also enables the first formal proofs of termination of mutual exclusion lock implementations under declarative weak memory models.
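For the acyclicity condition this abstract builds on (program order po united with reads-from rf), here is a small illustrative check on the classic load-buffering shape. The event names and edges are made up for the example; they are not from the paper.

```python
# Check acyclicity of po ∪ rf on a toy execution graph: thread 1 reads x then
# writes y, thread 2 reads y then writes x, and each read reads-from the other
# thread's write -- which yields a (po ∪ rf) cycle.
from graphlib import TopologicalSorter, CycleError

po = {("Rx", "Wy"), ("Ry", "Wx")}           # per-thread program order
rf = {("Wy", "Ry"), ("Wx", "Rx")}           # reads-from edges

preds: dict[str, set[str]] = {}
for src, dst in po | rf:
    preds.setdefault(dst, set()).add(src)   # TopologicalSorter takes predecessors

try:
    print("acyclic; a consistent order:", list(TopologicalSorter(preds).static_order()))
except CycleError as e:
    print("po ∪ rf cycle found, execution forbidden:", e.args[1])
```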
2104.04552
Wenqian Ronny Huang
W. Ronny Huang, Tara N. Sainath, Cal Peyser, Shankar Kumar, David Rybach, Trevor Strohman
Lookup-Table Recurrent Language Models for Long Tail Speech Recognition
Presented as conference paper at Interspeech 2021
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Lookup-Table Language Models (LookupLM), a method for scaling up the size of RNN language models with only a constant increase in floating point operations, by increasing the expressivity of the embedding table. In particular, we instantiate an (additional) embedding table which embeds the previous n-gram token sequence, rather than a single token. This allows the embedding table to be scaled up arbitrarily -- with a commensurate increase in performance -- without changing the token vocabulary. Since embeddings are sparsely retrieved from the table via a lookup, increasing the size of the table adds neither extra operations to each forward pass nor extra parameters that need to be stored on limited GPU/TPU memory. We explore scaling n-gram embedding tables up to nearly a billion parameters. When trained on a 3-billion sentence corpus, we find that LookupLM improves long tail log perplexity by 2.44 and long tail WER by 23.4% on a downstream speech recognition task over a standard RNN language model baseline, an improvement comparable to scaling up the baseline by 6.2x the number of floating point operations.
[ { "created": "Fri, 9 Apr 2021 18:31:30 GMT", "version": "v1" }, { "created": "Mon, 7 Jun 2021 01:01:17 GMT", "version": "v2" } ]
2021-06-08
[ [ "Huang", "W. Ronny", "" ], [ "Sainath", "Tara N.", "" ], [ "Peyser", "Cal", "" ], [ "Kumar", "Shankar", "" ], [ "Rybach", "David", "" ], [ "Strohman", "Trevor", "" ] ]
We introduce Lookup-Table Language Models (LookupLM), a method for scaling up the size of RNN language models with only a constant increase in floating point operations, by increasing the expressivity of the embedding table. In particular, we instantiate an (additional) embedding table which embeds the previous n-gram token sequence, rather than a single token. This allows the embedding table to be scaled up arbitrarily -- with a commensurate increase in performance -- without changing the token vocabulary. Since embeddings are sparsely retrieved from the table via a lookup, increasing the size of the table adds neither extra operations to each forward pass nor extra parameters that need to be stored on limited GPU/TPU memory. We explore scaling n-gram embedding tables up to nearly a billion parameters. When trained on a 3-billion sentence corpus, we find that LookupLM improves long tail log perplexity by 2.44 and long tail WER by 23.4% on a downstream speech recognition task over a standard RNN language model baseline, an improvement comparable to scaling up the baseline by 6.2x the number of floating point operations.
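A minimal PyTorch sketch of the mechanism as the abstract describes it: an extra embedding table keyed by the previous n-gram, added to the ordinary token embedding via a sparse lookup. Indexing the table by a rolling hash is my assumption here; the paper may index n-grams differently, and all sizes are illustrative.

```python
# Token embedding plus a (large) previous-n-gram embedding fetched by lookup,
# so growing the table adds parameters but no per-step FLOPs.
import torch
import torch.nn as nn

class NGramLookupEmbedding(nn.Module):
    def __init__(self, vocab_size: int, dim: int, n: int = 3, table_size: int = 2**18):
        super().__init__()
        self.n, self.table_size = n, table_size
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.ngram_emb = nn.Embedding(table_size, dim)   # can be scaled up arbitrarily

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        """tokens: (B, T) ids; returns (B, T, dim) token + previous-n-gram embeddings."""
        h = torch.zeros_like(tokens)
        for k in range(1, self.n + 1):                   # rolling hash of previous n tokens
            prev = torch.roll(tokens, shifts=k, dims=1)
            prev[:, :k] = 0                              # pad positions before sequence start
            h = (h * 1000003 + prev) % self.table_size
        return self.token_emb(tokens) + self.ngram_emb(h)

emb = NGramLookupEmbedding(vocab_size=32000, dim=64)
print(emb(torch.randint(0, 32000, (2, 8))).shape)        # torch.Size([2, 8, 64])
```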
2301.10363
Jing Liu PhD
Jing Liu, Hemant Singh, Saber Elsayed, Robert Hunjet and Hussein Abbass
Planning-Assisted Context-Sensitive Autonomous Shepherding of Dispersed Robotic Swarms in Obstacle-Cluttered Environments
17 pages, 6 figures
null
null
null
cs.RO cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robotic shepherding is a bio-inspired approach to autonomously guiding a swarm of agents towards a desired location. The research area has earned increasing research interest recently due to the efficacy of controlling a large number of agents in a swarm (sheep) using a smaller number of actuators (sheepdogs). However, shepherding a highly dispersed swarm in an obstacle-cluttered environment remains challenging for existing methods. To improve the efficacy of shepherding in complex environments with obstacles and dispersed sheep, this paper proposes a planning-assisted context-sensitive autonomous shepherding framework with collision avoidance abilities. The proposed approach models the swarm shepherding problem as a single Travelling Salesperson Problem (TSP), with two sheepdogs' modes: no-interaction and interaction. An adaptive switching approach is integrated into the framework to guide real-time path planning for avoiding collisions with static and dynamic obstacles; the latter representing moving sheep swarms. We then propose an overarching hierarchical mission planning system, which is made of three sub-systems: a clustering approach to group and distinguish sheep sub-swarms, an Ant Colony Optimisation algorithm as a TSP solver for determining the optimal herding sequence of the sub-swarms, and an online path planner for calculating optimal paths for both sheepdogs and sheep. The experiments on various environments, both with and without obstacles, objectively demonstrate the effectiveness of the proposed shepherding framework and planning approaches.
[ { "created": "Wed, 25 Jan 2023 00:18:45 GMT", "version": "v1" }, { "created": "Mon, 27 Feb 2023 23:27:28 GMT", "version": "v2" } ]
2023-03-01
[ [ "Liu", "Jing", "" ], [ "Singh", "Hemant", "" ], [ "Elsayed", "Saber", "" ], [ "Hunjet", "Robert", "" ], [ "Abbass", "Hussein", "" ] ]
Robotic shepherding is a bio-inspired approach to autonomously guiding a swarm of agents towards a desired location. The research area has earned increasing research interest recently due to the efficacy of controlling a large number of agents in a swarm (sheep) using a smaller number of actuators (sheepdogs). However, shepherding a highly dispersed swarm in an obstacle-cluttered environment remains challenging for existing methods. To improve the efficacy of shepherding in complex environments with obstacles and dispersed sheep, this paper proposes a planning-assisted context-sensitive autonomous shepherding framework with collision avoidance abilities. The proposed approach models the swarm shepherding problem as a single Travelling Salesperson Problem (TSP), with two sheepdogs' modes: no-interaction and interaction. An adaptive switching approach is integrated into the framework to guide real-time path planning for avoiding collisions with static and dynamic obstacles; the latter representing moving sheep swarms. We then propose an overarching hierarchical mission planning system, which is made of three sub-systems: a clustering approach to group and distinguish sheep sub-swarms, an Ant Colony Optimisation algorithm as a TSP solver for determining the optimal herding sequence of the sub-swarms, and an online path planner for calculating optimal paths for both sheepdogs and sheep. The experiments on various environments, both with and without obstacles, objectively demonstrate the effectiveness of the proposed shepherding framework and planning approaches.
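As a toy version of the herding-sequence step, the sketch below orders sub-swarm centroids with a greedy nearest-neighbour tour. This is a simple stand-in for the paper's Ant Colony Optimisation solver, and the positions are synthetic.

```python
# Order sub-swarm centroids as a TSP-style tour with a greedy
# nearest-neighbour heuristic, starting from the goal location.
import numpy as np

rng = np.random.default_rng(3)
centroids = rng.uniform(0, 100, size=(6, 2))      # sub-swarm centres (from clustering)
goal = np.array([0.0, 0.0])                       # the target location

def herding_order(centroids: np.ndarray, start: np.ndarray) -> list[int]:
    """Visit every sub-swarm once, always moving to the nearest unvisited one."""
    unvisited, tour, pos = set(range(len(centroids))), [], start
    while unvisited:
        nxt = min(unvisited, key=lambda i: np.linalg.norm(centroids[i] - pos))
        tour.append(nxt)
        pos = centroids[nxt]
        unvisited.remove(nxt)
    return tour

print("herd sub-swarms in order:", herding_order(centroids, goal))
```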
2305.12626
Walter Goodwin
Walter Goodwin, Ioannis Havoutis, Ingmar Posner
You Only Look at One: Category-Level Object Representations for Pose Estimation From a Single Example
16 pages, 6 figures, CoRL 2022
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
In order to meaningfully interact with the world, robot manipulators must be able to interpret objects they encounter. A critical aspect of this interpretation is pose estimation: inferring quantities that describe the position and orientation of an object in 3D space. Most existing approaches to pose estimation make limiting assumptions, often working only for specific, known object instances, or at best generalising to an object category using large pose-labelled datasets. In this work, we present a method for achieving category-level pose estimation by inspection of just a single object from a desired category. We show that we can subsequently perform accurate pose estimation for unseen objects from an inspected category, and considerably outperform prior work by exploiting multi-view correspondences. We demonstrate that our method runs in real-time, enabling a robot manipulator equipped with an RGBD sensor to perform online 6D pose estimation for novel objects. Finally, we showcase our method in a continual learning setting, with a robot able to determine whether objects belong to known categories, and if not, use active perception to produce a one-shot category representation for subsequent pose estimation.
[ { "created": "Mon, 22 May 2023 01:32:24 GMT", "version": "v1" } ]
2023-05-23
[ [ "Goodwin", "Walter", "" ], [ "Havoutis", "Ioannis", "" ], [ "Posner", "Ingmar", "" ] ]
In order to meaningfully interact with the world, robot manipulators must be able to interpret objects they encounter. A critical aspect of this interpretation is pose estimation: inferring quantities that describe the position and orientation of an object in 3D space. Most existing approaches to pose estimation make limiting assumptions, often working only for specific, known object instances, or at best generalising to an object category using large pose-labelled datasets. In this work, we present a method for achieving category-level pose estimation by inspection of just a single object from a desired category. We show that we can subsequently perform accurate pose estimation for unseen objects from an inspected category, and considerably outperform prior work by exploiting multi-view correspondences. We demonstrate that our method runs in real-time, enabling a robot manipulator equipped with an RGBD sensor to perform online 6D pose estimation for novel objects. Finally, we showcase our method in a continual learning setting, with a robot able to determine whether objects belong to known categories, and if not, use active perception to produce a one-shot category representation for subsequent pose estimation.
2004.01168
Tara Safavi
Tara Safavi, Danai Koutra, Edgar Meij
Evaluating the Calibration of Knowledge Graph Embeddings for Trustworthy Link Prediction
EMNLP 2020
null
null
null
cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Little is known about the trustworthiness of predictions made by knowledge graph embedding (KGE) models. In this paper we take initial steps toward this direction by investigating the calibration of KGE models, or the extent to which they output confidence scores that reflect the expected correctness of predicted knowledge graph triples. We first conduct an evaluation under the standard closed-world assumption (CWA), in which predicted triples not already in the knowledge graph are considered false, and show that existing calibration techniques are effective for KGE under this common but narrow assumption. Next, we introduce the more realistic but challenging open-world assumption (OWA), in which unobserved predictions are not considered true or false until ground-truth labels are obtained. Here, we show that existing calibration techniques are much less effective under the OWA than the CWA, and provide explanations for this discrepancy. Finally, to motivate the utility of calibration for KGE from a practitioner's perspective, we conduct a unique case study of human-AI collaboration, showing that calibrated predictions can improve human performance in a knowledge graph completion task.
[ { "created": "Thu, 2 Apr 2020 17:46:47 GMT", "version": "v1" }, { "created": "Fri, 18 Sep 2020 16:02:54 GMT", "version": "v2" }, { "created": "Tue, 6 Oct 2020 09:31:15 GMT", "version": "v3" } ]
2020-10-07
[ [ "Safavi", "Tara", "" ], [ "Koutra", "Danai", "" ], [ "Meij", "Edgar", "" ] ]
Little is known about the trustworthiness of predictions made by knowledge graph embedding (KGE) models. In this paper we take initial steps toward this direction by investigating the calibration of KGE models, or the extent to which they output confidence scores that reflect the expected correctness of predicted knowledge graph triples. We first conduct an evaluation under the standard closed-world assumption (CWA), in which predicted triples not already in the knowledge graph are considered false, and show that existing calibration techniques are effective for KGE under this common but narrow assumption. Next, we introduce the more realistic but challenging open-world assumption (OWA), in which unobserved predictions are not considered true or false until ground-truth labels are obtained. Here, we show that existing calibration techniques are much less effective under the OWA than the CWA, and provide explanations for this discrepancy. Finally, to motivate the utility of calibration for KGE from a practitioner's perspective, we conduct a unique case study of human-AI collaboration, showing that calibrated predictions can improve human performance in a knowledge graph completion task.
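The calibration techniques are not named in the abstract; as one standard example of the kind of method it evaluates, the sketch below applies Platt scaling to raw triple scores and measures expected calibration error. The scores and labels are synthetic stand-ins for KGE outputs and ground truth.

```python
# Platt scaling of raw link-prediction scores, plus a reliability-style ECE:
# per-bin gap between mean predicted probability and empirical frequency.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
scores = rng.normal(size=1000)                     # raw KGE scores for candidate triples
labels = (rng.random(1000) < 1 / (1 + np.exp(-2 * scores))).astype(int)

platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
probs = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

def ece(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
    """Weighted average of |empirical frequency - mean predicted prob| per bin."""
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            total += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return total

print(f"ECE after Platt scaling: {ece(probs, labels):.3f}")
```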
1206.6464
James Martens
James Martens (University of Toronto), Ilya Sutskever (University of Toronto), Kevin Swersky (University of Toronto)
Estimating the Hessian by Back-propagating Curvature
Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012)
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we develop Curvature Propagation (CP), a general technique for efficiently computing unbiased approximations of the Hessian of any function that is computed using a computational graph. At the cost of roughly two gradient evaluations, CP can give a rank-1 approximation of the whole Hessian, and can be repeatedly applied to give increasingly precise unbiased estimates of any or all of the entries of the Hessian. Of particular interest is the diagonal of the Hessian, for which no general approach is known to exist that is both efficient and accurate. We show in experiments that CP turns out to work well in practice, giving very accurate estimates of the Hessian of neural networks, for example, with a relatively small amount of work. We also apply CP to Score Matching, where the diagonal of the Hessian plays an integral role in the Score Matching objective, and where it is usually computed exactly using inefficient algorithms which do not scale to larger and more complex models.
[ { "created": "Wed, 27 Jun 2012 19:59:59 GMT", "version": "v1" }, { "created": "Tue, 4 Sep 2012 18:32:03 GMT", "version": "v2" } ]
2012-09-05
[ [ "Martens", "James", "", "University of Toronto" ], [ "Sutskever", "Ilya", "", "University of\n Toronto" ], [ "Swersky", "Kevin", "", "University of Toronto" ] ]
In this work we develop Curvature Propagation (CP), a general technique for efficiently computing unbiased approximations of the Hessian of any function that is computed using a computational graph. At the cost of roughly two gradient evaluations, CP can give a rank-1 approximation of the whole Hessian, and can be repeatedly applied to give increasingly precise unbiased estimates of any or all of the entries of the Hessian. Of particular interest is the diagonal of the Hessian, for which no general approach is known to exist that is both efficient and accurate. We show in experiments that CP turns out to work well in practice, giving very accurate estimates of the Hessian of neural networks, for example, with a relatively small amount of work. We also apply CP to Score Matching, where the diagonal of the Hessian plays an integral role in the Score Matching objective, and where it is usually computed exactly using inefficient algorithms which do not scale to larger and more complex models.
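CP itself back-propagates curvature through the computational graph; as a related but different estimator of the quantity the abstract highlights, the sketch below computes a Hutchinson-style unbiased estimate of the Hessian diagonal, diag(H) = E[v ⊙ Hv] for Rademacher v, using double back-propagation for the Hessian-vector products. This is not the paper's CP algorithm.

```python
# Unbiased Hessian-diagonal estimate: for Rademacher v, E[v * Hv] = diag(H),
# since the off-diagonal terms v_i v_j (i != j) average to zero.
import torch

torch.manual_seed(0)
w = torch.randn(5, requires_grad=True)
loss = (w ** 4).sum() + w[0] * w[1]                # toy objective

(grad,) = torch.autograd.grad(loss, w, create_graph=True)

est, n_samples = torch.zeros(5), 200
for _ in range(n_samples):
    v = torch.randint(0, 2, (5,)).float() * 2 - 1  # Rademacher +-1 vector
    (hv,) = torch.autograd.grad(grad @ v, w, retain_graph=True)
    est += v * hv
est /= n_samples

print("estimated diag(H):", est)
# Exact: d^2/dw_i^2 [w_i^4] = 12 w_i^2; the w0*w1 cross term has zero diagonal.
print("exact     diag(H):", 12 * w.detach() ** 2)
```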
2202.11025
Zhi Yan Dr.
Tao Yang, You Li, Cheng Zhao, Dexin Yao, Guanyin Chen, Li Sun, Tomas Krajnik, Zhi Yan
3D ToF LiDAR in Mobile Robotics: A Review
16 pages, 10 figures, 5 tables, 4 equations
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past ten years, the use of 3D Time-of-Flight (ToF) LiDARs in mobile robotics has grown rapidly. Based on our accumulation of relevant research, this article systematically reviews and analyzes the use of 3D ToF LiDARs in research and industrial applications. The former includes object detection, robot localization, long-term autonomy, LiDAR data processing under adverse weather conditions, and sensor fusion. The latter encompasses service robots, assisted and autonomous driving, and recent applications performed in response to public health crises. We hope that our efforts can effectively provide readers with relevant references and promote the deployment of existing mature technologies in real-world systems.
[ { "created": "Tue, 22 Feb 2022 16:56:09 GMT", "version": "v1" } ]
2022-02-23
[ [ "Yang", "Tao", "" ], [ "Li", "You", "" ], [ "Zhao", "Cheng", "" ], [ "Yao", "Dexin", "" ], [ "Chen", "Guanyin", "" ], [ "Sun", "Li", "" ], [ "Krajnik", "Tomas", "" ], [ "Yan", "Zhi", "" ] ]
In the past ten years, the use of 3D Time-of-Flight (ToF) LiDARs in mobile robotics has grown rapidly. Based on our accumulation of relevant research, this article systematically reviews and analyzes the use of 3D ToF LiDARs in research and industrial applications. The former includes object detection, robot localization, long-term autonomy, LiDAR data processing under adverse weather conditions, and sensor fusion. The latter encompasses service robots, assisted and autonomous driving, and recent applications performed in response to public health crises. We hope that our efforts can effectively provide readers with relevant references and promote the deployment of existing mature technologies in real-world systems.
2308.16770
David Graus
Jarno Vrolijk and David Graus
Enhancing PLM Performance on Labour Market Tasks via Instruction-based Finetuning and Prompt-tuning with Rules
accepted for publication at RecSys in HR 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The increased digitization of the labour market has given researchers, educators, and companies the means to analyze and better understand the labour market. However, labour market resources, although available in high volumes, tend to be unstructured, and as such, research into methodologies for the identification, linking, and extraction of entities becomes more and more important. Against the backdrop of this quest for better labour market representations, resource constraints and the unavailability of large-scale annotated data cause a reliance on human domain experts. We demonstrate the effectiveness of prompt-based tuning of pre-trained language models (PLMs) in labour-market-specific applications. Our results indicate that cost-efficient methods such as PTR and instruction tuning without exemplars can significantly increase the performance of PLMs on downstream labour market applications without introducing additional model layers, manual annotations, or data augmentation.
[ { "created": "Thu, 31 Aug 2023 14:47:00 GMT", "version": "v1" } ]
2023-09-01
[ [ "Vrolijk", "Jarno", "" ], [ "Graus", "David", "" ] ]
The increased digitization of the labour market has given researchers, educators, and companies the means to analyze and better understand the labour market. However, labour market resources, although available in high volumes, tend to be unstructured, and as such, research into methodologies for the identification, linking, and extraction of entities becomes more and more important. Against the backdrop of this quest for better labour market representations, resource constraints and the unavailability of large-scale annotated data cause a reliance on human domain experts. We demonstrate the effectiveness of prompt-based tuning of pre-trained language models (PLMs) in labour-market-specific applications. Our results indicate that cost-efficient methods such as PTR and instruction tuning without exemplars can significantly increase the performance of PLMs on downstream labour market applications without introducing additional model layers, manual annotations, or data augmentation.
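PTR (prompt tuning with rules) composes per-entity and per-relation sub-prompts under logical rules; the sketch below only shows that flavour with a hypothetical labour-market template. The entity names, slots, and wording are assumptions for illustration, not the paper's prompts.

```python
# Hypothetical PTR-style composed prompt: one [MASK] slot per sub-prompt
# (entity types and their relation), joined into a single cloze template.
def ptr_prompt(sentence: str, e1: str, e2: str) -> str:
    sub1 = f"the {e1} is a [MASK]"             # entity-type sub-prompt (e.g. "job title")
    sub2 = f"the {e2} is a [MASK]"             # entity-type sub-prompt (e.g. "skill")
    rel = f"{e1} [MASK] {e2}"                  # relation sub-prompt (e.g. "requires")
    return f"{sentence} In this posting, {sub1} and {sub2}, and {rel}."

print(ptr_prompt("We seek a data engineer fluent in Spark.", "data engineer", "Spark"))
```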