Dataset schema (column: dtype, value range):

id: string (length 9 to 10)
submitter: string (length 1 to 64)
authors: string (length 4 to 20.7k)
title: string (length 4 to 246)
comments: string (length 1 to 523)
journal-ref: string (length 4 to 404)
doi: string (length 11 to 153)
report-no: string (length 2 to 254)
categories: string (length 5 to 98)
license: string (9 classes)
orig_abstract: string (length 14 to 3.35k)
versions: list (length 1 to 60)
update_date: string (length 10)
authors_parsed: list (length 1 to 1.35k)
abstract: string (length 11 to 3.34k)
cs/0610051
Philippe Trebuchet
Mohab Safey El Din (INRIA Rocquencourt), Philippe Trebuchet (INRIA Rocquencourt)
Strong bi-homogeneous B\'{e}zout theorem and its use in effective real algebraic geometry
null
null
null
null
cs.SC
null
Let f1, ..., fs be a polynomial family in Q[X1, ..., Xn] (with s < n) of degree bounded by D. Suppose that f1, ..., fs generate a radical ideal and define a smooth algebraic variety V. Consider a projection P. We prove that the degree of the critical locus of P restricted to V is bounded by D^s(D-1)^(n-s) times the binomial coefficient of n and n-s. This result is obtained in two steps. First, the critical points of P restricted to V are characterized as projections of the solutions of Lagrange's system, for which a bi-homogeneous structure is exhibited. Second, we prove a bi-homogeneous B\'ezout theorem, which bounds the sum of the degrees of the equidimensional components of the radical of an ideal generated by a bi-homogeneous polynomial family. This result is improved when f1, ..., fs is a regular sequence. Moreover, we use Lagrange's system to design an algorithm computing at least one point in each connected component of a smooth real algebraic set. This algorithm generalizes, to the non-equidimensional case, the one of Safey El Din and Schost. The evaluation of the output size of this algorithm gives new upper bounds on the first Betti number of a smooth real algebraic set. Finally, we estimate its arithmetic complexity and prove that in the worst case it is polynomial in n, s, D^s(D-1)^(n-s), the binomial coefficient of n and n-s, and the complexity of evaluating f1, ..., fs.
[ { "created": "Tue, 10 Oct 2006 15:02:07 GMT", "version": "v1" }, { "created": "Fri, 20 Oct 2006 15:16:19 GMT", "version": "v2" } ]
2007-05-23
[ [ "Din", "Mohab Safey El", "", "INRIA Rocquencourt" ], [ "Trebuchet", "Philippe", "", "INRIA\n Rocquencourt" ] ]
Let f1, ..., fs be a polynomial family in Q[X1, ..., Xn] (with s < n) of degree bounded by D. Suppose that f1, ..., fs generate a radical ideal and define a smooth algebraic variety V. Consider a projection P. We prove that the degree of the critical locus of P restricted to V is bounded by D^s(D-1)^(n-s) times the binomial coefficient of n and n-s. This result is obtained in two steps. First, the critical points of P restricted to V are characterized as projections of the solutions of Lagrange's system, for which a bi-homogeneous structure is exhibited. Second, we prove a bi-homogeneous B\'ezout theorem, which bounds the sum of the degrees of the equidimensional components of the radical of an ideal generated by a bi-homogeneous polynomial family. This result is improved when f1, ..., fs is a regular sequence. Moreover, we use Lagrange's system to design an algorithm computing at least one point in each connected component of a smooth real algebraic set. This algorithm generalizes, to the non-equidimensional case, the one of Safey El Din and Schost. The evaluation of the output size of this algorithm gives new upper bounds on the first Betti number of a smooth real algebraic set. Finally, we estimate its arithmetic complexity and prove that in the worst case it is polynomial in n, s, D^s(D-1)^(n-s), the binomial coefficient of n and n-s, and the complexity of evaluating f1, ..., fs.
1911.09789
Zhiwei Wang
Zhiwei Wang, Hui Liu, Jiliang Tang, Songfan Yang, Gale Yan Huang, Zitao Liu
Learning Multi-level Dependencies for Robust Word Recognition
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust language processing systems are becoming increasingly important given the growing awareness that brittle machine learning models can easily be broken in the presence of noise. In this paper, we introduce a robust word recognition framework that captures multi-level sequential dependencies in noisy sentences. The proposed framework employs a sequence-to-sequence model over the characters of each word, whose output is fed into a word-level bi-directional recurrent neural network. We conduct extensive experiments to verify the effectiveness of the framework. The results show that the proposed framework outperforms state-of-the-art methods by a large margin, and they also suggest that character-level dependencies can play an important role in word recognition.
[ { "created": "Fri, 22 Nov 2019 00:04:07 GMT", "version": "v1" } ]
2019-11-25
[ [ "Wang", "Zhiwei", "" ], [ "Liu", "Hui", "" ], [ "Tang", "Jiliang", "" ], [ "Yang", "Songfan", "" ], [ "Huang", "Gale Yan", "" ], [ "Liu", "Zitao", "" ] ]
Robust language processing systems are becoming increasingly important given the growing awareness that brittle machine learning models can easily be broken in the presence of noise. In this paper, we introduce a robust word recognition framework that captures multi-level sequential dependencies in noisy sentences. The proposed framework employs a sequence-to-sequence model over the characters of each word, whose output is fed into a word-level bi-directional recurrent neural network. We conduct extensive experiments to verify the effectiveness of the framework. The results show that the proposed framework outperforms state-of-the-art methods by a large margin, and they also suggest that character-level dependencies can play an important role in word recognition.
2203.16810
L.A. Prashanth
Dipayan Sen, L.A. Prashanth and Aditya Gopalan
Adaptive Estimation of Random Vectors with Bandit Feedback: A mean-squared error viewpoint
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of sequentially learning to estimate, in the mean squared error (MSE) sense, a Gaussian $K$-vector of unknown covariance by observing only $m < K$ of its entries in each round. We first establish a concentration bound for MSE estimation. We then frame the estimation problem with bandit feedback, and propose a variant of the successive elimination algorithm. We also derive a minimax lower bound to understand the fundamental limit on the sample complexity of this problem.
[ { "created": "Thu, 31 Mar 2022 05:33:32 GMT", "version": "v1" }, { "created": "Fri, 1 Apr 2022 06:50:59 GMT", "version": "v2" }, { "created": "Thu, 11 Jan 2024 05:44:18 GMT", "version": "v3" } ]
2024-01-12
[ [ "Sen", "Dipayan", "" ], [ "Prashanth", "L. A.", "" ], [ "Gopalan", "Aditya", "" ] ]
We consider the problem of sequentially learning to estimate, in the mean squared error (MSE) sense, a Gaussian $K$-vector of unknown covariance by observing only $m < K$ of its entries in each round. We first establish a concentration bound for MSE estimation. We then frame the estimation problem with bandit feedback, and propose a variant of the successive elimination algorithm. We also derive a minimax lower bound to understand the fundamental limit on the sample complexity of this problem.
2207.10172
Guodong Wang
Guodong Wang, Yunhong Wang, Jie Qin, Dongming Zhang, Xiuguo Bao, Di Huang
Video Anomaly Detection by Solving Decoupled Spatio-Temporal Jigsaw Puzzles
Accepted by ECCV'2022; Code is available at https://github.com/gdwang08/Jigsaw-VAD
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Video Anomaly Detection (VAD) is an important topic in computer vision. Motivated by the recent advances in self-supervised learning, this paper addresses VAD by solving an intuitive yet challenging pretext task, i.e., spatio-temporal jigsaw puzzles, which is cast as a multi-label fine-grained classification problem. Our method exhibits several advantages over existing works: 1) the spatio-temporal jigsaw puzzles are decoupled in terms of spatial and temporal dimensions, responsible for capturing highly discriminative appearance and motion features, respectively; 2) full permutations are used to provide abundant jigsaw puzzles covering various difficulty levels, allowing the network to distinguish subtle spatio-temporal differences between normal and abnormal events; and 3) the pretext task is tackled in an end-to-end manner without relying on any pre-trained models. Our method outperforms state-of-the-art counterparts on three public benchmarks. Especially on ShanghaiTech Campus, the result is superior to reconstruction and prediction-based methods by a large margin.
[ { "created": "Wed, 20 Jul 2022 19:49:32 GMT", "version": "v1" }, { "created": "Fri, 22 Jul 2022 03:28:41 GMT", "version": "v2" } ]
2022-07-25
[ [ "Wang", "Guodong", "" ], [ "Wang", "Yunhong", "" ], [ "Qin", "Jie", "" ], [ "Zhang", "Dongming", "" ], [ "Bao", "Xiuguo", "" ], [ "Huang", "Di", "" ] ]
Video Anomaly Detection (VAD) is an important topic in computer vision. Motivated by the recent advances in self-supervised learning, this paper addresses VAD by solving an intuitive yet challenging pretext task, i.e., spatio-temporal jigsaw puzzles, which is cast as a multi-label fine-grained classification problem. Our method exhibits several advantages over existing works: 1) the spatio-temporal jigsaw puzzles are decoupled in terms of spatial and temporal dimensions, responsible for capturing highly discriminative appearance and motion features, respectively; 2) full permutations are used to provide abundant jigsaw puzzles covering various difficulty levels, allowing the network to distinguish subtle spatio-temporal differences between normal and abnormal events; and 3) the pretext task is tackled in an end-to-end manner without relying on any pre-trained models. Our method outperforms state-of-the-art counterparts on three public benchmarks. Especially on ShanghaiTech Campus, the result is superior to reconstruction and prediction-based methods by a large margin.
1803.04357
Cem Subakan
Cem Subakan, Oluwasanmi Koyejo, Paris Smaragdis
Learning the Base Distribution in Implicit Generative Models
null
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Popular generative model learning methods such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) force the latent representation to follow a simple distribution such as an isotropic Gaussian. In this paper, we argue that learning a complicated distribution over the latent space of an auto-encoder enables more accurate modeling of complicated data distributions. Based on this observation, we propose a two-stage optimization procedure which maximizes an approximate implicit density model. We experimentally verify that our method outperforms GANs and VAEs on two image datasets (MNIST, CELEB-A). We also show that our approach is amenable to learning generative models for sequential data, by learning to generate speech and music.
[ { "created": "Mon, 12 Mar 2018 16:24:33 GMT", "version": "v1" }, { "created": "Tue, 13 Mar 2018 22:40:35 GMT", "version": "v2" } ]
2018-03-15
[ [ "Subakan", "Cem", "" ], [ "Koyejo", "Oluwasanmi", "" ], [ "Smaragdis", "Paris", "" ] ]
Popular generative model learning methods such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) force the latent representation to follow a simple distribution such as an isotropic Gaussian. In this paper, we argue that learning a complicated distribution over the latent space of an auto-encoder enables more accurate modeling of complicated data distributions. Based on this observation, we propose a two-stage optimization procedure which maximizes an approximate implicit density model. We experimentally verify that our method outperforms GANs and VAEs on two image datasets (MNIST, CELEB-A). We also show that our approach is amenable to learning generative models for sequential data, by learning to generate speech and music.
2407.08106
Neng Wang
Neng Wang, Xieyuanli Chen, Chenghao Shi, Zhiqiang Zheng, Hongshan Yu, Huimin Lu
SGLC: Semantic Graph-Guided Coarse-Fine-Refine Full Loop Closing for LiDAR SLAM
8 pages, 4 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Loop closing is a crucial component in SLAM that helps eliminate accumulated errors through two main steps: loop detection and loop pose correction. The first step determines whether loop closing should be performed, while the second estimates the 6-DoF pose to correct odometry drift. Current methods mostly focus on developing robust descriptors for loop closure detection, often neglecting loop pose estimation. A few methods that do include pose estimation either suffer from low accuracy or incur high computational costs. To tackle this problem, we introduce SGLC, a real-time semantic graph-guided full loop closing method, with robust loop closure detection and 6-DoF pose estimation capabilities. SGLC takes into account the distinct characteristics of foreground and background points. For foreground instances, it builds a semantic graph that not only abstracts point cloud representation for fast descriptor generation and matching but also guides the subsequent loop verification and initial pose estimation. Background points, meanwhile, are exploited to provide more geometric features for scan-wise descriptor construction and stable planar information for further pose refinement. Loop pose estimation employs a coarse-fine-refine registration scheme that considers the alignment of both instance points and background points, offering high efficiency and accuracy. We evaluate the loop closing performance of SGLC through extensive experiments on the KITTI and KITTI-360 datasets, demonstrating its superiority over existing state-of-the-art methods. Additionally, we integrate SGLC into a SLAM system, eliminating accumulated errors and improving overall SLAM performance. The implementation of SGLC will be released at https://github.com/nubot-nudt/SGLC.
[ { "created": "Thu, 11 Jul 2024 00:45:04 GMT", "version": "v1" } ]
2024-07-12
[ [ "Wang", "Neng", "" ], [ "Chen", "Xieyuanli", "" ], [ "Shi", "Chenghao", "" ], [ "Zheng", "Zhiqiang", "" ], [ "Yu", "Hongshan", "" ], [ "Lu", "Huimin", "" ] ]
Loop closing is a crucial component in SLAM that helps eliminate accumulated errors through two main steps: loop detection and loop pose correction. The first step determines whether loop closing should be performed, while the second estimates the 6-DoF pose to correct odometry drift. Current methods mostly focus on developing robust descriptors for loop closure detection, often neglecting loop pose estimation. A few methods that do include pose estimation either suffer from low accuracy or incur high computational costs. To tackle this problem, we introduce SGLC, a real-time semantic graph-guided full loop closing method, with robust loop closure detection and 6-DoF pose estimation capabilities. SGLC takes into account the distinct characteristics of foreground and background points. For foreground instances, it builds a semantic graph that not only abstracts point cloud representation for fast descriptor generation and matching but also guides the subsequent loop verification and initial pose estimation. Background points, meanwhile, are exploited to provide more geometric features for scan-wise descriptor construction and stable planar information for further pose refinement. Loop pose estimation employs a coarse-fine-refine registration scheme that considers the alignment of both instance points and background points, offering high efficiency and accuracy. We evaluate the loop closing performance of SGLC through extensive experiments on the KITTI and KITTI-360 datasets, demonstrating its superiority over existing state-of-the-art methods. Additionally, we integrate SGLC into a SLAM system, eliminating accumulated errors and improving overall SLAM performance. The implementation of SGLC will be released at https://github.com/nubot-nudt/SGLC.
1808.01337
Vignesh Ganapathi-Subramanian
Vignesh Ganapathi-Subramanian, Olga Diamanti, Soeren Pirk, Chengcheng Tang, Matthias Niessner, Leonidas J. Guibas
Parsing Geometry Using Structure-Aware Shape Templates
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-life man-made objects often exhibit strong and easily-identifiable structure, as a direct result of their design or their intended functionality. Structure typically appears in the form of individual parts and their arrangement. Knowing about object structure can be an important cue for object recognition and scene understanding - a key goal for various AR and robotics applications. However, commodity RGB-D sensors used in these scenarios only produce raw, unorganized point clouds, without structural information about the captured scene. Moreover, the generated data is commonly partial and susceptible to artifacts and noise, which makes inferring the structure of scanned objects challenging. In this paper, we organize large shape collections into parameterized shape templates to capture the underlying structure of the objects. The templates allow us to transfer the structural information onto new objects and incomplete scans. We employ a deep neural network that matches the partial scan with one of the shape templates, then match and fit it to complete and detailed models from the collection. This allows us to faithfully label its parts and to guide the reconstruction of the scanned object. We showcase the effectiveness of our method by comparing it to other state-of-the-art approaches.
[ { "created": "Fri, 3 Aug 2018 20:14:58 GMT", "version": "v1" }, { "created": "Wed, 5 Sep 2018 02:40:46 GMT", "version": "v2" } ]
2018-09-06
[ [ "Ganapathi-Subramanian", "Vignesh", "" ], [ "Diamanti", "Olga", "" ], [ "Pirk", "Soeren", "" ], [ "Tang", "Chengcheng", "" ], [ "Niessner", "Matthias", "" ], [ "Guibas", "Leonidas J.", "" ] ]
Real-life man-made objects often exhibit strong and easily-identifiable structure, as a direct result of their design or their intended functionality. Structure typically appears in the form of individual parts and their arrangement. Knowing about object structure can be an important cue for object recognition and scene understanding - a key goal for various AR and robotics applications. However, commodity RGB-D sensors used in these scenarios only produce raw, unorganized point clouds, without structural information about the captured scene. Moreover, the generated data is commonly partial and susceptible to artifacts and noise, which makes inferring the structure of scanned objects challenging. In this paper, we organize large shape collections into parameterized shape templates to capture the underlying structure of the objects. The templates allow us to transfer the structural information onto new objects and incomplete scans. We employ a deep neural network that matches the partial scan with one of the shape templates, then match and fit it to complete and detailed models from the collection. This allows us to faithfully label its parts and to guide the reconstruction of the scanned object. We showcase the effectiveness of our method by comparing it to other state-of-the-art approaches.
2212.11342
Olawale Salaudeen
Olawale Salaudeen, Oluwasanmi Koyejo
Target Conditioned Representation Independence (TCRI); From Domain-Invariant to Domain-General Representations
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a Target Conditioned Representation Independence (TCRI) objective for domain generalization. TCRI addresses the limitations of existing domain generalization methods due to incomplete constraints. Specifically, TCRI implements regularizers motivated by conditional independence constraints that are sufficient to strictly learn complete sets of invariant mechanisms, which we show are necessary and sufficient for domain generalization. Empirically, we show that TCRI is effective on both synthetic and real-world data. TCRI is competitive with baselines in average accuracy while outperforming them in worst-domain accuracy, indicating desired cross-domain stability.
[ { "created": "Wed, 21 Dec 2022 20:24:45 GMT", "version": "v1" } ]
2022-12-26
[ [ "Salaudeen", "Olawale", "" ], [ "Koyejo", "Oluwasanmi", "" ] ]
We propose a Target Conditioned Representation Independence (TCRI) objective for domain generalization. TCRI addresses the limitations of existing domain generalization methods due to incomplete constraints. Specifically, TCRI implements regularizers motivated by conditional independence constraints that are sufficient to strictly learn complete sets of invariant mechanisms, which we show are necessary and sufficient for domain generalization. Empirically, we show that TCRI is effective on both synthetic and real-world data. TCRI is competitive with baselines in average accuracy while outperforming them in worst-domain accuracy, indicating desired cross-domain stability.
1808.10658
Ran Duan
Ran Duan, Kaifeng Lyu, Hongxun Wu and Yuanhang Xie
Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs
15 pages, improved version of the paper appeared in ICALP 2018
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a directed graph $G=(V,E)$ with a capacity on every edge, a \emph{bottleneck path} (or \emph{widest path}) between two vertices is a path maximizing the minimum capacity of its edges. For the single-source all-destinations version of this problem in directed graphs, the previous best algorithm runs in $O(m+n\log n)$ time ($m=|E|$ and $n=|V|$), by Dijkstra search with a Fibonacci heap [Fredman and Tarjan 1987]. We improve this time bound to $O(m\sqrt{\log n})$; ours is thus the first algorithm to break the classic Fibonacci-heap bound when $m=o(n\sqrt{\log n})$. The approach is Las Vegas randomized. By contrast, the s-t bottleneck path problem has an algorithm with running time $O(m\beta(m,n))$ [Chechik et al. 2016], where $\beta(m,n)=\min\{k\geq 1: \log^{(k)}n\leq\frac{m}{n}\}$.
[ { "created": "Fri, 31 Aug 2018 10:09:44 GMT", "version": "v1" } ]
2018-09-03
[ [ "Duan", "Ran", "" ], [ "Lyu", "Kaifeng", "" ], [ "Wu", "Hongxun", "" ], [ "Xie", "Yuanhang", "" ] ]
In a directed graph $G=(V,E)$ with a capacity on every edge, a \emph{bottleneck path} (or \emph{widest path}) between two vertices is a path maximizing the minimum capacity of its edges. For the single-source all-destinations version of this problem in directed graphs, the previous best algorithm runs in $O(m+n\log n)$ time ($m=|E|$ and $n=|V|$), by Dijkstra search with a Fibonacci heap [Fredman and Tarjan 1987]. We improve this time bound to $O(m\sqrt{\log n})$; ours is thus the first algorithm to break the classic Fibonacci-heap bound when $m=o(n\sqrt{\log n})$. The approach is Las Vegas randomized. By contrast, the s-t bottleneck path problem has an algorithm with running time $O(m\beta(m,n))$ [Chechik et al. 2016], where $\beta(m,n)=\min\{k\geq 1: \log^{(k)}n\leq\frac{m}{n}\}$.
2401.07167
Avijit Mandal
Avijit Mandal, S. Brandsen, and Henry D. Pfister
Polar Codes for CQ Channels: Decoding via Belief-Propagation with Quantum Messages
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
This paper considers the design and decoding of polar codes for general classical-quantum (CQ) channels. It focuses on decoding via belief-propagation with quantum messages (BPQM) and, in particular, the idea of paired-measurement BPQM (PM-BPQM) decoding. Since the PM-BPQM decoder admits a classical density evolution (DE) analysis, one can use DE to design a polar code for any CQ channel and then efficiently compute the trade-off between code rate and error probability. We have also implemented and tested a classical simulation of our PM-BPQM decoder for polar codes. While the decoder can be implemented efficiently on a quantum computer, simulating the decoder on a classical computer actually has exponential complexity. Thus, simulation results for the decoder are somewhat limited and are included primarily to validate our theoretical results.
[ { "created": "Sat, 13 Jan 2024 22:31:50 GMT", "version": "v1" } ]
2024-01-17
[ [ "Mandal", "Avijit", "" ], [ "Brandsen", "S.", "" ], [ "Pfister", "Henry D.", "" ] ]
This paper considers the design and decoding of polar codes for general classical-quantum (CQ) channels. It focuses on decoding via belief-propagation with quantum messages (BPQM) and, in particular, the idea of paired-measurement BPQM (PM-BPQM) decoding. Since the PM-BPQM decoder admits a classical density evolution (DE) analysis, one can use DE to design a polar code for any CQ channel and then efficiently compute the trade-off between code rate and error probability. We have also implemented and tested a classical simulation of our PM-BPQM decoder for polar codes. While the decoder can be implemented efficiently on a quantum computer, simulating the decoder on a classical computer actually has exponential complexity. Thus, simulation results for the decoder are somewhat limited and are included primarily to validate our theoretical results.
2405.19757
Sungchul Hong
Sungchul Hong, Seunghwan An, Jong-June Jeon
Improving SMOTE via Fusing Conditional VAE for Data-adaptive Noise Filtering
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent advances in generative neural network models have extended the development of data augmentation methods. However, augmentation methods based on modern generative models fail to achieve notable performance on class-imbalanced data compared to the conventional model, the Synthetic Minority Oversampling Technique (SMOTE). We investigate this shortcoming of generative models for imbalanced classification and introduce a framework that enhances the SMOTE algorithm using Variational Autoencoders (VAEs). Our approach systematically quantifies the density of data points in a low-dimensional latent space using the VAE, simultaneously incorporating information on class labels and classification difficulty. Data points likely to degrade the augmentation are then systematically excluded, and the neighboring observations are augmented directly in the data space. Empirical studies on several imbalanced datasets show that this simple process improves the conventional SMOTE algorithm to the point of outperforming the deep learning models. Consequently, we conclude that the selection of minority data and interpolation in the data space are beneficial for imbalanced classification problems with a relatively small number of data points.
[ { "created": "Thu, 30 May 2024 07:06:02 GMT", "version": "v1" }, { "created": "Wed, 14 Aug 2024 06:26:27 GMT", "version": "v2" } ]
2024-08-15
[ [ "Hong", "Sungchul", "" ], [ "An", "Seunghwan", "" ], [ "Jeon", "Jong-June", "" ] ]
Recent advances in generative neural network models have extended the development of data augmentation methods. However, augmentation methods based on modern generative models fail to achieve notable performance on class-imbalanced data compared to the conventional model, the Synthetic Minority Oversampling Technique (SMOTE). We investigate this shortcoming of generative models for imbalanced classification and introduce a framework that enhances the SMOTE algorithm using Variational Autoencoders (VAEs). Our approach systematically quantifies the density of data points in a low-dimensional latent space using the VAE, simultaneously incorporating information on class labels and classification difficulty. Data points likely to degrade the augmentation are then systematically excluded, and the neighboring observations are augmented directly in the data space. Empirical studies on several imbalanced datasets show that this simple process improves the conventional SMOTE algorithm to the point of outperforming the deep learning models. Consequently, we conclude that the selection of minority data and interpolation in the data space are beneficial for imbalanced classification problems with a relatively small number of data points.
2312.13528
Jongmin Park
Minh-Quan Viet Bui, Jongmin Park, Jihyong Oh, Munchurl Kim
DyBluRF: Dynamic Deblurring Neural Radiance Fields for Blurry Monocular Video
The first two authors contributed equally to this work (equal contribution). The last two authors advised equally to this work. Please visit our project page at https://kaist-viclab.github.io/dyblurf-site/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Neural Radiance Fields (NeRF), initially developed for static scenes, have inspired many video novel view synthesis techniques. However, the challenge for video view synthesis arises from motion blur, a consequence of object or camera movement during exposure, which hinders the precise synthesis of sharp spatio-temporal views. In response, we propose a novel dynamic deblurring NeRF framework for blurry monocular video, called DyBluRF, consisting of a Base Ray Initialization (BRI) stage and a Motion Decomposition-based Deblurring (MDD) stage. Our DyBluRF is the first framework to handle novel view synthesis for blurry monocular video, via a novel two-stage design. In the BRI stage, we coarsely reconstruct dynamic 3D scenes and jointly initialize the base ray, which is further used to predict latent sharp rays, using the inaccurate camera pose information from the given blurry frames. In the MDD stage, we introduce a novel Incremental Latent Sharp-rays Prediction (ILSP) approach for the blurry monocular video frames by decomposing the latent sharp rays into global camera motion and local object motion components. We further propose two loss functions for effective geometry regularization and decomposition of static and dynamic scene components without any mask supervision. Experiments show that DyBluRF outperforms the SOTA methods both qualitatively and quantitatively.
[ { "created": "Thu, 21 Dec 2023 02:01:19 GMT", "version": "v1" }, { "created": "Fri, 29 Mar 2024 05:57:33 GMT", "version": "v2" } ]
2024-04-01
[ [ "Bui", "Minh-Quan Viet", "" ], [ "Park", "Jongmin", "" ], [ "Oh", "Jihyong", "" ], [ "Kim", "Munchurl", "" ] ]
Neural Radiance Fields (NeRF), initially developed for static scenes, have inspired many video novel view synthesis techniques. However, the challenge for video view synthesis arises from motion blur, a consequence of object or camera movement during exposure, which hinders the precise synthesis of sharp spatio-temporal views. In response, we propose a novel dynamic deblurring NeRF framework for blurry monocular video, called DyBluRF, consisting of a Base Ray Initialization (BRI) stage and a Motion Decomposition-based Deblurring (MDD) stage. Our DyBluRF is the first framework to handle novel view synthesis for blurry monocular video, via a novel two-stage design. In the BRI stage, we coarsely reconstruct dynamic 3D scenes and jointly initialize the base ray, which is further used to predict latent sharp rays, using the inaccurate camera pose information from the given blurry frames. In the MDD stage, we introduce a novel Incremental Latent Sharp-rays Prediction (ILSP) approach for the blurry monocular video frames by decomposing the latent sharp rays into global camera motion and local object motion components. We further propose two loss functions for effective geometry regularization and decomposition of static and dynamic scene components without any mask supervision. Experiments show that DyBluRF outperforms the SOTA methods both qualitatively and quantitatively.
1507.04039
Pascal Potvin
Pascal Potvin, Hanen Garcia Gamardo, Kim-Khoa Nguyen and Mohamed Cheriet
Hyper Heterogeneous Cloud-based IMS Software Architecture: A Proof-of-Concept and Empirical Analysis
12 pages, 9 figures, 1 table. Accepted for oral presentation at S2CT 2015 in Toronto. Latest Version is Camera Ready
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The IP Multimedia Subsystem (IMS) defined by the 3GPP has been mainly developed and deployed by telephony vendors on vendor-specific hardware. Recent advances in Network Function Virtualisation (NFV) technology paved the way for virtualized hardware and telephony function elasticity. As such, telecom vendors have started to embrace the cloud as a deployment platform, usually selecting a privileged virtualization platform. Operators would like to deploy telecom functionality on their existing IT cloud platforms. Achieving such flexibility would require the telecom vendors to adopt a software architecture allowing deployment on many cloud platforms or even heterogeneous cloud platforms. We propose a distributed software architecture enabling the deployment of a single software version on multiple cloud platforms, thus allowing for a solution-based deployment. We also present a prototype we developed to study the characteristics of this architecture.
[ { "created": "Tue, 14 Jul 2015 22:27:16 GMT", "version": "v1" }, { "created": "Fri, 18 Sep 2015 15:34:21 GMT", "version": "v2" } ]
2015-09-21
[ [ "Potvin", "Pascal", "" ], [ "Gamardo", "Hanen Garcia", "" ], [ "Nguyen", "Kim-Khoa", "" ], [ "Cheriet", "Mohamed", "" ] ]
2401.16974
Andreas Sauter
Andreas W.M. Sauter, Nicol\`o Botteghi, Erman Acar, Aske Plaat
CORE: Towards Scalable and Efficient Causal Discovery with Reinforcement Learning
To be published In Proc. of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024), Auckland, New Zealand, May 6 - 10, 2024, IFAAMAS
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Causal discovery is the challenging task of inferring causal structure from data. Motivated by Pearl's Causal Hierarchy (PCH), which tells us that passive observations alone are not enough to distinguish correlation from causation, there has been a recent push to incorporate interventions into machine learning research. Reinforcement learning provides a convenient framework for such an active approach to learning. This paper presents CORE, a deep reinforcement learning-based approach for causal discovery and intervention planning. CORE learns to sequentially reconstruct causal graphs from data while learning to perform informative interventions. Our results demonstrate that CORE generalizes to unseen graphs and efficiently uncovers causal structures. Furthermore, CORE scales to larger graphs with up to 10 variables and outperforms existing approaches in structure estimation accuracy and sample efficiency. All relevant code and supplementary material can be found at https://github.com/sa-and/CORE
[ { "created": "Tue, 30 Jan 2024 12:57:52 GMT", "version": "v1" } ]
2024-01-31
[ [ "Sauter", "Andreas W. M.", "" ], [ "Botteghi", "Nicolò", "" ], [ "Acar", "Erman", "" ], [ "Plaat", "Aske", "" ] ]
2110.15253
Kyle Aitken
Kyle Aitken, Vinay V Ramasesh, Yuan Cao, Niru Maheswaranathan
Understanding How Encoder-Decoder Architectures Attend
10+14 pages, 16 figures. NeurIPS 2021
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Encoder-decoder networks with attention have proven to be a powerful way to solve many sequence-to-sequence tasks. In these networks, attention aligns encoder and decoder states and is often used for visualizing network behavior. However, the mechanisms used by networks to generate appropriate attention matrices are still mysterious. Moreover, how these mechanisms vary depending on the particular architecture used for the encoder and decoder (recurrent, feed-forward, etc.) is also not well understood. In this work, we investigate how encoder-decoder networks solve different sequence-to-sequence tasks. We introduce a way of decomposing hidden states over a sequence into temporal (independent of input) and input-driven (independent of sequence position) components. This reveals how attention matrices are formed: depending on the task requirements, networks rely more heavily on either the temporal or input-driven components. These findings hold across both recurrent and feed-forward architectures despite their differences in forming the temporal components. Overall, our results provide new insight into the inner workings of attention-based encoder-decoder networks.
[ { "created": "Thu, 28 Oct 2021 16:11:27 GMT", "version": "v1" } ]
2021-10-29
[ [ "Aitken", "Kyle", "" ], [ "Ramasesh", "Vinay V", "" ], [ "Cao", "Yuan", "" ], [ "Maheswaranathan", "Niru", "" ] ]
2003.08515
Fanbo Xiang
Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, Li Yi, Angel X. Chang, Leonidas J. Guibas, Hao Su
SAPIEN: A SimulAted Part-based Interactive ENvironment
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building home assistant robots has long been a pursuit for vision and robotics researchers. To achieve this task, a simulated environment with physically realistic simulation, sufficient articulated objects, and transferability to the real robot is indispensable. Existing environments achieve these requirements for robotics simulation with different levels of simplification and focus. We take one step further in constructing an environment that supports household tasks for training robot learning algorithms. Our work, SAPIEN, is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects. Our SAPIEN enables various robotic vision and interaction tasks that require detailed part-level understanding. We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks using heuristic approaches and reinforcement learning algorithms. We hope that our SAPIEN can open up many research directions yet to be explored, including learning cognition through interaction, part motion discovery, and construction of robotics-ready simulated game environments.
[ { "created": "Thu, 19 Mar 2020 00:11:34 GMT", "version": "v1" } ]
2020-03-20
[ [ "Xiang", "Fanbo", "" ], [ "Qin", "Yuzhe", "" ], [ "Mo", "Kaichun", "" ], [ "Xia", "Yikuan", "" ], [ "Zhu", "Hao", "" ], [ "Liu", "Fangchen", "" ], [ "Liu", "Minghua", "" ], [ "Jiang", "Hanxiao", "" ], [ "Yuan", "Yifu", "" ], [ "Wang", "He", "" ], [ "Yi", "Li", "" ], [ "Chang", "Angel X.", "" ], [ "Guibas", "Leonidas J.", "" ], [ "Su", "Hao", "" ] ]
2210.06313
Chong You
Zonglin Li, Chong You, Srinadh Bhojanapalli, Daliang Li, Ankit Singh Rawat, Sashank J. Reddi, Ke Ye, Felix Chern, Felix Yu, Ruiqi Guo, Sanjiv Kumar
The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers
A short version was presented at ICLR 2023. Previous title: Large Models are Parsimonious Learners: Activation Sparsity in Trained Transformers
null
null
null
cs.LG cs.CL cs.CV stat.ML
http://creativecommons.org/licenses/by/4.0/
This paper studies the curious phenomenon for machine learning models with Transformer architectures that their activation maps are sparse. By activation map we refer to the intermediate output of the multi-layer perceptrons (MLPs) after a ReLU activation function, and by sparse we mean that on average very few entries (e.g., 3.0% for T5-Base and 6.3% for ViT-B16) are nonzero for each input to the MLP. Moreover, larger Transformers with more layers and wider MLP hidden dimensions are sparser as measured by the percentage of nonzero entries. Through extensive experiments we demonstrate that the emergence of sparsity is a prevalent phenomenon that occurs for both natural language processing and vision tasks, on both training and evaluation data, for Transformers of various configurations, at layers of all depth levels, as well as for other architectures including MLP-mixers and 2-layer MLPs. We show that sparsity also emerges using training datasets with random labels, or with random inputs, or with an infinite amount of data, demonstrating that sparsity is not a result of a specific family of datasets. We discuss how sparsity immediately implies a way to significantly reduce the FLOP count and improve efficiency for Transformers. Moreover, we demonstrate, perhaps surprisingly, that enforcing an even sparser activation via Top-k thresholding with a small value of k brings a collection of desired but missing properties for Transformers, namely less sensitivity to noisy training data, more robustness to input corruptions, and better calibration for their prediction confidence.
[ { "created": "Wed, 12 Oct 2022 15:25:19 GMT", "version": "v1" }, { "created": "Fri, 9 Jun 2023 21:53:43 GMT", "version": "v2" } ]
2023-06-13
[ [ "Li", "Zonglin", "" ], [ "You", "Chong", "" ], [ "Bhojanapalli", "Srinadh", "" ], [ "Li", "Daliang", "" ], [ "Rawat", "Ankit Singh", "" ], [ "Reddi", "Sashank J.", "" ], [ "Ye", "Ke", "" ], [ "Chern", "Felix", "" ], [ "Yu", "Felix", "" ], [ "Guo", "Ruiqi", "" ], [ "Kumar", "Sanjiv", "" ] ]
1806.00908
Gui-Song Xia
Jin Huang, Gui-Song Xia, Fan Hu, Liangpei Zhang
Accurate Building Detection in VHR Remote Sensing Images using Geometric Saliency
IGRASS'18 conference paper
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper aims to address the problem of detecting buildings from remote sensing images with very high resolution (VHR). Inspired by the observation that buildings are always more distinguishable in geometry than in texture or spectral characteristics, we propose a new geometric building index (GBI) for accurate building detection, which relies on the geometric saliency of building structures. The geometric saliency of buildings is derived from a mid-level geometric representation based on meaningful junctions that can locally describe anisotropic geometrical structures of images. The resulting GBI is measured by integrating the derived geometric saliency of buildings. Experiments on three public datasets demonstrate that the proposed GBI achieves very promising performance, and meanwhile shows impressive generalization capability.
[ { "created": "Mon, 4 Jun 2018 01:02:22 GMT", "version": "v1" }, { "created": "Sun, 10 Jun 2018 01:38:45 GMT", "version": "v2" } ]
2018-06-12
[ [ "Huang", "Jin", "" ], [ "Xia", "Gui-Song", "" ], [ "Hu", "Fan", "" ], [ "Zhang", "Liangpei", "" ] ]
1808.00313
Xinyu Huang
Qichuan Geng and Xinyu Huang and Zhong Zhou and Ruigang Yang
A Network Structure to Explicitly Reduce Confusion Errors in Semantic Segmentation
18 pages, 9 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Confusing classes that are ubiquitous in the real world often degrade performance for many vision-related applications like object detection, classification, and segmentation. The confusion errors are not only caused by similar visual patterns but also amplified by various factors during the training of our designed models, such as reduced feature resolution in the encoding process or imbalanced data distributions. A large number of deep-learning-based network structures have been proposed in recent years to deal with these individual factors and improve network performance. However, to our knowledge, no existing work in semantic image segmentation is designed to tackle confusion errors explicitly. In this paper, we present a novel and general network structure that reduces confusion errors in a more direct manner and apply the network for semantic segmentation. There are two major contributions in our network structure: 1) We ensemble subnets with heterogeneous output spaces based on the discriminative confusing groups. The training for each subnet can distinguish confusing classes within the group without affecting unrelated classes outside the group. 2) We propose an improved cross-entropy loss function that maximizes the probability assigned to the correct class and penalizes the probabilities assigned to the confusing classes at the same time. Our network structure is a general structure and can be easily adapted to any other networks to further reduce confusion errors. Without any changes in the feature encoder and post-processing steps, our experiments demonstrate consistent and significant improvements on different baseline models on Cityscapes and PASCAL VOC datasets (e.g., 3.05% over ResNet-101 and 1.30% over ResNet-38).
[ { "created": "Wed, 1 Aug 2018 13:37:59 GMT", "version": "v1" } ]
2018-08-02
[ [ "Geng", "Qichuan", "" ], [ "Huang", "Xinyu", "" ], [ "Zhou", "Zhong", "" ], [ "Yang", "Ruigang", "" ] ]
1705.03172
Jialong Han
Xin Zheng, Jialong Han, Aixin Sun
A Survey of Location Prediction on Twitter
Accepted to TKDE. 30 pages, 1 figure
null
10.1109/TKDE.2018.2807840
null
cs.SI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Locations, e.g., countries, states, cities, and points-of-interest, are central to news, emergency events, and people's daily lives. Automatic identification of locations associated with or mentioned in documents has been explored for decades. As one of the most popular online social network platforms, Twitter has attracted a large number of users who send millions of tweets on a daily basis. Due to the world-wide coverage of its users and real-time freshness of tweets, location prediction on Twitter has gained significant attention in recent years. Research efforts are spent on dealing with new challenges and opportunities brought by the noisy, short, and context-rich nature of tweets. In this survey, we aim at offering an overall picture of location prediction on Twitter. Specifically, we concentrate on the prediction of user home locations, tweet locations, and mentioned locations. We first define the three tasks and review the evaluation metrics. By summarizing Twitter network, tweet content, and tweet context as potential inputs, we then structurally highlight how the problems depend on these inputs. Each dependency is illustrated by a comprehensive review of the corresponding strategies adopted in state-of-the-art approaches. In addition, we also briefly review two related problems, i.e., semantic location prediction and point-of-interest recommendation. Finally, we list future research directions.
[ { "created": "Tue, 9 May 2017 04:14:57 GMT", "version": "v1" }, { "created": "Sat, 24 Feb 2018 08:29:45 GMT", "version": "v2" } ]
2018-07-17
[ [ "Zheng", "Xin", "" ], [ "Han", "Jialong", "" ], [ "Sun", "Aixin", "" ] ]
1005.3182
Isabel Rodet
Nicolas Castagn\'e (ACROE), Claude Cadoz (ACROE, ICA), Jean-Loup Florens (ACROE), Annie Luciani (ACROE, ICA)
Haptics in computer music : a paradigm shift
Document accompanied by the poster
EuroHaptics 2004, Munich : Germany (2004)
null
null
cs.HC cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Combining a historical point of view with a bibliographic overview, the article discusses the idea that haptic force feedback transducers correspond to a paradigm shift in our real-time tools for creating music. In so doing, it shows that computer music may be regarded as a major field of research and application for haptics.
[ { "created": "Tue, 18 May 2010 13:03:47 GMT", "version": "v1" }, { "created": "Mon, 7 Jun 2010 07:43:43 GMT", "version": "v2" } ]
2010-06-08
[ [ "Castagné", "Nicolas", "", "ACROE" ], [ "Cadoz", "Claude", "", "ACROE, ICA" ], [ "Florens", "Jean-Loup", "", "ACROE" ], [ "Luciani", "Annie", "", "ACROE, ICA" ] ]
1810.01017
Nalin Asanka Gamagedara Arachchilage
Chamila Wijayarathna and Nalin Asanka Gamagedara Arachchilage
Fighting Against XSS Attacks: A Usability Evaluation of OWASP ESAPI Output Encoding
10
The 52nd Hawaii International Conference on System Sciences (HICSS), 2019
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross Site Scripting (XSS) is one of the most critical vulnerabilities that exist in web applications. XSS can be prevented by encoding untrusted data that are loaded into browser content of web applications. Security Application Programming Interfaces (APIs) such as OWASP ESAPI provide output encoding functionalities for programmers to use to protect their applications from XSS attacks. However, the fact that XSS is still ranked as one of the most critical vulnerabilities in web applications suggests that programmers are not effectively using those APIs to encode untrusted data. Therefore, we conducted an experimental study with 10 programmers where they attempted to fix XSS vulnerabilities of a web application using the output encoding functionality of OWASP ESAPI. Results revealed 3 types of mistakes that programmers made, which resulted in them failing to fix the application by removing XSS vulnerabilities. We also identified 16 usability issues of OWASP ESAPI. We identified some of these usability issues as the reasons for the mistakes that programmers made. Based on these results, we provided suggestions on how the usability of output encoding APIs should be improved to give a better experience to programmers.
[ { "created": "Mon, 1 Oct 2018 23:57:58 GMT", "version": "v1" } ]
2018-10-03
[ [ "Wijayarathna", "Chamila", "" ], [ "Arachchilage", "Nalin Asanka Gamagedara", "" ] ]
2111.14595
Semih G\"unel
Semih G\"unel and Florian Aymanns and Sina Honari and Pavan Ramdya and Pascal Fua
Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations
Accepted into NeurIPS 2021 Workshop: Self-Supervised Learning - Theory and Practice
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior. For example, the ability to extract behavioral intentions from neural data, or neural decoding, is critical for developing effective brain machine interfaces. Although simple linear models have been applied to this challenge, they cannot identify important non-linear relationships. Thus, a self-supervised means of identifying non-linear relationships between neural dynamics and behavior, in order to compute neural representations, remains an important open problem. To address this challenge, we generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies, Drosophila melanogaster -- a popular model organism in neuroscience research. The dataset includes 3D markerless motion capture data from six camera views of the animal generating spontaneous actions, as well as synchronously acquired two-photon microscope images capturing the activity of descending neuron populations that are thought to drive actions. Standard contrastive learning and unsupervised domain adaptation techniques struggle to learn neural action representations (embeddings computed from the neural data describing action labels) due to large inter-animal differences in both neural and behavioral modalities. To overcome this deficiency, we developed simple yet effective augmentations that close the inter-animal domain gap, allowing us to extract behaviorally relevant, yet domain agnostic, information from neural data. This multimodal dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
[ { "created": "Mon, 29 Nov 2021 15:27:51 GMT", "version": "v1" } ]
2021-11-30
[ [ "Günel", "Semih", "" ], [ "Aymanns", "Florian", "" ], [ "Honari", "Sina", "" ], [ "Ramdya", "Pavan", "" ], [ "Fua", "Pascal", "" ] ]
1007.3961
Pablo Chico de Guzman Huerta
Pablo Chico de Guzman, Manuel Carro and David S. Warren
Swapping Evaluation: A Memory-Scalable Solution for Answer-On-Demand Tabling
16 pages, 5 figures, published in TPLP 2010
Swapping Evaluation in TPLP, volume 10, number 4-6, year 2010, pages 401-416
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the differences among the various approaches to suspension-based tabled evaluation is the scheduling strategy. The two most popular strategies are local and batched evaluation. The former collects all the solutions to a tabled predicate before making any one of them available outside the tabled computation. The latter returns answers one by one before computing them all, which in principle is better if only one answer (or a subset of the answers) is desired. Batched evaluation is closer to SLD evaluation in that it computes solutions lazily as they are demanded, but it may need arbitrarily more memory than local evaluation, which is able to reclaim memory sooner. Some programs which in practice can be executed under the local strategy quickly run out of memory under batched evaluation. This has led to the general adoption of local evaluation at the expense of the more depth-first batched strategy. In this paper we study the reasons for the high memory consumption of batched evaluation and propose a new scheduling strategy which we have termed swapping evaluation. Swapping evaluation also returns answers one by one before completing a tabled call, but its memory usage can be orders of magnitude less than batched evaluation. An experimental implementation in the XSB system shows that swapping evaluation is a feasible memory-scalable strategy that need not compromise execution speed.
[ { "created": "Thu, 22 Jul 2010 18:26:48 GMT", "version": "v1" } ]
2010-07-23
[ [ "de Guzman", "Pablo Chico", "" ], [ "Carro", "Manuel", "" ], [ "Warren", "David S.", "" ] ]
One of the differences among the various approaches to suspension-based tabled evaluation is the scheduling strategy. The two most popular strategies are local and batched evaluation. The former collects all the solutions to a tabled predicate before making any one of them available outside the tabled computation. The latter returns answers one by one before computing them all, which in principle is better if only one answer (or a subset of the answers) is desired. Batched evaluation is closer to SLD evaluation in that it computes solutions lazily as they are demanded, but it may need arbitrarily more memory than local evaluation, which is able to reclaim memory sooner. Some programs which in practice can be executed under the local strategy quickly run out of memory under batched evaluation. This has led to the general adoption of local evaluation at the expense of the more depth-first batched strategy. In this paper we study the reasons for the high memory consumption of batched evaluation and propose a new scheduling strategy which we have termed swapping evaluation. Swapping evaluation also returns answers one by one before completing a tabled call, but its memory usage can be orders of magnitude less than batched evaluation. An experimental implementation in the XSB system shows that swapping evaluation is a feasible memory-scalable strategy that need not compromise execution speed.
2008.06738
Brahma Pavse
Brahma Pavse, Ishan Durugkar, Josiah Hanna, Peter Stone
Reducing Sampling Error in Batch Temporal Difference Learning
Accepted to International Conference on Machine Learning (ICML) 2020
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporal difference (TD) learning is one of the main foundations of modern reinforcement learning. This paper studies the use of TD(0), a canonical TD algorithm, to estimate the value function of a given policy from a batch of data. In this batch setting, we show that TD(0) may converge to an inaccurate value function because the update following an action is weighted according to the number of times that action occurred in the batch -- not the true probability of the action under the given policy. To address this limitation, we introduce policy sampling error corrected TD(0) (PSEC-TD(0)). PSEC-TD(0) first estimates the empirical distribution of actions in each state in the batch and then uses importance sampling to correct for the mismatch between the empirical weighting and the correct weighting for updates following each action. We refine the concept of a certainty-equivalence estimate and argue that PSEC-TD(0) is a more data efficient estimator than TD(0) for a fixed batch of data. Finally, we conduct an empirical evaluation of PSEC-TD(0) on three batch value function learning tasks, with a hyperparameter sensitivity analysis, and show that PSEC-TD(0) produces value function estimates with lower mean squared error than TD(0).
[ { "created": "Sat, 15 Aug 2020 15:30:06 GMT", "version": "v1" } ]
2020-08-18
[ [ "Pavse", "Brahma", "" ], [ "Durugkar", "Ishan", "" ], [ "Hanna", "Josiah", "" ], [ "Stone", "Peter", "" ] ]
Temporal difference (TD) learning is one of the main foundations of modern reinforcement learning. This paper studies the use of TD(0), a canonical TD algorithm, to estimate the value function of a given policy from a batch of data. In this batch setting, we show that TD(0) may converge to an inaccurate value function because the update following an action is weighted according to the number of times that action occurred in the batch -- not the true probability of the action under the given policy. To address this limitation, we introduce policy sampling error corrected TD(0) (PSEC-TD(0)). PSEC-TD(0) first estimates the empirical distribution of actions in each state in the batch and then uses importance sampling to correct for the mismatch between the empirical weighting and the correct weighting for updates following each action. We refine the concept of a certainty-equivalence estimate and argue that PSEC-TD(0) is a more data efficient estimator than TD(0) for a fixed batch of data. Finally, we conduct an empirical evaluation of PSEC-TD(0) on three batch value function learning tasks, with a hyperparameter sensitivity analysis, and show that PSEC-TD(0) produces value function estimates with lower mean squared error than TD(0).
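The PSEC-TD(0) correction described in this abstract can be illustrated with a small tabular sketch. This is not the authors' implementation; names, the sequential update schedule, and the toy batch are illustrative assumptions, but the core idea matches the abstract: reweight each TD(0) update by pi(a|s) / pi_hat_D(a|s), where pi_hat_D is the empirical action distribution in the batch.

```python
from collections import defaultdict

def psec_td0(batch, policy, alpha=0.01, gamma=0.99, sweeps=1000):
    """Tabular PSEC-TD(0) sketch. `batch` is a list of (s, a, r, s2)
    transitions; `policy(s, a)` returns the true action probability
    under the evaluated policy. All names here are illustrative."""
    # 1) Empirical action distribution pi_hat_D(a|s) from the batch.
    sa_counts = defaultdict(int)
    s_counts = defaultdict(int)
    for (s, a, r, s2) in batch:
        sa_counts[(s, a)] += 1
        s_counts[s] += 1

    # 2) TD(0) sweeps with the PSEC importance weight on each update.
    V = defaultdict(float)
    for _ in range(sweeps):
        for (s, a, r, s2) in batch:
            pi_hat = sa_counts[(s, a)] / s_counts[s]   # empirical pi_hat_D(a|s)
            w = policy(s, a) / pi_hat                  # PSEC correction weight
            td_error = r + gamma * V[s2] - V[s]
            V[s] += alpha * w * td_error
    return V
```

In a batch where action "a" (reward 1) appears twice and "b" (reward 0) once from state "s", plain TD(0) would weight "a" by its batch frequency 2/3; under a uniform policy, the PSEC weights pull the estimate back toward the true expected value 0.5.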
2109.12714
Ramakrishnan Sundareswaran
Ramakrishnan Sundareswaran, Jansel Herrera-Gerena, John Just, Ali Jannesari
Cluster Analysis with Deep Embeddings and Contrastive Learning
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Unsupervised disentangled representation learning is a long-standing problem in computer vision. This work proposes a novel framework for performing image clustering from deep embeddings by combining instance-level contrastive learning with a deep embedding based cluster center predictor. Our approach jointly learns representations and predicts cluster centers in an end-to-end manner. This is accomplished via a three-pronged approach that combines a clustering loss, an instance-wise contrastive loss, and an anchor loss. Our fundamental intuition is that using an ensemble loss that incorporates instance-level features and a clustering procedure focusing on semantic similarity reinforces learning better representations in the latent space. We observe that our method performs exceptionally well on popular vision datasets when evaluated using standard clustering metrics such as Normalized Mutual Information (NMI), in addition to producing geometrically well-separated cluster embeddings as defined by the Euclidean distance. Our framework performs on par with widely accepted clustering methods and outperforms the state-of-the-art contrastive learning method on the CIFAR-10 dataset with an NMI score of 0.772, a 7-8% improvement on the strong baseline.
[ { "created": "Sun, 26 Sep 2021 22:18:15 GMT", "version": "v1" }, { "created": "Sat, 2 Oct 2021 17:15:31 GMT", "version": "v2" } ]
2021-10-05
[ [ "Sundareswaran", "Ramakrishnan", "" ], [ "Herrera-Gerena", "Jansel", "" ], [ "Just", "John", "" ], [ "Jannesari", "Ali", "" ] ]
Unsupervised disentangled representation learning is a long-standing problem in computer vision. This work proposes a novel framework for performing image clustering from deep embeddings by combining instance-level contrastive learning with a deep embedding based cluster center predictor. Our approach jointly learns representations and predicts cluster centers in an end-to-end manner. This is accomplished via a three-pronged approach that combines a clustering loss, an instance-wise contrastive loss, and an anchor loss. Our fundamental intuition is that using an ensemble loss that incorporates instance-level features and a clustering procedure focusing on semantic similarity reinforces learning better representations in the latent space. We observe that our method performs exceptionally well on popular vision datasets when evaluated using standard clustering metrics such as Normalized Mutual Information (NMI), in addition to producing geometrically well-separated cluster embeddings as defined by the Euclidean distance. Our framework performs on par with widely accepted clustering methods and outperforms the state-of-the-art contrastive learning method on the CIFAR-10 dataset with an NMI score of 0.772, a 7-8% improvement on the strong baseline.
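The abstract above evaluates clustering quality with Normalized Mutual Information (NMI). As a self-contained illustration of that metric (not the authors' code; the arithmetic-mean normalization is one common convention), NMI can be computed directly from label counts:

```python
from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """Normalized Mutual Information between two labelings,
    normalized by the arithmetic mean of the two entropies."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    cab = Counter(zip(labels_a, labels_b))
    # Entropies of each labeling.
    h_a = -sum(c / n * log(c / n) for c in ca.values())
    h_b = -sum(c / n * log(c / n) for c in cb.values())
    # Mutual information from the joint distribution.
    mi = sum(c / n * log((c / n) / ((ca[a] / n) * (cb[b] / n)))
             for (a, b), c in cab.items())
    if h_a + h_b == 0:        # both labelings are constant
        return 1.0
    return 2 * mi / (h_a + h_b)
```

NMI is invariant to permuting cluster ids, which is why it suits comparing predicted clusters against ground-truth classes: identical partitions score 1, independent ones score 0.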
2107.12108
Lars Moormann
J. van Hegelsom, J.M. van de Mortel-Fronczak, L. Moormann, D.A. van Beek, J.E. Rooda
Development of a 3D Digital Twin of the Swalmen Tunnel in the Rijkswaterstaat Project
null
null
null
null
cs.HC cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In an ongoing project, a collaboration between the TU/e and the Dutch Department of Waterways and Public Works (Rijkswaterstaat in Dutch, abbreviated to RWS) has been established. The project focuses on investigating the applicability of synthesis-based engineering in the design of supervisory controllers for bridges, waterways and tunnels. Supervisory controllers ensure correct cooperation between components in a system. The design process of these controllers partly relies on simulation with models of the plant (the physical system). A possible addition to this design process is digital twin technology. A digital twin is a virtual copy of a system that is generally much more realistic than the 2D simulation models that are currently used for supervisory controller validation. In this report, the development of a digital twin of the Swalmen tunnel that is suitable for supervisory control validation is described. The Swalmen tunnel is a highway tunnel in Limburg, the Netherlands. This case study is relevant because the Swalmen tunnel will be renovated in 2023 and 2028. These renovation projects include updating controlled subsystems in the tunnel, such as boom barriers and traffic lights, and updating the supervisory controller of the tunnel. The digital twin might be useful to aid the supervisory controller design process in these renovation projects.
[ { "created": "Mon, 26 Jul 2021 11:03:16 GMT", "version": "v1" }, { "created": "Tue, 15 Feb 2022 10:19:52 GMT", "version": "v2" } ]
2022-02-16
[ [ "van Hegelsom", "J.", "" ], [ "van de Mortel-Fronczak", "J. M.", "" ], [ "Moormann", "L.", "" ], [ "van Beek", "D. A.", "" ], [ "Rooda", "J. E.", "" ] ]
In an ongoing project, a collaboration between the TU/e and the Dutch Department of Waterways and Public Works (Rijkswaterstaat in Dutch, abbreviated to RWS) has been established. The project focuses on investigating the applicability of synthesis-based engineering in the design of supervisory controllers for bridges, waterways and tunnels. Supervisory controllers ensure correct cooperation between components in a system. The design process of these controllers partly relies on simulation with models of the plant (the physical system). A possible addition to this design process is digital twin technology. A digital twin is a virtual copy of a system that is generally much more realistic than the 2D simulation models that are currently used for supervisory controller validation. In this report, the development of a digital twin of the Swalmen tunnel that is suitable for supervisory control validation is described. The Swalmen tunnel is a highway tunnel in Limburg, the Netherlands. This case study is relevant because the Swalmen tunnel will be renovated in 2023 and 2028. These renovation projects include updating controlled subsystems in the tunnel, such as boom barriers and traffic lights, and updating the supervisory controller of the tunnel. The digital twin might be useful to aid the supervisory controller design process in these renovation projects.
1704.07398
Yevgeni Berzak
Yevgeni Berzak, Chie Nakamura, Suzanne Flynn and Boris Katz
Predicting Native Language from Gaze
ACL 2017
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental question in language learning concerns the role of a speaker's first language in second language acquisition. We present a novel methodology for studying this question: analysis of eye-movement patterns in second language reading of free-form text. Using this methodology, we demonstrate for the first time that the native language of English learners can be predicted from their gaze fixations when reading English. We provide analysis of classifier uncertainty and learned features, which indicates that differences in English reading are likely to be rooted in linguistic divergences across native languages. The presented framework complements production studies and offers new ground for advancing research on multilingualism.
[ { "created": "Mon, 24 Apr 2017 18:04:17 GMT", "version": "v1" }, { "created": "Tue, 2 May 2017 21:40:35 GMT", "version": "v2" } ]
2017-05-04
[ [ "Berzak", "Yevgeni", "" ], [ "Nakamura", "Chie", "" ], [ "Flynn", "Suzanne", "" ], [ "Katz", "Boris", "" ] ]
A fundamental question in language learning concerns the role of a speaker's first language in second language acquisition. We present a novel methodology for studying this question: analysis of eye-movement patterns in second language reading of free-form text. Using this methodology, we demonstrate for the first time that the native language of English learners can be predicted from their gaze fixations when reading English. We provide analysis of classifier uncertainty and learned features, which indicates that differences in English reading are likely to be rooted in linguistic divergences across native languages. The presented framework complements production studies and offers new ground for advancing research on multilingualism.
2307.02040
Zhaomin Wu
Zhaomin Wu, Junyi Hou, Bingsheng He
VertiBench: Advancing Feature Distribution Diversity in Vertical Federated Learning Benchmarks
null
The Twelfth International Conference on Learning Representations (ICLR 2024)
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Vertical Federated Learning (VFL) is a crucial paradigm for training machine learning models on feature-partitioned, distributed data. However, due to privacy restrictions, few public real-world VFL datasets exist for algorithm evaluation, and these represent a limited array of feature distributions. Existing benchmarks often resort to synthetic datasets, derived from arbitrary feature splits from a global set, which only capture a subset of feature distributions, leading to inadequate algorithm performance assessment. This paper addresses these shortcomings by introducing two key factors affecting VFL performance - feature importance and feature correlation - and proposing associated evaluation metrics and dataset splitting methods. Additionally, we introduce a real VFL dataset to address the deficit in image-image VFL scenarios. Our comprehensive evaluation of cutting-edge VFL algorithms provides valuable insights for future research in the field.
[ { "created": "Wed, 5 Jul 2023 05:55:08 GMT", "version": "v1" }, { "created": "Wed, 17 Jan 2024 07:54:46 GMT", "version": "v2" }, { "created": "Wed, 13 Mar 2024 08:06:37 GMT", "version": "v3" } ]
2024-03-14
[ [ "Wu", "Zhaomin", "" ], [ "Hou", "Junyi", "" ], [ "He", "Bingsheng", "" ] ]
Vertical Federated Learning (VFL) is a crucial paradigm for training machine learning models on feature-partitioned, distributed data. However, due to privacy restrictions, few public real-world VFL datasets exist for algorithm evaluation, and these represent a limited array of feature distributions. Existing benchmarks often resort to synthetic datasets, derived from arbitrary feature splits from a global set, which only capture a subset of feature distributions, leading to inadequate algorithm performance assessment. This paper addresses these shortcomings by introducing two key factors affecting VFL performance - feature importance and feature correlation - and proposing associated evaluation metrics and dataset splitting methods. Additionally, we introduce a real VFL dataset to address the deficit in image-image VFL scenarios. Our comprehensive evaluation of cutting-edge VFL algorithms provides valuable insights for future research in the field.
2309.02428
Manal Helal
Manal Helal
Enhancing Deep Learning Models through Tensorization: A Comprehensive Survey and Framework
34 pages, 8 figures, 4 tables
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The burgeoning growth of public domain data and the increasing complexity of deep learning model architectures have underscored the need for more efficient data representation and analysis techniques. This paper is motivated by the work of (Helal, 2023) and aims to present a comprehensive overview of tensorization. This transformative approach bridges the gap between the inherently multidimensional nature of data and the simplified 2-dimensional matrices commonly used in linear algebra-based machine learning algorithms. This paper explores the steps involved in tensorization, multidimensional data sources, various multiway analysis methods employed, and the benefits of these approaches. A small example of Blind Source Separation (BSS) is presented comparing 2-dimensional algorithms and a multiway algorithm in Python. Results indicate that multiway analysis is more expressive. Contrary to the intuition of the curse of dimensionality, utilising multidimensional datasets in their native form and applying multiway analysis methods grounded in multilinear algebra reveal a profound capacity to capture intricate interrelationships among various dimensions while, surprisingly, reducing the number of model parameters and accelerating processing. A survey of the multiway analysis methods and their integration with various Deep Neural Network models is presented using case studies in different application domains.
[ { "created": "Tue, 5 Sep 2023 17:56:22 GMT", "version": "v1" }, { "created": "Thu, 7 Sep 2023 13:42:57 GMT", "version": "v2" }, { "created": "Mon, 9 Oct 2023 11:14:41 GMT", "version": "v3" } ]
2023-10-10
[ [ "Helal", "Manal", "" ] ]
The burgeoning growth of public domain data and the increasing complexity of deep learning model architectures have underscored the need for more efficient data representation and analysis techniques. This paper is motivated by the work of (Helal, 2023) and aims to present a comprehensive overview of tensorization. This transformative approach bridges the gap between the inherently multidimensional nature of data and the simplified 2-dimensional matrices commonly used in linear algebra-based machine learning algorithms. This paper explores the steps involved in tensorization, multidimensional data sources, various multiway analysis methods employed, and the benefits of these approaches. A small example of Blind Source Separation (BSS) is presented comparing 2-dimensional algorithms and a multiway algorithm in Python. Results indicate that multiway analysis is more expressive. Contrary to the intuition of the curse of dimensionality, utilising multidimensional datasets in their native form and applying multiway analysis methods grounded in multilinear algebra reveal a profound capacity to capture intricate interrelationships among various dimensions while, surprisingly, reducing the number of model parameters and accelerating processing. A survey of the multiway analysis methods and their integration with various Deep Neural Network models is presented using case studies in different application domains.
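A core building block of the tensorization and multiway analysis the abstract describes is matricization: unfolding a tensor into a matrix along a chosen mode. The sketch below (illustrative, not from the paper; the column ordering follows the loop order shown, which is one common convention) unfolds a 3-way tensor stored as nested lists:

```python
def unfold(tensor, mode):
    """Mode-n unfolding (matricization) of a 3-way tensor stored as
    nested lists: row index is the chosen mode's index, columns run
    over the remaining two indices in the loop order below."""
    I = len(tensor)
    J = len(tensor[0])
    K = len(tensor[0][0])
    rows = (I, J, K)[mode]
    mat = [[] for _ in range(rows)]
    for i in range(I):
        for j in range(J):
            for k in range(K):
                # Route each entry to the row selected by `mode`.
                mat[(i, j, k)[mode]].append(tensor[i][j][k])
    return mat
```

For a 2x2x2 tensor holding the values 0..7, mode-0 unfolding yields a 2x4 matrix whose rows are the two frontal "slabs", while mode-2 unfolding interleaves the entries by their last index.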
1910.05784
Dimitrios Kollias
Xia Yicheng and Dimitrios Kollias
Interpretable Deep Neural Networks for Dimensional and Categorical Emotion Recognition in-the-wild
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emotions play an important role in people's lives. Understanding and recognising emotions is not only important for interpersonal communication, but also has promising applications in Human-Computer Interaction, automobile safety and medical research. This project focuses on extending the emotion recognition database, and training the CNN + RNN emotion recognition neural networks with an emotion category representation and a valence & arousal representation. The combined models are constructed by training the two representations simultaneously. The comparison and analysis between the three types of model are discussed. The interrelationship between the two emotion representations and the interpretability of the neural networks are investigated. The findings suggest that categorical emotion recognition performance can benefit from training with a combined model, and that the mapping between emotion categories and valence & arousal values can explain this phenomenon.
[ { "created": "Sun, 13 Oct 2019 16:33:18 GMT", "version": "v1" }, { "created": "Fri, 13 Dec 2019 23:25:57 GMT", "version": "v2" } ]
2019-12-17
[ [ "Yicheng", "Xia", "" ], [ "Kollias", "Dimitrios", "" ] ]
Emotions play an important role in people's lives. Understanding and recognising emotions is not only important for interpersonal communication, but also has promising applications in Human-Computer Interaction, automobile safety and medical research. This project focuses on extending the emotion recognition database, and training the CNN + RNN emotion recognition neural networks with an emotion category representation and a valence & arousal representation. The combined models are constructed by training the two representations simultaneously. The comparison and analysis between the three types of model are discussed. The interrelationship between the two emotion representations and the interpretability of the neural networks are investigated. The findings suggest that categorical emotion recognition performance can benefit from training with a combined model, and that the mapping between emotion categories and valence & arousal values can explain this phenomenon.
2209.12823
Anupam Biswas
Thounaojam Chinglemba, Soujanyo Biswas, Debashish Malakar, Vivek Meena, Debojyoti Sarkar, and Anupam Biswas
Introductory Review of Swarm Intelligence Techniques
Submitted to Springer
null
null
null
cs.NE math.OC
http://creativecommons.org/licenses/by/4.0/
With the rapid advancement of technology, there has emerged a pressing need to fine-tune or optimize certain processes, software, models or structures with utmost accuracy and efficiency. Optimization algorithms are preferred over optimization through experimentation or simulation for their generic problem-solving abilities and promising efficacy with the least human intervention. In recent times, incorporating natural phenomena into algorithm design has greatly improved the efficiency of the optimization process, even for complex multi-dimensional, non-continuous, non-differentiable and noisy problem search spaces. This chapter deals with Swarm Intelligence (SI) based algorithms, or Swarm Optimization Algorithms, which are a subset of the broader class of Nature Inspired Optimization Algorithms (NIOAs). Swarm intelligence involves the collective study of individuals and their mutual interactions leading to intelligent behavior of the swarm. The chapter presents various population-based SI algorithms, their fundamental structures, and their mathematical models.
[ { "created": "Mon, 26 Sep 2022 16:29:55 GMT", "version": "v1" }, { "created": "Fri, 30 Sep 2022 05:09:10 GMT", "version": "v2" } ]
2022-10-03
[ [ "Chinglemba", "Thounaojam", "" ], [ "Biswas", "Soujanyo", "" ], [ "Malakar", "Debashish", "" ], [ "Meena", "Vivek", "" ], [ "Sarkar", "Debojyoti", "" ], [ "Biswas", "Anupam", "" ] ]
With the rapid advancement of technology, there has emerged a pressing need to fine-tune or optimize certain processes, software, models or structures with utmost accuracy and efficiency. Optimization algorithms are preferred over optimization through experimentation or simulation for their generic problem-solving abilities and promising efficacy with the least human intervention. In recent times, incorporating natural phenomena into algorithm design has greatly improved the efficiency of the optimization process, even for complex multi-dimensional, non-continuous, non-differentiable and noisy problem search spaces. This chapter deals with Swarm Intelligence (SI) based algorithms, or Swarm Optimization Algorithms, which are a subset of the broader class of Nature Inspired Optimization Algorithms (NIOAs). Swarm intelligence involves the collective study of individuals and their mutual interactions leading to intelligent behavior of the swarm. The chapter presents various population-based SI algorithms, their fundamental structures, and their mathematical models.
2203.09549
M Rasel Mahmud
Sagor Chandro Bakchy, Md. Rabiul Islam, M. Rasel Mahmud, Faisal Imran
Human Gait Analysis using Gait Energy Image
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Gait recognition is one of the most recently emerging human biometric techniques; it can be used for security purposes and relies on an unobtrusive acquisition method. In comparison with other biometrics, gait analysis has some special security features. Most biometric techniques use sequential template-based component analysis for recognition. In contrast, we propose an improved technique for gait identification using the Gait Energy Image (GEI) feature. The GEI representation of gait condenses the information of every image in one gait cycle, so it requires less storage and allows faster processing. Because a single image is enough to store the necessary information, the recognition process with GEI is much simpler than with any other gait feature. Gait recognition has some limitations in the recognition process, such as viewing-angle variation, walking speed, clothing, and carried load. Our proposed method is compared with template-based feature extraction, which needs to process each frame in the cycle, whereas GEI summarizes the information of all frames in the cycle and results in better performance than other gait-analysis features.
[ { "created": "Thu, 17 Mar 2022 18:16:46 GMT", "version": "v1" } ]
2022-03-21
[ [ "Bakchy", "Sagor Chandro", "" ], [ "Islam", "Md. Rabiul", "" ], [ "Mahmud", "M. Rasel", "" ], [ "Imran", "Faisal", "" ] ]
Gait recognition is one of the most recently emerging human biometric techniques; it can be used for security purposes and relies on an unobtrusive acquisition method. In comparison with other biometrics, gait analysis has some special security features. Most biometric techniques use sequential template-based component analysis for recognition. In contrast, we propose an improved technique for gait identification using the Gait Energy Image (GEI) feature. The GEI representation of gait condenses the information of every image in one gait cycle, so it requires less storage and allows faster processing. Because a single image is enough to store the necessary information, the recognition process with GEI is much simpler than with any other gait feature. Gait recognition has some limitations in the recognition process, such as viewing-angle variation, walking speed, clothing, and carried load. Our proposed method is compared with template-based feature extraction, which needs to process each frame in the cycle, whereas GEI summarizes the information of all frames in the cycle and results in better performance than other gait-analysis features.
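The GEI feature described in this abstract is simply the pixel-wise average of the binary silhouette frames over one gait cycle, which is why a single image summarizes the whole cycle. A minimal sketch (illustrative; real pipelines first segment and size-normalize the silhouettes):

```python
def gait_energy_image(silhouettes):
    """Gait Energy Image: pixel-wise average of equally sized binary
    (0/1) silhouette frames over one gait cycle, given as 2D lists."""
    n = len(silhouettes)
    h, w = len(silhouettes[0]), len(silhouettes[0][0])
    gei = [[0.0] * w for _ in range(h)]
    for frame in silhouettes:
        for y in range(h):
            for x in range(w):
                # Each frame contributes 1/n of its silhouette mask.
                gei[y][x] += frame[y][x] / n
    return gei
```

Pixels that are part of the silhouette in every frame get value 1, never-covered pixels get 0, and intermediate values encode how the silhouette moves during the cycle.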
2001.02101
Homayoun Valafar
Chrisogonas O. Odhiambo, Casey A. Cole, Alaleh Torkjazi, Homayoun Valafar
State Transition Modeling of the Smoking Behavior using LSTM Recurrent Neural Networks
8 pages, CSCI 2019
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of sensors has pervaded everyday life in several applications including human activity monitoring, healthcare, and social networks. In this study, we focus on the use of smartwatch sensors to recognize smoking activity. More specifically, we have reformulated the previous work on smoking detection to include in-context recognition of smoking. Our reformulation of the smoking gesture as a state-transition model consisting of the mini-gestures hand-to-lip, hand-on-lip, and hand-off-lip has demonstrated improvement in detection rates nearing 100% using conventional neural networks. In addition, we have begun utilizing Long Short-Term Memory (LSTM) neural networks to allow for in-context detection of gestures with accuracy nearing 97%.
[ { "created": "Tue, 7 Jan 2020 15:06:28 GMT", "version": "v1" } ]
2020-01-08
[ [ "Odhiambo", "Chrisogonas O.", "" ], [ "Cole", "Casey A.", "" ], [ "Torkjazi", "Alaleh", "" ], [ "Valafar", "Homayoun", "" ] ]
The use of sensors has pervaded everyday life in several applications including human activity monitoring, healthcare, and social networks. In this study, we focus on the use of smartwatch sensors to recognize smoking activity. More specifically, we have reformulated the previous work on smoking detection to include in-context recognition of smoking. Our reformulation of the smoking gesture as a state-transition model consisting of the mini-gestures hand-to-lip, hand-on-lip, and hand-off-lip has demonstrated improvement in detection rates nearing 100% using conventional neural networks. In addition, we have begun utilizing Long Short-Term Memory (LSTM) neural networks to allow for in-context detection of gestures with accuracy nearing 97%.
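The state-transition idea in this abstract can be made concrete with a tiny state machine over the three mini-gesture labels. The states and transition rules below are an illustrative guess at such a model, not the authors' exact formulation (they classify the mini-gestures with neural networks; here the labels are taken as given):

```python
def count_puffs(mini_gestures):
    """Count completed smoking gestures from a stream of mini-gesture
    labels 'hand-to-lip', 'hand-on-lip', 'hand-off-lip'. A puff is
    counted only when the full to-lip -> on-lip -> off-lip sequence
    occurs, which filters out look-alike hand movements."""
    puffs = 0
    state = "idle"
    for g in mini_gestures:
        if state == "idle" and g == "hand-to-lip":
            state = "raising"
        elif state == "raising" and g == "hand-on-lip":
            state = "on-lip"
        elif state == "on-lip" and g == "hand-off-lip":
            puffs += 1          # full sequence observed
            state = "idle"
    return puffs
```

Decomposing the gesture this way is what enables "in-context" recognition: an isolated hand-to-lip movement (e.g. eating) never completes the sequence and is not counted.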
2312.12112
Nabeel Seedat
Nabeel Seedat, Nicolas Huynh, Boris van Breugel, Mihaela van der Schaar
Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes
Presented at the 41st International Conference on Machine Learning (ICML) 2024. *Seedat & Huynh contributed equally
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Machine Learning (ML) in low-data settings remains an underappreciated yet crucial problem. Hence, data augmentation methods to increase the sample size of datasets needed for ML are key to unlocking the transformative potential of ML in data-deprived regions and domains. Unfortunately, the limited training set constrains traditional tabular synthetic data generators in their ability to generate a large and diverse augmented dataset needed for ML tasks. To address this challenge, we introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime. However, not all the data generated by LLMs will improve downstream utility, as for any generative model. Consequently, we introduce a principled curation mechanism, leveraging learning dynamics, coupled with confidence and uncertainty metrics, to obtain a high-quality dataset. Empirically, on multiple real-world datasets, we demonstrate the superior performance of CLLM in the low-data regime compared to conventional generators. Additionally, we provide insights into the LLM generation and curation mechanism, shedding light on the features that enable them to output high-quality augmented datasets.
[ { "created": "Tue, 19 Dec 2023 12:34:46 GMT", "version": "v1" }, { "created": "Wed, 7 Feb 2024 19:00:35 GMT", "version": "v2" }, { "created": "Sun, 30 Jun 2024 12:48:18 GMT", "version": "v3" } ]
2024-07-02
[ [ "Seedat", "Nabeel", "" ], [ "Huynh", "Nicolas", "" ], [ "van Breugel", "Boris", "" ], [ "van der Schaar", "Mihaela", "" ] ]
Machine Learning (ML) in low-data settings remains an underappreciated yet crucial problem. Hence, data augmentation methods to increase the sample size of datasets needed for ML are key to unlocking the transformative potential of ML in data-deprived regions and domains. Unfortunately, the limited training set constrains traditional tabular synthetic data generators in their ability to generate a large and diverse augmented dataset needed for ML tasks. To address this challenge, we introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime. However, not all the data generated by LLMs will improve downstream utility, as for any generative model. Consequently, we introduce a principled curation mechanism, leveraging learning dynamics, coupled with confidence and uncertainty metrics, to obtain a high-quality dataset. Empirically, on multiple real-world datasets, we demonstrate the superior performance of CLLM in the low-data regime compared to conventional generators. Additionally, we provide insights into the LLM generation and curation mechanism, shedding light on the features that enable them to output high-quality augmented datasets.
2407.04263
Raula Gaikovina Kula Dr
Vittunyuta Maeprasart, Ali Ouni, Raula Gaikovina Kula
Drop it All or Pick it Up? How Developers Responded to the Log4JShell Vulnerability
Accepted to SERA24. arXiv admin note: text overlap with arXiv:2406.11362
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Although using third-party libraries has become prevalent in contemporary software development, developers often struggle to update their dependencies. Prior works acknowledge that the migration effort, priority, and other issues cause lags in the migration process. The common assumption is that developers should drop all other activities and prioritize fixing the vulnerability. Our objective is to understand developer behavior when facing high-risk vulnerabilities in their code. We explore the prolific, and possibly one-of-a-kind, case of Log4JShell, a vulnerability with the highest severity rating ever, which received widespread media attention. Using a mixed-method approach, we analyze 219 GitHub Pull Requests (PR) and 354 issues belonging to 53 Maven projects affected by the Log4JShell vulnerability. Our study confirms that developers show a quick response, taking from 5 to 6 days. However, instead of dropping everything, developer activities surprisingly tend to increase for all pending issues and PRs. Developer discussions involved either giving information (29.3\%) or seeking information (20.6\%), which is missing in existing support tools. Leveraging this possibly one-of-a-kind event, our insights open up a new line of research, causing us to rethink best practices and what developers need in order to efficiently fix vulnerabilities.
[ { "created": "Fri, 5 Jul 2024 05:33:10 GMT", "version": "v1" } ]
2024-07-08
[ [ "Maeprasart", "Vittunyuta", "" ], [ "Ouni", "Ali", "" ], [ "Kula", "Raula Gaikovina", "" ] ]
Although using third-party libraries has become prevalent in contemporary software development, developers often struggle to update their dependencies. Prior works acknowledge that the migration effort, priority, and other issues cause lags in the migration process. The common assumption is that developers should drop all other activities and prioritize fixing the vulnerability. Our objective is to understand developer behavior when facing high-risk vulnerabilities in their code. We explore the prolific, and possibly one-of-a-kind, case of Log4JShell, a vulnerability with the highest severity rating ever, which received widespread media attention. Using a mixed-method approach, we analyze 219 GitHub Pull Requests (PR) and 354 issues belonging to 53 Maven projects affected by the Log4JShell vulnerability. Our study confirms that developers show a quick response, taking from 5 to 6 days. However, instead of dropping everything, developer activities surprisingly tend to increase for all pending issues and PRs. Developer discussions involved either giving information (29.3\%) or seeking information (20.6\%), which is missing in existing support tools. Leveraging this possibly one-of-a-kind event, our insights open up a new line of research, causing us to rethink best practices and what developers need in order to efficiently fix vulnerabilities.
1609.08097
Rebecca Sharp
Rebecca Sharp, Mihai Surdeanu, Peter Jansen, Peter Clark, and Michael Hammond
Creating Causal Embeddings for Question Answering with Minimal Supervision
To appear in EMNLP 2016
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using general-purpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings. With causality as a use case, we implement this insight in three steps. First, we generate causal embeddings cost-effectively by bootstrapping cause-effect pairs extracted from free text using a small set of seed patterns. Second, we train dedicated embeddings over this data, by using task-specific contexts, i.e., the context of a cause is its effect. Finally, we extend a state-of-the-art reranking approach for QA to incorporate these causal embeddings. We evaluate the causal embedding models both directly with a causal implication task, and indirectly, in a downstream causal QA task using data from Yahoo! Answers. We show that explicitly modeling causality improves performance in both tasks. In the QA task our best model achieves 37.3% P@1, significantly outperforming a strong baseline by 7.7% (relative).
[ { "created": "Mon, 26 Sep 2016 17:50:15 GMT", "version": "v1" } ]
2016-09-27
[ [ "Sharp", "Rebecca", "" ], [ "Surdeanu", "Mihai", "" ], [ "Jansen", "Peter", "" ], [ "Clark", "Peter", "" ], [ "Hammond", "Michael", "" ] ]
A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using general-purpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings. With causality as a use case, we implement this insight in three steps. First, we generate causal embeddings cost-effectively by bootstrapping cause-effect pairs extracted from free text using a small set of seed patterns. Second, we train dedicated embeddings over this data, by using task-specific contexts, i.e., the context of a cause is its effect. Finally, we extend a state-of-the-art reranking approach for QA to incorporate these causal embeddings. We evaluate the causal embedding models both directly with a causal implication task, and indirectly, in a downstream causal QA task using data from Yahoo! Answers. We show that explicitly modeling causality improves performance in both tasks. In the QA task our best model achieves 37.3% P@1, significantly outperforming a strong baseline by 7.7% (relative).
1812.08044
Gabriel Marzinotto
Gabriel Marzinotto (TALEP), G\'eraldine Damnati (FTR\&D), Frederic Bechet (LIF, TALEP)
FrameNet automatic analysis : a study on a French corpus of encyclopedic texts
in French
TALN 2017, 2017, Orl{\'e}ans, France
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents an automatic frame analysis system evaluated on a corpus of French encyclopedic history texts annotated according to the FrameNet formalism. The chosen approach relies on an integrated sequence labeling model which jointly optimizes frame identification and semantic role segmentation and identification. The purpose of this study is to analyze the task complexity from several dimensions. Hence we provide detailed evaluations from a feature selection point of view and from the data point of view.
[ { "created": "Wed, 19 Dec 2018 15:59:31 GMT", "version": "v1" } ]
2018-12-20
[ [ "Marzinotto", "Gabriel", "", "TALEP" ], [ "Damnati", "Géraldine", "", "FTR\\&D" ], [ "Bechet", "Frederic", "", "LIF, TALEP" ] ]
This article presents an automatic frame analysis system evaluated on a corpus of French encyclopedic history texts annotated according to the FrameNet formalism. The chosen approach relies on an integrated sequence labeling model which jointly optimizes frame identification and semantic role segmentation and identification. The purpose of this study is to analyze the task complexity from several dimensions. Hence we provide detailed evaluations from a feature selection point of view and from the data point of view.
2006.01964
Cenek Albl
Cenek Albl, Zuzana Kukelova, Viktor Larsson, Tomas Pajdla, Konrad Schindler
From two rolling shutters to one global shutter
CVPR 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most consumer cameras are equipped with electronic rolling shutter, leading to image distortions when the camera moves during image capture. We explore a surprisingly simple camera configuration that makes it possible to undo the rolling shutter distortion: two cameras mounted to have different rolling shutter directions. Such a setup is easy and cheap to build and it possesses the geometric constraints needed to correct rolling shutter distortion using only a sparse set of point correspondences between the two images. We derive equations that describe the underlying geometry for general and special motions and present an efficient method for finding their solutions. Our synthetic and real experiments demonstrate that our approach is able to remove large rolling shutter distortions of all types without relying on any specific scene structure.
[ { "created": "Tue, 2 Jun 2020 22:18:43 GMT", "version": "v1" } ]
2020-06-04
[ [ "Albl", "Cenek", "" ], [ "Kukelova", "Zuzana", "" ], [ "Larsson", "Viktor", "" ], [ "Pajdla", "Tomas", "" ], [ "Schindler", "Konrad", "" ] ]
Most consumer cameras are equipped with electronic rolling shutter, leading to image distortions when the camera moves during image capture. We explore a surprisingly simple camera configuration that makes it possible to undo the rolling shutter distortion: two cameras mounted to have different rolling shutter directions. Such a setup is easy and cheap to build and it possesses the geometric constraints needed to correct rolling shutter distortion using only a sparse set of point correspondences between the two images. We derive equations that describe the underlying geometry for general and special motions and present an efficient method for finding their solutions. Our synthetic and real experiments demonstrate that our approach is able to remove large rolling shutter distortions of all types without relying on any specific scene structure.
2212.05056
Sergi Bray
Sergi D. Bray (1), Shane D. Johnson (1), Bennett Kleinberg (2) ((1) University College London, (2) Tilburg University)
Testing Human Ability To Detect Deepfake Images of Human Faces
null
null
null
null
cs.HC cs.CR cs.CV cs.CY
http://creativecommons.org/licenses/by/4.0/
Deepfakes are computationally-created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, comprising a topic of interest to various aspects of cybersecurity and cybersafety. In 2020 a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (StyleGAN2:FFHQ) from nondeepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group, and 3 assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked if each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance and none of the interventions significantly improved this. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals participants consistently found certain images harder to label correctly, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, this accuracy across images ranged quite evenly between 85% and 30%, with an accuracy of below 50% for one in every five images. We interpret the findings as suggesting that there is a need for an urgent call to action to address this threat.
[ { "created": "Wed, 7 Dec 2022 14:48:25 GMT", "version": "v1" }, { "created": "Wed, 14 Dec 2022 13:53:10 GMT", "version": "v2" }, { "created": "Thu, 25 May 2023 15:07:19 GMT", "version": "v3" } ]
2023-05-26
[ [ "Bray", "Sergi D.", "" ], [ "Johnson", "Shane D.", "" ], [ "Kleinberg", "Bennett", "" ] ]
Deepfakes are computationally-created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, comprising a topic of interest to various aspects of cybersecurity and cybersafety. In 2020 a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (StyleGAN2:FFHQ) from nondeepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group, and 3 assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked if each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance and none of the interventions significantly improved this. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals participants consistently found certain images harder to label correctly, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, this accuracy across images ranged quite evenly between 85% and 30%, with an accuracy of below 50% for one in every five images. We interpret the findings as suggesting that there is a need for an urgent call to action to address this threat.
1906.00108
Gautham Krishna Gudur
Gautham Krishna Gudur, Prahalathan Sundaramoorthy and Venkatesh Umaashankar
ActiveHARNet: Towards On-Device Deep Bayesian Active Learning for Human Activity Recognition
6 pages, 5 figures, ACM MobiSys 2019 (3rd International Workshop on Embedded and Mobile Deep Learning - EMDL '19)
null
null
null
cs.LG cs.HC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various health-care applications such as assisted living, fall detection etc., require modeling of user behavior through Human Activity Recognition (HAR). HAR using mobile- and wearable-based deep learning algorithms has been on the rise owing to the advancements in pervasive computing. However, there are two other challenges that need to be addressed: first, the deep learning model should support on-device incremental training (model updating) from real-time incoming data points to learn user behavior over time, while also being resource-friendly; second, a suitable ground truthing technique (like Active Learning) should help establish labels on-the-fly while also selecting only the most informative data points to query from an oracle. Hence, in this paper, we propose ActiveHARNet, a resource-efficient deep ensembled model which supports on-device Incremental Learning and inference, with capabilities to represent model uncertainties through approximations in Bayesian Neural Networks using dropout. This is combined with suitable acquisition functions for active learning. Empirical results on two publicly available wrist-worn HAR and fall detection datasets indicate that ActiveHARNet achieves a considerable efficiency boost during inference across different users, with a substantially low number of acquired pool points (at least 60% reduction) during incremental learning on both datasets experimented with various acquisition functions, thus demonstrating deployment and Incremental Learning feasibility.
[ { "created": "Fri, 31 May 2019 22:28:40 GMT", "version": "v1" } ]
2019-06-04
[ [ "Gudur", "Gautham Krishna", "" ], [ "Sundaramoorthy", "Prahalathan", "" ], [ "Umaashankar", "Venkatesh", "" ] ]
Various health-care applications such as assisted living, fall detection etc., require modeling of user behavior through Human Activity Recognition (HAR). HAR using mobile- and wearable-based deep learning algorithms has been on the rise owing to the advancements in pervasive computing. However, there are two other challenges that need to be addressed: first, the deep learning model should support on-device incremental training (model updating) from real-time incoming data points to learn user behavior over time, while also being resource-friendly; second, a suitable ground truthing technique (like Active Learning) should help establish labels on-the-fly while also selecting only the most informative data points to query from an oracle. Hence, in this paper, we propose ActiveHARNet, a resource-efficient deep ensembled model which supports on-device Incremental Learning and inference, with capabilities to represent model uncertainties through approximations in Bayesian Neural Networks using dropout. This is combined with suitable acquisition functions for active learning. Empirical results on two publicly available wrist-worn HAR and fall detection datasets indicate that ActiveHARNet achieves a considerable efficiency boost during inference across different users, with a substantially low number of acquired pool points (at least 60% reduction) during incremental learning on both datasets experimented with various acquisition functions, thus demonstrating deployment and Incremental Learning feasibility.
1001.3263
Bernd Schuh
Bernd R. Schuh
A Real World Mechanism for Testing Satisfiability in Polynomial Time
null
null
null
null
cs.LO cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Whether the satisfiability of any formula F of propositional calculus can be determined in polynomial time is an open question. I propose a simple procedure based on some real world mechanisms to tackle this problem. The main result is the blueprint for a machine which is able to test any formula in conjunctive normal form (CNF) for satisfiability in linear time. The device uses light and some electrochemical properties to function. It adapts itself to the scope of the problem without growing exponentially in mass with the size of the formula. It requires infinite precision in its components instead.
[ { "created": "Tue, 19 Jan 2010 11:18:54 GMT", "version": "v1" } ]
2010-01-20
[ [ "Schuh", "Bernd R.", "" ] ]
Whether the satisfiability of any formula F of propositional calculus can be determined in polynomial time is an open question. I propose a simple procedure based on some real world mechanisms to tackle this problem. The main result is the blueprint for a machine which is able to test any formula in conjunctive normal form (CNF) for satisfiability in linear time. The device uses light and some electrochemical properties to function. It adapts itself to the scope of the problem without growing exponentially in mass with the size of the formula. It requires infinite precision in its components instead.
2310.00956
Murdoch Gabbay
Murdoch Gabbay and Giuliano Losa
Semiframes: algebras of heterogeneous consensus
See also arXiv:2303.09287, which takes a point-set approach ("point-set semitopologies"). This update updates the license to CC-BY 4.0
null
null
null
cs.LO math.CT math.GN
http://creativecommons.org/licenses/by/4.0/
Semitopologies model consensus in distributed systems by equating the notion of a quorum -- a set of participants sufficient to make local progress -- with that of an open set. This yields a topology-like theory of consensus, but semitopologies generalise topologies, since the intersection of two quorums need not necessarily be a quorum. The semitopological model of consensus is naturally heterogeneous and local, just like topologies can be heterogeneous and local, and for the same reasons: points may have different quorums and there is no restriction that open sets / quorums be uniformly generated (e.g. open sets can be something other than two-thirds majorities of the points in the space). Semiframes are an algebraic abstraction of semitopologies. They are to semitopologies as frames are to topologies. We give a notion of semifilter, which plays a role analogous to filters, and show how to build a semiframe out of the open sets of a semitopology, and a semitopology out of the semifilters of a semiframe. We define suitable notions of category and morphism and prove a categorical duality between (sober) semiframes and (spatial) semitopologies, and investigate well-behavedness properties on semitopologies and semiframes across the duality. Surprisingly, the structure of semiframes is not what one might initially expect just from looking at semitopologies, and the canonical structure required for the duality result -- a compatibility relation *, generalising set intersection -- is also canonical for expressing well-behavedness properties. Overall, we deliver a new categorical, algebraic, abstract framework within which to study consensus on distributed systems, and which is also simply interesting to consider as a mathematical theory in its own right.
[ { "created": "Mon, 2 Oct 2023 07:48:55 GMT", "version": "v1" }, { "created": "Wed, 29 May 2024 11:13:27 GMT", "version": "v2" } ]
2024-05-30
[ [ "Gabbay", "Murdoch", "" ], [ "Losa", "Giuliano", "" ] ]
Semitopologies model consensus in distributed systems by equating the notion of a quorum -- a set of participants sufficient to make local progress -- with that of an open set. This yields a topology-like theory of consensus, but semitopologies generalise topologies, since the intersection of two quorums need not necessarily be a quorum. The semitopological model of consensus is naturally heterogeneous and local, just like topologies can be heterogeneous and local, and for the same reasons: points may have different quorums and there is no restriction that open sets / quorums be uniformly generated (e.g. open sets can be something other than two-thirds majorities of the points in the space). Semiframes are an algebraic abstraction of semitopologies. They are to semitopologies as frames are to topologies. We give a notion of semifilter, which plays a role analogous to filters, and show how to build a semiframe out of the open sets of a semitopology, and a semitopology out of the semifilters of a semiframe. We define suitable notions of category and morphism and prove a categorical duality between (sober) semiframes and (spatial) semitopologies, and investigate well-behavedness properties on semitopologies and semiframes across the duality. Surprisingly, the structure of semiframes is not what one might initially expect just from looking at semitopologies, and the canonical structure required for the duality result -- a compatibility relation *, generalising set intersection -- is also canonical for expressing well-behavedness properties. Overall, we deliver a new categorical, algebraic, abstract framework within which to study consensus on distributed systems, and which is also simply interesting to consider as a mathematical theory in its own right.
2406.10991
Tianhua Zhang
Tianhua Zhang, Kun Li, Hongyin Luo, Xixin Wu, James Glass, Helen Meng
Adaptive Query Rewriting: Aligning Rewriters through Marginal Probability of Conversational Answers
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Query rewriting is a crucial technique for passage retrieval in open-domain conversational question answering (CQA). It decontextualizes conversational queries into self-contained questions suitable for off-the-shelf retrievers. Existing methods attempt to incorporate the retriever's preference during the training of rewriting models. However, these approaches typically rely on extensive annotations such as in-domain rewrites and/or relevant passage labels, limiting the models' generalization and adaptation capabilities. In this paper, we introduce AdaQR ($\textbf{Ada}$ptive $\textbf{Q}$uery $\textbf{R}$ewriting), a framework for training query rewriting models with limited rewrite annotations from seed datasets and no passage labels at all. Our approach begins by fine-tuning compact large language models using only ~$10\%$ of rewrite annotations from the seed dataset training split. The models are then utilized to generate rewrite candidates for each query instance. A novel approach is then proposed to assess the retriever's preference for these candidates by the probability of answers conditioned on the conversational query by marginalizing the Top-$K$ passages. This serves as the reward for optimizing the rewriter further using Direct Preference Optimization (DPO), a process free of rewrite and retrieval annotations. Experimental results on four open-domain CQA datasets demonstrate that AdaQR not only enhances the in-domain capabilities of the rewriter with limited annotation requirement, but also adapts effectively to out-of-domain datasets.
[ { "created": "Sun, 16 Jun 2024 16:09:05 GMT", "version": "v1" } ]
2024-06-18
[ [ "Zhang", "Tianhua", "" ], [ "Li", "Kun", "" ], [ "Luo", "Hongyin", "" ], [ "Wu", "Xixin", "" ], [ "Glass", "James", "" ], [ "Meng", "Helen", "" ] ]
Query rewriting is a crucial technique for passage retrieval in open-domain conversational question answering (CQA). It decontextualizes conversational queries into self-contained questions suitable for off-the-shelf retrievers. Existing methods attempt to incorporate the retriever's preference during the training of rewriting models. However, these approaches typically rely on extensive annotations such as in-domain rewrites and/or relevant passage labels, limiting the models' generalization and adaptation capabilities. In this paper, we introduce AdaQR ($\textbf{Ada}$ptive $\textbf{Q}$uery $\textbf{R}$ewriting), a framework for training query rewriting models with limited rewrite annotations from seed datasets and no passage labels at all. Our approach begins by fine-tuning compact large language models using only ~$10\%$ of rewrite annotations from the seed dataset training split. The models are then utilized to generate rewrite candidates for each query instance. A novel approach is then proposed to assess the retriever's preference for these candidates by the probability of answers conditioned on the conversational query by marginalizing the Top-$K$ passages. This serves as the reward for optimizing the rewriter further using Direct Preference Optimization (DPO), a process free of rewrite and retrieval annotations. Experimental results on four open-domain CQA datasets demonstrate that AdaQR not only enhances the in-domain capabilities of the rewriter with limited annotation requirement, but also adapts effectively to out-of-domain datasets.
1712.08345
EPTCS
Timo Kehrer (Humboldt-University of Berlin), Alice Miller (University of Glasgow)
Proceedings Third Workshop on Graphs as Models
null
EPTCS 263, 2017
10.4204/EPTCS.263
null
cs.LO cs.DC cs.DS cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graphs are used as models in many areas of computer science and computer engineering. For example, graphs are used to represent syntax, control and data flow, dependency, state spaces, models such as UML and other types of domain-specific models, and social network graphs. In all of these examples, the graph serves as an intuitive yet mathematically precise foundation for many purposes, both in theory building as well as in practical applications. Graph-based models serve as an abstract communication medium and are used to describe various concepts and phenomena. Moreover, once such graph-based models are constructed, they can be analyzed and transformed to verify the correctness of static and dynamic properties, to discover new properties, to deeply study a particular domain of interest or to produce new equivalent and/or optimized versions of graph-based models. The Graphs as Models (GaM) workshop series combines the strengths of two pre-existing workshop series: GT-VMT (Graph Transformation and Visual Modelling Techniques) and GRAPHITE (Graph Inspection and Traversal Engineering), but also solicits research from other related areas, such as social network analysis. GaM offers a platform for exchanging new ideas and results for active researchers in these areas, with a particular aim of boosting inter- and transdisciplinary research exploiting new applications of graphs as models in any area of computational science. This year (2017), the third edition of the GaM workshop was co-located with the European Joint Conferences on Theory and Practice of Software 2017 (ETAPS'17), held in Uppsala, Sweden.
[ { "created": "Fri, 22 Dec 2017 08:48:17 GMT", "version": "v1" } ]
2017-12-25
[ [ "Kehrer", "Timo", "", "Humboldt-University of Berlin" ], [ "Miller", "Alice", "", "University\n of Glasgow" ] ]
Graphs are used as models in many areas of computer science and computer engineering. For example, graphs are used to represent syntax, control and data flow, dependency, state spaces, models such as UML and other types of domain-specific models, and social network graphs. In all of these examples, the graph serves as an intuitive yet mathematically precise foundation for many purposes, both in theory building as well as in practical applications. Graph-based models serve as an abstract communication medium and are used to describe various concepts and phenomena. Moreover, once such graph-based models are constructed, they can be analyzed and transformed to verify the correctness of static and dynamic properties, to discover new properties, to deeply study a particular domain of interest or to produce new equivalent and/or optimized versions of graph-based models. The Graphs as Models (GaM) workshop series combines the strengths of two pre-existing workshop series: GT-VMT (Graph Transformation and Visual Modelling Techniques) and GRAPHITE (Graph Inspection and Traversal Engineering), but also solicits research from other related areas, such as social network analysis. GaM offers a platform for exchanging new ideas and results for active researchers in these areas, with a particular aim of boosting inter- and transdisciplinary research exploiting new applications of graphs as models in any area of computational science. This year (2017), the third edition of the GaM workshop was co-located with the European Joint Conferences on Theory and Practice of Software 2017 (ETAPS'17), held in Uppsala, Sweden.
2211.13495
Zeyu Shangguan
Zeyu Shangguan, Lian Huai, Tong Liu, Xingqun Jiang
Few-shot Object Detection with Refined Contrastive Learning
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the scarcity of sampling data in reality, few-shot object detection (FSOD) has drawn more and more attention because of its ability to quickly train new detection concepts with less data. However, misidentifications still occur due to the difficulty of distinguishing confusable classes. We also notice that the high standard deviation of average precision reveals inconsistent detection performance. To this end, we propose a novel FSOD method with Refined Contrastive Learning (FSRC). A pre-determination component is introduced to identify the Resemblance Group from the novel classes, i.e., the group that contains confusable classes. Afterwards, Refined Contrastive Learning (RCL) is performed specifically on this group of classes in order to increase the inter-class distances among them. At the same time, the detection results are distributed more uniformly, which further improves performance. Experimental results based on the PASCAL VOC and COCO datasets demonstrate that our proposed method outperforms the current state of the art.
[ { "created": "Thu, 24 Nov 2022 09:34:20 GMT", "version": "v1" }, { "created": "Thu, 21 Dec 2023 11:01:09 GMT", "version": "v2" } ]
2023-12-22
[ [ "Shangguan", "Zeyu", "" ], [ "Huai", "Lian", "" ], [ "Liu", "Tong", "" ], [ "Jiang", "Xingqun", "" ] ]
Due to the scarcity of sampling data in reality, few-shot object detection (FSOD) has drawn more and more attention because of its ability to quickly train new detection concepts with less data. However, misidentifications still occur due to the difficulty of distinguishing confusable classes. We also notice that the high standard deviation of average precision reveals inconsistent detection performance. To this end, we propose a novel FSOD method with Refined Contrastive Learning (FSRC). A pre-determination component is introduced to identify the Resemblance Group from the novel classes, i.e., the group that contains confusable classes. Afterwards, Refined Contrastive Learning (RCL) is performed specifically on this group of classes in order to increase the inter-class distances among them. At the same time, the detection results are distributed more uniformly, which further improves performance. Experimental results based on the PASCAL VOC and COCO datasets demonstrate that our proposed method outperforms the current state of the art.
2404.17421
Laura Bozzelli
Massimo Benerecetti, Laura Bozzelli, Fabio Mogavero, Adriano Peron
Automata-Theoretic Characterisations of Branching-Time Temporal Logics
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Characterisation theorems serve as important tools in model theory and can be used to assess and compare the expressive power of temporal languages used for the specification and verification of properties in formal methods. While complete connections have been established for the linear-time case between temporal logics, predicate logics, algebraic models, and automata, the situation in the branching-time case remains considerably more fragmented. In this work, we provide an automata-theoretic characterisation of some important branching-time temporal logics, namely CTL* and ECTL* interpreted on arbitrary-branching trees, by identifying two variants of Hesitant Tree Automata that are proved equivalent to those logics. The characterisations also apply to Monadic Path Logic and the bisimulation-invariant fragment of Monadic Chain Logic, again interpreted over trees. These results widen the characterisation landscape of the branching-time case and solve a forty-year-old open question.
[ { "created": "Fri, 26 Apr 2024 13:58:19 GMT", "version": "v1" }, { "created": "Mon, 29 Apr 2024 02:40:03 GMT", "version": "v2" } ]
2024-04-30
[ [ "Benerecetti", "Massimo", "" ], [ "Bozzelli", "Laura", "" ], [ "Mogavero", "Fabio", "" ], [ "Peron", "Adriano", "" ] ]
Characterisation theorems serve as important tools in model theory and can be used to assess and compare the expressive power of temporal languages used for the specification and verification of properties in formal methods. While complete connections have been established for the linear-time case between temporal logics, predicate logics, algebraic models, and automata, the situation in the branching-time case remains considerably more fragmented. In this work, we provide an automata-theoretic characterisation of some important branching-time temporal logics, namely CTL* and ECTL* interpreted on arbitrary-branching trees, by identifying two variants of Hesitant Tree Automata that are proved equivalent to those logics. The characterisations also apply to Monadic Path Logic and the bisimulation-invariant fragment of Monadic Chain Logic, again interpreted over trees. These results widen the characterisation landscape of the branching-time case and solve a forty-year-old open question.
2002.06517
Kyungsu Kim
Hyungjun Kim, Kyungsu Kim, Jinseok Kim, Jae-Joon Kim
BinaryDuo: Reducing Gradient Mismatch in Binary Activation Network by Coupling Binary Activations
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Binary Neural Networks (BNNs) have been garnering interest thanks to their compute cost reduction and memory savings. However, BNNs suffer from performance degradation mainly due to the gradient mismatch caused by binarizing activations. Previous works tried to address the gradient mismatch problem by reducing the discrepancy between the activation function used at the forward pass and its differentiable approximation used at the backward pass, which is an indirect measure. In this work, we use the gradient of the smoothed loss function to better estimate the gradient mismatch in quantized neural networks. Analysis using the gradient mismatch estimator indicates that using higher precision for the activation is more effective than modifying the differentiable approximation of the activation function. Based on this observation, we propose a new training scheme for binary activation networks called BinaryDuo, in which two binary activations are coupled into a ternary activation during training. Experimental results show that BinaryDuo outperforms state-of-the-art BNNs on various benchmarks with the same amount of parameters and computing cost.
[ { "created": "Sun, 16 Feb 2020 06:18:53 GMT", "version": "v1" } ]
2020-02-18
[ [ "Kim", "Hyungjun", "" ], [ "Kim", "Kyungsu", "" ], [ "Kim", "Jinseok", "" ], [ "Kim", "Jae-Joon", "" ] ]
Binary Neural Networks (BNNs) have been garnering interest thanks to their compute cost reduction and memory savings. However, BNNs suffer from performance degradation mainly due to the gradient mismatch caused by binarizing activations. Previous works tried to address the gradient mismatch problem by reducing the discrepancy between the activation function used at the forward pass and its differentiable approximation used at the backward pass, which is an indirect measure. In this work, we use the gradient of the smoothed loss function to better estimate the gradient mismatch in quantized neural networks. Analysis using the gradient mismatch estimator indicates that using higher precision for the activation is more effective than modifying the differentiable approximation of the activation function. Based on this observation, we propose a new training scheme for binary activation networks called BinaryDuo, in which two binary activations are coupled into a ternary activation during training. Experimental results show that BinaryDuo outperforms state-of-the-art BNNs on various benchmarks with the same amount of parameters and computing cost.
2012.03891
Amalie Trewartha
Amalie Trewartha, John Dagdelen, Haoyan Huo, Kevin Cruse, Zheren Wang, Tanjin He, Akshay Subramanian, Yuxing Fei, Benjamin Justus, Kristin Persson, Gerbrand Ceder
COVIDScholar: An automated COVID-19 research aggregation and analysis platform
null
null
null
null
cs.DL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ongoing COVID-19 pandemic has had far-reaching effects throughout society, and science is no exception. The scale, speed, and breadth of the scientific community's COVID-19 response has led to the emergence of new research literature on a remarkable scale -- as of October 2020, over 81,000 COVID-19 related scientific papers have been released, at a rate of over 250 per day. This has created a challenge to traditional methods of engagement with the research literature; the volume of new research is far beyond the ability of any human to read, and the urgency of response has led to an increasingly prominent role for pre-print servers and a diffusion of relevant research across sources. These factors have created a need for new tools to change the way scientific literature is disseminated. COVIDScholar is a knowledge portal designed with the unique needs of the COVID-19 research community in mind, utilizing NLP to aid researchers in synthesizing the information spread across thousands of emergent research articles, patents, and clinical trials into actionable insights and new knowledge. The search interface for this corpus, https://covidscholar.org, now serves over 2000 unique users weekly. We also present an analysis of trends in COVID-19 research over the course of 2020.
[ { "created": "Mon, 7 Dec 2020 18:17:11 GMT", "version": "v1" } ]
2020-12-08
[ [ "Trewartha", "Amalie", "" ], [ "Dagdelen", "John", "" ], [ "Huo", "Haoyan", "" ], [ "Cruse", "Kevin", "" ], [ "Wang", "Zheren", "" ], [ "He", "Tanjin", "" ], [ "Subramanian", "Akshay", "" ], [ "Fei", "Yuxing", "" ], [ "Justus", "Benjamin", "" ], [ "Persson", "Kristin", "" ], [ "Ceder", "Gerbrand", "" ] ]
The ongoing COVID-19 pandemic has had far-reaching effects throughout society, and science is no exception. The scale, speed, and breadth of the scientific community's COVID-19 response has led to the emergence of new research literature on a remarkable scale -- as of October 2020, over 81,000 COVID-19 related scientific papers have been released, at a rate of over 250 per day. This has created a challenge to traditional methods of engagement with the research literature; the volume of new research is far beyond the ability of any human to read, and the urgency of response has led to an increasingly prominent role for pre-print servers and a diffusion of relevant research across sources. These factors have created a need for new tools to change the way scientific literature is disseminated. COVIDScholar is a knowledge portal designed with the unique needs of the COVID-19 research community in mind, utilizing NLP to aid researchers in synthesizing the information spread across thousands of emergent research articles, patents, and clinical trials into actionable insights and new knowledge. The search interface for this corpus, https://covidscholar.org, now serves over 2000 unique users weekly. We also present an analysis of trends in COVID-19 research over the course of 2020.
2303.05336
Nicola Rizzo
Nicola Rizzo, Massimo Equi, Tuukka Norri, Veli M\"akinen
Elastic Founder Graphs Improved and Enhanced
47 pages, 10 figures. Extension of conference papers IWOCA 2022 (https://doi.org/10.1007/978-3-031-06678-8_35 , preprint arXiv:2201.06492), CPM 2022 (https://doi.org/10.4230/LIPIcs.CPM.2022.19 ), and of some results from PhD dissertation projects of Massimo Equi (http://urn.fi/URN:ISBN:978-951-51-8217-3 ) and Tuukka Norri (http://urn.fi/URN:ISBN:978-951-51-8215-9 )
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Indexing labeled graphs for pattern matching is a central challenge of pangenomics. Equi et al. (Algorithmica, 2022) developed the Elastic Founder Graph ($\mathsf{EFG}$) representing an alignment of $m$ sequences of length $n$, drawn from alphabet $\Sigma$ plus the special gap character: the paths spell the original sequences or their recombination. By enforcing the semi-repeat-free property, the $\mathsf{EFG}$ admits a polynomial-space index for linear-time pattern matching, breaking through the conditional lower bounds on indexing labeled graphs (Equi et al., SOFSEM 2021). In this work we improve the space of the $\mathsf{EFG}$ index answering pattern matching queries in linear time, from linear in the length of all strings spelled by three consecutive node labels, to linear in the size of the edge labels. Then, we develop linear-time construction algorithms optimizing for different metrics: we improve the existing linearithmic construction algorithms to $O(mn)$, by solving the novel exclusive ancestor set problem on trees; we propose, for the simplified gapless setting, an $O(mn)$-time solution minimizing the maximum block height, that we generalize by substituting block height with prefix-aware height. Finally, to show the versatility of the framework, we develop a BWT-based $\mathsf{EFG}$ index and study how to encode and perform document listing queries on a set of paths of the graphs, reporting which paths present a given pattern as a substring. We propose the $\mathsf{EFG}$ framework as an improved and enhanced version of the framework for the gapless setting, along with construction methods that are valid in any setting concerned with the segmentation of aligned sequences.
[ { "created": "Thu, 9 Mar 2023 15:31:42 GMT", "version": "v1" } ]
2023-03-10
[ [ "Rizzo", "Nicola", "" ], [ "Equi", "Massimo", "" ], [ "Norri", "Tuukka", "" ], [ "Mäkinen", "Veli", "" ] ]
Indexing labeled graphs for pattern matching is a central challenge of pangenomics. Equi et al. (Algorithmica, 2022) developed the Elastic Founder Graph ($\mathsf{EFG}$) representing an alignment of $m$ sequences of length $n$, drawn from alphabet $\Sigma$ plus the special gap character: the paths spell the original sequences or their recombination. By enforcing the semi-repeat-free property, the $\mathsf{EFG}$ admits a polynomial-space index for linear-time pattern matching, breaking through the conditional lower bounds on indexing labeled graphs (Equi et al., SOFSEM 2021). In this work we improve the space of the $\mathsf{EFG}$ index answering pattern matching queries in linear time, from linear in the length of all strings spelled by three consecutive node labels, to linear in the size of the edge labels. Then, we develop linear-time construction algorithms optimizing for different metrics: we improve the existing linearithmic construction algorithms to $O(mn)$, by solving the novel exclusive ancestor set problem on trees; we propose, for the simplified gapless setting, an $O(mn)$-time solution minimizing the maximum block height, that we generalize by substituting block height with prefix-aware height. Finally, to show the versatility of the framework, we develop a BWT-based $\mathsf{EFG}$ index and study how to encode and perform document listing queries on a set of paths of the graphs, reporting which paths present a given pattern as a substring. We propose the $\mathsf{EFG}$ framework as an improved and enhanced version of the framework for the gapless setting, along with construction methods that are valid in any setting concerned with the segmentation of aligned sequences.
1806.02305
Samuel Kadoury
Marc-Antoine Boucher, Sarah Lippe, Amelie Damphousse, Ramy El-Jalbout, Samuel Kadoury
Dilatation of Lateral Ventricles with Brain Volumes in Infants with 3D Transfontanelle US
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ultrasound (US) can be used to assess brain development in newborns, as MRI is challenging due to immobilization issues, and may require sedation. Dilatation of the lateral ventricles in the brain is a risk factor for poorer neurodevelopment outcomes in infants. Hence, 3D US has the ability to assess the volume of the lateral ventricles similar to clinically standard MRI, but manual segmentation is time consuming. The objective of this study is to develop an approach quantifying the ratio of lateral ventricular dilatation with respect to total brain volume using 3D US, which can assess the severity of macrocephaly. Automatic segmentation of the lateral ventricles is achieved with a multi-atlas deformable registration approach using locally linear correlation metrics for US-MRI fusion, followed by a refinement step using deformable mesh models. Total brain volume is estimated using a 3D ellipsoid modeling approach. Validation was performed on a cohort of 12 infants, ranging from 2 to 8.5 months old, where 3D US and MRI were used to compare brain volumes and segmented lateral ventricles. Automatically extracted volumes from 3D US show a high correlation and no statistically significant difference when compared to ground truth measurements. The difference in volume ratios was 6.0 +/- 4.8% compared to MRI, while lateral ventricular segmentation yielded a mean Dice coefficient of 70.8 +/- 3.6% and a mean absolute distance (MAD) of 0.88 +/- 0.2mm, demonstrating the clinical benefit of this tool in paediatric ultrasound.
[ { "created": "Wed, 6 Jun 2018 17:10:06 GMT", "version": "v1" } ]
2018-06-07
[ [ "Boucher", "Marc-Antoine", "" ], [ "Lippe", "Sarah", "" ], [ "Damphousse", "Amelie", "" ], [ "El-Jalbout", "Ramy", "" ], [ "Kadoury", "Samuel", "" ] ]
Ultrasound (US) can be used to assess brain development in newborns, as MRI is challenging due to immobilization issues, and may require sedation. Dilatation of the lateral ventricles in the brain is a risk factor for poorer neurodevelopment outcomes in infants. Hence, 3D US has the ability to assess the volume of the lateral ventricles similar to clinically standard MRI, but manual segmentation is time consuming. The objective of this study is to develop an approach quantifying the ratio of lateral ventricular dilatation with respect to total brain volume using 3D US, which can assess the severity of macrocephaly. Automatic segmentation of the lateral ventricles is achieved with a multi-atlas deformable registration approach using locally linear correlation metrics for US-MRI fusion, followed by a refinement step using deformable mesh models. Total brain volume is estimated using a 3D ellipsoid modeling approach. Validation was performed on a cohort of 12 infants, ranging from 2 to 8.5 months old, where 3D US and MRI were used to compare brain volumes and segmented lateral ventricles. Automatically extracted volumes from 3D US show a high correlation and no statistically significant difference when compared to ground truth measurements. The difference in volume ratios was 6.0 +/- 4.8% compared to MRI, while lateral ventricular segmentation yielded a mean Dice coefficient of 70.8 +/- 3.6% and a mean absolute distance (MAD) of 0.88 +/- 0.2mm, demonstrating the clinical benefit of this tool in paediatric ultrasound.
2203.08332
Liang Peng
Liang Peng, Senbo Yan, Boxi Wu, Zheng Yang, Xiaofei He, Deng Cai
WeakM3D: Towards Weakly Supervised Monocular 3D Object Detection
Accepted by ICLR 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monocular 3D object detection is one of the most challenging tasks in 3D scene understanding. Due to the ill-posed nature of monocular imagery, existing monocular 3D detection methods highly rely on training with the manually annotated 3D box labels on the LiDAR point clouds. This annotation process is very laborious and expensive. To dispense with the reliance on 3D box labels, in this paper we explore the weakly supervised monocular 3D detection. Specifically, we first detect 2D boxes on the image. Then, we adopt the generated 2D boxes to select corresponding RoI LiDAR points as the weak supervision. Eventually, we adopt a network to predict 3D boxes which can tightly align with associated RoI LiDAR points. This network is learned by minimizing our newly-proposed 3D alignment loss between the 3D box estimates and the corresponding RoI LiDAR points. We will illustrate the potential challenges of the above learning problem and resolve these challenges by introducing several effective designs into our method. Codes will be available at https://github.com/SPengLiang/WeakM3D.
[ { "created": "Wed, 16 Mar 2022 00:37:08 GMT", "version": "v1" } ]
2022-03-17
[ [ "Peng", "Liang", "" ], [ "Yan", "Senbo", "" ], [ "Wu", "Boxi", "" ], [ "Yang", "Zheng", "" ], [ "He", "Xiaofei", "" ], [ "Cai", "Deng", "" ] ]
Monocular 3D object detection is one of the most challenging tasks in 3D scene understanding. Due to the ill-posed nature of monocular imagery, existing monocular 3D detection methods highly rely on training with the manually annotated 3D box labels on the LiDAR point clouds. This annotation process is very laborious and expensive. To dispense with the reliance on 3D box labels, in this paper we explore the weakly supervised monocular 3D detection. Specifically, we first detect 2D boxes on the image. Then, we adopt the generated 2D boxes to select corresponding RoI LiDAR points as the weak supervision. Eventually, we adopt a network to predict 3D boxes which can tightly align with associated RoI LiDAR points. This network is learned by minimizing our newly-proposed 3D alignment loss between the 3D box estimates and the corresponding RoI LiDAR points. We will illustrate the potential challenges of the above learning problem and resolve these challenges by introducing several effective designs into our method. Codes will be available at https://github.com/SPengLiang/WeakM3D.
2008.08963
Srijita Kundu
Rahul Jain and Srijita Kundu
A Direct Product Theorem for One-Way Quantum Communication
31 pages, 1 figure
null
null
null
cs.CC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove a direct product theorem for the one-way entanglement-assisted quantum communication complexity of a general relation $f\subseteq\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}$. For any $\varepsilon, \zeta > 0$ and any $k\geq1$, we show that \[ \mathrm{Q}^1_{1-(1-\varepsilon)^{\Omega(\zeta^6k/\log|\mathcal{Z}|)}}(f^k) = \Omega\left(k\left(\zeta^5\cdot\mathrm{Q}^1_{\varepsilon + 12\zeta}(f) - \log\log(1/\zeta)\right)\right),\] where $\mathrm{Q}^1_{\varepsilon}(f)$ represents the one-way entanglement-assisted quantum communication complexity of $f$ with worst-case error $\varepsilon$ and $f^k$ denotes $k$ parallel instances of $f$. As far as we are aware, this is the first direct product theorem for quantum communication. Our techniques are inspired by the parallel repetition theorems for the entangled value of two-player non-local games, under product distributions due to Jain, Pereszl\'{e}nyi and Yao, and under anchored distributions due to Bavarian, Vidick and Yuen, as well as message-compression for quantum protocols due to Jain, Radhakrishnan and Sen. Our techniques also work for entangled non-local games which have input distributions anchored on any one side. In particular, we show that for any game $G = (q, \mathcal{X}\times\mathcal{Y}, \mathcal{A}\times\mathcal{B}, \mathsf{V})$ where $q$ is a distribution on $\mathcal{X}\times\mathcal{Y}$ anchored on any one side with anchoring probability $\zeta$, then \[ \omega^*(G^k) = \left(1 - (1-\omega^*(G))^5\right)^{\Omega\left(\frac{\zeta^2 k}{\log(|\mathcal{A}|\cdot|\mathcal{B}|)}\right)}\] where $\omega^*(G)$ represents the entangled value of the game $G$. This is a generalization of the result of Bavarian, Vidick and Yuen, who proved a parallel repetition theorem for games anchored on both sides, and potentially a simplification of their proof.
[ { "created": "Thu, 20 Aug 2020 13:31:41 GMT", "version": "v1" } ]
2020-08-21
[ [ "Jain", "Rahul", "" ], [ "Kundu", "Srijita", "" ] ]
We prove a direct product theorem for the one-way entanglement-assisted quantum communication complexity of a general relation $f\subseteq\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}$. For any $\varepsilon, \zeta > 0$ and any $k\geq1$, we show that \[ \mathrm{Q}^1_{1-(1-\varepsilon)^{\Omega(\zeta^6k/\log|\mathcal{Z}|)}}(f^k) = \Omega\left(k\left(\zeta^5\cdot\mathrm{Q}^1_{\varepsilon + 12\zeta}(f) - \log\log(1/\zeta)\right)\right),\] where $\mathrm{Q}^1_{\varepsilon}(f)$ represents the one-way entanglement-assisted quantum communication complexity of $f$ with worst-case error $\varepsilon$ and $f^k$ denotes $k$ parallel instances of $f$. As far as we are aware, this is the first direct product theorem for quantum communication. Our techniques are inspired by the parallel repetition theorems for the entangled value of two-player non-local games, under product distributions due to Jain, Pereszl\'{e}nyi and Yao, and under anchored distributions due to Bavarian, Vidick and Yuen, as well as message-compression for quantum protocols due to Jain, Radhakrishnan and Sen. Our techniques also work for entangled non-local games which have input distributions anchored on any one side. In particular, we show that for any game $G = (q, \mathcal{X}\times\mathcal{Y}, \mathcal{A}\times\mathcal{B}, \mathsf{V})$ where $q$ is a distribution on $\mathcal{X}\times\mathcal{Y}$ anchored on any one side with anchoring probability $\zeta$, then \[ \omega^*(G^k) = \left(1 - (1-\omega^*(G))^5\right)^{\Omega\left(\frac{\zeta^2 k}{\log(|\mathcal{A}|\cdot|\mathcal{B}|)}\right)}\] where $\omega^*(G)$ represents the entangled value of the game $G$. This is a generalization of the result of Bavarian, Vidick and Yuen, who proved a parallel repetition theorem for games anchored on both sides, and potentially a simplification of their proof.
1607.02230
EPTCS
Antonina Nepeivoda
Turchin's Relation for Call-by-Name Computations: A Formal Approach
In Proceedings VPT 2016, arXiv:1607.01835
EPTCS 216, 2016, pp. 137-159
10.4204/EPTCS.216.8
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Supercompilation is a program transformation technique that was first described by V. F. Turchin in the 1970s. In supercompilation, Turchin's relation as a similarity relation on call-stack configurations is used both for call-by-value and call-by-name semantics to terminate unfolding of the program being transformed. In this paper, we give a formal grammar model of call-by-name stack behaviour. We classify the model in terms of the Chomsky hierarchy and then formally prove that Turchin's relation can terminate all computations generated by the model.
[ { "created": "Fri, 8 Jul 2016 05:31:36 GMT", "version": "v1" } ]
2016-07-11
[ [ "Nepeivoda", "Antonina", "" ] ]
Supercompilation is a program transformation technique that was first described by V. F. Turchin in the 1970s. In supercompilation, Turchin's relation as a similarity relation on call-stack configurations is used both for call-by-value and call-by-name semantics to terminate unfolding of the program being transformed. In this paper, we give a formal grammar model of call-by-name stack behaviour. We classify the model in terms of the Chomsky hierarchy and then formally prove that Turchin's relation can terminate all computations generated by the model.
2210.12491
Behzad Ghanbarian
Alireza Roustazadeh, Behzad Ghanbarian, Mohammad B. Shadmand, Vahid Taslimitehrani, Larry W. Lake
Estimating oil and gas recovery factors via machine learning: Database-dependent accuracy and reliability
null
Engineering Applications of Artificial Intelligence Volume 128, February 2024, 107500
10.1016/j.engappai.2023.107500
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
With recent advances in artificial intelligence, machine learning (ML) approaches have become an attractive tool in petroleum engineering, particularly for reservoir characterizations. A key reservoir property is the hydrocarbon recovery factor (RF), whose accurate estimation would provide decisive insights to drilling and production strategies. Therefore, this study aims to estimate the hydrocarbon RF for exploration from various reservoir characteristics, such as porosity, permeability, pressure, and water saturation, via ML. We applied three regression-based models, namely extreme gradient boosting (XGBoost), support vector machine (SVM), and stepwise multiple linear regression (MLR), together with various combinations of three databases to construct ML models and estimate the oil and/or gas RF. Using two databases and the cross-validation method, we evaluated the performance of the ML models. In each iteration, 90% and 10% of the data were respectively used to train and test the models. The third independent database was then used to further assess the constructed models. For both oil and gas RFs, we found that the XGBoost model estimated the RF for the train and test datasets more accurately than the SVM and MLR models. However, the performance of all the models was unsatisfactory for the independent databases. Results demonstrated that the ML algorithms were highly dependent on and sensitive to the databases on which they were trained. Statistical tests revealed that such unsatisfactory performances arose because the distributions of input features and target variables in the train datasets were significantly different from those in the independent databases (p-value < 0.05).
[ { "created": "Sat, 22 Oct 2022 16:25:49 GMT", "version": "v1" } ]
2023-12-05
[ [ "Roustazadeh", "Alireza", "" ], [ "Ghanbarian", "Behzad", "" ], [ "Shadmand", "Mohammad B.", "" ], [ "Taslimitehrani", "Vahid", "" ], [ "Lake", "Larry W.", "" ] ]
With recent advances in artificial intelligence, machine learning (ML) approaches have become an attractive tool in petroleum engineering, particularly for reservoir characterizations. A key reservoir property is the hydrocarbon recovery factor (RF), whose accurate estimation would provide decisive insights to drilling and production strategies. Therefore, this study aims to estimate the hydrocarbon RF for exploration from various reservoir characteristics, such as porosity, permeability, pressure, and water saturation, via ML. We applied three regression-based models, namely extreme gradient boosting (XGBoost), support vector machine (SVM), and stepwise multiple linear regression (MLR), together with various combinations of three databases to construct ML models and estimate the oil and/or gas RF. Using two databases and the cross-validation method, we evaluated the performance of the ML models. In each iteration, 90% and 10% of the data were respectively used to train and test the models. The third independent database was then used to further assess the constructed models. For both oil and gas RFs, we found that the XGBoost model estimated the RF for the train and test datasets more accurately than the SVM and MLR models. However, the performance of all the models was unsatisfactory for the independent databases. Results demonstrated that the ML algorithms were highly dependent on and sensitive to the databases on which they were trained. Statistical tests revealed that such unsatisfactory performances arose because the distributions of input features and target variables in the train datasets were significantly different from those in the independent databases (p-value < 0.05).
2107.05220
Yannis Stamatiou
Vasiliki Liagkou, Panayotis Nastou, Paul Spirakis, Yannis Stamatiou
On the undecidability of the Panopticon detection problem
13 pages, no figures, technical report
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The Panopticon (which means "watcher of everything") is a well-known structure of continuous surveillance and discipline proposed by Bentham in 1785. This device was, later, used by Foucault and other philosophers as a paradigm and metaphor for the study of constitutional power and knowledge as well as a model of individuals' deprivation of freedom. Nowadays, technological achievements have given rise to new, non-physical (unlike prisons), means of constant surveillance that transcend physical boundaries. This, combined with the confession of some governmental institutions that they actually collaborate with these Internet giants to collect or deduce information about people, creates a worrisome situation of several co-existing Panopticons that can act separately or in close collaboration. Thus, they can only be detected and identified through the expense of (perhaps considerable) effort. In this paper we provide a theoretical framework for studying the detectability status of Panopticons that fall under two theoretical, but not unrealistic, definitions. We show, using Oracle Turing Machines, that detecting modern day, ICT-based, Panopticons is an undecidable problem. Furthermore, we show that for each sufficiently expressive formal system, we can effectively construct a Turing Machine for which it is impossible to prove, within the formal system, either that it is a Panopticon or it is not a Panopticon.
[ { "created": "Mon, 12 Jul 2021 06:48:17 GMT", "version": "v1" } ]
2021-07-13
[ [ "Liagkou", "Vasiliki", "" ], [ "Nastou", "Panayotis", "" ], [ "Spirakis", "Paul", "" ], [ "Stamatiou", "Yannis", "" ] ]
The Panopticon (which means "watcher of everything") is a well-known structure of continuous surveillance and discipline proposed by Bentham in 1785. This device was later used by Foucault and other philosophers as a paradigm and metaphor for the study of constitutional power and knowledge, as well as a model of individuals' deprivation of freedom. Nowadays, technological achievements have given rise to new, non-physical (unlike prisons) means of constant surveillance that transcend physical boundaries. This, combined with the admission by some governmental institutions that they actually collaborate with Internet giants to collect or deduce information about people, creates a worrisome situation of several co-existing Panopticons that can act separately or in close collaboration. Thus, they can only be detected and identified at the expense of (perhaps considerable) effort. In this paper we provide a theoretical framework for studying the detectability status of Panopticons that fall under two theoretical, but not unrealistic, definitions. We show, using Oracle Turing Machines, that detecting modern-day, ICT-based Panopticons is an undecidable problem. Furthermore, we show that for each sufficiently expressive formal system, we can effectively construct a Turing Machine for which it is impossible to prove, within the formal system, either that it is a Panopticon or that it is not.
1405.2496
Stefano Gonella
Jeffrey M. Druce, Jarvis D. Haupt, Stefano Gonella
Anomaly-Sensitive Dictionary Learning for Unsupervised Diagnostics of Solid Media
Submitted to the Proceedings of the Royal Society A
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a strategy for the detection and triangulation of structural anomalies in solid media. The method revolves around the construction of sparse representations of the medium's dynamic response, obtained by learning instructive dictionaries which form a suitable basis for the response data. The resulting sparse coding problem is recast as a modified dictionary learning task with additional spatial sparsity constraints enforced on the atoms of the learned dictionaries, which provides them with a prescribed spatial topology that is designed to unveil anomalous regions in the physical domain. The proposed methodology is model agnostic, i.e., it forsakes the need for a physical model and requires virtually no a priori knowledge of the structure's material properties, as all the inferences are exclusively informed by the data through the layers of information that are available in the intrinsic salient structure of the material's dynamic response. This characteristic makes the approach powerful for anomaly identification in systems with unknown or heterogeneous property distribution, for which a model is unsuitable or unreliable. The method is validated using both synthetically
[ { "created": "Sun, 11 May 2014 04:24:35 GMT", "version": "v1" } ]
2014-05-13
[ [ "Druce", "Jeffrey M.", "" ], [ "Haupt", "Jarvis D.", "" ], [ "Gonella", "Stefano", "" ] ]
This paper proposes a strategy for the detection and triangulation of structural anomalies in solid media. The method revolves around the construction of sparse representations of the medium's dynamic response, obtained by learning instructive dictionaries which form a suitable basis for the response data. The resulting sparse coding problem is recast as a modified dictionary learning task with additional spatial sparsity constraints enforced on the atoms of the learned dictionaries, which provides them with a prescribed spatial topology that is designed to unveil anomalous regions in the physical domain. The proposed methodology is model agnostic, i.e., it forsakes the need for a physical model and requires virtually no a priori knowledge of the structure's material properties, as all the inferences are exclusively informed by the data through the layers of information that are available in the intrinsic salient structure of the material's dynamic response. This characteristic makes the approach powerful for anomaly identification in systems with unknown or heterogeneous property distribution, for which a model is unsuitable or unreliable. The method is validated using both synthetically
cs/0501033
Pierre-Louis Curien
Pierre-Louis Curien (PPS)
Playful, streamlike computation
null
Domain theory, logic and computation, Kluwer Academic Publishers (Ed.) (2003) 1-24
null
null
cs.LO
null
We offer a short tour into the interactive interpretation of sequential programs. We emphasize streamlike computation -- that is, computation of successive bits of information upon request. The core of the approach surveyed here dates back to the work of Berry and the author on sequential algorithms on concrete data structures in the late seventies, culminating in the design of the programming language CDS, in which the semantics of programs of any type can be explored interactively. Around one decade later, two major insights of Cartwright and Felleisen on one hand, and of Lamarche on the other hand gave new, decisive impulses to the study of sequentiality. Cartwright and Felleisen observed that sequential algorithms give a direct semantics to control operators like "call-cc" and proposed to include explicit errors both in the syntax and in the semantics of the language PCF. Lamarche (unpublished) connected sequential algorithms to linear logic and games. The successful program of games semantics has spanned over the nineties until now, starting with syntax-independent characterizations of the term model of PCF by Abramsky, Jagadeesan, and Malacaria on one hand, and by Hyland and Ong on the other hand.
[ { "created": "Tue, 18 Jan 2005 07:39:09 GMT", "version": "v1" } ]
2007-06-17
[ [ "Curien", "Pierre-Louis", "", "PPS" ] ]
We offer a short tour into the interactive interpretation of sequential programs. We emphasize streamlike computation -- that is, computation of successive bits of information upon request. The core of the approach surveyed here dates back to the work of Berry and the author on sequential algorithms on concrete data structures in the late seventies, culminating in the design of the programming language CDS, in which the semantics of programs of any type can be explored interactively. Around one decade later, two major insights of Cartwright and Felleisen on one hand, and of Lamarche on the other hand gave new, decisive impulses to the study of sequentiality. Cartwright and Felleisen observed that sequential algorithms give a direct semantics to control operators like "call-cc" and proposed to include explicit errors both in the syntax and in the semantics of the language PCF. Lamarche (unpublished) connected sequential algorithms to linear logic and games. The successful program of games semantics has spanned over the nineties until now, starting with syntax-independent characterizations of the term model of PCF by Abramsky, Jagadeesan, and Malacaria on one hand, and by Hyland and Ong on the other hand.
2210.07448
Stefan Larson
Stefan Larson, Gordon Lim, Yutong Ai, David Kuang, Kevin Leach
Evaluating Out-of-Distribution Performance on Document Image Classifiers
NeurIPS D&B 2022
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by/4.0/
The ability of a document classifier to handle inputs that are drawn from a distribution different from the training distribution is crucial for robust deployment and generalizability. The RVL-CDIP corpus is the de facto standard benchmark for document classification, yet, to our knowledge, no prior study using this corpus includes evaluation on out-of-distribution documents. In this paper, we curate and release a new out-of-distribution benchmark for evaluating out-of-distribution performance for document classifiers. Our new out-of-distribution benchmark consists of two types of documents: those that are not part of any of the 16 in-domain RVL-CDIP categories (RVL-CDIP-O), and those that are one of the 16 in-domain categories yet are drawn from a distribution different from that of the original RVL-CDIP dataset (RVL-CDIP-N). While prior work on document classification for in-domain RVL-CDIP documents reports high accuracy scores, we find that these models exhibit accuracy drops of between roughly 15-30% on our new out-of-domain RVL-CDIP-N benchmark, and further struggle to distinguish between in-domain RVL-CDIP-N and out-of-domain RVL-CDIP-O inputs. Our new benchmark provides researchers with a valuable new resource for analyzing out-of-distribution performance on document classifiers. Our new out-of-distribution data can be found at https://github.com/gxlarson/rvl-cdip-ood.
[ { "created": "Fri, 14 Oct 2022 01:24:21 GMT", "version": "v1" }, { "created": "Wed, 18 Jan 2023 16:26:48 GMT", "version": "v2" } ]
2023-01-19
[ [ "Larson", "Stefan", "" ], [ "Lim", "Gordon", "" ], [ "Ai", "Yutong", "" ], [ "Kuang", "David", "" ], [ "Leach", "Kevin", "" ] ]
The ability of a document classifier to handle inputs that are drawn from a distribution different from the training distribution is crucial for robust deployment and generalizability. The RVL-CDIP corpus is the de facto standard benchmark for document classification, yet, to our knowledge, no prior study using this corpus includes evaluation on out-of-distribution documents. In this paper, we curate and release a new out-of-distribution benchmark for evaluating out-of-distribution performance for document classifiers. Our new out-of-distribution benchmark consists of two types of documents: those that are not part of any of the 16 in-domain RVL-CDIP categories (RVL-CDIP-O), and those that are one of the 16 in-domain categories yet are drawn from a distribution different from that of the original RVL-CDIP dataset (RVL-CDIP-N). While prior work on document classification for in-domain RVL-CDIP documents reports high accuracy scores, we find that these models exhibit accuracy drops of between roughly 15-30% on our new out-of-domain RVL-CDIP-N benchmark, and further struggle to distinguish between in-domain RVL-CDIP-N and out-of-domain RVL-CDIP-O inputs. Our new benchmark provides researchers with a valuable new resource for analyzing out-of-distribution performance on document classifiers. Our new out-of-distribution data can be found at https://github.com/gxlarson/rvl-cdip-ood.
2202.07630
Chen Liu
Chen Liu, Jonas Pfeiffer, Anna Korhonen, Ivan Vuli\'c, Iryna Gurevych
Delving Deeper into Cross-lingual Visual Question Answering
Findings of EACL 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Visual question answering (VQA) is one of the crucial vision-and-language tasks. Yet, existing VQA research has mostly focused on the English language, due to a lack of suitable evaluation resources. Previous work on cross-lingual VQA has reported poor zero-shot transfer performance of current multilingual multimodal Transformers with large gaps to monolingual performance, without any deeper analysis. In this work, we delve deeper into the different aspects of cross-lingual VQA, aiming to understand the impact of 1) modeling methods and choices, including architecture, inductive bias, and fine-tuning; and 2) learning biases, including question types and modality biases in cross-lingual setups. The key results of our analysis are: 1) We show that simple modifications to the standard training setup can substantially reduce the transfer gap to monolingual English performance, yielding +10 accuracy points over existing methods. 2) We analyze cross-lingual VQA across different question types of varying complexity for different multilingual multimodal Transformers, and identify question types that are the most difficult to improve on. 3) We provide an analysis of modality biases present in training data and models, revealing why zero-shot performance gaps remain for certain question types and languages.
[ { "created": "Tue, 15 Feb 2022 18:22:18 GMT", "version": "v1" }, { "created": "Thu, 8 Jun 2023 18:33:28 GMT", "version": "v2" } ]
2023-06-12
[ [ "Liu", "Chen", "" ], [ "Pfeiffer", "Jonas", "" ], [ "Korhonen", "Anna", "" ], [ "Vulić", "Ivan", "" ], [ "Gurevych", "Iryna", "" ] ]
Visual question answering (VQA) is one of the crucial vision-and-language tasks. Yet, existing VQA research has mostly focused on the English language, due to a lack of suitable evaluation resources. Previous work on cross-lingual VQA has reported poor zero-shot transfer performance of current multilingual multimodal Transformers with large gaps to monolingual performance, without any deeper analysis. In this work, we delve deeper into the different aspects of cross-lingual VQA, aiming to understand the impact of 1) modeling methods and choices, including architecture, inductive bias, and fine-tuning; and 2) learning biases, including question types and modality biases in cross-lingual setups. The key results of our analysis are: 1) We show that simple modifications to the standard training setup can substantially reduce the transfer gap to monolingual English performance, yielding +10 accuracy points over existing methods. 2) We analyze cross-lingual VQA across different question types of varying complexity for different multilingual multimodal Transformers, and identify question types that are the most difficult to improve on. 3) We provide an analysis of modality biases present in training data and models, revealing why zero-shot performance gaps remain for certain question types and languages.
1311.0251
Andrew Mao
Andrew Mao, Hossein Azari Soufiani, Yiling Chen, David C. Parkes
Capturing Variation and Uncertainty in Human Judgment
null
null
null
null
cs.IR cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The well-studied problem of statistical rank aggregation has been applied to comparing sports teams, information retrieval, and most recently to data generated by human judgment. Such human-generated rankings may be substantially different from traditional statistical ranking data. In this work, we show that a recently proposed generalized random utility model reveals distinctive patterns in human judgment across three different domains, and provides a succinct representation of variance in both population preferences and imperfect perception. In contrast, we also show that classical statistical ranking models fail to capture important features from human-generated input. Our work motivates the use of more flexible ranking models for representing and describing the collective preferences or decision-making of human participants.
[ { "created": "Tue, 29 Oct 2013 21:30:59 GMT", "version": "v1" }, { "created": "Mon, 3 Nov 2014 22:10:36 GMT", "version": "v2" } ]
2014-11-05
[ [ "Mao", "Andrew", "" ], [ "Soufiani", "Hossein Azari", "" ], [ "Chen", "Yiling", "" ], [ "Parkes", "David C.", "" ] ]
The well-studied problem of statistical rank aggregation has been applied to comparing sports teams, information retrieval, and most recently to data generated by human judgment. Such human-generated rankings may be substantially different from traditional statistical ranking data. In this work, we show that a recently proposed generalized random utility model reveals distinctive patterns in human judgment across three different domains, and provides a succinct representation of variance in both population preferences and imperfect perception. In contrast, we also show that classical statistical ranking models fail to capture important features from human-generated input. Our work motivates the use of more flexible ranking models for representing and describing the collective preferences or decision-making of human participants.
cs/0702114
David Pritchard
David Pritchard
Nearest Neighbor Network Traversal
null
null
null
null
cs.DC
null
A mobile agent in a network wants to visit every node of an n-node network, using a small number of steps. We investigate the performance of the following ``nearest neighbor'' heuristic: always go to the nearest unvisited node. If the network graph never changes, then from (Rosenkrantz, Stearns and Lewis, 1977) and (Hurkens and Woeginger, 2004) it follows that Theta(n log n) steps are necessary and sufficient in the worst case. We give a simpler proof of the upper bound and an example that improves the best known lower bound. We investigate how the performance of this heuristic changes when it is distributively implemented in a network. Even if network edges are allowed to fail over time, we show that the nearest neighbor strategy never runs for more than O(n^2) iterations. We also show that any strategy can be forced to take at least n(n-1)/2 steps before all nodes are visited, if the edges of the network are deleted in an adversarial way.
[ { "created": "Tue, 20 Feb 2007 03:54:12 GMT", "version": "v1" } ]
2007-05-23
[ [ "Pritchard", "David", "" ] ]
A mobile agent in a network wants to visit every node of an n-node network, using a small number of steps. We investigate the performance of the following ``nearest neighbor'' heuristic: always go to the nearest unvisited node. If the network graph never changes, then from (Rosenkrantz, Stearns and Lewis, 1977) and (Hurkens and Woeginger, 2004) it follows that Theta(n log n) steps are necessary and sufficient in the worst case. We give a simpler proof of the upper bound and an example that improves the best known lower bound. We investigate how the performance of this heuristic changes when it is distributively implemented in a network. Even if network edges are allowed to fail over time, we show that the nearest neighbor strategy never runs for more than O(n^2) iterations. We also show that any strategy can be forced to take at least n(n-1)/2 steps before all nodes are visited, if the edges of the network are deleted in an adversarial way.
1906.00697
Bin Li
Xiaoyu Shi, Benedetta Tondi, Bin Li, Mauro Barni
CNN-based Steganalysis and Parametric Adversarial Embedding: a Game-Theoretic Framework
Adversarial embedding, deep learning, steganography, steganalysis, game theory
null
null
null
cs.MM cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CNN-based steganalysis has recently achieved very good performance in detecting content-adaptive steganography. At the same time, recent works have shown that, by adopting an approach similar to that used to build adversarial examples, a steganographer can adopt an adversarial embedding strategy to effectively counter a target CNN steganalyzer. In turn, the good performance of the steganalyzer can be restored by retraining the CNN with adversarial stego images. A problem with this model is that, arguably, at training time the steganalyzer is not aware of the exact parameters used by the steganographer for adversarial embedding and, vice versa, the steganographer does not know how the images that will be used to train the steganalyzer are generated. In order to exit this apparent deadlock, we introduce a game-theoretic framework wherein the problem of setting the parameters of the steganalyzer and the steganographer is solved in a strategic way. More specifically, a non-zero-sum game is first formulated to model the problem, and then instantiated by considering a specific adversarial embedding scheme setting its operating parameters in a game-theoretic fashion. Our analysis shows that the equilibrium solution of the non-zero-sum game can be conveniently found by solving an associated zero-sum game, thus greatly reducing the complexity of the problem. We then run several experiments to derive the optimum strategies for the steganographer and the steganalyst in a game-theoretic sense, and to evaluate the performance of the game at the equilibrium, characterizing the loss with respect to the conventional non-adversarial case. Finally, by leveraging the analysis of the equilibrium point of the game, we introduce a new strategy to improve the reliability of the steganalysis, which shows the benefits of addressing the security issue from a game-theoretic perspective.
[ { "created": "Mon, 3 Jun 2019 10:48:36 GMT", "version": "v1" } ]
2019-06-04
[ [ "Shi", "Xiaoyu", "" ], [ "Tondi", "Benedetta", "" ], [ "Li", "Bin", "" ], [ "Barni", "Mauro", "" ] ]
CNN-based steganalysis has recently achieved very good performance in detecting content-adaptive steganography. At the same time, recent works have shown that, by adopting an approach similar to that used to build adversarial examples, a steganographer can adopt an adversarial embedding strategy to effectively counter a target CNN steganalyzer. In turn, the good performance of the steganalyzer can be restored by retraining the CNN with adversarial stego images. A problem with this model is that, arguably, at training time the steganalyzer is not aware of the exact parameters used by the steganographer for adversarial embedding and, vice versa, the steganographer does not know how the images that will be used to train the steganalyzer are generated. In order to exit this apparent deadlock, we introduce a game-theoretic framework wherein the problem of setting the parameters of the steganalyzer and the steganographer is solved in a strategic way. More specifically, a non-zero-sum game is first formulated to model the problem, and then instantiated by considering a specific adversarial embedding scheme setting its operating parameters in a game-theoretic fashion. Our analysis shows that the equilibrium solution of the non-zero-sum game can be conveniently found by solving an associated zero-sum game, thus greatly reducing the complexity of the problem. We then run several experiments to derive the optimum strategies for the steganographer and the steganalyst in a game-theoretic sense, and to evaluate the performance of the game at the equilibrium, characterizing the loss with respect to the conventional non-adversarial case. Finally, by leveraging the analysis of the equilibrium point of the game, we introduce a new strategy to improve the reliability of the steganalysis, which shows the benefits of addressing the security issue from a game-theoretic perspective.
2205.08521
Michael Rudow
Michael Rudow and K.V. Rashmi
Learning-Augmented Streaming Codes are Approximately Optimal for Variable-Size Messages
13 pages, 8 figures, this is an extended version of the IEEE ISIT 2022 paper with the same title
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time streaming communication requires a high quality of service despite contending with packet loss. Streaming codes are a class of codes best suited for this setting. A key challenge for streaming codes is that they operate in an "online" setting in which the amount of data to be transmitted varies over time and is not known in advance. Mitigating the adverse effects of variability requires spreading the data that arrives at a time slot over multiple future packets, and the optimal strategy for spreading depends on the arrival pattern. Algebraic coding techniques alone are therefore insufficient for designing rate-optimal codes. We combine algebraic coding techniques with a learning-augmented algorithm for spreading to design the first approximately rate-optimal streaming codes for a range of parameter regimes that are important for practical applications.
[ { "created": "Tue, 17 May 2022 17:45:53 GMT", "version": "v1" } ]
2022-05-18
[ [ "Rudow", "Michael", "" ], [ "Rashmi", "K. V.", "" ] ]
Real-time streaming communication requires a high quality of service despite contending with packet loss. Streaming codes are a class of codes best suited for this setting. A key challenge for streaming codes is that they operate in an "online" setting in which the amount of data to be transmitted varies over time and is not known in advance. Mitigating the adverse effects of variability requires spreading the data that arrives at a time slot over multiple future packets, and the optimal strategy for spreading depends on the arrival pattern. Algebraic coding techniques alone are therefore insufficient for designing rate-optimal codes. We combine algebraic coding techniques with a learning-augmented algorithm for spreading to design the first approximately rate-optimal streaming codes for a range of parameter regimes that are important for practical applications.
1210.6413
EPTCS
Eduardo Zambon (University of Twente), Arend Rensink (University of Twente)
Graph Subsumption in Abstract State Space Exploration
In Proceedings GRAPHITE 2012, arXiv:1210.6118
EPTCS 99, 2012, pp. 35-49
10.4204/EPTCS.99.6
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present the extension of an existing method for abstract graph-based state space exploration, called neighbourhood abstraction, with a reduction technique based on subsumption. Basically, one abstract state subsumes another when it covers more concrete states; in such a case, the subsumed state need not be included in the state space, thus giving a reduction. We explain the theory and especially also report on a number of experiments, which show that subsumption indeed drastically reduces both the state space and the resources (time and memory) needed to compute it.
[ { "created": "Wed, 24 Oct 2012 00:33:15 GMT", "version": "v1" } ]
2012-10-25
[ [ "Zambon", "Eduardo", "", "University of Twente" ], [ "Rensink", "Arend", "", "University of Twente" ] ]
In this paper we present the extension of an existing method for abstract graph-based state space exploration, called neighbourhood abstraction, with a reduction technique based on subsumption. Basically, one abstract state subsumes another when it covers more concrete states; in such a case, the subsumed state need not be included in the state space, thus giving a reduction. We explain the theory and especially also report on a number of experiments, which show that subsumption indeed drastically reduces both the state space and the resources (time and memory) needed to compute it.
1506.08637
Maice Costa
Maice Costa, Marian Codreanu, and Anthony Ephremides
On The Age Of Information In Status Update Systems With Packet Management
20 pages, 7 figures
null
null
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a communication system in which status updates arrive at a source node, and should be transmitted through a network to the intended destination node. The status updates are samples of a random process under observation, transmitted as packets, which also contain the time stamp to identify when the sample was generated. The age of the information available to the destination node is the time elapsed since the last received update was generated. In this paper, we model the source-destination link using queuing theory, and we assume that the time it takes to successfully transmit a packet to the destination is an exponentially distributed service time. We analyze the age of information in the case that the source node has the capability to manage the arriving samples, possibly discarding packets in order to avoid wasting network resources with the transmission of stale information. In addition to characterizing the average age, we propose a new metric, called peak age, which provides information about the maximum value of the age, achieved immediately before receiving an update.
[ { "created": "Mon, 29 Jun 2015 14:09:54 GMT", "version": "v1" } ]
2015-06-30
[ [ "Costa", "Maice", "" ], [ "Codreanu", "Marian", "" ], [ "Ephremides", "Anthony", "" ] ]
We consider a communication system in which status updates arrive at a source node, and should be transmitted through a network to the intended destination node. The status updates are samples of a random process under observation, transmitted as packets, which also contain the time stamp to identify when the sample was generated. The age of the information available to the destination node is the time elapsed since the last received update was generated. In this paper, we model the source-destination link using queuing theory, and we assume that the time it takes to successfully transmit a packet to the destination is an exponentially distributed service time. We analyze the age of information in the case that the source node has the capability to manage the arriving samples, possibly discarding packets in order to avoid wasting network resources with the transmission of stale information. In addition to characterizing the average age, we propose a new metric, called peak age, which provides information about the maximum value of the age, achieved immediately before receiving an update.
1908.06381
Danylo Malyuta
Danylo Malyuta, Christian Brommer, Daniel Hentzen, Thomas Stastny, Roland Siegwart, Roland Brockers
Long-Duration Fully Autonomous Operation of Rotorcraft Unmanned Aerial Systems for Remote-Sensing Data Acquisition
38 pages, 28 figures
J Field Robotics (2019) 1-21
10.1002/rob.21898
null
cs.RO cs.CV cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators, and vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with durations of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights, respectively, during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To the best of our knowledge, this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
[ { "created": "Sun, 18 Aug 2019 06:33:52 GMT", "version": "v1" } ]
2019-08-20
[ [ "Malyuta", "Danylo", "" ], [ "Brommer", "Christian", "" ], [ "Hentzen", "Daniel", "" ], [ "Stastny", "Thomas", "" ], [ "Siegwart", "Roland", "" ], [ "Brockers", "Roland", "" ] ]
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, e.g. days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently from human operators, and vision-based precision landing on a recharging station for automated energy replenishment. High-level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision-based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with durations of 11 and 10.6 hours, and one outdoor experiment with a duration of 4 hours. The UAS executed 16, 48 and 22 flights, respectively, during these experiments. In the outdoor experiment, the ratio between flying to collect data and charging was 1 to 10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To the best of our knowledge, this is the first research publication about the long-term outdoor operation of a quadrotor system with no human interaction.
2106.12021
Abhishek Moitra
Abhishek Moitra and Priyadarshini Panda
DetectX -- Adversarial Input Detection using Current Signatures in Memristive XBar Arrays
14 pages, 13 figures
IEEE Transactions on Circuits and Systems I: Regular Papers, 2021
10.1109/TCSI.2021.3110487
null
cs.CR cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial input detection has emerged as a prominent technique to harden Deep Neural Networks(DNNs) against adversarial attacks. Most prior works use neural network-based detectors or complex statistical analysis for adversarial detection. These approaches are computationally intensive and vulnerable to adversarial attacks. To this end, we propose DetectX - a hardware friendly adversarial detection mechanism using hardware signatures like Sum of column Currents (SoI) in memristive crossbars (XBar). We show that adversarial inputs have higher SoI compared to clean inputs. However, the difference is too small for reliable adversarial detection. Hence, we propose a dual-phase training methodology: Phase1 training is geared towards increasing the separation between clean and adversarial SoIs; Phase2 training improves the overall robustness against different strengths of adversarial attacks. For hardware-based adversarial detection, we implement the DetectX module using 32nm CMOS circuits and integrate it with a Neurosim-like analog crossbar architecture. We perform hardware evaluation of the Neurosim+DetectX system on the Neurosim platform using datasets-CIFAR10(VGG8), CIFAR100(VGG16) and TinyImagenet(ResNet18). Our experiments show that DetectX is 10x-25x more energy efficient and immune to dynamic adversarial attacks compared to previous state-of-the-art works. Moreover, we achieve high detection performance (ROC-AUC > 0.95) for strong white-box and black-box attacks. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/DetectX
[ { "created": "Tue, 22 Jun 2021 19:09:03 GMT", "version": "v1" } ]
2021-10-11
[ [ "Moitra", "Abhishek", "" ], [ "Panda", "Priyadarshini", "" ] ]
Adversarial input detection has emerged as a prominent technique to harden Deep Neural Networks (DNNs) against adversarial attacks. Most prior works use neural network-based detectors or complex statistical analysis for adversarial detection. These approaches are computationally intensive and vulnerable to adversarial attacks. To this end, we propose DetectX - a hardware-friendly adversarial detection mechanism using hardware signatures like Sum of column Currents (SoI) in memristive crossbars (XBar). We show that adversarial inputs have higher SoI compared to clean inputs. However, the difference is too small for reliable adversarial detection. Hence, we propose a dual-phase training methodology: Phase1 training is geared towards increasing the separation between clean and adversarial SoIs; Phase2 training improves the overall robustness against different strengths of adversarial attacks. For hardware-based adversarial detection, we implement the DetectX module using 32nm CMOS circuits and integrate it with a Neurosim-like analog crossbar architecture. We perform hardware evaluation of the Neurosim+DetectX system on the Neurosim platform using the datasets CIFAR10 (VGG8), CIFAR100 (VGG16) and TinyImagenet (ResNet18). Our experiments show that DetectX is 10x-25x more energy efficient and immune to dynamic adversarial attacks compared to previous state-of-the-art works. Moreover, we achieve high detection performance (ROC-AUC > 0.95) for strong white-box and black-box attacks. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/DetectX
2103.01578
Daiki Morinaga
Daiki Morinaga, Kazuto Fukuchi, Jun Sakuma, and Youhei Akimoto
Convergence Rate of the (1+1)-Evolution Strategy with Success-Based Step-Size Adaptation on Convex Quadratic Functions
17 pages
null
null
null
cs.NE
http://creativecommons.org/licenses/by/4.0/
The (1+1)-evolution strategy (ES) with success-based step-size adaptation is analyzed on a general convex quadratic function and its monotone transformation, that is, $f(x) = g((x - x^*)^\mathrm{T} H (x - x^*))$, where $g:\mathbb{R}\to\mathbb{R}$ is a strictly increasing function, $H$ is a positive-definite symmetric matrix, and $x^* \in \mathbb{R}^d$ is the optimal solution of $f$. The convergence rate, that is, the decrease rate of the distance from a search point $m_t$ to the optimal solution $x^*$, is proven to be in $O(\exp( - L / \mathrm{Tr}(H) ))$, where $L$ is the smallest eigenvalue of $H$ and $\mathrm{Tr}(H)$ is the trace of $H$. This result generalizes the known rate of $O(\exp(- 1/d ))$ for the case of $H = I_{d}$ ($I_d$ is the identity matrix of dimension $d$) and $O(\exp(- 1/ (d\cdot\xi) ))$ for the case of $H = \mathrm{diag}(\xi \cdot I_{d/2}, I_{d/2})$. To the best of our knowledge, this is the first study in which the convergence rate of the (1+1)-ES is derived explicitly and rigorously on a general convex quadratic function, which depicts the impact of the distribution of the eigenvalues in the Hessian $H$ on the optimization and not only the impact of the condition number of $H$.
[ { "created": "Tue, 2 Mar 2021 09:03:44 GMT", "version": "v1" }, { "created": "Mon, 12 Apr 2021 14:16:38 GMT", "version": "v2" } ]
2021-04-13
[ [ "Morinaga", "Daiki", "" ], [ "Fukuchi", "Kazuto", "" ], [ "Sakuma", "Jun", "" ], [ "Akimoto", "Youhei", "" ] ]
The (1+1)-evolution strategy (ES) with success-based step-size adaptation is analyzed on a general convex quadratic function and its monotone transformation, that is, $f(x) = g((x - x^*)^\mathrm{T} H (x - x^*))$, where $g:\mathbb{R}\to\mathbb{R}$ is a strictly increasing function, $H$ is a positive-definite symmetric matrix, and $x^* \in \mathbb{R}^d$ is the optimal solution of $f$. The convergence rate, that is, the decrease rate of the distance from a search point $m_t$ to the optimal solution $x^*$, is proven to be in $O(\exp( - L / \mathrm{Tr}(H) ))$, where $L$ is the smallest eigenvalue of $H$ and $\mathrm{Tr}(H)$ is the trace of $H$. This result generalizes the known rate of $O(\exp(- 1/d ))$ for the case of $H = I_{d}$ ($I_d$ is the identity matrix of dimension $d$) and $O(\exp(- 1/ (d\cdot\xi) ))$ for the case of $H = \mathrm{diag}(\xi \cdot I_{d/2}, I_{d/2})$. To the best of our knowledge, this is the first study in which the convergence rate of the (1+1)-ES is derived explicitly and rigorously on a general convex quadratic function, which depicts the impact of the distribution of the eigenvalues in the Hessian $H$ on the optimization and not only the impact of the condition number of $H$.
1506.03883
Dietmar Berwanger
Dietmar Berwanger, Anup Basil Mathew, Marie van den Bogaard
Hierarchical Information and the Synthesis of Distributed Strategies
35 pages, 6 figures; extended version of a paper presented at ATVA 2015
null
null
null
cs.GT cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Infinite games with imperfect information are known to be undecidable unless the information flow is severely restricted. One fundamental decidable case occurs when there is a total ordering among players, such that each player has access to all the information that the following ones receive. In this paper we consider variations of this hierarchy principle for synchronous games with perfect recall, and identify new decidable classes for which the distributed synthesis problem is solvable with finite-state strategies. In particular, we show that decidability is maintained when the information hierarchy may change along the play, or when transient phases without hierarchical information are allowed. Finally, we interpret our result in terms of distributed system architectures.
[ { "created": "Fri, 12 Jun 2015 01:11:24 GMT", "version": "v1" }, { "created": "Sat, 16 Jul 2016 12:14:37 GMT", "version": "v2" } ]
2016-07-19
[ [ "Berwanger", "Dietmar", "" ], [ "Mathew", "Anup Basil", "" ], [ "Bogaard", "Marie van den", "" ] ]
Infinite games with imperfect information are known to be undecidable unless the information flow is severely restricted. One fundamental decidable case occurs when there is a total ordering among players, such that each player has access to all the information that the following ones receive. In this paper we consider variations of this hierarchy principle for synchronous games with perfect recall, and identify new decidable classes for which the distributed synthesis problem is solvable with finite-state strategies. In particular, we show that decidability is maintained when the information hierarchy may change along the play, or when transient phases without hierarchical information are allowed. Finally, we interpret our result in terms of distributed system architectures.
1902.01975
Alireza Javani
Alireza Javani, Marwen Zorgui and Zhiying Wang
Age of Information in Multiple Sensing
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Having timely and fresh knowledge about the current state of information sources is critical in a variety of applications. In particular, a status update may arrive at the destination much later than its generation time due to processing and communication delays. The freshness of the status update at the destination is captured by the notion of age of information. In this study, we first analyze a network with a single source, $n$ servers, and the monitor (destination). The servers independently sense the source of information and send the status update to the monitor. We then extend our result to multiple independent sources of information in the presence of $n$ servers. We assume that updates arrive at the servers according to Poisson random processes. Each server sends its update to the monitor through a direct link, which is modeled as a queue. The service time to transmit an update is considered to be an exponential random variable. We examine both homogeneous and heterogeneous service and arrival rates for the single-source case, and only homogeneous arrival and service rates for the multiple sources case. We derive a closed-form expression for the average age of information under a last-come-first-serve (LCFS) queue for a single source and arbitrary $n$ homogeneous servers. For $n=2,3$, we derive the explicit average age of information for arbitrary sources and homogeneous servers, and for a single source and heterogeneous servers. For $n=2$ we find the optimal arrival rates given a fixed sum arrival rate and service rates.
[ { "created": "Tue, 5 Feb 2019 23:28:02 GMT", "version": "v1" }, { "created": "Thu, 13 Jun 2019 01:34:00 GMT", "version": "v2" } ]
2019-06-14
[ [ "Javani", "Alireza", "" ], [ "Zorgui", "Marwen", "" ], [ "Wang", "Zhiying", "" ] ]
Having timely and fresh knowledge about the current state of information sources is critical in a variety of applications. In particular, a status update may arrive at the destination much later than its generation time due to processing and communication delays. The freshness of the status update at the destination is captured by the notion of age of information. In this study, we first analyze a network with a single source, $n$ servers, and the monitor (destination). The servers independently sense the source of information and send the status update to the monitor. We then extend our result to multiple independent sources of information in the presence of $n$ servers. We assume that updates arrive at the servers according to Poisson random processes. Each server sends its update to the monitor through a direct link, which is modeled as a queue. The service time to transmit an update is considered to be an exponential random variable. We examine both homogeneous and heterogeneous service and arrival rates for the single-source case, and only homogeneous arrival and service rates for the multiple sources case. We derive a closed-form expression for the average age of information under a last-come-first-serve (LCFS) queue for a single source and arbitrary $n$ homogeneous servers. For $n=2,3$, we derive the explicit average age of information for arbitrary sources and homogeneous servers, and for a single source and heterogeneous servers. For $n=2$ we find the optimal arrival rates given a fixed sum arrival rate and service rates.
2310.03091
Daile Osorio-Roig
Daile Osorio-Roig, Lazaro J. Gonzalez-Soler, Christian Rathgeb, Christoph Busch
Privacy-preserving Multi-biometric Indexing based on Frequent Binary Patterns
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The development of large-scale identification systems that ensure the privacy protection of enrolled subjects represents a major challenge. Biometric deployments that provide interoperability and usability by including efficient multi-biometric solutions are a recent requirement. In the context of privacy protection, several template protection schemes have been proposed in the past. However, these schemes seem inadequate for indexing (workload reduction) in biometric identification systems. More specifically, they have been used in identification systems that perform exhaustive searches, leading to a degradation of computational efficiency. To overcome these limitations, we propose an efficient privacy-preserving multi-biometric identification system that retrieves protected deep cancelable templates and is agnostic with respect to biometric characteristics and biometric template protection schemes. To this end, a multi-biometric binning scheme is designed to exploit the low intra-class variation properties contained in the frequent binary patterns extracted from different types of biometric characteristics. Experimental results reported on publicly available databases using state-of-the-art Deep Neural Network (DNN)-based embedding extractors show that the protected multi-biometric identification system can reduce the computational workload to approximately 57\% (indexing up to three types of biometric characteristics) and 53% (indexing up to two types of biometric characteristics), while simultaneously improving the biometric performance of the baseline biometric system at the high-security thresholds. The source code of the proposed multi-biometric indexing approach together with the composed multi-biometric dataset, will be made available to the research community once the article is accepted.
[ { "created": "Wed, 4 Oct 2023 18:18:24 GMT", "version": "v1" } ]
2023-10-06
[ [ "Osorio-Roig", "Daile", "" ], [ "Gonzalez-Soler", "Lazaro J.", "" ], [ "Rathgeb", "Christian", "" ], [ "Busch", "Christoph", "" ] ]
The development of large-scale identification systems that ensure the privacy protection of enrolled subjects represents a major challenge. Biometric deployments that provide interoperability and usability by including efficient multi-biometric solutions are a recent requirement. In the context of privacy protection, several template protection schemes have been proposed in the past. However, these schemes seem inadequate for indexing (workload reduction) in biometric identification systems. More specifically, they have been used in identification systems that perform exhaustive searches, leading to a degradation of computational efficiency. To overcome these limitations, we propose an efficient privacy-preserving multi-biometric identification system that retrieves protected deep cancelable templates and is agnostic with respect to biometric characteristics and biometric template protection schemes. To this end, a multi-biometric binning scheme is designed to exploit the low intra-class variation properties contained in the frequent binary patterns extracted from different types of biometric characteristics. Experimental results reported on publicly available databases using state-of-the-art Deep Neural Network (DNN)-based embedding extractors show that the protected multi-biometric identification system can reduce the computational workload to approximately 57% (indexing up to three types of biometric characteristics) and 53% (indexing up to two types of biometric characteristics), while simultaneously improving the biometric performance of the baseline biometric system at the high-security thresholds. The source code of the proposed multi-biometric indexing approach together with the composed multi-biometric dataset, will be made available to the research community once the article is accepted.
2304.13149
Christophe Van Gysel
Christophe Van Gysel
Modeling Spoken Information Queries for Virtual Assistants: Open Problems, Challenges and Opportunities
SIGIR '23. The 46th International ACM SIGIR Conference on Research & Development in Information Retrieval
null
10.1145/3539618.3591849
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virtual assistants are becoming increasingly important speech-driven Information Retrieval platforms that assist users with various tasks. We discuss open problems and challenges with respect to modeling spoken information queries for virtual assistants, and list opportunities where Information Retrieval methods and research can be applied to improve the quality of virtual assistant speech recognition. We discuss how query domain classification, knowledge graphs and user interaction data, and query personalization can be helpful to improve the accurate recognition of spoken information domain queries. Finally, we also provide a brief overview of current problems and challenges in speech recognition.
[ { "created": "Tue, 25 Apr 2023 20:52:40 GMT", "version": "v1" } ]
2023-04-27
[ [ "Van Gysel", "Christophe", "" ] ]
Virtual assistants are becoming increasingly important speech-driven Information Retrieval platforms that assist users with various tasks. We discuss open problems and challenges with respect to modeling spoken information queries for virtual assistants, and list opportunities where Information Retrieval methods and research can be applied to improve the quality of virtual assistant speech recognition. We discuss how query domain classification, knowledge graphs and user interaction data, and query personalization can be helpful to improve the accurate recognition of spoken information domain queries. Finally, we also provide a brief overview of current problems and challenges in speech recognition.
2301.10404
Chetan Arora
Khlood Ahmad, Mohamed Abdelrazek, Chetan Arora, Muneera Bano, John Grundy
Requirements Practices and Gaps When Engineering Human-Centered Artificial Intelligence Systems
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
[Context] Engineering Artificial Intelligence (AI) software is a relatively new area with many challenges, unknowns, and limited proven best practices. Big companies such as Google, Microsoft, and Apple have provided a suite of recent guidelines to assist engineering teams in building human-centered AI systems. [Objective] The practices currently adopted by practitioners for developing such systems, especially during Requirements Engineering (RE), are little studied and reported to date. [Method] This paper presents the results of a survey conducted to understand current industry practices in RE for AI (RE4AI) and to determine which key human-centered AI guidelines should be followed. Our survey is based on mapping existing industrial guidelines, best practices, and efforts in the literature. [Results] We surveyed 29 professionals and found most participants agreed that all the human-centered aspects we mapped should be addressed in RE. Further, we found that most participants were using UML or Microsoft Office to present requirements. [Conclusion] We identify that most of the tools currently used are not equipped to manage AI-based software, and the use of UML and Office may pose issues to the quality of requirements captured for AI. Also, all human-centered practices mapped from the guidelines should be included in RE.
[ { "created": "Wed, 25 Jan 2023 04:45:06 GMT", "version": "v1" } ]
2023-01-26
[ [ "Ahmad", "Khlood", "" ], [ "Abdelrazek", "Mohamed", "" ], [ "Arora", "Chetan", "" ], [ "Bano", "Muneera", "" ], [ "Grundy", "John", "" ] ]
[Context] Engineering Artificial Intelligence (AI) software is a relatively new area with many challenges, unknowns, and limited proven best practices. Big companies such as Google, Microsoft, and Apple have provided a suite of recent guidelines to assist engineering teams in building human-centered AI systems. [Objective] The practices currently adopted by practitioners for developing such systems, especially during Requirements Engineering (RE), are little studied and reported to date. [Method] This paper presents the results of a survey conducted to understand current industry practices in RE for AI (RE4AI) and to determine which key human-centered AI guidelines should be followed. Our survey is based on mapping existing industrial guidelines, best practices, and efforts in the literature. [Results] We surveyed 29 professionals and found most participants agreed that all the human-centered aspects we mapped should be addressed in RE. Further, we found that most participants were using UML or Microsoft Office to present requirements. [Conclusion] We identify that most of the tools currently used are not equipped to manage AI-based software, and the use of UML and Office may pose issues to the quality of requirements captured for AI. Also, all human-centered practices mapped from the guidelines should be included in RE.
2208.10922
Dongchan Min
Dongchan Min, Minyoung Song, Eunji Ko, Sung Ju Hwang
StyleTalker: One-shot Style-based Audio-driven Talking Head Video Generation
null
null
null
null
cs.CV cs.LG eess.AS eess.IV
http://creativecommons.org/licenses/by/4.0/
We propose StyleTalker, a novel audio-driven talking head generation model that can synthesize a video of a talking person from a single reference image with accurately audio-synced lip shapes, realistic head poses, and eye blinks. Specifically, by leveraging a pretrained image generator and an image encoder, we estimate the latent codes of the talking head video that faithfully reflects the given audio. This is made possible with several newly devised components: 1) A contrastive lip-sync discriminator for accurate lip synchronization, 2) A conditional sequential variational autoencoder that learns the latent motion space disentangled from the lip movements, such that we can independently manipulate the motions and lip movements while preserving the identity. 3) An auto-regressive prior augmented with normalizing flow to learn a complex audio-to-motion multi-modal latent space. Equipped with these components, StyleTalker can generate talking head videos not only in a motion-controllable way when another motion source video is given but also in a completely audio-driven manner by inferring realistic motions from the input audio. Through extensive experiments and user studies, we show that our model is able to synthesize talking head videos with impressive perceptual quality which are accurately lip-synced with the input audios, largely outperforming state-of-the-art baselines.
[ { "created": "Tue, 23 Aug 2022 12:49:01 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2024 08:48:04 GMT", "version": "v2" } ]
2024-03-18
[ [ "Min", "Dongchan", "" ], [ "Song", "Minyoung", "" ], [ "Ko", "Eunji", "" ], [ "Hwang", "Sung Ju", "" ] ]
We propose StyleTalker, a novel audio-driven talking head generation model that can synthesize a video of a talking person from a single reference image with accurately audio-synced lip shapes, realistic head poses, and eye blinks. Specifically, by leveraging a pretrained image generator and an image encoder, we estimate the latent codes of the talking head video that faithfully reflect the given audio. This is made possible with several newly devised components: 1) A contrastive lip-sync discriminator for accurate lip synchronization, 2) A conditional sequential variational autoencoder that learns the latent motion space disentangled from the lip movements, such that we can independently manipulate the motions and lip movements while preserving the identity, and 3) An auto-regressive prior augmented with normalizing flow to learn a complex audio-to-motion multi-modal latent space. Equipped with these components, StyleTalker can generate talking head videos not only in a motion-controllable way when another motion source video is given but also in a completely audio-driven manner by inferring realistic motions from the input audio. Through extensive experiments and user studies, we show that our model is able to synthesize talking head videos with impressive perceptual quality which are accurately lip-synced with the input audios, largely outperforming state-of-the-art baselines.
1602.02685
Cristobal Esteban
Crist\'obal Esteban, Oliver Staeck, Yinchong Yang and Volker Tresp
Predicting Clinical Events by Combining Static and Dynamic Information Using Recurrent Neural Networks
null
null
null
null
cs.LG cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In clinical data sets we often find static information (e.g. patient gender, blood type, etc.) combined with sequences of data that are recorded during multiple hospital visits (e.g. medications prescribed, tests performed, etc.). Recurrent Neural Networks (RNNs) have proven to be very successful for modelling sequences of data in many areas of Machine Learning. In this work we present an approach based on RNNs, specifically designed for the clinical domain, that combines static and dynamic information in order to predict future events. We work with a database collected in the Charit\'{e} Hospital in Berlin that contains complete information concerning patients that underwent a kidney transplantation. After the transplantation three main endpoints can occur: rejection of the kidney, loss of the kidney and death of the patient. Our goal is to predict, based on information recorded in the Electronic Health Record of each patient, whether any of those endpoints will occur within the next six or twelve months after each visit to the clinic. We compared different types of RNNs that we developed for this work, with a model based on a Feedforward Neural Network and a Logistic Regression model. We found that the RNN that we developed based on Gated Recurrent Units provides the best performance for this task. We also used the same models for a second task, i.e., next event prediction, and found that here the model based on a Feedforward Neural Network outperformed the other models. Our hypothesis is that long-term dependencies are not as relevant in this task.
[ { "created": "Mon, 8 Feb 2016 18:30:58 GMT", "version": "v1" }, { "created": "Thu, 17 Nov 2016 11:52:19 GMT", "version": "v2" } ]
2016-11-18
[ [ "Esteban", "Cristóbal", "" ], [ "Staeck", "Oliver", "" ], [ "Yang", "Yinchong", "" ], [ "Tresp", "Volker", "" ] ]
In clinical data sets we often find static information (e.g. patient gender, blood type, etc.) combined with sequences of data that are recorded during multiple hospital visits (e.g. medications prescribed, tests performed, etc.). Recurrent Neural Networks (RNNs) have proven to be very successful for modelling sequences of data in many areas of Machine Learning. In this work we present an approach based on RNNs, specifically designed for the clinical domain, that combines static and dynamic information in order to predict future events. We work with a database collected in the Charit\'{e} Hospital in Berlin that contains complete information concerning patients that underwent a kidney transplantation. After the transplantation three main endpoints can occur: rejection of the kidney, loss of the kidney and death of the patient. Our goal is to predict, based on information recorded in the Electronic Health Record of each patient, whether any of those endpoints will occur within the next six or twelve months after each visit to the clinic. We compared different types of RNNs that we developed for this work, with a model based on a Feedforward Neural Network and a Logistic Regression model. We found that the RNN that we developed based on Gated Recurrent Units provides the best performance for this task. We also used the same models for a second task, i.e., next event prediction, and found that here the model based on a Feedforward Neural Network outperformed the other models. Our hypothesis is that long-term dependencies are not as relevant in this task.
2310.02635
Weirui Ye
Weirui Ye, Yunsheng Zhang, Mengchen Wang, Shengjie Wang, Xianfan Gu, Pieter Abbeel, Yang Gao
Foundation Reinforcement Learning: towards Embodied Generalist Agents with Foundation Prior Assistance
null
null
null
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, people have shown that large-scale pre-training from internet-scale data is the key to building generalist models, as witnessed in NLP. To build embodied generalist agents, we and many other researchers hypothesize that such foundation prior is also an indispensable component. However, it is unclear what is the proper concrete form to represent those embodied foundation priors and how they should be used in the downstream task. In this paper, we propose an intuitive and effective set of embodied priors that consist of foundation policy, value, and success reward. The proposed priors are based on the goal-conditioned MDP. To verify their effectiveness, we instantiate an actor-critic method assisted by the priors, called Foundation Actor-Critic (FAC). We name our framework as Foundation Reinforcement Learning (FRL), since it completely relies on embodied foundation priors to explore, learn and reinforce. The benefits of FRL are threefold. (1) Sample efficient. With foundation priors, FAC learns significantly faster than traditional RL. Our evaluation on the Meta-World has proved that FAC can achieve 100% success rates for 7/8 tasks under less than 200k frames, which outperforms the baseline method with careful manual-designed rewards under 1M frames. (2) Robust to noisy priors. Our method tolerates the unavoidable noise in embodied foundation models. We show that FAC works well even under heavy noise or quantization errors. (3) Minimal human intervention: FAC completely learns from the foundation priors, without the need of human-specified dense reward, or providing teleoperated demos. Thus, FAC can be easily scaled up. We believe our FRL framework could enable the future robot to autonomously explore and learn without human intervention in the physical world. In summary, our proposed FRL is a novel and powerful learning paradigm, towards achieving embodied generalist agents.
[ { "created": "Wed, 4 Oct 2023 07:56:42 GMT", "version": "v1" }, { "created": "Tue, 10 Oct 2023 04:13:20 GMT", "version": "v2" } ]
2023-10-11
[ [ "Ye", "Weirui", "" ], [ "Zhang", "Yunsheng", "" ], [ "Wang", "Mengchen", "" ], [ "Wang", "Shengjie", "" ], [ "Gu", "Xianfan", "" ], [ "Abbeel", "Pieter", "" ], [ "Gao", "Yang", "" ] ]
Recently, people have shown that large-scale pre-training from internet-scale data is the key to building generalist models, as witnessed in NLP. To build embodied generalist agents, we and many other researchers hypothesize that such foundation prior is also an indispensable component. However, it is unclear what the proper concrete form is to represent those embodied foundation priors and how they should be used in the downstream task. In this paper, we propose an intuitive and effective set of embodied priors that consist of foundation policy, value, and success reward. The proposed priors are based on the goal-conditioned MDP. To verify their effectiveness, we instantiate an actor-critic method assisted by the priors, called Foundation Actor-Critic (FAC). We name our framework Foundation Reinforcement Learning (FRL), since it completely relies on embodied foundation priors to explore, learn and reinforce. The benefits of FRL are threefold. (1) Sample efficient. With foundation priors, FAC learns significantly faster than traditional RL. Our evaluation on the Meta-World has proved that FAC can achieve 100% success rates for 7/8 tasks under less than 200k frames, which outperforms the baseline method with carefully manually designed rewards under 1M frames. (2) Robust to noisy priors. Our method tolerates the unavoidable noise in embodied foundation models. We show that FAC works well even under heavy noise or quantization errors. (3) Minimal human intervention. FAC completely learns from the foundation priors, without the need of human-specified dense reward, or providing teleoperated demos. Thus, FAC can be easily scaled up. We believe our FRL framework could enable the future robot to autonomously explore and learn without human intervention in the physical world. In summary, our proposed FRL is a novel and powerful learning paradigm, towards achieving embodied generalist agents.
2304.12671
Raquel Blanco
Raquel Blanco, Javier Tuya, Ruben V. Seco
Test adequacy evaluation for the user-database interaction: a specification-based approach
IEEE Fifth International Conference on Software Testing, Verification and Validation (ICST 2012), Montreal, QC, Canada, 2012
null
10.1109/ICST.2012.87
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Testing a database application is a challenging process where both the database and the user interaction have to be considered in the design of test cases. This paper describes a specification-based approach to guide the design of test inputs (both the test database and the user inputs) for a database application and to automatically evaluate the test adequacy. First, the system specification of the application is modelled: (1) the structure of the database and the user interface are represented in a single model, called the Integrated Data Model (IDM), (2) the functional requirements are expressed as a set of business rules, written in terms of the IDM. Then, an MCDC-based criterion is applied over the business rules to automatically derive the situations of interest to be tested (test requirements), which guide the design of the test inputs. Finally, the adequacy of these test inputs is automatically evaluated to determine whether the test requirements are covered. The approach has been applied to the TPC-C benchmark. The results show that it allows designing test cases that detect interesting faults located in the procedural code of the implementation.
[ { "created": "Tue, 25 Apr 2023 09:20:22 GMT", "version": "v1" } ]
2023-04-26
[ [ "Blanco", "Raquel", "" ], [ "Tuya", "Javier", "" ], [ "Seco", "Ruben V.", "" ] ]
Testing a database application is a challenging process where both the database and the user interaction have to be considered in the design of test cases. This paper describes a specification-based approach to guide the design of test inputs (both the test database and the user inputs) for a database application and to automatically evaluate the test adequacy. First, the system specification of the application is modelled: (1) the structure of the database and the user interface are represented in a single model, called the Integrated Data Model (IDM), (2) the functional requirements are expressed as a set of business rules, written in terms of the IDM. Then, an MCDC-based criterion is applied over the business rules to automatically derive the situations of interest to be tested (test requirements), which guide the design of the test inputs. Finally, the adequacy of these test inputs is automatically evaluated to determine whether the test requirements are covered. The approach has been applied to the TPC-C benchmark. The results show that it allows designing test cases that detect interesting faults located in the procedural code of the implementation.
2205.05626
Mohammad Dehghani Soltani
Mohammad Dehghani Soltani, Hossein Kazemi, Elham Sarbazi, Taisir E. H. El-Gorashi, Jaafar M. H. Elmirghani, Richard V. Penty, Ian H. White, Harald Haas and Majid Safari
High-Speed Imaging Receiver Design for 6G Optical Wireless Communications: A Rate-FOV Trade-Off
30 pages, 15 Figures and 6 Tables
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
The design of a compact high-speed and wide field of view (FOV) receiver is challenging due to the presence of two well-known trade-offs. The first one is the area-bandwidth trade-off of photodetectors (PDs) and the second one is the gain-FOV trade-off due to the use of optics. The combined effects of these two trade-offs imply that the achievable data rate of an imaging optical receiver is limited by its FOV, i.e., a rate-FOV trade-off. To control the area-bandwidth trade-off, an array of small PDs can be used instead of a single PD. Moreover, in practice, a large-area lens is required to ensure sufficient power collection, which in turn limits the receiver FOV (i.e., gain-FOV trade-off). We propose an imaging receiver design in the form of an array of arrays. To achieve a reasonable receiver FOV, we use an individual focusing lens for each PD array rather than a single collection lens for the whole receiver. The proposed array of arrays structure provides an effective method to control both the gain-FOV trade-off (via an array of lenses) and the area-bandwidth trade-off (via arrays of PDs). We first derive a tractable analytical model for the SNR of an array of PDs where maximum ratio combining is employed. Then, we extend the model to the proposed array of arrays structure, and the accuracy of the analytical model is verified based on several Optic Studio-based simulations. Next, we formulate an optimization problem to maximize the achievable data rate of the imaging receiver subject to a minimum required FOV. The optimization problem is solved for two commonly used modulation techniques, namely, OOK and direct current biased optical orthogonal frequency division multiplexing with variable rate quadrature amplitude modulation. It is demonstrated that a data rate of ~ 24 Gbps with a FOV of 15 is achievable using OOK with a total receiver size of 2 cm by 2 cm.
[ { "created": "Wed, 11 May 2022 16:51:19 GMT", "version": "v1" } ]
2022-05-12
[ [ "Soltani", "Mohammad Dehghani", "" ], [ "Kazemi", "Hossein", "" ], [ "Sarbazi", "Elham", "" ], [ "El-Gorashi", "Taisir E. H.", "" ], [ "Elmirghani", "Jaafar M. H.", "" ], [ "Penty", "Richard V.", "" ], [ "White", "Ian H.", "" ], [ "Haas", "Harald", "" ], [ "Safari", "Majid", "" ] ]
The design of a compact high-speed and wide field of view (FOV) receiver is challenging due to the presence of two well-known trade-offs. The first one is the area-bandwidth trade-off of photodetectors (PDs) and the second one is the gain-FOV trade-off due to the use of optics. The combined effects of these two trade-offs imply that the achievable data rate of an imaging optical receiver is limited by its FOV, i.e., a rate-FOV trade-off. To control the area-bandwidth trade-off, an array of small PDs can be used instead of a single PD. Moreover, in practice, a large-area lens is required to ensure sufficient power collection, which in turn limits the receiver FOV (i.e., gain-FOV trade-off). We propose an imaging receiver design in the form of an array of arrays. To achieve a reasonable receiver FOV, we use an individual focusing lens for each PD array rather than a single collection lens for the whole receiver. The proposed array of arrays structure provides an effective method to control both the gain-FOV trade-off (via an array of lenses) and the area-bandwidth trade-off (via arrays of PDs). We first derive a tractable analytical model for the SNR of an array of PDs where maximum ratio combining is employed. Then, we extend the model to the proposed array of arrays structure, and the accuracy of the analytical model is verified based on several Optic Studio-based simulations. Next, we formulate an optimization problem to maximize the achievable data rate of the imaging receiver subject to a minimum required FOV. The optimization problem is solved for two commonly used modulation techniques, namely, OOK and direct current biased optical orthogonal frequency division multiplexing with variable rate quadrature amplitude modulation. It is demonstrated that a data rate of ~ 24 Gbps with a FOV of 15 is achievable using OOK with a total receiver size of 2 cm by 2 cm.
2106.04811
Jem Guhit
Jem Guhit, Edward Colone, Shawn McKee, Kris Steinhoff, and Katarina Thomas
Benchmarking NetBASILISK: a Network Security Project for Science
12 pages, 4 figures, presented at vCHEP '21 Conference
null
10.1051/epjconf/202125102068
null
cs.DC cs.CR cs.DB cs.NI
http://creativecommons.org/licenses/by/4.0/
Infrastructures supporting distributed scientific collaborations must balance competing goals: providing high-performance access to resources while simultaneously securing the infrastructure against security threats. The NetBASILISK project attempts to improve the security of such infrastructures without adversely impacting their performance. This paper presents our work to create a benchmark and monitoring infrastructure that allows us to test for any degradation in transferring data into a NetBASILISK-protected site.
[ { "created": "Wed, 9 Jun 2021 05:08:26 GMT", "version": "v1" } ]
2021-09-08
[ [ "Guhit", "Jem", "" ], [ "Colone", "Edward", "" ], [ "McKee", "Shawn", "" ], [ "Steinhoff", "Kris", "" ], [ "Thomas", "Katarina", "" ] ]
Infrastructures supporting distributed scientific collaborations must balance competing goals: providing high-performance access to resources while simultaneously securing the infrastructure against security threats. The NetBASILISK project attempts to improve the security of such infrastructures without adversely impacting their performance. This paper presents our work to create a benchmark and monitoring infrastructure that allows us to test for any degradation in transferring data into a NetBASILISK-protected site.
1805.07885
Araz Taeihagh
Yanwei Li, Araz Taeihagh, Martin de Jong
The Governance of Risks in Ridesharing: A Revelatory Case from Singapore
null
Energies 11, no. 5: 1277 (2018)
10.3390/en11051277
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently we have witnessed the worldwide adoption of many different types of innovative technologies, such as crowdsourcing, ridesharing, open and big data, aiming at delivering public services more efficiently and effectively. Among them, ridesharing has received substantial attention from decision-makers around the world. Because of the multitude of currently understood or potentially unknown risks associated with ridesharing (unemployment, insurance, information privacy, and environmental risk), governments in different countries apply different strategies to address such risks. Some governments prohibit the adoption of ridesharing altogether, while other governments promote it. In this article, we address the question of how risks involved in ridesharing are governed over time. We present an in-depth single case study on Singapore and examine how the Singaporean government has addressed risks in ridesharing over time. The Singaporean government has a strong ambition to become an innovation hub, and many innovative technologies have been adopted and promoted to that end. At the same time, decision-makers in Singapore are reputed for their proactive style of social governance. The example of Singapore can be regarded as a revelatory case study, helping us further to explore governance practices in other countries. Keywords: risk; ridesharing; transport; governance; innovative technologies; case study; Singapore
[ { "created": "Mon, 21 May 2018 04:12:01 GMT", "version": "v1" } ]
2018-05-22
[ [ "Li", "Yanwei", "" ], [ "Taeihagh", "Araz", "" ], [ "de Jong", "Martin", "" ] ]
Recently we have witnessed the worldwide adoption of many different types of innovative technologies, such as crowdsourcing, ridesharing, open and big data, aiming at delivering public services more efficiently and effectively. Among them, ridesharing has received substantial attention from decision-makers around the world. Because of the multitude of currently understood or potentially unknown risks associated with ridesharing (unemployment, insurance, information privacy, and environmental risk), governments in different countries apply different strategies to address such risks. Some governments prohibit the adoption of ridesharing altogether, while other governments promote it. In this article, we address the question of how risks involved in ridesharing are governed over time. We present an in-depth single case study on Singapore and examine how the Singaporean government has addressed risks in ridesharing over time. The Singaporean government has a strong ambition to become an innovation hub, and many innovative technologies have been adopted and promoted to that end. At the same time, decision-makers in Singapore are reputed for their proactive style of social governance. The example of Singapore can be regarded as a revelatory case study, helping us further to explore governance practices in other countries. Keywords: risk; ridesharing; transport; governance; innovative technologies; case study; Singapore
2006.01784
Vahid Yazdanpanah
Vahid Yazdanpanah, Devrim Murat Yazan, W. Henk M. Zijm
Coordinating Multiagent Industrial Symbiosis
null
null
null
null
cs.MA cs.AI cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a formal multiagent framework for coordinating a class of collaborative industrial practices called Industrial Symbiotic Networks (ISNs) as cooperative games. The game-theoretic formulation of ISNs enables systematic reasoning about what we call the ISN implementation problem. Specifically, the characteristics of ISNs may lead to the inapplicability of standard fair and stable benefit allocation methods. Inspired by realistic ISN scenarios and following the literature on normative multiagent systems, we consider regulations and normative socio-economic policies as coordination instruments that in combination with ISN games resolve the situation. In this multiagent system, employing Marginal Contribution Nets (MC-Nets) as rule-based cooperative game representations fosters the combination of regulations and ISN games with no loss in expressiveness. We develop algorithmic methods for generating regulations that ensure the implementability of ISNs and, as policy support, present the policy requirements that guarantee the implementability of all the desired ISNs in a balanced-budget way.
[ { "created": "Tue, 2 Jun 2020 17:05:43 GMT", "version": "v1" } ]
2020-06-03
[ [ "Yazdanpanah", "Vahid", "" ], [ "Yazan", "Devrim Murat", "" ], [ "Zijm", "W. Henk M.", "" ] ]
We present a formal multiagent framework for coordinating a class of collaborative industrial practices called Industrial Symbiotic Networks (ISNs) as cooperative games. The game-theoretic formulation of ISNs enables systematic reasoning about what we call the ISN implementation problem. Specifically, the characteristics of ISNs may lead to the inapplicability of standard fair and stable benefit allocation methods. Inspired by realistic ISN scenarios and following the literature on normative multiagent systems, we consider regulations and normative socio-economic policies as coordination instruments that in combination with ISN games resolve the situation. In this multiagent system, employing Marginal Contribution Nets (MC-Nets) as rule-based cooperative game representations fosters the combination of regulations and ISN games with no loss in expressiveness. We develop algorithmic methods for generating regulations that ensure the implementability of ISNs and, as policy support, present the policy requirements that guarantee the implementability of all the desired ISNs in a balanced-budget way.
1004.2425
Vishwambhar Rathi
Vishwambhar Rathi, Erik Aurell, Lars Rasmussen, Mikael Skoglund
Bounds on Thresholds Related to Maximum Satisfiability of Regular Random Formulas
6th International symposium on turbo codes & iterative information processing, 2010
null
null
null
cs.IT cs.CC cs.DM math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the regular balanced model of formula generation in conjunctive normal form (CNF) introduced by Boufkhad, Dubois, Interian, and Selman. We say that a formula is $p$-satisfying if there is a truth assignment satisfying a $1-2^{-k}+p 2^{-k}$ fraction of clauses. Using the first moment method we determine an upper bound on the threshold clause density such that there are no $p$-satisfying assignments with high probability above this upper bound. There are two aspects in deriving the lower bound using the second moment method. The first aspect is, given any $p \in (0,1)$ and $k$, to evaluate the lower bound on the threshold. This evaluation is numerical in nature. The second aspect is to derive the lower bound as a function of $p$ for large enough $k$. We address the first aspect and evaluate the lower bound on the $p$-satisfying threshold using the second moment method. We observe that as $k$ increases the lower bound seems to converge to the asymptotically derived lower bound for the uniform model of formula generation by Achlioptas, Naor, and Peres.
[ { "created": "Wed, 14 Apr 2010 15:46:53 GMT", "version": "v1" } ]
2010-04-15
[ [ "Rathi", "Vishwambhar", "" ], [ "Aurell", "Erik", "" ], [ "Rasmussen", "Lars", "" ], [ "Skoglund", "Mikael", "" ] ]
We consider the regular balanced model of formula generation in conjunctive normal form (CNF) introduced by Boufkhad, Dubois, Interian, and Selman. We say that a formula is $p$-satisfying if there is a truth assignment satisfying a $1-2^{-k}+p 2^{-k}$ fraction of clauses. Using the first moment method we determine an upper bound on the threshold clause density such that there are no $p$-satisfying assignments with high probability above this upper bound. There are two aspects in deriving the lower bound using the second moment method. The first aspect is, given any $p \in (0,1)$ and $k$, to evaluate the lower bound on the threshold. This evaluation is numerical in nature. The second aspect is to derive the lower bound as a function of $p$ for large enough $k$. We address the first aspect and evaluate the lower bound on the $p$-satisfying threshold using the second moment method. We observe that as $k$ increases the lower bound seems to converge to the asymptotically derived lower bound for the uniform model of formula generation by Achlioptas, Naor, and Peres.
2310.03159
Dimitri Bertsekas
Dimitri Bertsekas
New Auction Algorithms for the Assignment Problem and Extensions
null
null
null
null
cs.GT
http://creativecommons.org/licenses/by/4.0/
We consider the classical linear assignment problem, and we introduce new auction algorithms for its optimal and suboptimal solution. The algorithms are founded on duality theory, and are related to ideas of competitive bidding by persons for objects and the attendant market equilibrium, which underlie real-life auction processes. We distinguish between two fundamentally different types of bidding mechanisms: aggressive and cooperative. Mathematically, aggressive bidding relies on a notion of approximate coordinate descent in dual space, an epsilon-complementary slackness condition to regulate the amount of descent approximation, and the idea of epsilon-scaling to resolve efficiently the price wars that occur naturally as multiple bidders compete for a smaller number of valuable objects. Cooperative bidding avoids price wars through detection and cooperative resolution of any competitive impasse that involves a group of persons. We discuss the relations between the aggressive and the cooperative bidding approaches, we derive new algorithms and variations that combine ideas from both of them, and we also make connections with other primal-dual methods, including the Hungarian method. Furthermore, our discussion points the way to algorithmic extensions that apply more broadly to network optimization, including shortest path, max-flow, transportation, and minimum cost flow problems with both linear and convex cost functions.
[ { "created": "Wed, 4 Oct 2023 20:54:41 GMT", "version": "v1" }, { "created": "Sat, 21 Oct 2023 16:17:58 GMT", "version": "v2" } ]
2023-10-24
[ [ "Bertsekas", "Dimitri", "" ] ]
We consider the classical linear assignment problem, and we introduce new auction algorithms for its optimal and suboptimal solution. The algorithms are founded on duality theory, and are related to ideas of competitive bidding by persons for objects and the attendant market equilibrium, which underlie real-life auction processes. We distinguish between two fundamentally different types of bidding mechanisms: aggressive and cooperative. Mathematically, aggressive bidding relies on a notion of approximate coordinate descent in dual space, an epsilon-complementary slackness condition to regulate the amount of descent approximation, and the idea of epsilon-scaling to resolve efficiently the price wars that occur naturally as multiple bidders compete for a smaller number of valuable objects. Cooperative bidding avoids price wars through detection and cooperative resolution of any competitive impasse that involves a group of persons. We discuss the relations between the aggressive and the cooperative bidding approaches, we derive new algorithms and variations that combine ideas from both of them, and we also make connections with other primal-dual methods, including the Hungarian method. Furthermore, our discussion points the way to algorithmic extensions that apply more broadly to network optimization, including shortest path, max-flow, transportation, and minimum cost flow problems with both linear and convex cost functions.
2403.09892
Jonathan Dunn
Jonathan Dunn and Lane Edwards-Brown
Geographically-Informed Language Identification
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper develops an approach to language identification in which the set of languages considered by the model depends on the geographic origin of the text in question. Given that many digital corpora can be geo-referenced at the country level, this paper formulates 16 region-specific models, each of which contains the languages expected to appear in countries within that region. These regional models also each include 31 widely-spoken international languages in order to ensure coverage of these linguae francae regardless of location. An upstream evaluation using traditional language identification testing data shows an improvement in f-score ranging from 1.7 points (Southeast Asia) to as much as 10.4 points (North Africa). A downstream evaluation on social media data shows that this improved performance has a significant impact on the language labels which are applied to large real-world corpora. The result is a highly accurate model that covers 916 languages at a sample size of 50 characters, with performance improved by incorporating geographic information into the model.
[ { "created": "Thu, 14 Mar 2024 21:55:17 GMT", "version": "v1" } ]
2024-03-18
[ [ "Dunn", "Jonathan", "" ], [ "Edwards-Brown", "Lane", "" ] ]
This paper develops an approach to language identification in which the set of languages considered by the model depends on the geographic origin of the text in question. Given that many digital corpora can be geo-referenced at the country level, this paper formulates 16 region-specific models, each of which contains the languages expected to appear in countries within that region. These regional models also each include 31 widely-spoken international languages in order to ensure coverage of these linguae francae regardless of location. An upstream evaluation using traditional language identification testing data shows an improvement in f-score ranging from 1.7 points (Southeast Asia) to as much as 10.4 points (North Africa). A downstream evaluation on social media data shows that this improved performance has a significant impact on the language labels which are applied to large real-world corpora. The result is a highly accurate model that covers 916 languages at a sample size of 50 characters, with performance improved by incorporating geographic information into the model.
2402.05954
Sijun Xia
Jianming Lv, Sijun Xia, Depin Liang, Wei Chen
EasyFS: an Efficient Model-free Feature Selection Framework via Elastic Transformation of Features
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional model-free feature selection methods treat each feature independently while disregarding the interrelationships among features, which leads to relatively poor performance compared with the model-aware methods. To address this challenge, we propose an efficient model-free feature selection framework via elastic expansion and compression of the features, namely EasyFS, to achieve better performance than state-of-the-art model-aware methods while sharing the efficiency and flexibility of the existing model-free methods. In particular, EasyFS expands the feature space by using the random non-linear projection network to achieve the non-linear combinations of the original features, so as to model the interrelationships among the features and discover the most correlated features. Meanwhile, a novel redundancy measurement based on the change of coding rate is proposed for efficient filtering of redundant features. Comprehensive experiments on 21 different datasets show that EasyFS outperforms state-of-the-art methods by up to 10.9\% in the regression tasks and 5.7\% in the classification tasks while saving more than 94\% of the time.
[ { "created": "Sun, 4 Feb 2024 09:25:07 GMT", "version": "v1" } ]
2024-02-12
[ [ "Lv", "Jianming", "" ], [ "Xia", "Sijun", "" ], [ "Liang", "Depin", "" ], [ "Chen", "Wei", "" ] ]
Traditional model-free feature selection methods treat each feature independently while disregarding the interrelationships among features, which leads to relatively poor performance compared with the model-aware methods. To address this challenge, we propose an efficient model-free feature selection framework via elastic expansion and compression of the features, namely EasyFS, to achieve better performance than state-of-the-art model-aware methods while sharing the efficiency and flexibility of the existing model-free methods. In particular, EasyFS expands the feature space by using the random non-linear projection network to achieve the non-linear combinations of the original features, so as to model the interrelationships among the features and discover the most correlated features. Meanwhile, a novel redundancy measurement based on the change of coding rate is proposed for efficient filtering of redundant features. Comprehensive experiments on 21 different datasets show that EasyFS outperforms state-of-the-art methods by up to 10.9\% in the regression tasks and 5.7\% in the classification tasks while saving more than 94\% of the time.
2003.00637
Shunping Ji
Jin Liu and Shunping Ji
A Novel Recurrent Encoder-Decoder Structure for Large-Scale Multi-view Stereo Reconstruction from An Open Aerial Dataset
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A great deal of research has demonstrated recently that multi-view stereo (MVS) matching can be solved with deep learning methods. However, these efforts were focused on close-range objects, and only very few of the deep learning-based methods were specifically designed for large-scale 3D urban reconstruction due to the lack of multi-view aerial image benchmarks. In this paper, we present a synthetic aerial dataset, called the WHU dataset, created for MVS tasks, which, to our knowledge, is the first large-scale multi-view aerial dataset. It was generated from a highly accurate 3D digital surface model produced from thousands of real aerial images with precise camera parameters. We also introduce in this paper a novel network, called RED-Net, for wide-range depth inference, which we developed from a recurrent encoder-decoder structure to regularize cost maps across depths and a 2D fully convolutional network as framework. RED-Net's low memory requirements and high performance make it suitable for large-scale and highly accurate 3D Earth surface reconstruction. Our experiments confirmed that our method not only exceeded the current state-of-the-art MVS methods by more than 50% in mean absolute error (MAE) with less memory and computational cost, but was also more efficient. It outperformed one of the best commercial software programs based on conventional methods, improving on its efficiency 16 times over. Moreover, we showed that our RED-Net model pre-trained on the synthetic WHU dataset can be efficiently transferred to very different multi-view aerial image datasets without any fine-tuning. The dataset is available at http://gpcv.whu.edu.cn/data.
[ { "created": "Mon, 2 Mar 2020 03:04:13 GMT", "version": "v1" }, { "created": "Mon, 9 Mar 2020 04:11:01 GMT", "version": "v2" }, { "created": "Mon, 16 Mar 2020 04:27:33 GMT", "version": "v3" } ]
2020-03-17
[ [ "Liu", "Jin", "" ], [ "Ji", "Shunping", "" ] ]
A great deal of research has demonstrated recently that multi-view stereo (MVS) matching can be solved with deep learning methods. However, these efforts were focused on close-range objects, and only very few of the deep learning-based methods were specifically designed for large-scale 3D urban reconstruction due to the lack of multi-view aerial image benchmarks. In this paper, we present a synthetic aerial dataset, called the WHU dataset, created for MVS tasks, which, to our knowledge, is the first large-scale multi-view aerial dataset. It was generated from a highly accurate 3D digital surface model produced from thousands of real aerial images with precise camera parameters. We also introduce in this paper a novel network, called RED-Net, for wide-range depth inference, which we developed from a recurrent encoder-decoder structure to regularize cost maps across depths and a 2D fully convolutional network as framework. RED-Net's low memory requirements and high performance make it suitable for large-scale and highly accurate 3D Earth surface reconstruction. Our experiments confirmed that our method not only exceeded the current state-of-the-art MVS methods by more than 50% in mean absolute error (MAE) with less memory and computational cost, but was also more efficient. It outperformed one of the best commercial software programs based on conventional methods, improving on its efficiency 16 times over. Moreover, we showed that our RED-Net model pre-trained on the synthetic WHU dataset can be efficiently transferred to very different multi-view aerial image datasets without any fine-tuning. The dataset is available at http://gpcv.whu.edu.cn/data.
2312.07925
Weiguang Zhang
Weiguang Zhang, Qiufeng Wang, Kaizhu Huang
Polar-Doc: One-Stage Document Dewarping with Multi-Scope Constraints under Polar Representation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Document dewarping, aiming to eliminate geometric deformation in photographed documents to benefit text recognition, has made great progress in recent years but is still far from being solved. While Cartesian coordinates are typically leveraged by state-of-the-art approaches to learn a group of deformation control points, such a representation is not efficient for the dewarping model to learn the deformation information. In this work, we explore a Polar coordinate representation for each point in document dewarping, namely Polar-Doc. In contrast to most current works, which typically adopt a two-stage pipeline, the Polar representation enables a unified point regression framework for both the segmentation and dewarping networks in one single stage. Such unification makes the whole model more efficient to learn under an end-to-end optimization pipeline, and also obtains a compact representation. Furthermore, we propose a novel multi-scope Polar-Doc-IOU loss to constrain the relationship among control points as a grid-based regularization under the Polar representation. Visual comparisons and quantitative experiments on two benchmarks show that, with much fewer parameters than the other mainstream counterparts, our one-stage model with multi-scope constraints achieves new state-of-the-art performance on both pixel alignment metrics and OCR metrics. Source code will be available at \url{*****}.
[ { "created": "Wed, 13 Dec 2023 06:50:30 GMT", "version": "v1" } ]
2023-12-14
[ [ "Zhang", "Weiguang", "" ], [ "Wang", "Qiufeng", "" ], [ "Huang", "Kaizhu", "" ] ]
Document dewarping, aiming to eliminate geometric deformation in photographed documents to benefit text recognition, has made great progress in recent years but is still far from being solved. While Cartesian coordinates are typically leveraged by state-of-the-art approaches to learn a group of deformation control points, such a representation is not efficient for the dewarping model to learn the deformation information. In this work, we explore a Polar coordinate representation for each point in document dewarping, namely Polar-Doc. In contrast to most current works, which typically adopt a two-stage pipeline, the Polar representation enables a unified point regression framework for both the segmentation and dewarping networks in one single stage. Such unification makes the whole model more efficient to learn under an end-to-end optimization pipeline, and also obtains a compact representation. Furthermore, we propose a novel multi-scope Polar-Doc-IOU loss to constrain the relationship among control points as a grid-based regularization under the Polar representation. Visual comparisons and quantitative experiments on two benchmarks show that, with much fewer parameters than the other mainstream counterparts, our one-stage model with multi-scope constraints achieves new state-of-the-art performance on both pixel alignment metrics and OCR metrics. Source code will be available at \url{*****}.
2203.15496
Gregory Kucherov
\'Eric Fusy, Gregory Kucherov
Phase transition in count approximation by Count-Min sketch with conservative updates
19 pages, 4 figures
null
10.1007/978-3-031-30448-4_17
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Count-Min sketch is a hash-based data structure to represent a dynamically changing associative array of counters. Here we analyse the counting version of Count-Min under a stronger update rule known as \textit{conservative update}, assuming a uniform distribution of input keys. We show that the accuracy of the conservative update strategy undergoes a phase transition, depending on the number of distinct keys in the input as a fraction of the size of the Count-Min array. We prove that below the threshold, the relative error is asymptotically $o(1)$ (as opposed to the regular Count-Min strategy), whereas above the threshold, the relative error is $\Theta(1)$. The threshold corresponds to the peelability threshold of random $k$-uniform hypergraphs. We demonstrate that even for a small number of keys, peelability of the underlying hypergraph is a crucial property for ensuring the $o(1)$ error. Finally, we provide experimental evidence that the phase transition does not extend to non-uniform distributions, in particular to the popular Zipf distribution.
[ { "created": "Tue, 29 Mar 2022 12:46:04 GMT", "version": "v1" }, { "created": "Thu, 14 Jul 2022 08:34:32 GMT", "version": "v2" } ]
2023-09-08
[ [ "Fusy", "Éric", "" ], [ "Kucherov", "Gregory", "" ] ]
Count-Min sketch is a hash-based data structure to represent a dynamically changing associative array of counters. Here we analyse the counting version of Count-Min under a stronger update rule known as \textit{conservative update}, assuming a uniform distribution of input keys. We show that the accuracy of the conservative update strategy undergoes a phase transition, depending on the number of distinct keys in the input as a fraction of the size of the Count-Min array. We prove that below the threshold, the relative error is asymptotically $o(1)$ (as opposed to the regular Count-Min strategy), whereas above the threshold, the relative error is $\Theta(1)$. The threshold corresponds to the peelability threshold of random $k$-uniform hypergraphs. We demonstrate that even for a small number of keys, peelability of the underlying hypergraph is a crucial property for ensuring the $o(1)$ error. Finally, we provide experimental evidence that the phase transition does not extend to non-uniform distributions, in particular to the popular Zipf distribution.
1606.02143
Paul Ferrand
Paul Ferrand and Mustapha Amara and Maxime Guillaud and Stefan Valentin
Trends and Challenges in Wireless Channel Modeling for an Evolving Radio Access
5 figures. To appear in IEEE Communication Magazine
null
null
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of 5G, standardization and research are currently defining the next generation of radio access. Given the stringent constraints imposed by future standards, disruptive technologies such as Massive MIMO and mmWave are being proposed. At the heart of this process are wireless channel models that now need to cover a massive increase in design parameters, a large variety of frequency bands, and heterogeneous deployments. This tutorial describes how channel models address this new level of complexity and which tools the community prepares to efficiently but accurately capture the upcoming changes in radio access design. We analyze the main drivers behind these new modeling tools, the challenges they pose, and survey the current approaches to overcome them.
[ { "created": "Tue, 7 Jun 2016 13:59:56 GMT", "version": "v1" } ]
2016-06-08
[ [ "Ferrand", "Paul", "" ], [ "Amara", "Mustapha", "" ], [ "Guillaud", "Maxime", "" ], [ "Valentin", "Stefan", "" ] ]
With the advent of 5G, standardization and research are currently defining the next generation of radio access. Given the stringent constraints imposed by future standards, disruptive technologies such as Massive MIMO and mmWave are being proposed. At the heart of this process are wireless channel models that now need to cover a massive increase in design parameters, a large variety of frequency bands, and heterogeneous deployments. This tutorial describes how channel models address this new level of complexity and which tools the community prepares to efficiently but accurately capture the upcoming changes in radio access design. We analyze the main drivers behind these new modeling tools, the challenges they pose, and survey the current approaches to overcome them.
2212.04441
Geoffroi C\^ot\'e
Geoffroi C\^ot\'e, Fahim Mannan, Simon Thibault, Jean-Fran\c{c}ois Lalonde, Felix Heide
The Differentiable Lens: Compound Lens Search over Glass Surfaces and Materials for Object Detection
15 pages, 12 figures, to appear in CVPR 2023 proceedings, updated to reflect camera-ready submission
null
null
null
cs.CV physics.optics
http://creativecommons.org/licenses/by-sa/4.0/
Most camera lens systems are designed in isolation, separately from downstream computer vision methods. Recently, joint optimization approaches that design lenses alongside other components of the image acquisition and processing pipeline -- notably, downstream neural networks -- have achieved improved imaging quality or better performance on vision tasks. However, these existing methods optimize only a subset of lens parameters and cannot optimize glass materials given their categorical nature. In this work, we develop a differentiable spherical lens simulation model that accurately captures geometrical aberrations. We propose an optimization strategy to address the challenges of lens design -- notorious for non-convex loss function landscapes and many manufacturing constraints -- that are exacerbated in joint optimization tasks. Specifically, we introduce quantized continuous glass variables to facilitate the optimization and selection of glass materials in an end-to-end design context, and couple this with carefully designed constraints to support manufacturability. In automotive object detection, we report improved detection performance over existing designs even when simplifying designs to two- or three-element lenses, despite significantly degrading the image quality.
[ { "created": "Thu, 8 Dec 2022 18:01:17 GMT", "version": "v1" }, { "created": "Mon, 27 Mar 2023 18:16:47 GMT", "version": "v2" } ]
2023-03-29
[ [ "Côté", "Geoffroi", "" ], [ "Mannan", "Fahim", "" ], [ "Thibault", "Simon", "" ], [ "Lalonde", "Jean-François", "" ], [ "Heide", "Felix", "" ] ]
Most camera lens systems are designed in isolation, separately from downstream computer vision methods. Recently, joint optimization approaches that design lenses alongside other components of the image acquisition and processing pipeline -- notably, downstream neural networks -- have achieved improved imaging quality or better performance on vision tasks. However, these existing methods optimize only a subset of lens parameters and cannot optimize glass materials given their categorical nature. In this work, we develop a differentiable spherical lens simulation model that accurately captures geometrical aberrations. We propose an optimization strategy to address the challenges of lens design -- notorious for non-convex loss function landscapes and many manufacturing constraints -- that are exacerbated in joint optimization tasks. Specifically, we introduce quantized continuous glass variables to facilitate the optimization and selection of glass materials in an end-to-end design context, and couple this with carefully designed constraints to support manufacturability. In automotive object detection, we report improved detection performance over existing designs even when simplifying designs to two- or three-element lenses, despite significantly degrading the image quality.
2010.09256
Michel Grabisch
Michel Grabisch, Agnieszka Rusinowska, Xavier Venel
Diffusion in large networks
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the phenomenon of diffusion in a countably infinite society of individuals interacting with their neighbors in a network. At a given time, each individual is either active or inactive. The diffusion is driven by two characteristics: the network structure and the diffusion mechanism represented by an aggregation function. We distinguish between two diffusion mechanisms (probabilistic, deterministic) and focus on two types of aggregation functions (strict, Boolean). Under strict aggregation functions, polarization of the society cannot happen, and its state evolves towards a mixture of infinitely many active and infinitely many inactive agents, or towards a homogeneous society. Under Boolean aggregation functions, the diffusion process becomes deterministic and the contagion model of Morris (2000) becomes a particular case of our framework. Polarization can then happen. Our dynamics also allows for cycles in both cases. The network structure is not relevant for these questions, but is important for establishing irreducibility, at the price of a richness assumption: the network should contain infinitely many complex stars and have enough space for storing local configurations. Our model can be given a game-theoretic interpretation via a local coordination game, where each player would apply a best-response strategy in a random neighborhood.
[ { "created": "Mon, 19 Oct 2020 06:56:18 GMT", "version": "v1" }, { "created": "Thu, 12 Aug 2021 12:53:49 GMT", "version": "v2" } ]
2021-08-13
[ [ "Grabisch", "Michel", "" ], [ "Rusinowska", "Agnieszka", "" ], [ "Venel", "Xavier", "" ] ]
We investigate the phenomenon of diffusion in a countably infinite society of individuals interacting with their neighbors in a network. At a given time, each individual is either active or inactive. The diffusion is driven by two characteristics: the network structure and the diffusion mechanism represented by an aggregation function. We distinguish between two diffusion mechanisms (probabilistic, deterministic) and focus on two types of aggregation functions (strict, Boolean). Under strict aggregation functions, polarization of the society cannot happen, and its state evolves towards a mixture of infinitely many active and infinitely many inactive agents, or towards a homogeneous society. Under Boolean aggregation functions, the diffusion process becomes deterministic and the contagion model of Morris (2000) becomes a particular case of our framework. Polarization can then happen. Our dynamics also allows for cycles in both cases. The network structure is not relevant for these questions, but is important for establishing irreducibility, at the price of a richness assumption: the network should contain infinitely many complex stars and have enough space for storing local configurations. Our model can be given a game-theoretic interpretation via a local coordination game, where each player would apply a best-response strategy in a random neighborhood.
2402.15439
Eike Schneiders
Steve Benford, Clara Mancini, Alan Chamberlain, Eike Schneiders, Simon Castle-Green, Joel Fischer, Ayse Kucukyilmaz, Guido Salimbeni, Victor Ngo, Pepita Barnard, Matt Adams, Nick Tandavanitj, Ju Row Farr
Charting Ethical Tensions in Multispecies Technology Research through Beneficiary-Epistemology Space
Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), May 11--16, 2024, Honolulu, HI, USA
null
10.1145/3613904.3641994
null
cs.HC cs.RO
http://creativecommons.org/licenses/by/4.0/
While ethical challenges are widely discussed in HCI, far less is reported about the ethical processes that researchers routinely navigate. We reflect on a multispecies project that negotiated an especially complex ethical approval process. Cat Royale was an artist-led exploration of creating an artwork to engage audiences in exploring trust in autonomous systems. The artwork took the form of a robot that played with three cats. Gaining ethical approval required an extensive dialogue with three Institutional Review Boards (IRBs) covering computer science, veterinary science and animal welfare, raising tensions around the welfare of the cats, perceived benefits and appropriate methods, and reputational risk to the University. To reveal these tensions we introduce beneficiary-epistemology space, which makes explicit who benefits from research (humans or animals) and the underlying epistemologies. Positioning projects and IRBs in this space can help clarify tensions and highlight opportunities to recruit additional expertise.
[ { "created": "Fri, 23 Feb 2024 16:57:39 GMT", "version": "v1" } ]
2024-02-26
[ [ "Benford", "Steve", "" ], [ "Mancini", "Clara", "" ], [ "Chamberlain", "Alan", "" ], [ "Schneiders", "Eike", "" ], [ "Castle-Green", "Simon", "" ], [ "Fischer", "Joel", "" ], [ "Kucukyilmaz", "Ayse", "" ], [ "Salimbeni", "Guido", "" ], [ "Ngo", "Victor", "" ], [ "Barnard", "Pepita", "" ], [ "Adams", "Matt", "" ], [ "Tandavanitj", "Nick", "" ], [ "Farr", "Ju Row", "" ] ]
While ethical challenges are widely discussed in HCI, far less is reported about the ethical processes that researchers routinely navigate. We reflect on a multispecies project that negotiated an especially complex ethical approval process. Cat Royale was an artist-led exploration of creating an artwork to engage audiences in exploring trust in autonomous systems. The artwork took the form of a robot that played with three cats. Gaining ethical approval required an extensive dialogue with three Institutional Review Boards (IRBs) covering computer science, veterinary science and animal welfare, raising tensions around the welfare of the cats, perceived benefits and appropriate methods, and reputational risk to the University. To reveal these tensions we introduce beneficiary-epistemology space, which makes explicit who benefits from research (humans or animals) and the underlying epistemologies. Positioning projects and IRBs in this space can help clarify tensions and highlight opportunities to recruit additional expertise.
2103.07234
Behrooz Makki
Behrooz Makki, Mohamed-Slim Alouini
Coded-Caching using Adaptive Transmission
under review in IEEE Wireless Communications Letters
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Coded-caching is a promising technique to reduce the peak rate requirement of backhaul links during high traffic periods. In this letter, we study the effect of adaptive transmission on the performance of coded-caching based networks. Particularly, concentrating on the reduction of backhaul peak load during high traffic periods, we develop adaptive rate and power allocation schemes maximizing the network successful transmission probability, defined as the probability that all cache nodes decode their intended signals correctly. Moreover, we study the effect of different message decoding and buffering schemes on the system performance. As we show, the performance of coded-caching networks is considerably affected by rate/power allocation as well as by the message decoding/buffering schemes.
[ { "created": "Fri, 12 Mar 2021 12:23:17 GMT", "version": "v1" } ]
2021-03-15
[ [ "Makki", "Behrooz", "" ], [ "Alouini", "Mohamed-Slim", "" ] ]
Coded-caching is a promising technique to reduce the peak rate requirement of backhaul links during high traffic periods. In this letter, we study the effect of adaptive transmission on the performance of coded-caching based networks. Particularly, concentrating on the reduction of backhaul peak load during high traffic periods, we develop adaptive rate and power allocation schemes maximizing the network successful transmission probability, defined as the probability that all cache nodes decode their intended signals correctly. Moreover, we study the effect of different message decoding and buffering schemes on the system performance. As we show, the performance of coded-caching networks is considerably affected by rate/power allocation as well as by the message decoding/buffering schemes.
2405.00672
Julia Guerrero-Viu
Julia Guerrero-Viu, Milos Hasan, Arthur Roullier, Midhun Harikumar, Yiwei Hu, Paul Guerrero, Diego Gutierrez, Belen Masia, Valentin Deschaintre
TexSliders: Diffusion-Based Texture Editing in CLIP Space
SIGGRAPH 2024 Conference Proceedings
null
10.1145/3641519.3657444
null
cs.GR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative models have enabled intuitive image creation and manipulation using natural language. In particular, diffusion models have recently shown remarkable results for natural image editing. In this work, we propose to apply diffusion techniques to edit textures, a specific class of images that are an essential part of 3D content creation pipelines. We analyze existing editing methods and show that they are not directly applicable to textures, since their common underlying approach, manipulating attention maps, is unsuitable for the texture domain. To address this, we propose a novel approach that instead manipulates CLIP image embeddings to condition the diffusion generation. We define editing directions using simple text prompts (e.g., "aged wood" to "new wood") and map these to CLIP image embedding space using a texture prior, with a sampling-based approach that gives us identity-preserving directions in CLIP space. To further improve identity preservation, we project these directions to a CLIP subspace that minimizes identity variations resulting from entangled texture attributes. Our editing pipeline facilitates the creation of arbitrary sliders using natural language prompts only, with no ground-truth annotated data necessary.
[ { "created": "Wed, 1 May 2024 17:57:21 GMT", "version": "v1" } ]
2024-05-02
[ [ "Guerrero-Viu", "Julia", "" ], [ "Hasan", "Milos", "" ], [ "Roullier", "Arthur", "" ], [ "Harikumar", "Midhun", "" ], [ "Hu", "Yiwei", "" ], [ "Guerrero", "Paul", "" ], [ "Gutierrez", "Diego", "" ], [ "Masia", "Belen", "" ], [ "Deschaintre", "Valentin", "" ] ]
Generative models have enabled intuitive image creation and manipulation using natural language. In particular, diffusion models have recently shown remarkable results for natural image editing. In this work, we propose to apply diffusion techniques to edit textures, a specific class of images that are an essential part of 3D content creation pipelines. We analyze existing editing methods and show that they are not directly applicable to textures, since their common underlying approach, manipulating attention maps, is unsuitable for the texture domain. To address this, we propose a novel approach that instead manipulates CLIP image embeddings to condition the diffusion generation. We define editing directions using simple text prompts (e.g., "aged wood" to "new wood") and map these to CLIP image embedding space using a texture prior, with a sampling-based approach that gives us identity-preserving directions in CLIP space. To further improve identity preservation, we project these directions to a CLIP subspace that minimizes identity variations resulting from entangled texture attributes. Our editing pipeline facilitates the creation of arbitrary sliders using natural language prompts only, with no ground-truth annotated data necessary.
1001.0920
Ocan Sankur
Claire Mathieu, Ocan Sankur, Warren Schudy
Online Correlation Clustering
12 pages, 1 figure
null
null
null
cs.DS
http://creativecommons.org/licenses/by/3.0/
We study the online clustering problem where data items arrive in an online fashion. The algorithm maintains a clustering of data items into similarity classes. Upon arrival of v, the relation between v and previously arrived items is revealed, so that for each u we are told whether v is similar to u. The algorithm can create a new cluster for v and merge existing clusters. When the objective is to minimize disagreements between the clustering and the input, we prove that a natural greedy algorithm is O(n)-competitive, and that this is optimal. When the objective is to maximize agreements between the clustering and the input, we prove that the greedy algorithm is .5-competitive and that no online algorithm can be better than .834-competitive; moreover, we prove that it is possible to do better than 1/2, by exhibiting a randomized algorithm with competitive ratio .5+c for a small fixed positive constant c.
[ { "created": "Wed, 6 Jan 2010 15:54:38 GMT", "version": "v1" }, { "created": "Wed, 3 Feb 2010 13:23:16 GMT", "version": "v2" } ]
2010-02-03
[ [ "Mathieu", "Claire", "" ], [ "Sankur", "Ocan", "" ], [ "Schudy", "Warren", "" ] ]
We study the online clustering problem where data items arrive in an online fashion. The algorithm maintains a clustering of data items into similarity classes. Upon arrival of v, the relation between v and previously arrived items is revealed, so that for each u we are told whether v is similar to u. The algorithm can create a new cluster for v and merge existing clusters. When the objective is to minimize disagreements between the clustering and the input, we prove that a natural greedy algorithm is O(n)-competitive, and that this is optimal. When the objective is to maximize agreements between the clustering and the input, we prove that the greedy algorithm is .5-competitive and that no online algorithm can be better than .834-competitive; moreover, we prove that it is possible to do better than 1/2, by exhibiting a randomized algorithm with competitive ratio .5+c for a small fixed positive constant c.
1701.01290
Pengqian Yu
Pengqian Yu, William B. Haskell, Huan Xu
Approximate Value Iteration for Risk-aware Markov Decision Processes
null
null
null
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider large-scale Markov decision processes (MDPs) with a risk measure of variability in cost, under the risk-aware MDPs paradigm. Previous studies showed that risk-aware MDPs, based on a minimax approach to handling risk, can be solved using dynamic programming for small to medium sized problems. However, due to the "curse of dimensionality", MDPs that model real-life problems are typically prohibitively large for such approaches. In this paper, we employ an approximate dynamic programming approach, and develop a family of simulation-based algorithms to approximately solve large-scale risk-aware MDPs. In parallel, we develop a unified convergence analysis technique to derive sample complexity bounds for this new family of algorithms.
[ { "created": "Thu, 5 Jan 2017 12:10:26 GMT", "version": "v1" }, { "created": "Thu, 12 Jan 2017 08:56:12 GMT", "version": "v2" }, { "created": "Tue, 16 May 2017 13:51:48 GMT", "version": "v3" } ]
2017-05-17
[ [ "Yu", "Pengqian", "" ], [ "Haskell", "William B.", "" ], [ "Xu", "Huan", "" ] ]
We consider large-scale Markov decision processes (MDPs) with a risk measure of variability in cost, under the risk-aware MDPs paradigm. Previous studies showed that risk-aware MDPs, based on a minimax approach to handling risk, can be solved using dynamic programming for small to medium sized problems. However, due to the "curse of dimensionality", MDPs that model real-life problems are typically prohibitively large for such approaches. In this paper, we employ an approximate dynamic programming approach, and develop a family of simulation-based algorithms to approximately solve large-scale risk-aware MDPs. In parallel, we develop a unified convergence analysis technique to derive sample complexity bounds for this new family of algorithms.
1311.6165
Ying Long
Ying Long and Xingjian Liu
Automated identification and characterization of parcels (AICP) with OpenStreetMap and Points of Interest
26 pages, 6 figures
null
null
null
cs.CY cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given the paucity of urban parcel data in China, this paper proposes a method to automatically identify and characterize parcels (AICP) with OpenStreetMap (OSM) and Points of Interest (POI) data. Parcels are the basic spatial units for fine-scale urban modeling, urban studies, as well as spatial planning. Conventional ways of identifying and characterizing parcels rely on remote sensing and field surveys, which are labor intensive and resource-consuming. Poorly developed digital infrastructure, limited resources, and institutional barriers have all hampered the gathering and application of parcel data in developing countries. Against this backdrop, we employ OSM road networks to identify parcel geometries and POI data to infer parcel characteristics. A vector-based CA model is adopted to select urban parcels. The method is applied to the entire country of China and identifies 82,645 urban parcels in 297 cities. Notwithstanding all the caveats of open and/or crowd-sourced data, our approach could produce a reasonably good approximation of parcels identified by conventional methods, thus having the potential to become a useful supplement.
[ { "created": "Sun, 24 Nov 2013 20:24:41 GMT", "version": "v1" }, { "created": "Wed, 12 Feb 2014 15:01:12 GMT", "version": "v2" }, { "created": "Mon, 30 Mar 2015 12:04:33 GMT", "version": "v3" } ]
2015-03-31
[ [ "Long", "Ying", "" ], [ "Liu", "Xingjian", "" ] ]
Given the paucity of urban parcel data in China, this paper proposes a method to automatically identify and characterize parcels (AICP) with OpenStreetMap (OSM) and Points of Interest (POI) data. Parcels are the basic spatial units for fine-scale urban modeling, urban studies, as well as spatial planning. Conventional ways of identifying and characterizing parcels rely on remote sensing and field surveys, which are labor intensive and resource-consuming. Poorly developed digital infrastructure, limited resources, and institutional barriers have all hampered the gathering and application of parcel data in developing countries. Against this backdrop, we employ OSM road networks to identify parcel geometries and POI data to infer parcel characteristics. A vector-based CA model is adopted to select urban parcels. The method is applied to the entire country of China and identifies 82,645 urban parcels in 297 cities. Notwithstanding all the caveats of open and/or crowd-sourced data, our approach could produce a reasonably good approximation of parcels identified by conventional methods, thus having the potential to become a useful supplement.
1403.3344
Walter Quattrociocchi
Delia Mocanu, Luca Rossi, Qian Zhang, M\`arton Karsai, Walter Quattrociocchi
Collective attention in the age of (mis)information
misinformation, attention patterns, false information, social response
null
null
null
cs.SI cs.CY physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we study, on a sample of 2.3 million individuals, how Facebook users consumed different information at the edge of political discussion and news during the last Italian electoral competition. Pages are categorized, according to their topics and the communities of interest they pertain to, into a) alternative information sources (diffusing topics that are neglected by science and mainstream media); b) online political activism; and c) mainstream media. We show that attention patterns are similar despite the different qualitative nature of the information, meaning that unsubstantiated claims (mainly conspiracy theories) reverberate for as long as other information. Finally, we categorize users according to their interaction patterns among the different topics and measure how a sample of this social ecosystem (1279 users) responded to the injection of 2788 false information posts. Our analysis reveals that users who interact predominantly with alternative information sources (i.e. those more exposed to unsubstantiated claims) are more prone to interact with false claims.
[ { "created": "Thu, 13 Mar 2014 17:57:05 GMT", "version": "v1" } ]
2014-03-14
[ [ "Mocanu", "Delia", "" ], [ "Rossi", "Luca", "" ], [ "Zhang", "Qian", "" ], [ "Karsai", "Màrton", "" ], [ "Quattrociocchi", "Walter", "" ] ]
In this work we study, on a sample of 2.3 million individuals, how Facebook users consumed different information at the edge of political discussion and news during the last Italian electoral competition. Pages are categorized, according to their topics and the communities of interest they pertain to, into a) alternative information sources (diffusing topics that are neglected by science and mainstream media); b) online political activism; and c) mainstream media. We show that attention patterns are similar despite the different qualitative nature of the information, meaning that unsubstantiated claims (mainly conspiracy theories) reverberate for as long as other information. Finally, we categorize users according to their interaction patterns among the different topics and measure how a sample of this social ecosystem (1279 users) responded to the injection of 2788 false information posts. Our analysis reveals that users who interact predominantly with alternative information sources (i.e. those more exposed to unsubstantiated claims) are more prone to interact with false claims.
2401.03707
Geunhyuk Youk
Geunhyuk Youk, Jihyong Oh, Munchurl Kim
FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring
CVPR2024 (camera-ready version). The last two authors are co-corresponding authors. Please visit our project page at https://kaist-viclab.github.io/fmanet-site
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present a joint learning scheme of video super-resolution and deblurring, called VSRDB, to restore clean high-resolution (HR) videos from blurry low-resolution (LR) ones. This joint restoration problem has drawn much less attention compared to single restoration problems. In this paper, we propose a novel flow-guided dynamic filtering (FGDF) and iterative feature refinement with multi-attention (FRMA), which constitutes our VSRDB framework, denoted as FMA-Net. Specifically, our proposed FGDF enables precise estimation of both spatio-temporally-variant degradation and restoration kernels that are aware of motion trajectories through sophisticated motion representation learning. Compared to conventional dynamic filtering, the FGDF enables the FMA-Net to effectively handle large motions in VSRDB. Additionally, the stacked FRMA blocks trained with our novel temporal anchor (TA) loss, which temporally anchors and sharpens features, refine features in a coarse-to-fine manner through iterative updates. Extensive experiments demonstrate the superiority of the proposed FMA-Net over state-of-the-art methods in terms of both quantitative and qualitative quality. Codes and pre-trained models are available at: https://kaist-viclab.github.io/fmanet-site
[ { "created": "Mon, 8 Jan 2024 07:34:43 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2024 00:43:21 GMT", "version": "v2" } ]
2024-03-29
[ [ "Youk", "Geunhyuk", "" ], [ "Oh", "Jihyong", "" ], [ "Kim", "Munchurl", "" ] ]
We present a joint learning scheme of video super-resolution and deblurring, called VSRDB, to restore clean high-resolution (HR) videos from blurry low-resolution (LR) ones. This joint restoration problem has drawn much less attention compared to single restoration problems. In this paper, we propose a novel flow-guided dynamic filtering (FGDF) and iterative feature refinement with multi-attention (FRMA), which constitutes our VSRDB framework, denoted as FMA-Net. Specifically, our proposed FGDF enables precise estimation of both spatio-temporally-variant degradation and restoration kernels that are aware of motion trajectories through sophisticated motion representation learning. Compared to conventional dynamic filtering, the FGDF enables the FMA-Net to effectively handle large motions in VSRDB. Additionally, the stacked FRMA blocks trained with our novel temporal anchor (TA) loss, which temporally anchors and sharpens features, refine features in a coarse-to-fine manner through iterative updates. Extensive experiments demonstrate the superiority of the proposed FMA-Net over state-of-the-art methods in terms of both quantitative and qualitative quality. Codes and pre-trained models are available at: https://kaist-viclab.github.io/fmanet-site