column          dtype           min    max
id              stringlengths   9      10
submitter       stringlengths   1      64
authors         stringlengths   4      20.7k
title           stringlengths   4      246
comments        stringlengths   1      523
journal-ref     stringlengths   4      404
doi             stringlengths   11     153
report-no       stringlengths   2      254
categories      stringlengths   5      98
license         stringclasses   9 values
orig_abstract   stringlengths   14     3.35k
versions        listlengths     1      60
update_date     stringlengths   10     10
authors_parsed  listlengths     1      1.35k
abstract        stringlengths   11     3.34k
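Column statistics like the min/max lengths above can be recomputed directly from the raw metadata. A minimal sketch, assuming the records are stored one JSON object per line with the field names listed above (the inline sample rows here are abbreviated for illustration):

```python
import json

def column_length_stats(lines):
    """Compute (min, max) value lengths per column from JSON-lines records.

    String columns are measured in characters, list columns in elements;
    None values are skipped, mirroring the nullable fields above.
    """
    stats = {}
    for line in lines:
        record = json.loads(line)
        for key, value in record.items():
            if value is None:
                continue
            n = len(value)  # characters for strings, elements for lists
            lo, hi = stats.get(key, (n, n))
            stats[key] = (min(lo, n), max(hi, n))
    return stats

# Example with two tiny records instead of the full dump:
rows = [
    '{"id": "2301.09325", "categories": "cs.IT cs.CR math.IT", "doi": null}',
    '{"id": "1904.04187", "categories": "cs.RO", "doi": null}',
]
print(column_length_stats(rows))
```

Null-valued fields (such as the missing DOIs in the records below) simply do not contribute to the statistics for their column.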
2301.09325
Nhan-Phu Chung
Nhan-Phu Chung, Jaeseong Jeong, Namhun Koo and Soonhak Kwon
cc-differential uniformity, (almost) perfect cc-nonlinearity, and equivalences
18 pages. Comments welcome
null
null
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we introduce the new notions of $cc$-differential uniformity, $cc$-differential spectrum, PccN functions, and APccN functions, and investigate their properties. We also introduce $c$-CCZ equivalence, $c$-EA equivalence, and $c1$-equivalence. We show that $c$-differential uniformity is invariant under $c1$-equivalence, and that $cc$-differential uniformity and the $cc$-differential spectrum are preserved under $c$-CCZ equivalence. We characterize the $cc$-differential uniformity of vectorial Boolean functions in terms of the Walsh transform. We investigate the $cc$-differential uniformity of power functions $F(x)=x^d$. We also give examples showing that $c$-CCZ equivalence is strictly more general than $c$-EA equivalence.
[ { "created": "Mon, 23 Jan 2023 09:01:20 GMT", "version": "v1" } ]
2023-01-24
[ [ "Chung", "Nhan-Phu", "" ], [ "Jeong", "Jaeseong", "" ], [ "Koo", "Namhun", "" ], [ "Kwon", "Soonhak", "" ] ]
1904.04187
Juraj Per\v{s}i\'c
Juraj Per\v{s}i\'c, Luka Petrovi\'c, Ivan Markovi\'c and Ivan Petrovi\'c
Spatio-Temporal Multisensor Calibration Based on Gaussian Processes Moving Object Tracking
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Perception is one of the key abilities of autonomous mobile robotic systems, and it often relies on the fusion of heterogeneous sensors. Although this heterogeneity presents a challenge for sensor calibration, it is also the main prospect for the reliability and robustness of autonomous systems. In this paper, we propose a method for multisensor calibration based on moving-object trajectories estimated with Gaussian processes (GPs), yielding both temporal and extrinsic parameters. The appealing properties of the proposed temporal calibration method are: coordinate frame invariance, thus avoiding prior extrinsic calibration; theoretically grounded batch state estimation and interpolation using GPs; computational efficiency with O(n) complexity; leveraging data already available on autonomous robot platforms; and an end result enabling 3D point-to-point extrinsic multisensor calibration. The proposed method is validated both in simulations and in real-world experiments. For the real-world experiments we evaluated the method on two multisensor systems: an externally triggered stereo camera, thus having temporal ground truth readily available, and a heterogeneous combination of a camera and a motion capture system. The results show that the estimated time delays are accurate up to a fraction of the fastest sensor's sampling time.
[ { "created": "Mon, 8 Apr 2019 16:53:44 GMT", "version": "v1" } ]
2019-04-09
[ [ "Peršić", "Juraj", "" ], [ "Petrović", "Luka", "" ], [ "Marković", "Ivan", "" ], [ "Petrović", "Ivan", "" ] ]
1807.07389
Felix Diaz Hermida
F. D\'iaz-Hermida, Juan. C. Vidal
Fuzzy quantification for linguistic data analysis and data mining
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fuzzy quantification is a subtopic of fuzzy logic which deals with the modelling of the quantified expressions we find in natural language. Fuzzy quantifiers have been successfully applied in several fields such as fuzzy control, fuzzy databases, information retrieval, and natural language generation. Their ability to model and evaluate linguistic expressions in a mathematical way makes fuzzy quantifiers very powerful for data analytics and data mining applications. In this paper we give a general overview of the main applications of fuzzy quantifiers in this field, as well as some ideas for using them in new application contexts.
[ { "created": "Thu, 19 Jul 2018 13:22:01 GMT", "version": "v1" } ]
2018-07-20
[ [ "Díaz-Hermida", "F.", "" ], [ "Vidal", "Juan. C.", "" ] ]
2401.04039
Nabajeet Barman
Nabajeet Barman, Maria G. Martini and Yuriy Reznik
Bj{\o}ntegaard Delta (BD): A Tutorial Overview of the Metric, Evolution, Challenges, and Recommendations
null
null
null
null
cs.MM cs.IT eess.IV math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
The Bj{\o}ntegaard Delta (BD) method proposed in 2001 has become a popular tool for comparing video codec compression efficiency. It was initially proposed to compute bitrate and quality differences between two rate-distortion (RD) curves using PSNR as the distortion metric. Over the years, many works have calculated and reported BD results using other objective quality metrics such as SSIM and VMAF and, in some cases, even subjective ratings (mean opinion scores). However, the lack of consolidated literature explaining the metric and its evolution over the years, together with the absence of a systematic evaluation under different test conditions, can result in a wrong interpretation of the BD results thus obtained. Towards this end, this paper presents a detailed tutorial describing the BD method and example cases where the metric might fail. We also provide a detailed history of its evolution, including a discussion of various proposed improvements and variations over the last 20 years. In addition, we evaluate the various BD methods and their open-source implementations, considering different objective quality metrics and subjective ratings while taking into account different RD characteristics. Based on our results, we present a set of recommendations on using existing BD metrics and various insights for possible exploration towards developing more effective tools for codec compression efficiency evaluation and comparison.
[ { "created": "Mon, 8 Jan 2024 17:24:16 GMT", "version": "v1" } ]
2024-01-09
[ [ "Barman", "Nabajeet", "" ], [ "Martini", "Maria G.", "" ], [ "Reznik", "Yuriy", "" ] ]
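As a concrete illustration of the BD method discussed in the abstract above: the classical BD-rate computation fits a cubic polynomial of log-bitrate as a function of quality to each codec's RD points and integrates the gap over the overlapping quality range. The sketch below is our own reading of the original 2001 procedure, not the paper's reference implementation:

```python
import numpy as np

def bd_rate(rate_a, psnr_a, rate_b, psnr_b):
    """Average bitrate difference (%) of codec B vs codec A at equal quality.

    Fits cubic polynomials log_rate = f(psnr) to each RD curve and integrates
    their difference over the overlapping PSNR interval.
    """
    log_a, log_b = np.log(rate_a), np.log(rate_b)
    poly_a = np.polyfit(psnr_a, log_a, 3)
    poly_b = np.polyfit(psnr_b, log_b, 3)

    # Overlapping quality interval of the two curves
    lo = max(min(psnr_a), min(psnr_b))
    hi = min(max(psnr_a), max(psnr_b))

    int_a = np.polyval(np.polyint(poly_a), hi) - np.polyval(np.polyint(poly_a), lo)
    int_b = np.polyval(np.polyint(poly_b), hi) - np.polyval(np.polyint(poly_b), lo)

    avg_diff = (int_b - int_a) / (hi - lo)   # mean log-rate gap
    return (np.exp(avg_diff) - 1) * 100      # percent bitrate change

# Toy RD points: codec B needs 10% less bitrate at every PSNR level,
# so the BD-rate should come out at about -10%.
rates_a = [1000, 2000, 4000, 8000]
psnrs_a = [34.0, 37.0, 40.0, 43.0]
rates_b = [r * 0.9 for r in rates_a]
print(bd_rate(rates_a, psnrs_a, rates_b, psnrs_a))
```

The failure cases the paper discusses (non-overlapping curves, poorly conditioned fits) show up here directly: the result is only meaningful when the interval `[lo, hi]` is non-empty and the cubic fit tracks the points well.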
1609.01203
L\'eopold Crestel
L\'eopold Crestel and Philippe Esling
Live Orchestral Piano, a system for real-time orchestral music generation
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the first system for performing automatic orchestration based on real-time piano input. We believe that it is possible to learn the underlying regularities existing between piano scores and their orchestrations by renowned composers, in order to automatically perform this task on novel piano inputs. To that end, we investigate a class of statistical inference models called conditional Restricted Boltzmann Machines (cRBMs). We introduce a specific evaluation framework for orchestral generation based on a prediction task in order to assess the quality of different models. As prediction and creation are two widely different endeavours, we discuss the potential biases in evaluating temporal generative models through prediction tasks and their impact on a creative system. Finally, we introduce an implementation of the proposed model called Live Orchestral Piano (LOP), which performs real-time projective orchestration of MIDI keyboard input.
[ { "created": "Mon, 5 Sep 2016 15:58:11 GMT", "version": "v1" }, { "created": "Thu, 18 May 2017 14:15:30 GMT", "version": "v2" } ]
2017-05-19
[ [ "Crestel", "Léopold", "" ], [ "Esling", "Philippe", "" ] ]
2011.02879
Mehdi Khoshboresh-Masouleh
Mehdi Khoshboresh-Masouleh, Mohammad R. Saradjian
Robust building footprint extraction from big multi-sensor data using deep competition network
8 pages, 5 figures
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-4/W18, 2019
10.5194/isprs-archives-XLII-4-W18-615-2019
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Building footprint extraction (BFE) from multi-sensor data such as optical images and light detection and ranging (LiDAR) point clouds is widely used in various remote sensing applications. However, it remains a challenging research topic due to the relative inefficiency of building extraction techniques across the variety of complex scenes in multi-sensor data. In this study, we develop and evaluate a deep competition network (DCN) that fuses very high spatial resolution optical remote sensing images with LiDAR data for robust BFE. DCN is a deep superpixelwise convolutional encoder-decoder architecture using encoder vector quantization with a classified structure. DCN consists of five encoding-decoding blocks with convolutional weights for robust binary representation (superpixel) learning. DCN is trained and tested on a large multi-sensor dataset obtained from the state of Indiana in the United States with multiple building scenes. Accuracy assessment results showed that DCN has competitive BFE performance compared with other deep semantic binary segmentation architectures. We therefore conclude that the proposed model is a suitable solution for robust BFE from big multi-sensor data.
[ { "created": "Wed, 4 Nov 2020 09:04:38 GMT", "version": "v1" }, { "created": "Mon, 16 Nov 2020 09:11:12 GMT", "version": "v2" }, { "created": "Sat, 28 Nov 2020 13:06:36 GMT", "version": "v3" } ]
2020-12-01
[ [ "Khoshboresh-Masouleh", "Mehdi", "" ], [ "Saradjian", "Mohammad R.", "" ] ]
2303.17805
Sebastian Neumayer
Sebastian Neumayer and L\'ena\"ic Chizat and Michael Unser
On the Effect of Initialization: The Scaling Path of 2-Layer Neural Networks
null
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In supervised learning, the regularization path is sometimes used as a convenient theoretical proxy for the optimization path of gradient descent initialized from zero. In this paper, we study a modification of the regularization path for infinite-width 2-layer ReLU neural networks with nonzero initial distribution of the weights at different scales. By exploiting a link with unbalanced optimal-transport theory, we show that, despite the non-convexity of the 2-layer network training, this problem admits an infinite-dimensional convex counterpart. We formulate the corresponding functional-optimization problem and investigate its main properties. In particular, we show that, as the scale of the initialization ranges between $0$ and $+\infty$, the associated path interpolates continuously between the so-called kernel and rich regimes. Numerical experiments confirm that, in our setting, the scaling path and the final states of the optimization path behave similarly, even beyond these extreme points.
[ { "created": "Fri, 31 Mar 2023 05:32:11 GMT", "version": "v1" }, { "created": "Wed, 9 Aug 2023 07:24:44 GMT", "version": "v2" } ]
2023-08-10
[ [ "Neumayer", "Sebastian", "" ], [ "Chizat", "Lénaïc", "" ], [ "Unser", "Michael", "" ] ]
2404.16000
Stefano Woerner
Stefano Woerner, Arthur Jaques and Christian F. Baumgartner
A comprehensive and easy-to-use multi-domain multi-task medical imaging meta-dataset (MedIMeta)
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
While the field of medical image analysis has undergone a transformative shift with the integration of machine learning techniques, the main challenge for these techniques is often the scarcity of large, diverse, and well-annotated datasets. Medical images vary in format, size, and other parameters and therefore require extensive preprocessing and standardization for use in machine learning. Addressing these challenges, we introduce the Medical Imaging Meta-Dataset (MedIMeta), a novel multi-domain, multi-task meta-dataset. MedIMeta contains 19 medical imaging datasets spanning 10 different domains and encompassing 54 distinct medical tasks, all of which are standardized to the same format and readily usable in PyTorch or other ML frameworks. We perform a technical validation of MedIMeta, demonstrating its utility through fully supervised and cross-domain few-shot learning baselines.
[ { "created": "Wed, 24 Apr 2024 17:27:57 GMT", "version": "v1" } ]
2024-04-25
[ [ "Woerner", "Stefano", "" ], [ "Jaques", "Arthur", "" ], [ "Baumgartner", "Christian F.", "" ] ]
2112.07339
Nicolae Paladi
Jakob Svenningsson, Nicolae Paladi, Arash Vahidi
Speeding up enclave transitions for IO-intensive applications
null
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Process-based confidential computing enclaves such as Intel SGX can be used to protect the confidentiality and integrity of workloads without the overhead of virtualisation. However, they introduce a notable performance overhead, especially for transitions in and out of the enclave context. Such overhead makes the use of enclaves impractical for running IO-intensive applications, such as network packet processing or biological sequence analysis. We build on earlier approaches to improve the IO performance of workloads in Intel SGX enclaves and propose the SGX-Bundler library, which helps reduce both the cost of individual enclave transitions and the total number of enclave transitions in trusted applications running in Intel SGX enclaves. We describe the implementation of the SGX-Bundler library, evaluate its performance and demonstrate its practicality using the case study of Open vSwitch, a widely used software switch implementation.
[ { "created": "Tue, 14 Dec 2021 12:54:36 GMT", "version": "v1" } ]
2021-12-15
[ [ "Svenningsson", "Jakob", "" ], [ "Paladi", "Nicolae", "" ], [ "Vahidi", "Arash", "" ] ]
2202.00817
Hyung Ju Suh
H.J. Terry Suh, Max Simchowitz, Kaiqing Zhang, Russ Tedrake
Do Differentiable Simulators Give Better Policy Gradients?
Accepted to ICML 2022
ICML 2022
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Differentiable simulators promise faster computation time for reinforcement learning by replacing zeroth-order gradient estimates of a stochastic objective with an estimate based on first-order gradients. However, it is yet unclear what factors decide the performance of the two estimators on complex landscapes that involve long-horizon planning and control on physical systems, despite the crucial relevance of this question for the utility of differentiable simulators. We show that characteristics of certain physical systems, such as stiffness or discontinuities, may compromise the efficacy of the first-order estimator, and analyze this phenomenon through the lens of bias and variance. We additionally propose an $\alpha$-order gradient estimator, with $\alpha \in [0,1]$, which correctly utilizes exact gradients to combine the efficiency of first-order estimates with the robustness of zero-order methods. We demonstrate the pitfalls of traditional estimators and the advantages of the $\alpha$-order estimator on some numerical examples.
[ { "created": "Wed, 2 Feb 2022 00:12:28 GMT", "version": "v1" }, { "created": "Mon, 22 Aug 2022 14:33:02 GMT", "version": "v2" } ]
2022-08-23
[ [ "Suh", "H. J. Terry", "" ], [ "Simchowitz", "Max", "" ], [ "Zhang", "Kaiqing", "" ], [ "Tedrake", "Russ", "" ] ]
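The $\alpha$-order estimator described in the abstract above blends the exact first-order gradient with a zeroth-order estimate. The sketch below is a toy illustration of one natural reading (a convex combination of the two estimators, with the zeroth-order part built from standard Gaussian smoothing); the paper's actual construction may differ in detail:

```python
import numpy as np

def alpha_order_gradient(f, grad_f, x, alpha, sigma=0.1, n_samples=64, rng=None):
    """Blend first-order and zeroth-order gradient estimates.

    alpha=1 returns the exact (first-order) gradient; alpha=0 falls back to
    a Gaussian-smoothing zeroth-order estimate, which stays informative even
    when grad_f is unreliable due to stiffness or discontinuities.
    """
    rng = rng or np.random.default_rng(0)
    # Zeroth-order part: E[ (f(x + sigma*w) - f(x)) / sigma * w ], w ~ N(0, I)
    w = rng.standard_normal((n_samples, x.size))
    fx = f(x)
    zo = np.mean([(f(x + sigma * wi) - fx) / sigma * wi for wi in w], axis=0)
    return alpha * grad_f(x) + (1 - alpha) * zo

# Smooth quadratic objective, where the exact gradient is trustworthy.
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2 * x
x0 = np.array([1.0, -2.0])
print(alpha_order_gradient(f, grad_f, x0, alpha=1.0))  # exact gradient [2., -4.]
```

On a smooth objective like this one, any $\alpha$ gives a consistent estimate; the interesting regime in the paper is stiff or discontinuous dynamics, where the first-order term is biased and a smaller $\alpha$ is preferable.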
1905.01966
Amirreza Shirani
Amirreza Shirani, Bowen Xu, David Lo, Thamar Solorio and Amin Alipour
Question Relatedness on Stack Overflow: The Task, Dataset, and Corpus-inspired Models
null
AAAI 2019 Reasoning for Complex Question Answering Workshop
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Domain-specific community question answering is becoming an integral part of many professions. Finding related questions and answers in these communities can significantly improve the effectiveness and efficiency of information seeking. Stack Overflow is one of the most popular such communities, used by millions of programmers. In this paper, we analyze the problem of predicting knowledge unit (question thread) relatedness in Stack Overflow. In particular, we formulate the question relatedness task as a multi-class classification problem with four degrees of relatedness. We present a large-scale dataset with more than 300K pairs. To the best of our knowledge, this is the largest domain-specific dataset for question-question relatedness. We present the steps that we took to collect, clean, process, and assure the quality of the dataset. The proposed Stack Overflow dataset is a useful resource for developing novel solutions, specifically data-hungry neural network models, for the prediction of relatedness in technical community question-answering forums. We adopt a neural network architecture and a traditional model for this task that effectively utilize information from different parts of knowledge units to compute the relatedness between them. These models can serve as benchmarks for novel models, as they perform well on our task and on a closely related task.
[ { "created": "Fri, 3 May 2019 01:45:50 GMT", "version": "v1" }, { "created": "Tue, 7 May 2019 15:35:32 GMT", "version": "v2" } ]
2019-05-08
[ [ "Shirani", "Amirreza", "" ], [ "Xu", "Bowen", "" ], [ "Lo", "David", "" ], [ "Solorio", "Thamar", "" ], [ "Alipour", "Amin", "" ] ]
2103.05248
Mo Zhou
Mo Zhou, Le Wang, Zhenxing Niu, Qilin Zhang, Yinghui Xu, Nanning Zheng, Gang Hua
Practical Relative Order Attack in Deep Ranking
ICCV2021 Poster
null
null
null
cs.LG cs.CV cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies unveil the vulnerabilities of deep ranking models, where an imperceptible perturbation can trigger dramatic changes in the ranking result. While previous attempts focus on manipulating the absolute ranks of certain candidates, the possibility of adjusting their relative order remains under-explored. In this paper, we formulate a new adversarial attack against deep ranking systems, i.e., the Order Attack, which covertly alters the relative order among a selected set of candidates according to an attacker-specified permutation, with limited interference to other unrelated candidates. Specifically, it is formulated as a triplet-style loss imposing an inequality chain reflecting the specified permutation. However, direct optimization of such a white-box objective is infeasible in a real-world attack scenario due to various black-box limitations. To cope with these limitations, we propose a Short-range Ranking Correlation metric as a surrogate objective for the black-box Order Attack to approximate the white-box method. The Order Attack is evaluated on the Fashion-MNIST and Stanford-Online-Products datasets under both white-box and black-box threat models. The black-box attack is also successfully implemented on a major e-commerce platform. Comprehensive experimental evaluations demonstrate the effectiveness of the proposed methods, revealing a new type of ranking model vulnerability.
[ { "created": "Tue, 9 Mar 2021 06:41:18 GMT", "version": "v1" }, { "created": "Wed, 17 Mar 2021 01:16:13 GMT", "version": "v2" }, { "created": "Sat, 20 Mar 2021 12:18:44 GMT", "version": "v3" }, { "created": "Mon, 9 Aug 2021 15:54:56 GMT", "version": "v4" } ]
2021-08-10
[ [ "Zhou", "Mo", "" ], [ "Wang", "Le", "" ], [ "Niu", "Zhenxing", "" ], [ "Zhang", "Qilin", "" ], [ "Xu", "Yinghui", "" ], [ "Zheng", "Nanning", "" ], [ "Hua", "Gang", "" ] ]
2210.03731
Sheng-Chun Kao
Sheng-Chun Kao, Angshuman Parashar, Po-An Tsai, Tushar Krishna
Demystifying Map Space Exploration for NPUs
null
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
Map Space Exploration is the problem of finding optimized mappings of a Deep Neural Network (DNN) model on an accelerator. It is known to be extremely computationally expensive, and there has been active research looking at both heuristics and learning-based methods to make the problem computationally tractable. However, while there are dozens of mappers out there (all empirically claiming to find better mappings than others), the research community lacks systematic insights on how different search techniques navigate the map-space and how different mapping axes contribute to the accelerator's performance and efficiency. Such insights are crucial to developing mapping frameworks for emerging DNNs that are increasingly irregular (due to neural architecture search) and sparse, making the corresponding map spaces much more complex. In this work, rather than proposing yet another mapper, we do a first-of-its-kind apples-to-apples comparison of search techniques leveraged by different mappers. Next, we extract the learnings from our study and propose two new techniques that can augment existing mappers -- warm-start and sparsity-aware -- that demonstrate speedups, scalability, and robustness across diverse DNN models.
[ { "created": "Fri, 7 Oct 2022 17:58:45 GMT", "version": "v1" } ]
2022-10-10
[ [ "Kao", "Sheng-Chun", "" ], [ "Parashar", "Angshuman", "" ], [ "Tsai", "Po-An", "" ], [ "Krishna", "Tushar", "" ] ]
Map Space Exploration is the problem of finding optimized mappings of a Deep Neural Network (DNN) model on an accelerator. It is known to be extremely computationally expensive, and there has been active research looking at both heuristics and learning-based methods to make the problem computationally tractable. However, while there are dozens of mappers out there (all empirically claiming to find better mappings than others), the research community lacks systematic insights on how different search techniques navigate the map-space and how different mapping axes contribute to the accelerator's performance and efficiency. Such insights are crucial to developing mapping frameworks for emerging DNNs that are increasingly irregular (due to neural architecture search) and sparse, making the corresponding map spaces much more complex. In this work, rather than proposing yet another mapper, we do a first-of-its-kind apples-to-apples comparison of search techniques leveraged by different mappers. Next, we extract the learnings from our study and propose two new techniques that can augment existing mappers -- warm-start and sparsity-aware -- that demonstrate speedups, scalability, and robustness across diverse DNN models.
1710.01363
Carl Yang
Carl Yang and Kevin Chen-Chuan Chang
Relationship Profiling over Social Networks: Reverse Smoothness from Similarity to Closeness
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On social networks, while nodes bear rich attributes, we often lack the `semantics' of why each link is formed-- and thus we are missing the `road signs' to navigate and organize the complex social universe. How to identify relationship semantics without labels? Founded on the prevalent homophily principle, we propose the novel problem of Attribute-based Relationship Profiling (ARP), to profile the closeness w.r.t. the underlying relationships (e.g., schoolmate) between users based on their similarity in the corresponding attributes (e.g., education) and, as output, learn a set of social affinity graphs, where each link is weighted by its probabilities of carrying the relationships. As requirements, ARP should be systematic and complete to profile every link for every relationship-- our challenges lie in effectively modeling homophily. We propose a novel reverse smoothness principle by observing that the similarity-closeness duality of homophily is consistent with the well-known smoothness assumption in graph-based semi-supervised learning-- only the direction of inference is reversed. To realize smoothness over noisy social graphs, we further propose a novel holistic closeness modeling approach to capture `high-order' smoothness by extending closeness from edges to paths. Extensive experiments on three real-world datasets demonstrate the efficacy of ARP.
[ { "created": "Tue, 3 Oct 2017 19:46:48 GMT", "version": "v1" } ]
2017-10-05
[ [ "Yang", "Carl", "" ], [ "Chang", "Kevin Chen-Chuan", "" ] ]
On social networks, while nodes bear rich attributes, we often lack the `semantics' of why each link is formed-- and thus we are missing the `road signs' to navigate and organize the complex social universe. How to identify relationship semantics without labels? Founded on the prevalent homophily principle, we propose the novel problem of Attribute-based Relationship Profiling (ARP), to profile the closeness w.r.t. the underlying relationships (e.g., schoolmate) between users based on their similarity in the corresponding attributes (e.g., education) and, as output, learn a set of social affinity graphs, where each link is weighted by its probabilities of carrying the relationships. As requirements, ARP should be systematic and complete to profile every link for every relationship-- our challenges lie in effectively modeling homophily. We propose a novel reverse smoothness principle by observing that the similarity-closeness duality of homophily is consistent with the well-known smoothness assumption in graph-based semi-supervised learning-- only the direction of inference is reversed. To realize smoothness over noisy social graphs, we further propose a novel holistic closeness modeling approach to capture `high-order' smoothness by extending closeness from edges to paths. Extensive experiments on three real-world datasets demonstrate the efficacy of ARP.
2109.10161
Yi Fang
Mengxi Wu, Hao Huang, Yi Fang
3D Point Cloud Completion with Geometric-Aware Adversarial Augmentation
11 page, 5 figures
null
10.1109/ICPR56361.2022.9956045
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the popularity of 3D sensors in self-driving and other robotics applications, extensive research has focused on designing novel neural network architectures for accurate 3D point cloud completion. However, unlike in point cloud classification and reconstruction, the role of adversarial samples in 3D point cloud completion has seldom been explored. In this work, we show that training with adversarial samples can improve the performance of neural networks on 3D point cloud completion tasks. We propose a novel approach to generate adversarial samples that benefit both the performance of clean and adversarial samples. In contrast to the PGD-k attack, our method generates adversarial samples that keep the geometric features in clean samples and contain few outliers. In particular, we use principal directions to constrain the adversarial perturbations for each input point. The gradient components in the mean direction of principal directions are taken as adversarial perturbations. In addition, we also investigate the effect of using the minimum curvature direction. Besides, we adopt attack strength accumulation and auxiliary Batch Normalization layers method to speed up the training process and alleviate the distribution mismatch between clean and adversarial samples. Experimental results show that training with the adversarial samples crafted by our method effectively enhances the performance of PCN on the ShapeNet dataset.
[ { "created": "Tue, 21 Sep 2021 13:16:46 GMT", "version": "v1" } ]
2023-01-24
[ [ "Wu", "Mengxi", "" ], [ "Huang", "Hao", "" ], [ "Fang", "Yi", "" ] ]
With the popularity of 3D sensors in self-driving and other robotics applications, extensive research has focused on designing novel neural network architectures for accurate 3D point cloud completion. However, unlike in point cloud classification and reconstruction, the role of adversarial samples in 3D point cloud completion has seldom been explored. In this work, we show that training with adversarial samples can improve the performance of neural networks on 3D point cloud completion tasks. We propose a novel approach to generate adversarial samples that benefit both the performance of clean and adversarial samples. In contrast to the PGD-k attack, our method generates adversarial samples that keep the geometric features in clean samples and contain few outliers. In particular, we use principal directions to constrain the adversarial perturbations for each input point. The gradient components in the mean direction of principal directions are taken as adversarial perturbations. In addition, we also investigate the effect of using the minimum curvature direction. Besides, we adopt attack strength accumulation and auxiliary Batch Normalization layers method to speed up the training process and alleviate the distribution mismatch between clean and adversarial samples. Experimental results show that training with the adversarial samples crafted by our method effectively enhances the performance of PCN on the ShapeNet dataset.
2206.13964
Chao Fan
Chao Fan, Saihui Hou, Jilong Wang, Yongzhen Huang, and Shiqi Yu
Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Gait depicts individuals' unique and distinguishing walking patterns and has become one of the most promising biometric features for human identification. As a fine-grained recognition task, gait recognition is easily affected by many factors and usually requires a large amount of completely annotated data that is costly and insatiable. This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning, aiming to learn the general gait representation from massive unlabelled walking videos for practical applications via offering informative walking priors and diverse real-world variations. Specifically, we collect a large-scale unlabelled gait dataset GaitLU-1M consisting of 1.02M walking sequences and propose a conceptually simple yet empirically powerful baseline model GaitSSB. Experimentally, we evaluate the pre-trained model on four widely-used gait benchmarks, CASIA-B, OU-MVLP, GREW and Gait3D with or without transfer learning. The unsupervised results are comparable to or even better than the early model-based and GEI-based methods. After transfer learning, our method outperforms existing methods by a large margin in most cases. Theoretically, we discuss the critical issues for gait-specific contrastive framework and present some insights for further study. As far as we know, GaitLU-1M is the first large-scale unlabelled gait dataset, and GaitSSB is the first method that achieves remarkable unsupervised results on the aforementioned benchmarks. The source code of GaitSSB will be integrated into OpenGait which is available at https://github.com/ShiqiYu/OpenGait.
[ { "created": "Tue, 28 Jun 2022 12:33:42 GMT", "version": "v1" }, { "created": "Mon, 4 Sep 2023 07:12:45 GMT", "version": "v2" } ]
2023-09-06
[ [ "Fan", "Chao", "" ], [ "Hou", "Saihui", "" ], [ "Wang", "Jilong", "" ], [ "Huang", "Yongzhen", "" ], [ "Yu", "Shiqi", "" ] ]
Gait depicts individuals' unique and distinguishing walking patterns and has become one of the most promising biometric features for human identification. As a fine-grained recognition task, gait recognition is easily affected by many factors and usually requires a large amount of completely annotated data that is costly and insatiable. This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning, aiming to learn the general gait representation from massive unlabelled walking videos for practical applications via offering informative walking priors and diverse real-world variations. Specifically, we collect a large-scale unlabelled gait dataset GaitLU-1M consisting of 1.02M walking sequences and propose a conceptually simple yet empirically powerful baseline model GaitSSB. Experimentally, we evaluate the pre-trained model on four widely-used gait benchmarks, CASIA-B, OU-MVLP, GREW and Gait3D with or without transfer learning. The unsupervised results are comparable to or even better than the early model-based and GEI-based methods. After transfer learning, our method outperforms existing methods by a large margin in most cases. Theoretically, we discuss the critical issues for gait-specific contrastive framework and present some insights for further study. As far as we know, GaitLU-1M is the first large-scale unlabelled gait dataset, and GaitSSB is the first method that achieves remarkable unsupervised results on the aforementioned benchmarks. The source code of GaitSSB will be integrated into OpenGait which is available at https://github.com/ShiqiYu/OpenGait.
2309.13614
Zenan Li
Zenan Li, Fan Nie, Qiao Sun, Fang Da, Hang Zhao
Boosting Offline Reinforcement Learning for Autonomous Driving with Hierarchical Latent Skills
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Learning-based vehicle planning is receiving increasing attention with the emergence of diverse driving simulators and large-scale driving datasets. While offline reinforcement learning (RL) is well suited for these safety-critical tasks, it still struggles to plan over extended periods. In this work, we present a skill-based framework that enhances offline RL to overcome the long-horizon vehicle planning challenge. Specifically, we design a variational autoencoder (VAE) to learn skills from offline demonstrations. To mitigate posterior collapse of common VAEs, we introduce a two-branch sequence encoder to capture both discrete options and continuous variations of the complex driving skills. The final policy treats learned skills as actions and can be trained by any off-the-shelf offline RL algorithms. This facilitates a shift in focus from per-step actions to temporally extended skills, thereby enabling long-term reasoning into the future. Extensive results on CARLA prove that our model consistently outperforms strong baselines at both training and new scenarios. Additional visualizations and experiments demonstrate the interpretability and transferability of extracted skills.
[ { "created": "Sun, 24 Sep 2023 11:51:17 GMT", "version": "v1" }, { "created": "Fri, 17 Nov 2023 05:44:54 GMT", "version": "v2" } ]
2023-11-20
[ [ "Li", "Zenan", "" ], [ "Nie", "Fan", "" ], [ "Sun", "Qiao", "" ], [ "Da", "Fang", "" ], [ "Zhao", "Hang", "" ] ]
Learning-based vehicle planning is receiving increasing attention with the emergence of diverse driving simulators and large-scale driving datasets. While offline reinforcement learning (RL) is well suited for these safety-critical tasks, it still struggles to plan over extended periods. In this work, we present a skill-based framework that enhances offline RL to overcome the long-horizon vehicle planning challenge. Specifically, we design a variational autoencoder (VAE) to learn skills from offline demonstrations. To mitigate posterior collapse of common VAEs, we introduce a two-branch sequence encoder to capture both discrete options and continuous variations of the complex driving skills. The final policy treats learned skills as actions and can be trained by any off-the-shelf offline RL algorithms. This facilitates a shift in focus from per-step actions to temporally extended skills, thereby enabling long-term reasoning into the future. Extensive results on CARLA prove that our model consistently outperforms strong baselines at both training and new scenarios. Additional visualizations and experiments demonstrate the interpretability and transferability of extracted skills.
2108.07789
Xianrui Zheng
Xianrui Zheng, Chao Zhang and Philip C. Woodland
Adapting GPT, GPT-2 and BERT Language Models for Speech Recognition
To appear in ASRU 2021
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Language models (LMs) pre-trained on massive amounts of text, in particular bidirectional encoder representations from Transformers (BERT), generative pre-training (GPT), and GPT-2, have become a key technology for many natural language processing tasks. In this paper, we present results using fine-tuned GPT, GPT-2, and their combination for automatic speech recognition (ASR). Unlike the unidirectional LMs GPT and GPT-2, BERT is bidirectional, so the direct product of its output probabilities is no longer a valid language prior probability. A conversion method is proposed to compute the correct language prior probability based on bidirectional LM outputs in a mathematically exact way. Experimental results on the widely used AMI and Switchboard ASR tasks showed that the combination of the fine-tuned GPT and GPT-2 outperformed the combination of three neural LMs with different architectures trained from scratch on the in-domain text by up to a 12% relative word error rate reduction (WERR). Furthermore, on the AMI corpus, the proposed conversion for language prior probabilities enables BERT to obtain an extra 3% relative WERR, and the combination of BERT, GPT and GPT-2 results in further improvements.
[ { "created": "Thu, 29 Jul 2021 16:53:37 GMT", "version": "v1" }, { "created": "Fri, 1 Oct 2021 14:19:39 GMT", "version": "v2" } ]
2021-10-04
[ [ "Zheng", "Xianrui", "" ], [ "Zhang", "Chao", "" ], [ "Woodland", "Philip C.", "" ] ]
Language models (LMs) pre-trained on massive amounts of text, in particular bidirectional encoder representations from Transformers (BERT), generative pre-training (GPT), and GPT-2, have become a key technology for many natural language processing tasks. In this paper, we present results using fine-tuned GPT, GPT-2, and their combination for automatic speech recognition (ASR). Unlike the unidirectional LMs GPT and GPT-2, BERT is bidirectional, so the direct product of its output probabilities is no longer a valid language prior probability. A conversion method is proposed to compute the correct language prior probability based on bidirectional LM outputs in a mathematically exact way. Experimental results on the widely used AMI and Switchboard ASR tasks showed that the combination of the fine-tuned GPT and GPT-2 outperformed the combination of three neural LMs with different architectures trained from scratch on the in-domain text by up to a 12% relative word error rate reduction (WERR). Furthermore, on the AMI corpus, the proposed conversion for language prior probabilities enables BERT to obtain an extra 3% relative WERR, and the combination of BERT, GPT and GPT-2 results in further improvements.
2402.09410
Asad Khaliq
Asad Khaliq
On Computability of Computable Problems
18 Pages,5 figures, and 1 table
null
null
null
cs.CC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Computational problems are classified into computable and uncomputable problems. If there exists an effective procedure (algorithm) to compute a problem then the problem is computable, otherwise it is uncomputable. Turing machines can execute any algorithm, therefore every computable problem is Turing computable. There are some variants of the Turing machine that appear computationally more powerful, but all these variants have been proven equally powerful. The main objective of this work is to revisit and examine the computational power of different variants of Turing machines at a very fine-grained level. We achieve this objective by constructing a transform technique for Turing computable problems that transforms computable problems into another type of problems, and then we try to compute the transformed problems through different variants of the Turing machine. This paper shows the existence of a realizable computational scheme that can establish a framework to analyze computational characteristics of different variants of the Turing machine at infinitesimal scale.
[ { "created": "Wed, 13 Dec 2023 18:02:12 GMT", "version": "v1" }, { "created": "Fri, 16 Feb 2024 18:15:18 GMT", "version": "v2" } ]
2024-02-19
[ [ "Khaliq", "Asad", "" ] ]
Computational problems are classified into computable and uncomputable problems. If there exists an effective procedure (algorithm) to compute a problem then the problem is computable, otherwise it is uncomputable. Turing machines can execute any algorithm, therefore every computable problem is Turing computable. There are some variants of the Turing machine that appear computationally more powerful, but all these variants have been proven equally powerful. The main objective of this work is to revisit and examine the computational power of different variants of Turing machines at a very fine-grained level. We achieve this objective by constructing a transform technique for Turing computable problems that transforms computable problems into another type of problems, and then we try to compute the transformed problems through different variants of the Turing machine. This paper shows the existence of a realizable computational scheme that can establish a framework to analyze computational characteristics of different variants of the Turing machine at infinitesimal scale.
2406.08124
Duanyu Feng
Duanyu Feng, Bowen Qin, Chen Huang, Youcheng Huang, Zheng Zhang, Wenqiang Lei
Legend: Leveraging Representation Engineering to Annotate Safety Margin for Preference Datasets
Our code is available at https://github.com/colfeng/Legend
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The success of the reward model in distinguishing between responses with subtle safety differences depends critically on the high-quality preference dataset, which should capture the fine-grained nuances of harmful and harmless responses. This motivates the need to develop a dataset involving preference margins, which accurately quantify how harmless one response is compared to another. In this paper, we take the first step to propose an effective and cost-efficient framework to promote the margin-enhanced preference dataset development. Our framework, Legend, Leverages representation engineering to annotate preference datasets. It constructs the specific direction within the LLM's embedding space that represents safety. By leveraging this safety direction, Legend can then leverage the semantic distances of paired responses along this direction to annotate margins automatically. We experimentally demonstrate our effectiveness in both reward modeling and harmless alignment for LLMs. Legend also stands out for its efficiency, requiring only the inference time rather than additional training. This efficiency allows for easier implementation and scalability, making Legend particularly valuable for practical applications in aligning LLMs with safe conversations.
[ { "created": "Wed, 12 Jun 2024 12:06:32 GMT", "version": "v1" } ]
2024-06-13
[ [ "Feng", "Duanyu", "" ], [ "Qin", "Bowen", "" ], [ "Huang", "Chen", "" ], [ "Huang", "Youcheng", "" ], [ "Zhang", "Zheng", "" ], [ "Lei", "Wenqiang", "" ] ]
The success of the reward model in distinguishing between responses with subtle safety differences depends critically on the high-quality preference dataset, which should capture the fine-grained nuances of harmful and harmless responses. This motivates the need to develop a dataset involving preference margins, which accurately quantify how harmless one response is compared to another. In this paper, we take the first step to propose an effective and cost-efficient framework to promote the margin-enhanced preference dataset development. Our framework, Legend, Leverages representation engineering to annotate preference datasets. It constructs the specific direction within the LLM's embedding space that represents safety. By leveraging this safety direction, Legend can then leverage the semantic distances of paired responses along this direction to annotate margins automatically. We experimentally demonstrate our effectiveness in both reward modeling and harmless alignment for LLMs. Legend also stands out for its efficiency, requiring only the inference time rather than additional training. This efficiency allows for easier implementation and scalability, making Legend particularly valuable for practical applications in aligning LLMs with safe conversations.
2003.10585
Pietro Verzelli
Pietro Verzelli and Cesare Alippi and Lorenzo Livi and Peter Tino
Input-to-State Representation in linear reservoirs dynamics
null
null
null
null
cs.NE cs.LG math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reservoir computing is a popular approach to design recurrent neural networks, due to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical studies by a large community of researchers with backgrounds spanning from dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution using the controllability matrix. Such a matrix encodes salient characteristics of the network dynamics; in particular, its rank represents an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and explain why a cyclic topology achieves favourable results as verified by practitioners.
[ { "created": "Tue, 24 Mar 2020 00:14:25 GMT", "version": "v1" }, { "created": "Tue, 5 Jan 2021 23:22:16 GMT", "version": "v2" }, { "created": "Fri, 12 Feb 2021 14:29:49 GMT", "version": "v3" } ]
2021-02-15
[ [ "Verzelli", "Pietro", "" ], [ "Alippi", "Cesare", "" ], [ "Livi", "Lorenzo", "" ], [ "Tino", "Peter", "" ] ]
Reservoir computing is a popular approach to design recurrent neural networks, due to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical studies by a large community of researchers with backgrounds spanning from dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution using the controllability matrix. Such a matrix encodes salient characteristics of the network dynamics; in particular, its rank represents an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and explain why a cyclic topology achieves favourable results as verified by practitioners.
2308.02870
Fangyuan Wang
Fangyuan Wang, Ming Hao, Yuhai Shi, Bo Xu
ApproBiVT: Lead ASR Models to Generalize Better Using Approximated Bias-Variance Tradeoff Guided Early Stopping and Checkpoint Averaging
null
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The conventional recipe for Automatic Speech Recognition (ASR) models is to 1) train multiple checkpoints on a training set while relying on a validation set to prevent overfitting using early stopping and 2) average the last several checkpoints, or those with the lowest validation losses, to obtain the final model. In this paper, we rethink and update the early stopping and checkpoint averaging from the perspective of the bias-variance tradeoff. Theoretically, the bias and variance represent the fitness and variability of a model, and their tradeoff determines the overall generalization error. However, it is impractical to evaluate them precisely. As an alternative, we take the training loss and validation loss as proxies of bias and variance and guide the early stopping and checkpoint averaging using their tradeoff, namely an Approximated Bias-Variance Tradeoff (ApproBiVT). When evaluated with advanced ASR models, our recipe provides 2.5%-3.7% and 3.1%-4.6% CER reduction on the AISHELL-1 and AISHELL-2, respectively.
[ { "created": "Sat, 5 Aug 2023 12:50:54 GMT", "version": "v1" } ]
2023-08-08
[ [ "Wang", "Fangyuan", "" ], [ "Hao", "Ming", "" ], [ "Shi", "Yuhai", "" ], [ "Xu", "Bo", "" ] ]
The conventional recipe for Automatic Speech Recognition (ASR) models is to 1) train multiple checkpoints on a training set while relying on a validation set to prevent overfitting using early stopping and 2) average the last several checkpoints, or those with the lowest validation losses, to obtain the final model. In this paper, we rethink and update the early stopping and checkpoint averaging from the perspective of the bias-variance tradeoff. Theoretically, the bias and variance represent the fitness and variability of a model, and their tradeoff determines the overall generalization error. However, it is impractical to evaluate them precisely. As an alternative, we take the training loss and validation loss as proxies of bias and variance and guide the early stopping and checkpoint averaging using their tradeoff, namely an Approximated Bias-Variance Tradeoff (ApproBiVT). When evaluated with advanced ASR models, our recipe provides 2.5%-3.7% and 3.1%-4.6% CER reduction on the AISHELL-1 and AISHELL-2, respectively.
1502.04820
Tanmoy Maitra
Tanmoy Maitra
Cryptanalysis of A Secure Remote User Authentication Scheme Using Smart Cards
4 pages, no figure, fresh submission
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart card based authentication schemes are used in various fields like e-banking, e-commerce, wireless sensor networks, medical systems and so on to authenticate both the remote user and the application server during communication via the internet. Recently, Karuppiah and Saravanan proposed an authentication scheme based on a password and a one-way cryptographic hash function. They use a secure identity mechanism, i.e., the users' and server's identities are not public. Thus, the user and the server do not send their identities directly to each other during communication. In this paper, we show that their scheme does not withstand the replay attack and that there is a fault in the login phase, which makes their scheme unsuitable for practical use.
[ { "created": "Tue, 17 Feb 2015 07:55:50 GMT", "version": "v1" } ]
2015-02-18
[ [ "Maitra", "Tanmoy", "" ] ]
Smart card based authentication schemes are used in various fields like e-banking, e-commerce, wireless sensor networks, medical systems and so on to authenticate both the remote user and the application server during communication via the internet. Recently, Karuppiah and Saravanan proposed an authentication scheme based on a password and a one-way cryptographic hash function. They use a secure identity mechanism, i.e., the users' and server's identities are not public. Thus, the user and the server do not send their identities directly to each other during communication. In this paper, we show that their scheme does not withstand the replay attack and that there is a fault in the login phase, which makes their scheme unsuitable for practical use.
2303.14404
Muhammad Akhtar Munir
Muhammad Akhtar Munir and Muhammad Haris Khan and Salman Khan and Fahad Shahbaz Khan
Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection
Accepted at CVPR 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep neural networks (DNNs) have enabled astounding progress in several vision-based problems. Despite showing high predictive accuracy, recently, several works have revealed that they tend to provide overconfident predictions and thus are poorly calibrated. The majority of the works addressing the miscalibration of DNNs fall under the scope of classification and consider only in-domain predictions. However, there is little to no progress in studying the calibration of DNN-based object detection models, which are central to many vision-based safety-critical applications. In this paper, inspired by the train-time calibration methods, we propose a novel auxiliary loss formulation that explicitly aims to align the class confidence of bounding boxes with the accurateness of predictions (i.e. precision). Since the original formulation of our loss depends on the counts of true positives and false positives in a minibatch, we develop a differentiable proxy of our loss that can be used during training with other application-specific loss functions. We perform extensive experiments on challenging in-domain and out-domain scenarios with six benchmark datasets including MS-COCO, Cityscapes, Sim10k, and BDD100k. Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios. Our source code and pre-trained models are available at https://github.com/akhtarvision/bpc_calibration
[ { "created": "Sat, 25 Mar 2023 08:56:21 GMT", "version": "v1" } ]
2023-03-28
[ [ "Munir", "Muhammad Akhtar", "" ], [ "Khan", "Muhammad Haris", "" ], [ "Khan", "Salman", "" ], [ "Khan", "Fahad Shahbaz", "" ] ]
Deep neural networks (DNNs) have enabled astounding progress in several vision-based problems. Despite showing high predictive accuracy, recently, several works have revealed that they tend to provide overconfident predictions and thus are poorly calibrated. The majority of the works addressing the miscalibration of DNNs fall under the scope of classification and consider only in-domain predictions. However, there is little to no progress in studying the calibration of DNN-based object detection models, which are central to many vision-based safety-critical applications. In this paper, inspired by the train-time calibration methods, we propose a novel auxiliary loss formulation that explicitly aims to align the class confidence of bounding boxes with the accurateness of predictions (i.e. precision). Since the original formulation of our loss depends on the counts of true positives and false positives in a minibatch, we develop a differentiable proxy of our loss that can be used during training with other application-specific loss functions. We perform extensive experiments on challenging in-domain and out-domain scenarios with six benchmark datasets including MS-COCO, Cityscapes, Sim10k, and BDD100k. Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios. Our source code and pre-trained models are available at https://github.com/akhtarvision/bpc_calibration
2112.12054
Chennakesava Kadapa
Chennakesava Kadapa
Machine Learning for Computational Science and Engineering -- a brief introduction and some critical questions
16 pages
null
null
null
cs.LG cs.CE physics.comp-ph
http://creativecommons.org/licenses/by/4.0/
Artificial Intelligence (AI) is now entering every sub-field of science, technology, engineering, arts, and management. Thanks to the hype and availability of research funds, it is being adapted in many fields without much thought. Computational Science and Engineering (CS&E) is one such sub-field. By highlighting some critical questions around the issues and challenges in adapting Machine Learning (ML) for CS&E, most of which are often overlooked in journal papers, this contribution hopes to offer some insights into the adaptation of ML for applications in CS&E and related fields. This is a general-purpose article written for a general audience and researchers new to the fields of ML and/or CS&E. This work focuses only on the forward problems in computational science and engineering. Some basic equations and MATLAB code are also provided to help the reader understand the basics.
[ { "created": "Wed, 22 Dec 2021 17:25:32 GMT", "version": "v1" } ]
2021-12-23
[ [ "Kadapa", "Chennakesava", "" ] ]
Artificial Intelligence (AI) is now entering every sub-field of science, technology, engineering, arts, and management. Thanks to the hype and availability of research funds, it is being adapted in many fields without much thought. Computational Science and Engineering (CS&E) is one such sub-field. By highlighting some critical questions around the issues and challenges in adapting Machine Learning (ML) for CS&E, most of which are often overlooked in journal papers, this contribution hopes to offer some insights into the adaptation of ML for applications in CS&E and related fields. This is a general-purpose article written for a general audience and researchers new to the fields of ML and/or CS&E. This work focuses only on the forward problems in computational science and engineering. Some basic equations and MATLAB code are also provided to help the reader understand the basics.
2202.02177
Jakub Nawa{\l}a
Jakub Nawa{\l}a (1), Lucjan Janowski (1), Bogdan \'Cmiel (2), Krzysztof Rusek (1), Pablo P\'erez (3) ((1) AGH University of Science and Technology, Institute of Telecommunications, (2) AGH University of Science and Technology, Department of Mathematical Analysis, Computational Mathematics and Probability Methods, (3) Applications & Platforms Software Systems, Nokia Bell Labs, Madrid, Spain)
Generalised Score Distribution: A Two-Parameter Discrete Distribution Accurately Describing Responses from Quality of Experience Subjective Experiments
15 pages, 6 figures. Under review in IEEE Transactions on Multimedia
IEEE Transactions on Multimedia, 2022
10.1109/TMM.2022.3205444
null
cs.MM
http://creativecommons.org/licenses/by/4.0/
Subjective responses from Multimedia Quality Assessment (MQA) experiments are conventionally analysed with methods not suitable for the data type these responses represent. Furthermore, obtaining subjective responses is resource intensive. A method allowing reuse of existing responses would thus be beneficial. Applying improper data analysis methods leads to difficult-to-interpret results. This encourages drawing erroneous conclusions. Building upon existing subjective responses is resource friendly and helps develop machine learning (ML) based visual quality predictors. We show that using a discrete model for analysis of responses from MQA subjective experiments is feasible. We indicate that our proposed Generalised Score Distribution (GSD) properly describes response distributions observed in typical MQA experiments. We highlight interpretability of GSD parameters and indicate that the GSD outperforms the approach based on sample empirical distribution when it comes to bootstrapping. We evidence that the GSD outcompetes the state-of-the-art model both in terms of goodness-of-fit and bootstrapping capabilities. To do all of that we analyse more than one million subjective responses from more than 30 subjective experiments. Furthermore, we make the code implementing the GSD model and related analyses available through our GitHub repository: https://github.com/Qub3k/subjective-exp-consistency-check
[ { "created": "Fri, 4 Feb 2022 15:11:24 GMT", "version": "v1" } ]
2022-10-07
[ [ "Nawała", "Jakub", "" ], [ "Janowski", "Lucjan", "" ], [ "Ćmiel", "Bogdan", "" ], [ "Rusek", "Krzysztof", "" ], [ "Pérez", "Pablo", "" ] ]
Subjective responses from Multimedia Quality Assessment (MQA) experiments are conventionally analysed with methods not suitable for the data type these responses represent. Furthermore, obtaining subjective responses is resource intensive. A method allowing reuse of existing responses would thus be beneficial. Applying improper data analysis methods leads to difficult-to-interpret results. This encourages drawing erroneous conclusions. Building upon existing subjective responses is resource friendly and helps develop machine learning (ML) based visual quality predictors. We show that using a discrete model for analysis of responses from MQA subjective experiments is feasible. We indicate that our proposed Generalised Score Distribution (GSD) properly describes response distributions observed in typical MQA experiments. We highlight interpretability of GSD parameters and indicate that the GSD outperforms the approach based on sample empirical distribution when it comes to bootstrapping. We evidence that the GSD outcompetes the state-of-the-art model both in terms of goodness-of-fit and bootstrapping capabilities. To do all of that we analyse more than one million subjective responses from more than 30 subjective experiments. Furthermore, we make the code implementing the GSD model and related analyses available through our GitHub repository: https://github.com/Qub3k/subjective-exp-consistency-check
2307.10273
Cl\'ement Canonne
Cl\'ement L. Canonne, Samuel B. Hopkins, Jerry Li, Allen Liu, and Shyam Narayanan
The Full Landscape of Robust Mean Testing: Sharp Separations between Oblivious and Adaptive Contamination
To appear in FOCS 2023
null
null
null
cs.DS math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the question of Gaussian mean testing, a fundamental task in high-dimensional distribution testing and signal processing, subject to adversarial corruptions of the samples. We focus on the relative power of different adversaries, and show that, in contrast to the common wisdom in robust statistics, there exists a strict separation between adaptive adversaries (strong contamination) and oblivious ones (weak contamination) for this task. Specifically, we resolve both the information-theoretic and computational landscapes for robust mean testing. In the exponential-time setting, we establish the tight sample complexity of testing $\mathcal{N}(0,I)$ against $\mathcal{N}(\alpha v, I)$, where $\|v\|_2 = 1$, with an $\varepsilon$-fraction of adversarial corruptions, to be \[ \tilde{\Theta}\!\left(\max\left(\frac{\sqrt{d}}{\alpha^2}, \frac{d\varepsilon^3}{\alpha^4},\min\left(\frac{d^{2/3}\varepsilon^{2/3}}{\alpha^{8/3}}, \frac{d \varepsilon}{\alpha^2}\right)\right) \right) \,, \] while the complexity against adaptive adversaries is \[ \tilde{\Theta}\!\left(\max\left(\frac{\sqrt{d}}{\alpha^2}, \frac{d\varepsilon^2}{\alpha^4} \right)\right) \,, \] which is strictly worse for a large range of vanishing $\varepsilon,\alpha$. To the best of our knowledge, ours is the first separation in sample complexity between the strong and weak contamination models. In the polynomial-time setting, we close a gap in the literature by providing a polynomial-time algorithm against adaptive adversaries achieving the above sample complexity $\tilde{\Theta}(\max({\sqrt{d}}/{\alpha^2}, {d\varepsilon^2}/{\alpha^4} ))$, and a low-degree lower bound (which complements an existing reduction from planted clique) suggesting that all efficient algorithms require this many samples, even in the oblivious-adversary setting.
[ { "created": "Tue, 18 Jul 2023 05:02:54 GMT", "version": "v1" } ]
2023-07-21
[ [ "Canonne", "Clément L.", "" ], [ "Hopkins", "Samuel B.", "" ], [ "Li", "Jerry", "" ], [ "Liu", "Allen", "" ], [ "Narayanan", "Shyam", "" ] ]
We consider the question of Gaussian mean testing, a fundamental task in high-dimensional distribution testing and signal processing, subject to adversarial corruptions of the samples. We focus on the relative power of different adversaries, and show that, in contrast to the common wisdom in robust statistics, there exists a strict separation between adaptive adversaries (strong contamination) and oblivious ones (weak contamination) for this task. Specifically, we resolve both the information-theoretic and computational landscapes for robust mean testing. In the exponential-time setting, we establish the tight sample complexity of testing $\mathcal{N}(0,I)$ against $\mathcal{N}(\alpha v, I)$, where $\|v\|_2 = 1$, with an $\varepsilon$-fraction of adversarial corruptions, to be \[ \tilde{\Theta}\!\left(\max\left(\frac{\sqrt{d}}{\alpha^2}, \frac{d\varepsilon^3}{\alpha^4},\min\left(\frac{d^{2/3}\varepsilon^{2/3}}{\alpha^{8/3}}, \frac{d \varepsilon}{\alpha^2}\right)\right) \right) \,, \] while the complexity against adaptive adversaries is \[ \tilde{\Theta}\!\left(\max\left(\frac{\sqrt{d}}{\alpha^2}, \frac{d\varepsilon^2}{\alpha^4} \right)\right) \,, \] which is strictly worse for a large range of vanishing $\varepsilon,\alpha$. To the best of our knowledge, ours is the first separation in sample complexity between the strong and weak contamination models. In the polynomial-time setting, we close a gap in the literature by providing a polynomial-time algorithm against adaptive adversaries achieving the above sample complexity $\tilde{\Theta}(\max({\sqrt{d}}/{\alpha^2}, {d\varepsilon^2}/{\alpha^4} ))$, and a low-degree lower bound (which complements an existing reduction from planted clique) suggesting that all efficient algorithms require this many samples, even in the oblivious-adversary setting.
2310.13298
Milad Abolpour
Milad Abolpour, MohammadJavad Salehi, and Antti T\"olli
Cache-Aided Communications in MISO Networks with Dynamic User Behavior
Accepted in IEEE Transaction On Wireless Communications, 2024. arXiv admin note: substantial text overlap with arXiv:2304.11623
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Coded caching (CC) can substantially enhance network performance by leveraging memory as an additional communication resource. However, the use of CC is challenging in various practical applications due to dynamic user behavior. The existing solutions, based on shared caching, cannot directly handle all scenarios where users freely enter and depart the network at any time as they are constrained by specific conditions on network parameters. This paper proposes a universally applicable shared-caching scheme for dynamic setups without any restriction on network parameters. The closed-form expressions for the achievable degrees of freedom (DoF) are computed for the resulting generalized scheme, and are shown to achieve the existing optimal bounds of the shared-cache model. Furthermore, a successive-interference-cancellation-free extension based on a fast iterative optimized beamformer design is devised to optimize the use of excess spatial dimensions freed by cache-aided interference cancellation. Extensive numerical experiments are carried out to assess the performance of the proposed scheme. In particular, the results demonstrate that while a dynamic setup may achieve a DoF substantially lower than the optimal DoF of shared caching, our proposed scheme significantly improves the performance at the finite signal-to-noise ratio compared to unicasting, which only benefits from the local caching gain.
[ { "created": "Fri, 20 Oct 2023 06:17:39 GMT", "version": "v1" }, { "created": "Mon, 6 May 2024 09:13:09 GMT", "version": "v2" } ]
2024-05-07
[ [ "Abolpour", "Milad", "" ], [ "Salehi", "MohammadJavad", "" ], [ "Tölli", "Antti", "" ] ]
Coded caching (CC) can substantially enhance network performance by leveraging memory as an additional communication resource. However, the use of CC is challenging in various practical applications due to dynamic user behavior. The existing solutions, based on shared caching, cannot directly handle all scenarios where users freely enter and depart the network at any time as they are constrained by specific conditions on network parameters. This paper proposes a universally applicable shared-caching scheme for dynamic setups without any restriction on network parameters. The closed-form expressions for the achievable degrees of freedom (DoF) are computed for the resulting generalized scheme, and are shown to achieve the existing optimal bounds of the shared-cache model. Furthermore, a successive-interference-cancellation-free extension based on a fast iterative optimized beamformer design is devised to optimize the use of excess spatial dimensions freed by cache-aided interference cancellation. Extensive numerical experiments are carried out to assess the performance of the proposed scheme. In particular, the results demonstrate that while a dynamic setup may achieve a DoF substantially lower than the optimal DoF of shared caching, our proposed scheme significantly improves the performance at the finite signal-to-noise ratio compared to unicasting, which only benefits from the local caching gain.
2004.01909
Jimmy Lin
Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin
Conversational Question Reformulation via Sequence-to-Sequence Architectures and Pretrained Language Models
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an empirical study of conversational question reformulation (CQR) with sequence-to-sequence architectures and pretrained language models (PLMs). We leverage PLMs to address the strong token-to-token independence assumption made in the common objective, maximum likelihood estimation, for the CQR task. In CQR benchmarks of task-oriented dialogue systems, we evaluate fine-tuned PLMs on the recently-introduced CANARD dataset as an in-domain task and validate the models using data from the TREC 2019 CAsT Track as an out-domain task. Examining a variety of architectures with different numbers of parameters, we demonstrate that the recent text-to-text transfer transformer (T5) achieves the best results both on CANARD and CAsT with fewer parameters, compared to similar transformer architectures.
[ { "created": "Sat, 4 Apr 2020 11:07:54 GMT", "version": "v1" } ]
2020-04-07
[ [ "Lin", "Sheng-Chieh", "" ], [ "Yang", "Jheng-Hong", "" ], [ "Nogueira", "Rodrigo", "" ], [ "Tsai", "Ming-Feng", "" ], [ "Wang", "Chuan-Ju", "" ], [ "Lin", "Jimmy", "" ] ]
This paper presents an empirical study of conversational question reformulation (CQR) with sequence-to-sequence architectures and pretrained language models (PLMs). We leverage PLMs to address the strong token-to-token independence assumption made in the common objective, maximum likelihood estimation, for the CQR task. In CQR benchmarks of task-oriented dialogue systems, we evaluate fine-tuned PLMs on the recently-introduced CANARD dataset as an in-domain task and validate the models using data from the TREC 2019 CAsT Track as an out-domain task. Examining a variety of architectures with different numbers of parameters, we demonstrate that the recent text-to-text transfer transformer (T5) achieves the best results both on CANARD and CAsT with fewer parameters, compared to similar transformer architectures.
2406.03439
Joachim Ott
Joachim Ott, Zuowen Wang, Shih-Chii Liu
Text-to-Events: Synthetic Event Camera Streams from Conditional Text Input
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event cameras are advantageous for tasks that require vision sensors with low-latency and sparse output responses. However, the development of deep network algorithms using event cameras has been slow because of the lack of large labelled event camera datasets for network training. This paper reports a method for creating new labelled event datasets by using a text-to-X model, where X is one or multiple output modalities, in the case of this work, events. Our proposed text-to-events model produces synthetic event frames directly from text prompts. It uses an autoencoder which is trained to produce sparse event frames representing event camera outputs. By combining the pretrained autoencoder with a diffusion model architecture, the new text-to-events model is able to generate smooth synthetic event streams of moving objects. The autoencoder was first trained on an event camera dataset of diverse scenes. In the combined training with the diffusion model, the DVS gesture dataset was used. We demonstrate that the model can generate realistic event sequences of human gestures prompted by different text statements. The classification accuracy of the generated sequences, using a classifier trained on the real dataset, ranges between 42% to 92%, depending on the gesture group. The results demonstrate the capability of this method in synthesizing event datasets.
[ { "created": "Wed, 5 Jun 2024 16:34:12 GMT", "version": "v1" } ]
2024-06-06
[ [ "Ott", "Joachim", "" ], [ "Wang", "Zuowen", "" ], [ "Liu", "Shih-Chii", "" ] ]
Event cameras are advantageous for tasks that require vision sensors with low-latency and sparse output responses. However, the development of deep network algorithms using event cameras has been slow because of the lack of large labelled event camera datasets for network training. This paper reports a method for creating new labelled event datasets by using a text-to-X model, where X is one or multiple output modalities, in the case of this work, events. Our proposed text-to-events model produces synthetic event frames directly from text prompts. It uses an autoencoder which is trained to produce sparse event frames representing event camera outputs. By combining the pretrained autoencoder with a diffusion model architecture, the new text-to-events model is able to generate smooth synthetic event streams of moving objects. The autoencoder was first trained on an event camera dataset of diverse scenes. In the combined training with the diffusion model, the DVS gesture dataset was used. We demonstrate that the model can generate realistic event sequences of human gestures prompted by different text statements. The classification accuracy of the generated sequences, using a classifier trained on the real dataset, ranges between 42% to 92%, depending on the gesture group. The results demonstrate the capability of this method in synthesizing event datasets.
2004.01030
Lachlan Kermode
Lachlan Kermode, Jan Freyberg, Alican Akturk, Robert Trafford, Denis Kochetkov, Rafael Pardinas, Eyal Weizman, and Julien Cornebise
Objects of violence: synthetic data for practical ML in human rights investigations
Presented at NeurIPS 2019 in the AI for Social Good track
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a machine learning workflow to search for, identify, and meaningfully triage videos and images of munitions, weapons, and military equipment, even when limited training data exists for the object of interest. This workflow is designed to expedite the work of OSINT ("open source intelligence") researchers in human rights investigations. It consists of three components: automatic rendering and annotating of synthetic datasets that make up for a lack of training data; training image classifiers from combined sets of photographic and synthetic data; and mtriage, an open source software that orchestrates these classifiers' deployment to triage public domain media, and visualise predictions in a web interface. We show that synthetic data helps to train classifiers more effectively, and that certain approaches yield better results for different architectures. We then demonstrate our workflow in two real-world human rights investigations: the use of the Triple-Chaser tear gas grenade against civilians, and the verification of allegations of military presence in Ukraine in 2014.
[ { "created": "Wed, 1 Apr 2020 14:50:43 GMT", "version": "v1" } ]
2020-04-03
[ [ "Kermode", "Lachlan", "" ], [ "Freyberg", "Jan", "" ], [ "Akturk", "Alican", "" ], [ "Trafford", "Robert", "" ], [ "Kochetkov", "Denis", "" ], [ "Pardinas", "Rafael", "" ], [ "Weizman", "Eyal", "" ], [ "Cornebise", "Julien", "" ] ]
We introduce a machine learning workflow to search for, identify, and meaningfully triage videos and images of munitions, weapons, and military equipment, even when limited training data exists for the object of interest. This workflow is designed to expedite the work of OSINT ("open source intelligence") researchers in human rights investigations. It consists of three components: automatic rendering and annotating of synthetic datasets that make up for a lack of training data; training image classifiers from combined sets of photographic and synthetic data; and mtriage, an open source software that orchestrates these classifiers' deployment to triage public domain media, and visualise predictions in a web interface. We show that synthetic data helps to train classifiers more effectively, and that certain approaches yield better results for different architectures. We then demonstrate our workflow in two real-world human rights investigations: the use of the Triple-Chaser tear gas grenade against civilians, and the verification of allegations of military presence in Ukraine in 2014.
2307.11449
Yong Song
Ye Ouyang, Yaqin Zhang, Xiaozhou Ye, Yunxin Liu, Yong Song, Yang Liu, Sen Bian, Zhiyong Liu
AIGC Empowering Telecom Sector White Paper_chinese
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the global craze of GPT, people have deeply realized that AI, as a transformative technology and key force in economic and social development, will bring great leaps and breakthroughs to the global industry and profoundly influence the future world competition pattern. As the builder and operator of information and communication infrastructure, the telecom sector provides infrastructure support for the development of AI, and even takes the lead in the implementation of AI applications. How to enable the application of AIGC (GPT) and implement AIGC in the telecom sector are questions that telecom practitioners must ponder and answer. Through the study of GPT, a typical representative of AIGC, the authors have analyzed how GPT empowers the telecom sector in the form of scenarios, discussed the gap between the current GPT general model and telecom services, proposed for the first time a Telco Augmented Cognition capability system, provided answers to how to construct a telecom service GPT in the telecom sector, and carried out various practices. Our counterparts in the industry are expected to focus on collaborative innovation around telecom and AI, build an open and shared innovation ecosystem, promote the deep integration of AI and telecom sector, and accelerate the construction of next-generation information infrastructure, in an effort to facilitate the digital transformation of the economy and society.
[ { "created": "Fri, 21 Jul 2023 09:30:08 GMT", "version": "v1" }, { "created": "Mon, 24 Jul 2023 01:54:25 GMT", "version": "v2" } ]
2023-07-25
[ [ "Ouyang", "Ye", "" ], [ "Zhang", "Yaqin", "" ], [ "Ye", "Xiaozhou", "" ], [ "Liu", "Yunxin", "" ], [ "Song", "Yong", "" ], [ "Liu", "Yang", "" ], [ "Bian", "Sen", "" ], [ "Liu", "Zhiyong", "" ] ]
In the global craze of GPT, people have deeply realized that AI, as a transformative technology and key force in economic and social development, will bring great leaps and breakthroughs to the global industry and profoundly influence the future world competition pattern. As the builder and operator of information and communication infrastructure, the telecom sector provides infrastructure support for the development of AI, and even takes the lead in the implementation of AI applications. How to enable the application of AIGC (GPT) and implement AIGC in the telecom sector are questions that telecom practitioners must ponder and answer. Through the study of GPT, a typical representative of AIGC, the authors have analyzed how GPT empowers the telecom sector in the form of scenarios, discussed the gap between the current GPT general model and telecom services, proposed for the first time a Telco Augmented Cognition capability system, provided answers to how to construct a telecom service GPT in the telecom sector, and carried out various practices. Our counterparts in the industry are expected to focus on collaborative innovation around telecom and AI, build an open and shared innovation ecosystem, promote the deep integration of AI and telecom sector, and accelerate the construction of next-generation information infrastructure, in an effort to facilitate the digital transformation of the economy and society.
2312.01561
Yan Xu
Yan Xu, Kris Kitani
Multi-View Person Matching and 3D Pose Estimation with Arbitrary Uncalibrated Camera Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-view person matching and 3D human pose estimation in multi-camera networks are particularly difficult when the cameras are extrinsically uncalibrated. Existing efforts generally require large amounts of 3D data for training neural networks or known camera poses for geometric constraints to solve the problem. However, camera poses and 3D data annotation are usually expensive and not always available. We present a method, PME, that solves the two tasks without requiring either information. Our idea is to address cross-view person matching as a clustering problem using each person as a cluster center, then obtain correspondences from person matches, and estimate 3D human poses through multi-view triangulation and bundle adjustment. We solve the clustering problem by introducing a "size constraint" using the number of cameras and a "source constraint" using the fact that two people from the same camera view should not match, to narrow the solution space to a small feasible region. The 2D human poses used in clustering are obtained through a pre-trained 2D pose detector, so our method does not require expensive 3D training data for each new scene. We extensively evaluate our method on three open datasets and two indoor and outdoor datasets collected using arbitrarily set cameras. Our method outperforms other methods by a large margin on cross-view person matching, reaches SOTA performance on 3D human pose estimation without using either camera poses or 3D training data, and shows good generalization ability across five datasets of various environment settings.
[ { "created": "Mon, 4 Dec 2023 01:28:38 GMT", "version": "v1" } ]
2023-12-05
[ [ "Xu", "Yan", "" ], [ "Kitani", "Kris", "" ] ]
Cross-view person matching and 3D human pose estimation in multi-camera networks are particularly difficult when the cameras are extrinsically uncalibrated. Existing efforts generally require large amounts of 3D data for training neural networks or known camera poses for geometric constraints to solve the problem. However, camera poses and 3D data annotation are usually expensive and not always available. We present a method, PME, that solves the two tasks without requiring either information. Our idea is to address cross-view person matching as a clustering problem using each person as a cluster center, then obtain correspondences from person matches, and estimate 3D human poses through multi-view triangulation and bundle adjustment. We solve the clustering problem by introducing a "size constraint" using the number of cameras and a "source constraint" using the fact that two people from the same camera view should not match, to narrow the solution space to a small feasible region. The 2D human poses used in clustering are obtained through a pre-trained 2D pose detector, so our method does not require expensive 3D training data for each new scene. We extensively evaluate our method on three open datasets and two indoor and outdoor datasets collected using arbitrarily set cameras. Our method outperforms other methods by a large margin on cross-view person matching, reaches SOTA performance on 3D human pose estimation without using either camera poses or 3D training data, and shows good generalization ability across five datasets of various environment settings.
2311.11550
Kun Wang
Kun Wang, Yu Fua, Xueyuan Duan, Taotao Liu, Jianqiao Xu
Abnormal traffic detection system in SDN based on deep learning hybrid models
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Software defined network (SDN) provides technical support for network construction in smart cities. However, the openness of SDN also makes it prone to more network attacks. Traditional abnormal traffic detection methods have complex algorithms and find it difficult to detect abnormalities in the network promptly, which cannot meet the demand for abnormality detection in the SDN environment. Therefore, we propose an abnormal traffic detection system based on a deep learning hybrid model. The system adopts a hierarchical detection technique, which first achieves rough detection of abnormal traffic based on port information. Then it uses wavelet transform and deep learning techniques for fine detection of all traffic data flowing through suspicious switches. The experimental results show that the proposed detection method based on port information can quickly complete the approximate localization of the source of abnormal traffic. The accuracy, precision, and recall of the fine detection are significantly improved compared with the traditional method of abnormal traffic detection in SDN.
[ { "created": "Mon, 20 Nov 2023 06:05:32 GMT", "version": "v1" } ]
2023-11-21
[ [ "Wang", "Kun", "" ], [ "Fua", "Yu", "" ], [ "Duan", "Xueyuan", "" ], [ "Liu", "Taotao", "" ], [ "Xu", "Jianqiao", "" ] ]
Software-defined networking (SDN) provides technical support for network construction in smart cities. However, the openness of SDN also makes it prone to more network attacks. Traditional abnormal traffic detection methods rely on complex algorithms and find it difficult to detect network anomalies promptly, so they cannot meet the demand for anomaly detection in the SDN environment. Therefore, we propose an abnormal traffic detection system based on a deep learning hybrid model. The system adopts a hierarchical detection technique: it first achieves rough detection of abnormal traffic based on port information, and then uses wavelet transform and deep learning techniques for fine detection of all traffic data flowing through suspicious switches. The experimental results show that the proposed detection method based on port information can quickly complete approximate localization of the source of abnormal traffic, and that the accuracy, precision, and recall of fine detection are significantly improved compared with traditional methods of abnormal traffic detection in SDN.
2107.06054
Maxime Buron
Maxime Buron, Marie-Laure Mugnier, Micha\"el Thomazo
Parallelisable Existential Rules: a Story of Pieces
null
null
null
null
cs.AI cs.DB
http://creativecommons.org/licenses/by/4.0/
In this paper, we consider existential rules, an expressive formalism well suited to the representation of ontological knowledge and data-to-ontology mappings in the context of ontology-based data integration. The chase is a fundamental tool to do reasoning with existential rules as it computes all the facts entailed by the rules from a database instance. We introduce parallelisable sets of existential rules, for which the chase can be computed in a single breadth-first step from any instance. The question we investigate is the characterization of such rule sets. We show that parallelisable rule sets are exactly those rule sets both bounded for the chase and belonging to a novel class of rules, called pieceful. The pieceful class includes in particular frontier-guarded existential rules and (plain) datalog. We also give another characterization of parallelisable rule sets in terms of rule composition based on rewriting.
[ { "created": "Tue, 13 Jul 2021 13:09:14 GMT", "version": "v1" } ]
2021-07-14
[ [ "Buron", "Maxime", "" ], [ "Mugnier", "Marie-Laure", "" ], [ "Thomazo", "Michaël", "" ] ]
In this paper, we consider existential rules, an expressive formalism well suited to the representation of ontological knowledge and data-to-ontology mappings in the context of ontology-based data integration. The chase is a fundamental tool to do reasoning with existential rules as it computes all the facts entailed by the rules from a database instance. We introduce parallelisable sets of existential rules, for which the chase can be computed in a single breadth-first step from any instance. The question we investigate is the characterization of such rule sets. We show that parallelisable rule sets are exactly those rule sets both bounded for the chase and belonging to a novel class of rules, called pieceful. The pieceful class includes in particular frontier-guarded existential rules and (plain) datalog. We also give another characterization of parallelisable rule sets in terms of rule composition based on rewriting.
2202.04431
Filipe Cogo
Filipe R. Cogo and Xin Xia and Ahmed E. Hassan
Assessing the alignment between the information needs of developers and the documentation of programming languages: A case study on Rust
null
ACM Transactions on Software Engineering and Methodology (2022)
10.1145/3546945
null
cs.SE cs.PL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Programming language documentation refers to the set of technical documents that provide application developers with a description of the high-level concepts of a language. Such documentation is essential to support application developers in the effective use of a programming language. One of the challenges faced by documenters (i.e., personnel that produce documentation) is to ensure that documentation has relevant information that aligns with the concrete needs of developers. In this paper, we present an automated approach to support documenters in evaluating the differences and similarities between the concrete information needs of developers and the current state of documentation (a problem that we refer to as the topical alignment of a programming language documentation). Our approach leverages semi-supervised topic modelling to assess the similarities and differences between the topics of Q&A posts and the official documentation. To demonstrate the application of our approach, we perform a case study on the documentation of Rust. Our results show that there is a relatively high level of topical alignment in Rust documentation. Still, information about specific topics is scarce in both the Q&A websites and the documentation, particularly for topics related to programming niches such as network, game, and database development. For other topics (e.g., topics related to language features such as structs, pattern matching, and the foreign function interface), information is only available on Q&A websites while lacking in the official documentation. Finally, we discuss implications for programming language documenters, particularly how to leverage our approach to prioritize topics that should be added to the documentation.
[ { "created": "Tue, 8 Feb 2022 14:45:16 GMT", "version": "v1" } ]
2022-10-11
[ [ "Cogo", "Filipe R.", "" ], [ "Xia", "Xin", "" ], [ "Hassan", "Ahmed E.", "" ] ]
Programming language documentation refers to the set of technical documents that provide application developers with a description of the high-level concepts of a language. Such documentation is essential to support application developers in the effective use of a programming language. One of the challenges faced by documenters (i.e., personnel that produce documentation) is to ensure that documentation has relevant information that aligns with the concrete needs of developers. In this paper, we present an automated approach to support documenters in evaluating the differences and similarities between the concrete information needs of developers and the current state of documentation (a problem that we refer to as the topical alignment of a programming language documentation). Our approach leverages semi-supervised topic modelling to assess the similarities and differences between the topics of Q&A posts and the official documentation. To demonstrate the application of our approach, we perform a case study on the documentation of Rust. Our results show that there is a relatively high level of topical alignment in Rust documentation. Still, information about specific topics is scarce in both the Q&A websites and the documentation, particularly for topics related to programming niches such as network, game, and database development. For other topics (e.g., topics related to language features such as structs, pattern matching, and the foreign function interface), information is only available on Q&A websites while lacking in the official documentation. Finally, we discuss implications for programming language documenters, particularly how to leverage our approach to prioritize topics that should be added to the documentation.
1405.3033
Zeeshan Bhatti
Zeeshan Bhatti, Ahmad Waqas, Imdad Ali Ismaili, Dil Nawaz Hakro, Waseem Javaid Soomro
Phonetic based SoundEx & ShapeEx algorithm for Sindhi Spell Checker System
9 pages, 6 figures, 5 Tables, Sindhi Computing, Sindhi Language
Adv. Environ. Biol., 8(4), 1147-1155, AENSI Publisher, 2014
null
null
cs.CL
http://creativecommons.org/licenses/by/3.0/
This paper presents a novel combinational phonetic algorithm for the Sindhi language, to be used in developing a Sindhi spell checker, which has not been developed prior to this work. The compound textual forms and glyphs of the Sindhi language present a substantial challenge for developing a Sindhi spell checker system and generating suggestion lists for misspelled words. To implement such a system, phonetic rules and patterns of the Sindhi language must be taken into account to increase accuracy and efficiency. The proposed system blends a phonetic-based SoundEx algorithm with a ShapeEx algorithm for pattern or glyph matching, generating accurate and efficient suggestion lists for incorrect or misspelled Sindhi words. A table of phonetically similar-sounding Sindhi characters for the SoundEx algorithm is also generated, along with another table containing similar glyph- or shape-based character groups for the ShapeEx algorithm. Both are the first attempts at such a categorization and representation for the Sindhi language.
[ { "created": "Tue, 13 May 2014 04:33:04 GMT", "version": "v1" } ]
2014-05-14
[ [ "Bhatti", "Zeeshan", "" ], [ "Waqas", "Ahmad", "" ], [ "Ismaili", "Imdad Ali", "" ], [ "Hakro", "Dil Nawaz", "" ], [ "Soomro", "Waseem Javaid", "" ] ]
This paper presents a novel combinational phonetic algorithm for the Sindhi language, to be used in developing a Sindhi spell checker, which has not been developed prior to this work. The compound textual forms and glyphs of the Sindhi language present a substantial challenge for developing a Sindhi spell checker system and generating suggestion lists for misspelled words. To implement such a system, phonetic rules and patterns of the Sindhi language must be taken into account to increase accuracy and efficiency. The proposed system blends a phonetic-based SoundEx algorithm with a ShapeEx algorithm for pattern or glyph matching, generating accurate and efficient suggestion lists for incorrect or misspelled Sindhi words. A table of phonetically similar-sounding Sindhi characters for the SoundEx algorithm is also generated, along with another table containing similar glyph- or shape-based character groups for the ShapeEx algorithm. Both are the first attempts at such a categorization and representation for the Sindhi language.
1602.03644
Jesus Arnau
Jes\'us Arnau and Italo Atzeni and Marios Kountouris
Impact of LOS/NLOS Propagation and Path Loss in Ultra-Dense Cellular Networks
Paper presented at IEEE ICC 2016 - Wireless Communications Symposium
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most prior work on performance analysis of ultra-dense cellular networks (UDNs) has considered standard power-law path loss models and non-line-of-sight (NLOS) propagation modeled by Rayleigh fading. The effect of line-of-sight (LOS) on coverage and throughput and its implication on network densification are still not fully understood. In this paper, we investigate the performance of UDNs when the signal propagation includes both LOS and NLOS components. Using a stochastic geometry based cellular network model, we derive expressions for the coverage probability, as well as tight approximations and upper bounds for both closest and strongest base station (BS) association. Our results show that under standard singular path loss model, LOS propagation increases the coverage, especially with nearest BS association. On the contrary, using dual slope path loss, LOS propagation is beneficial with closest BS association and detrimental for strongest BS association.
[ { "created": "Thu, 11 Feb 2016 09:09:50 GMT", "version": "v1" }, { "created": "Fri, 17 Jun 2016 15:50:01 GMT", "version": "v2" }, { "created": "Wed, 28 Sep 2016 15:22:46 GMT", "version": "v3" } ]
2016-09-29
[ [ "Arnau", "Jesús", "" ], [ "Atzeni", "Italo", "" ], [ "Kountouris", "Marios", "" ] ]
Most prior work on performance analysis of ultra-dense cellular networks (UDNs) has considered standard power-law path loss models and non-line-of-sight (NLOS) propagation modeled by Rayleigh fading. The effect of line-of-sight (LOS) on coverage and throughput and its implication on network densification are still not fully understood. In this paper, we investigate the performance of UDNs when the signal propagation includes both LOS and NLOS components. Using a stochastic geometry based cellular network model, we derive expressions for the coverage probability, as well as tight approximations and upper bounds for both closest and strongest base station (BS) association. Our results show that under standard singular path loss model, LOS propagation increases the coverage, especially with nearest BS association. On the contrary, using dual slope path loss, LOS propagation is beneficial with closest BS association and detrimental for strongest BS association.
0908.2295
Ezra N. Hoch
Danny Dolev, Ezra N. Hoch, Yoram Moses
An Optimal Self-Stabilizing Firing Squad
Shorter version to appear in SSS09
null
10.1007/978-3-642-05118-0_20
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consider a fully connected network where up to $t$ processes may crash, and all processes start in an arbitrary memory state. The self-stabilizing firing squad problem consists of eventually guaranteeing simultaneous response to an external input. This is modeled by requiring that the non-crashed processes "fire" simultaneously if some correct process received an external "GO" input, and that they only fire as a response to some process receiving such an input. This paper presents FireAlg, the first self-stabilizing firing squad algorithm. The FireAlg algorithm is optimal in two respects: (a) Once the algorithm is in a safe state, it fires in response to a GO input as fast as any other algorithm does, and (b) Starting from an arbitrary state, it converges to a safe state as fast as any other algorithm does.
[ { "created": "Mon, 17 Aug 2009 07:39:06 GMT", "version": "v1" } ]
2015-05-13
[ [ "Dolev", "Danny", "" ], [ "Hoch", "Ezra N.", "" ], [ "Moses", "Yoram", "" ] ]
Consider a fully connected network where up to $t$ processes may crash, and all processes start in an arbitrary memory state. The self-stabilizing firing squad problem consists of eventually guaranteeing simultaneous response to an external input. This is modeled by requiring that the non-crashed processes "fire" simultaneously if some correct process received an external "GO" input, and that they only fire as a response to some process receiving such an input. This paper presents FireAlg, the first self-stabilizing firing squad algorithm. The FireAlg algorithm is optimal in two respects: (a) Once the algorithm is in a safe state, it fires in response to a GO input as fast as any other algorithm does, and (b) Starting from an arbitrary state, it converges to a safe state as fast as any other algorithm does.
1111.0594
Andrey Nikolaev
Andrey Nikolaev
Exploring Oracle RDBMS latches using Solaris DTrace
14 pages, 6 figures, 6 tables. MEDIAS 2011 Conference. Limassol, Cyprus
null
null
null
cs.DB cs.DC cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rise of hundred-core technologies brings the problem of interprocess synchronization in database engines back to the forefront. Spinlocks are widely used in contemporary DBMSs to synchronize processes at microsecond timescales. Latches are Oracle RDBMS-specific spinlocks. Latch contention is commonly observed in contemporary high-concurrency OLTP environments. In contrast to the system spinlocks used in operating system kernels, latches work in user context. Such user-level spinlocks are influenced by context preemption and multitasking. Until recently there were no direct methods to measure the effectiveness of user spinlocks. This became possible with the emergence of the Solaris 10 Dynamic Tracing framework. DTrace allows tracing and profiling of both the OS and user applications. This work investigates the possibilities of diagnosing and tuning Oracle latches. It explores the contemporary latch implementation and spinning-blocking strategies, and analyses the corresponding statistic counters. A mathematical model is developed to analytically estimate the effect of tuning the _SPIN_COUNT value.
[ { "created": "Wed, 2 Nov 2011 18:20:36 GMT", "version": "v1" } ]
2011-11-03
[ [ "Nikolaev", "Andrey", "" ] ]
The rise of hundred-core technologies brings the problem of interprocess synchronization in database engines back to the forefront. Spinlocks are widely used in contemporary DBMSs to synchronize processes at microsecond timescales. Latches are Oracle RDBMS-specific spinlocks. Latch contention is commonly observed in contemporary high-concurrency OLTP environments. In contrast to the system spinlocks used in operating system kernels, latches work in user context. Such user-level spinlocks are influenced by context preemption and multitasking. Until recently there were no direct methods to measure the effectiveness of user spinlocks. This became possible with the emergence of the Solaris 10 Dynamic Tracing framework. DTrace allows tracing and profiling of both the OS and user applications. This work investigates the possibilities of diagnosing and tuning Oracle latches. It explores the contemporary latch implementation and spinning-blocking strategies, and analyses the corresponding statistic counters. A mathematical model is developed to analytically estimate the effect of tuning the _SPIN_COUNT value.
1710.01692
Dokhyam Hoshen
Dokhyam Hoshen, Michael Werman
IQ of Neural Networks
null
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
IQ tests are an accepted method for assessing human intelligence. The tests consist of several parts that must be solved under a time constraint. Of all the tested abilities, pattern recognition has been found to have the highest correlation with general intelligence. This is primarily because pattern recognition is the ability to find order in a noisy environment, a necessary skill for intelligent agents. In this paper, we propose a convolutional neural network (CNN) model for solving geometric pattern recognition problems. The CNN receives as input multiple ordered input images and outputs the next image according to the pattern. Our CNN is able to solve problems involving rotation, reflection, color, size and shape patterns and score within the top 5% of human performance.
[ { "created": "Fri, 29 Sep 2017 11:48:58 GMT", "version": "v1" } ]
2017-10-05
[ [ "Hoshen", "Dokhyam", "" ], [ "Werman", "Michael", "" ] ]
IQ tests are an accepted method for assessing human intelligence. The tests consist of several parts that must be solved under a time constraint. Of all the tested abilities, pattern recognition has been found to have the highest correlation with general intelligence. This is primarily because pattern recognition is the ability to find order in a noisy environment, a necessary skill for intelligent agents. In this paper, we propose a convolutional neural network (CNN) model for solving geometric pattern recognition problems. The CNN receives as input multiple ordered input images and outputs the next image according to the pattern. Our CNN is able to solve problems involving rotation, reflection, color, size and shape patterns and score within the top 5% of human performance.
1501.04232
Christian Bauckhage
Christian Bauckhage, Kristian Kersting, Fabian Hadiji
Maximum Entropy Models of Shortest Path and Outbreak Distributions in Networks
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Properties of networks are often characterized in terms of features such as node degree distributions, average path lengths, diameters, or clustering coefficients. Here, we study shortest path length distributions. On the one hand, average as well as maximum distances can be determined therefrom; on the other hand, they are closely related to the dynamics of network spreading processes. Because of the combinatorial nature of networks, we apply maximum entropy arguments to derive a general, physically plausible model. In particular, we establish the generalized Gamma distribution as a continuous characterization of shortest path length histograms of networks of arbitrary topology. Experimental evaluations corroborate our theoretical results.
[ { "created": "Sat, 17 Jan 2015 21:37:10 GMT", "version": "v1" } ]
2015-01-20
[ [ "Bauckhage", "Christian", "" ], [ "Kersting", "Kristian", "" ], [ "Hadiji", "Fabian", "" ] ]
Properties of networks are often characterized in terms of features such as node degree distributions, average path lengths, diameters, or clustering coefficients. Here, we study shortest path length distributions. On the one hand, average as well as maximum distances can be determined therefrom; on the other hand, they are closely related to the dynamics of network spreading processes. Because of the combinatorial nature of networks, we apply maximum entropy arguments to derive a general, physically plausible model. In particular, we establish the generalized Gamma distribution as a continuous characterization of shortest path length histograms of networks of arbitrary topology. Experimental evaluations corroborate our theoretical results.
2107.09725
Hung La
Ashutosh Singandhupe, Hung La, Trung Dung Ngo, Van Ho
Registration of 3D Point Sets Using Correntropy Similarity Matrix
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work focuses on the registration, or alignment, of 3D point sets. Although registration is a well-established problem solved using multiple variants of the Iterative Closest Point (ICP) algorithm, most approaches in the current state of the art still suffer from misalignment when the \textit{Source} and \textit{Target} point sets are separated by large rotations and translations. In this work, we propose a variant of the standard ICP algorithm in which we introduce a Correntropy Relationship Matrix in the computation of the rotation and translation components, which attempts to solve the large rotation and translation problem between the \textit{Source} and \textit{Target} point sets. This matrix is created through a correntropy criterion that is updated in every iteration. The correntropy criterion defined in this approach maintains the relationship between the points in the \textit{Source} dataset and the \textit{Target} dataset. Through our experiments and validation, we verify that our approach performs well under various rotations and translations in comparison to other well-known state-of-the-art methods available in the Point Cloud Library (PCL), as well as other methods available as open source. We have uploaded our code to the GitHub repository for readers to validate and verify our approach: https://github.com/aralab-unr/CoSM-ICP.
[ { "created": "Tue, 20 Jul 2021 18:56:22 GMT", "version": "v1" } ]
2021-07-22
[ [ "Singandhupe", "Ashutosh", "" ], [ "La", "Hung", "" ], [ "Ngo", "Trung Dung", "" ], [ "Ho", "Van", "" ] ]
This work focuses on the registration, or alignment, of 3D point sets. Although registration is a well-established problem solved using multiple variants of the Iterative Closest Point (ICP) algorithm, most approaches in the current state of the art still suffer from misalignment when the \textit{Source} and \textit{Target} point sets are separated by large rotations and translations. In this work, we propose a variant of the standard ICP algorithm in which we introduce a Correntropy Relationship Matrix in the computation of the rotation and translation components, which attempts to solve the large rotation and translation problem between the \textit{Source} and \textit{Target} point sets. This matrix is created through a correntropy criterion that is updated in every iteration. The correntropy criterion defined in this approach maintains the relationship between the points in the \textit{Source} dataset and the \textit{Target} dataset. Through our experiments and validation, we verify that our approach performs well under various rotations and translations in comparison to other well-known state-of-the-art methods available in the Point Cloud Library (PCL), as well as other methods available as open source. We have uploaded our code to the GitHub repository for readers to validate and verify our approach: https://github.com/aralab-unr/CoSM-ICP.
1704.02373
Achintya Sarkar
Achintya Kr. Sarkar and Zheng-Hua Tan
Time-Contrastive Learning Based DNN Bottleneck Features for Text-Dependent Speaker Verification
null
NIPS Time Series Workshop 2017, Long Beach, CA, USA
null
null
cs.SD cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a time-contrastive learning (TCL) based bottleneck (BN) feature extraction method for speech signals with an application to text-dependent (TD) speaker verification (SV). It is well-known that speech signals exhibit quasi-stationary behavior in and only in a short interval, and the TCL method aims to exploit this temporal structure. More specifically, it trains deep neural networks (DNNs) to discriminate temporal events obtained by uniformly segmenting speech signals, in contrast to existing DNN based BN feature extraction methods that train DNNs using labeled data to discriminate speakers or pass-phrases or phones or a combination of them. In the context of speaker verification, speech data of fixed pass-phrases are used for TCL-BN training, while the pass-phrases used for TCL-BN training are excluded from being used for SV, so that the learned features can be considered generic. The method is evaluated on the RedDots Challenge 2016 database. Experimental results show that TCL-BN is superior to the existing speaker and pass-phrase discriminant BN features and the Mel-frequency cepstral coefficient feature for text-dependent speaker verification.
[ { "created": "Thu, 6 Apr 2017 09:37:41 GMT", "version": "v1" }, { "created": "Mon, 27 Nov 2017 16:56:31 GMT", "version": "v2" }, { "created": "Sat, 11 May 2019 16:19:20 GMT", "version": "v3" } ]
2019-05-14
[ [ "Sarkar", "Achintya Kr.", "" ], [ "Tan", "Zheng-Hua", "" ] ]
In this paper, we present a time-contrastive learning (TCL) based bottleneck (BN) feature extraction method for speech signals with an application to text-dependent (TD) speaker verification (SV). It is well-known that speech signals exhibit quasi-stationary behavior in and only in a short interval, and the TCL method aims to exploit this temporal structure. More specifically, it trains deep neural networks (DNNs) to discriminate temporal events obtained by uniformly segmenting speech signals, in contrast to existing DNN based BN feature extraction methods that train DNNs using labeled data to discriminate speakers or pass-phrases or phones or a combination of them. In the context of speaker verification, speech data of fixed pass-phrases are used for TCL-BN training, while the pass-phrases used for TCL-BN training are excluded from being used for SV, so that the learned features can be considered generic. The method is evaluated on the RedDots Challenge 2016 database. Experimental results show that TCL-BN is superior to the existing speaker and pass-phrase discriminant BN features and the Mel-frequency cepstral coefficient feature for text-dependent speaker verification.
1805.02917
Motoki Sato
Motoki Sato, Jun Suzuki, Hiroyuki Shindo, Yuji Matsumoto
Interpretable Adversarial Perturbation in Input Embedding Space for Text
8 pages, 4 figures
IJCAI-ECAI-2018
null
null
cs.LG cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following great success in the image processing field, the idea of adversarial training has been applied to tasks in the natural language processing (NLP) field. One promising approach directly applies adversarial training developed in the image processing field to the input word embedding space instead of the discrete input space of texts. However, this approach abandons the interpretability of generating adversarial texts while significantly improving the performance of NLP tasks. This paper restores interpretability to such methods by restricting the directions of perturbations toward the existing words in the input embedding space. As a result, we can straightforwardly reconstruct each input with perturbations to an actual text by considering the perturbations to be the replacement of words in the sentence while maintaining or even improving the task performance.
[ { "created": "Tue, 8 May 2018 09:27:46 GMT", "version": "v1" } ]
2018-05-09
[ [ "Sato", "Motoki", "" ], [ "Suzuki", "Jun", "" ], [ "Shindo", "Hiroyuki", "" ], [ "Matsumoto", "Yuji", "" ] ]
Following great success in the image processing field, the idea of adversarial training has been applied to tasks in the natural language processing (NLP) field. One promising approach directly applies adversarial training developed in the image processing field to the input word embedding space instead of the discrete input space of texts. However, this approach abandons the interpretability of generating adversarial texts while significantly improving the performance of NLP tasks. This paper restores interpretability to such methods by restricting the directions of perturbations toward the existing words in the input embedding space. As a result, we can straightforwardly reconstruct each input with perturbations to an actual text by considering the perturbations to be the replacement of words in the sentence while maintaining or even improving the task performance.
2309.13869
Minseok Choi
Minseok Choi, Hyesu Lim, Jaegul Choo
PRiSM: Enhancing Low-Resource Document-Level Relation Extraction with Relation-Aware Score Calibration
Accepted to Findings of IJCNLP-AACL 2023
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Document-level relation extraction (DocRE) aims to extract relations of all entity pairs in a document. A key challenge in DocRE is the cost of annotating such data which requires intensive human effort. Thus, we investigate the case of DocRE in a low-resource setting, and we find that existing models trained on low data overestimate the NA ("no relation") label, causing limited performance. In this work, we approach the problem from a calibration perspective and propose PRiSM, which learns to adapt logits based on relation semantic information. We evaluate our method on three DocRE datasets and demonstrate that integrating existing models with PRiSM improves performance by as much as 26.38 F1 score, while the calibration error drops as much as 36 times when trained with about 3% of data. The code is publicly available at https://github.com/brightjade/PRiSM.
[ { "created": "Mon, 25 Sep 2023 04:42:39 GMT", "version": "v1" } ]
2023-10-13
[ [ "Choi", "Minseok", "" ], [ "Lim", "Hyesu", "" ], [ "Choo", "Jaegul", "" ] ]
Document-level relation extraction (DocRE) aims to extract relations of all entity pairs in a document. A key challenge in DocRE is the cost of annotating such data which requires intensive human effort. Thus, we investigate the case of DocRE in a low-resource setting, and we find that existing models trained on low data overestimate the NA ("no relation") label, causing limited performance. In this work, we approach the problem from a calibration perspective and propose PRiSM, which learns to adapt logits based on relation semantic information. We evaluate our method on three DocRE datasets and demonstrate that integrating existing models with PRiSM improves performance by as much as 26.38 F1 score, while the calibration error drops as much as 36 times when trained with about 3% of data. The code is publicly available at https://github.com/brightjade/PRiSM.
2208.13154
Haoxiang Wang
Haoxiang Wang, Zhanhong Jiang, Chao Liu, Soumik Sarkar, Dongxiang Jiang, Young M. Lee
Asynchronous Training Schemes in Distributed Learning with Time Delay
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the context of distributed deep learning, the issue of stale weights or gradients could result in poor algorithmic performance. This issue is usually tackled by delay tolerant algorithms with some mild assumptions on the objective functions and step sizes. In this paper, we propose a different approach to develop a new algorithm, called $\textbf{P}$redicting $\textbf{C}$lipping $\textbf{A}$synchronous $\textbf{S}$tochastic $\textbf{G}$radient $\textbf{D}$escent (aka, PC-ASGD). Specifically, PC-ASGD has two steps - the $\textit{predicting step}$ leverages the gradient prediction using Taylor expansion to reduce the staleness of the outdated weights while the $\textit{clipping step}$ selectively drops the outdated weights to alleviate their negative effects. A tradeoff parameter is introduced to balance the effects between these two steps. Theoretically, we present the convergence rate considering the effects of delay of the proposed algorithm with constant step size when the smooth objective functions are weakly strongly-convex and nonconvex. One practical variant of PC-ASGD is also proposed by adopting a condition to help with the determination of the tradeoff parameter. For empirical validation, we demonstrate the performance of the algorithm with two deep neural network architectures on two benchmark datasets.
[ { "created": "Sun, 28 Aug 2022 07:14:59 GMT", "version": "v1" } ]
2022-08-30
[ [ "Wang", "Haoxiang", "" ], [ "Jiang", "Zhanhong", "" ], [ "Liu", "Chao", "" ], [ "Sarkar", "Soumik", "" ], [ "Jiang", "Dongxiang", "" ], [ "Lee", "Young M.", "" ] ]
In the context of distributed deep learning, the issue of stale weights or gradients could result in poor algorithmic performance. This issue is usually tackled by delay tolerant algorithms with some mild assumptions on the objective functions and step sizes. In this paper, we propose a different approach to develop a new algorithm, called $\textbf{P}$redicting $\textbf{C}$lipping $\textbf{A}$synchronous $\textbf{S}$tochastic $\textbf{G}$radient $\textbf{D}$escent (aka, PC-ASGD). Specifically, PC-ASGD has two steps - the $\textit{predicting step}$ leverages the gradient prediction using Taylor expansion to reduce the staleness of the outdated weights while the $\textit{clipping step}$ selectively drops the outdated weights to alleviate their negative effects. A tradeoff parameter is introduced to balance the effects between these two steps. Theoretically, we present the convergence rate considering the effects of delay of the proposed algorithm with constant step size when the smooth objective functions are weakly strongly-convex and nonconvex. One practical variant of PC-ASGD is also proposed by adopting a condition to help with the determination of the tradeoff parameter. For empirical validation, we demonstrate the performance of the algorithm with two deep neural network architectures on two benchmark datasets.
1404.6218
Ashkan Tousimojarad Mr
Ashkan Tousimojarad and Wim Vanderbauwhede
A Parallel Task-based Approach to Linear Algebra
Final version as appeared in "dx.doi.org/10.1109/ISPDC.2014.11"
Tousimojarad, A., Vanderbauwhede, W.: A parallel task-based approach to linear algebra. In: Parallel and Distributed Computing (ISPDC), 2014 IEEE 13th International Symposium on. pp. 59-66. IEEE (2014)
10.1109/ISPDC.2014.11
null
cs.DC cs.PF cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Processors with large numbers of cores are becoming commonplace. In order to take advantage of the available resources in these systems, the programming paradigm has to move towards increased parallelism. However, increasing the level of concurrency in the program does not necessarily lead to better performance. Parallel programming models have to provide flexible ways of defining parallel tasks and at the same time, efficiently managing the created tasks. OpenMP is a widely accepted programming model for shared-memory architectures. In this paper we highlight some of the drawbacks in the OpenMP tasking approach, and propose an alternative model based on the Glasgow Parallel Reduction Machine (GPRM) programming framework. As the main focus of this study, we deploy our model to solve a fundamental linear algebra problem, LU factorisation of sparse matrices. We have used the SparseLU benchmark from the BOTS benchmark suite, and compared the results obtained from our model to those of the OpenMP tasking approach. The TILEPro64 system has been used to run the experiments. The results are very promising, not only because of the performance improvement for this particular problem, but also because they verify the task management efficiency, stability, and flexibility of our model, which can be applied to solve problems in future many-core systems.
[ { "created": "Thu, 24 Apr 2014 18:39:30 GMT", "version": "v1" }, { "created": "Fri, 3 Oct 2014 14:53:58 GMT", "version": "v2" }, { "created": "Mon, 6 Oct 2014 15:46:24 GMT", "version": "v3" } ]
2014-10-07
[ [ "Tousimojarad", "Ashkan", "" ], [ "Vanderbauwhede", "Wim", "" ] ]
Processors with large numbers of cores are becoming commonplace. In order to take advantage of the available resources in these systems, the programming paradigm has to move towards increased parallelism. However, increasing the level of concurrency in the program does not necessarily lead to better performance. Parallel programming models have to provide flexible ways of defining parallel tasks and at the same time, efficiently managing the created tasks. OpenMP is a widely accepted programming model for shared-memory architectures. In this paper we highlight some of the drawbacks in the OpenMP tasking approach, and propose an alternative model based on the Glasgow Parallel Reduction Machine (GPRM) programming framework. As the main focus of this study, we deploy our model to solve a fundamental linear algebra problem, LU factorisation of sparse matrices. We have used the SparseLU benchmark from the BOTS benchmark suite, and compared the results obtained from our model to those of the OpenMP tasking approach. The TILEPro64 system has been used to run the experiments. The results are very promising, not only because of the performance improvement for this particular problem, but also because they verify the task management efficiency, stability, and flexibility of our model, which can be applied to solve problems in future many-core systems.
1005.0783
Yiwei Sun
Omer Shahid Ahmad, Faisal Alrashdi, Jason (Jun-Duo) Chen, Najah Ilham, Jianhai Lu, Yiwei Sun, Tong Wang, Yongxin Zhu
Software Requirements Specification of the IUfA's UUIS -- a Team 2 COMP5541-W10 Project Approach
52 pages. 51 tables, 4 figures
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the 52-page document, we describe our approach to the Software Requirements Specification of the IUfA's UUIS prototype. This includes the overall system description, functional requirements, non-functional requirements, use cases, the corresponding data dictionary for all entities involved, mock user interface (UI) design, and the overall projected cost estimate. The design specification of UUIS can be found in arXiv:1005.0665.
[ { "created": "Wed, 5 May 2010 15:59:50 GMT", "version": "v1" }, { "created": "Fri, 7 May 2010 15:59:22 GMT", "version": "v2" } ]
2015-03-17
[ [ "Ahmad", "Omer Shahid", "" ], [ "Alrashdi", "Faisal", "" ], [ "Chen", "Jason (Jun-Duo)", "" ], [ "Ilham", "Najah", "" ], [ "Lu", "Jianhai", "" ], [ "Sun", "Yiwei", "" ], [ "Wang", "Tong", "" ], [ "Zhu", "Yongxin", "" ] ]
In the 52-page document, we describe our approach to the Software Requirements Specification of the IUfA's UUIS prototype. This includes the overall system description, functional requirements, non-functional requirements, use cases, the corresponding data dictionary for all entities involved, mock user interface (UI) design, and the overall projected cost estimate. The design specification of UUIS can be found in arXiv:1005.0665.
1410.7923
Jesper W. Mikkelsen
Jesper W. Mikkelsen
Optimal Online Edge Coloring of Planar Graphs with Advice
CIAC 2015
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using the framework of advice complexity, we study the amount of knowledge about the future that an online algorithm needs to color the edges of a graph optimally, i.e., using as few colors as possible. For graphs of maximum degree $\Delta$, it follows from Vizing's Theorem that $O(m\log \Delta)$ bits of advice suffice to achieve optimality, where $m$ is the number of edges. We show that for graphs of bounded degeneracy (a class of graphs including e.g. trees and planar graphs), only $O(m)$ bits of advice are needed to compute an optimal solution online, independently of how large $\Delta$ is. On the other hand, we show that $\Omega (m)$ bits of advice are necessary just to achieve a competitive ratio better than that of the best deterministic online algorithm without advice. Furthermore, we consider algorithms which use a fixed number of advice bits per edge (our algorithm for graphs of bounded degeneracy belongs to this class of algorithms). We show that for bipartite graphs, any such algorithm must use at least $\Omega(m\log \Delta)$ bits of advice to achieve optimality.
[ { "created": "Wed, 29 Oct 2014 10:34:01 GMT", "version": "v1" }, { "created": "Thu, 12 Feb 2015 08:20:06 GMT", "version": "v2" } ]
2015-02-13
[ [ "Mikkelsen", "Jesper W.", "" ] ]
Using the framework of advice complexity, we study the amount of knowledge about the future that an online algorithm needs to color the edges of a graph optimally, i.e., using as few colors as possible. For graphs of maximum degree $\Delta$, it follows from Vizing's Theorem that $O(m\log \Delta)$ bits of advice suffice to achieve optimality, where $m$ is the number of edges. We show that for graphs of bounded degeneracy (a class of graphs including e.g. trees and planar graphs), only $O(m)$ bits of advice are needed to compute an optimal solution online, independently of how large $\Delta$ is. On the other hand, we show that $\Omega (m)$ bits of advice are necessary just to achieve a competitive ratio better than that of the best deterministic online algorithm without advice. Furthermore, we consider algorithms which use a fixed number of advice bits per edge (our algorithm for graphs of bounded degeneracy belongs to this class of algorithms). We show that for bipartite graphs, any such algorithm must use at least $\Omega(m\log \Delta)$ bits of advice to achieve optimality.
2102.06603
James O' Neill
James O' Neill, Danushka Bollegala
Semantically-Conditioned Negative Samples for Efficient Contrastive Learning
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Negative sampling is a limiting factor w.r.t. the generalization of metric-learned neural networks. We show that uniform negative sampling provides little information about the class boundaries and thus propose three novel techniques for efficient negative sampling: drawing negative samples from (1) the top-$k$ most semantically similar classes, (2) the top-$k$ most semantically similar samples, and (3) interpolating between contrastive latent representations to create pseudo negatives. Our experiments on CIFAR-10, CIFAR-100 and Tiny-ImageNet-200 show that our proposed \textit{Semantically Conditioned Negative Sampling} and Latent Mixup lead to consistent performance improvements. In the standard supervised learning setting, on average we increase test accuracy by 1.52 percentage points on CIFAR-10 across various network architectures. In the knowledge distillation setting, (1) the performance of student networks increases by 4.56 percentage points on Tiny-ImageNet-200 and 3.29 percentage points on CIFAR-100 over student networks trained with no teacher, and (2) by 1.23 and 1.72 percentage points respectively over a \textit{hard-to-beat} baseline (Hinton et al., 2015).
[ { "created": "Fri, 12 Feb 2021 16:26:52 GMT", "version": "v1" } ]
2021-02-15
[ [ "Neill", "James O'", "" ], [ "Bollegala", "Danushka", "" ] ]
Negative sampling is a limiting factor w.r.t. the generalization of metric-learned neural networks. We show that uniform negative sampling provides little information about the class boundaries and thus propose three novel techniques for efficient negative sampling: drawing negative samples from (1) the top-$k$ most semantically similar classes, (2) the top-$k$ most semantically similar samples, and (3) interpolating between contrastive latent representations to create pseudo negatives. Our experiments on CIFAR-10, CIFAR-100 and Tiny-ImageNet-200 show that our proposed \textit{Semantically Conditioned Negative Sampling} and Latent Mixup lead to consistent performance improvements. In the standard supervised learning setting, on average we increase test accuracy by 1.52 percentage points on CIFAR-10 across various network architectures. In the knowledge distillation setting, (1) the performance of student networks increases by 4.56 percentage points on Tiny-ImageNet-200 and 3.29 percentage points on CIFAR-100 over student networks trained with no teacher, and (2) by 1.23 and 1.72 percentage points respectively over a \textit{hard-to-beat} baseline (Hinton et al., 2015).
1204.4322
David Pichardie
Thomas Jensen (INRIA Rennes), Florent Kirchner (INRIA Rennes), David Pichardie (INRIA Rennes)
Secure the Clones
null
Logical Methods in Computer Science, Volume 8, Issue 2 (May 31, 2012) lmcs:801
10.2168/LMCS-8(2:5)2012
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exchanging mutable data objects with untrusted code is a delicate matter because of the risk of creating a data space that is accessible by an attacker. Consequently, secure programming guidelines for Java stress the importance of using defensive copying before accepting or handing out references to an internal mutable object. However, implementation of a copy method (like clone()) is entirely left to the programmer. It may not provide a sufficiently deep copy of an object and is subject to overriding by a malicious sub-class. Currently no language-based mechanism supports secure object cloning. This paper proposes a type-based annotation system for defining modular copy policies for class-based object-oriented programs. A copy policy specifies the maximally allowed sharing between an object and its clone. We present a static enforcement mechanism that will guarantee that all classes fulfil their copy policy, even in the presence of overriding of copy methods, and establish the semantic correctness of the overall approach in Coq. The mechanism has been implemented and experimentally evaluated on clone methods from several Java libraries.
[ { "created": "Thu, 19 Apr 2012 11:49:03 GMT", "version": "v1" }, { "created": "Wed, 30 May 2012 19:21:46 GMT", "version": "v2" }, { "created": "Mon, 4 Jun 2012 10:04:07 GMT", "version": "v3" } ]
2015-07-01
[ [ "Jensen", "Thomas", "", "INRIA Rennes" ], [ "Kirchner", "Florent", "", "INRIA Rennes" ], [ "Pichardie", "David", "", "INRIA Rennes" ] ]
Exchanging mutable data objects with untrusted code is a delicate matter because of the risk of creating a data space that is accessible by an attacker. Consequently, secure programming guidelines for Java stress the importance of using defensive copying before accepting or handing out references to an internal mutable object. However, implementation of a copy method (like clone()) is entirely left to the programmer. It may not provide a sufficiently deep copy of an object and is subject to overriding by a malicious sub-class. Currently no language-based mechanism supports secure object cloning. This paper proposes a type-based annotation system for defining modular copy policies for class-based object-oriented programs. A copy policy specifies the maximally allowed sharing between an object and its clone. We present a static enforcement mechanism that will guarantee that all classes fulfil their copy policy, even in the presence of overriding of copy methods, and establish the semantic correctness of the overall approach in Coq. The mechanism has been implemented and experimentally evaluated on clone methods from several Java libraries.
1901.01919
Rui Chen
Rui Chen, Christos G. Cassandras
Optimization of Ride Sharing Systems Using Event-driven Receding Horizon Control
14 pages, 12 figures
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop an event-driven Receding Horizon Control (RHC) scheme for a Ride Sharing System (RSS) in a transportation network where vehicles are shared to pick up and drop off passengers so as to minimize a weighted sum of passenger waiting and traveling times. The RSS is modeled as a discrete event system and the event-driven nature of the controller significantly reduces the complexity of the vehicle assignment problem, thus enabling its real-time implementation. Simulation results using actual city maps and real taxi traffic data illustrate the effectiveness of the RH controller in terms of real-time implementation and performance relative to known greedy heuristics.
[ { "created": "Mon, 7 Jan 2019 16:58:23 GMT", "version": "v1" }, { "created": "Tue, 8 Jan 2019 22:40:14 GMT", "version": "v2" } ]
2019-01-10
[ [ "Chen", "Rui", "" ], [ "Cassandras", "Christos G.", "" ] ]
We develop an event-driven Receding Horizon Control (RHC) scheme for a Ride Sharing System (RSS) in a transportation network where vehicles are shared to pick up and drop off passengers so as to minimize a weighted sum of passenger waiting and traveling times. The RSS is modeled as a discrete event system and the event-driven nature of the controller significantly reduces the complexity of the vehicle assignment problem, thus enabling its real-time implementation. Simulation results using actual city maps and real taxi traffic data illustrate the effectiveness of the RH controller in terms of real-time implementation and performance relative to known greedy heuristics.
1912.07004
Wei Quan Lim
Wei Quan Lim
Small Connected Planar Graph with 1-Cop-Move Number 4
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a 720-vertex connected planar graph G such that cop1(G), denoting the minimum number of cops needed to catch the robber in the 1-cop-move game on G, is at least 4 and at most 7. Furthermore, G has a connected subgraph H such that cop1(H) is exactly 4, meaning that 4 cops are barely sufficient to catch the robber in the 1-cop-move game on H. This is a significant improvement over the graph given by Gao and Yang in 2017.
[ { "created": "Sun, 15 Dec 2019 08:47:55 GMT", "version": "v1" } ]
2019-12-17
[ [ "Lim", "Wei Quan", "" ] ]
This paper describes a 720-vertex connected planar graph G such that cop1(G), denoting the minimum number of cops needed to catch the robber in the 1-cop-move game on G, is at least 4 and at most 7. Furthermore, G has a connected subgraph H such that cop1(H) is exactly 4, meaning that 4 cops are barely sufficient to catch the robber in the 1-cop-move game on H. This is a significant improvement over the graph given by Gao and Yang in 2017.
1310.6173
Pantelis Monogioudis
Carl Weaver, Pantelis Monogioudis
Self-Organizing Mobility Robustness Optimization in LTE Networks with eICIC
18 pages, 8 figures
null
null
null
cs.NI cs.PF cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of Mobility Robustness Optimization (MRO) and describe centralized Self Organizing Network (SON) solutions that can optimize connected-mode mobility Key Performance Indicators (KPIs). Our solution extends the earlier work on eICIC parameter optimization [7] to heterogeneous networks with mobility, and outlines methods of progressive complexity that optimize the Retaining/Offloading Bias, which are macro/pico views of Cell Individual Offset parameters. Simulation results under real LTE network deployment assumptions of a US metropolitan area demonstrate the effects of such solutions on the mobility KPIs. To our knowledge, this solution is the first that demonstrates the joint optimization of eICIC and MRO.
[ { "created": "Wed, 23 Oct 2013 10:27:49 GMT", "version": "v1" } ]
2013-10-24
[ [ "Weaver", "Carl", "" ], [ "Monogioudis", "Pantelis", "" ] ]
We address the problem of Mobility Robustness Optimization (MRO) and describe centralized Self Organizing Network (SON) solutions that can optimize connected-mode mobility Key Performance Indicators (KPIs). Our solution extends the earlier work on eICIC parameter optimization [7] to heterogeneous networks with mobility, and outlines methods of progressive complexity that optimize the Retaining/Offloading Bias, which are macro/pico views of Cell Individual Offset parameters. Simulation results under real LTE network deployment assumptions of a US metropolitan area demonstrate the effects of such solutions on the mobility KPIs. To our knowledge, this solution is the first that demonstrates the joint optimization of eICIC and MRO.
1711.02760
Christoph Trattner
Christoph Trattner, David Elsweiler
Food Recommender Systems: Important Contributions, Challenges and Future Research Directions
null
null
null
null
cs.IR cs.CY cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recommendation of food items is important for many reasons. Attaining cooking inspiration via digital sources is becoming ever more popular, as are systems which recommend other types of food, such as meals in restaurants or products in supermarkets. Researchers have been studying these kinds of systems for many years, suggesting not only that they can be a means to help people find food they might want to eat, but also that they can help people nourish themselves more healthily. This paper provides a summary of the state-of-the-art of so-called food recommender systems, highlighting both seminal and most recent approaches to the problem, as well as important specializations, such as food recommendation systems for groups of users or systems which promote healthy eating. We moreover discuss the diverse challenges involved in designing recsys for food, summarise the lessons learned from past research and outline what we believe to be important future directions and open questions for the field. In providing these contributions we hope to offer a useful resource for researchers and practitioners alike.
[ { "created": "Tue, 7 Nov 2017 22:52:12 GMT", "version": "v1" }, { "created": "Fri, 10 Nov 2017 10:23:29 GMT", "version": "v2" } ]
2017-11-13
[ [ "Trattner", "Christoph", "" ], [ "Elsweiler", "David", "" ] ]
The recommendation of food items is important for many reasons. Attaining cooking inspiration via digital sources is becoming ever more popular, as are systems which recommend other types of food, such as meals in restaurants or products in supermarkets. Researchers have been studying these kinds of systems for many years, suggesting not only that they can be a means to help people find food they might want to eat, but also that they can help people nourish themselves more healthily. This paper provides a summary of the state-of-the-art of so-called food recommender systems, highlighting both seminal and most recent approaches to the problem, as well as important specializations, such as food recommendation systems for groups of users or systems which promote healthy eating. We moreover discuss the diverse challenges involved in designing recsys for food, summarise the lessons learned from past research and outline what we believe to be important future directions and open questions for the field. In providing these contributions we hope to offer a useful resource for researchers and practitioners alike.
2307.05914
Weipeng Zhuo
Weipeng Zhuo, Ka Ho Chiu, Jierun Chen, Ziqi Zhao, S.-H. Gary Chan, Sangtae Ha, Chul-Ho Lee
FIS-ONE: Floor Identification System with One Label for Crowdsourced RF Signals
Accepted by IEEE ICDCS 2023
null
null
null
cs.NI cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Floor labels of crowdsourced RF signals are crucial for many smart-city applications, such as multi-floor indoor localization, geofencing, and robot surveillance. To build a prediction model to identify the floor number of a new RF signal upon its measurement, conventional approaches using the crowdsourced RF signals assume that at least a few labeled signal samples are available on each floor. In this work, we push the envelope further and demonstrate that it is technically feasible to enable such floor identification with only one floor-labeled signal sample on the bottom floor while having the rest of the signal samples unlabeled. We propose FIS-ONE, a novel floor identification system with only one labeled sample. FIS-ONE consists of two steps, namely signal clustering and cluster indexing. We first build a bipartite graph to model the RF signal samples and obtain a latent representation of each node (each signal sample) using our attention-based graph neural network model so that the RF signal samples can be clustered more accurately. Then, we tackle the problem of indexing the clusters with proper floor labels, by leveraging the observation that signals from an access point can be detected on different floors, i.e., signal spillover. Specifically, we formulate a cluster indexing problem as a combinatorial optimization problem and show that it is equivalent to solving a traveling salesman problem, whose (near-)optimal solution can be found efficiently. We have implemented FIS-ONE and validated its effectiveness on the Microsoft dataset and in three large shopping malls. Our results show that FIS-ONE outperforms other baseline algorithms significantly, with up to 23% improvement in adjusted rand index and 25% improvement in normalized mutual information using only one floor-labeled signal sample.
[ { "created": "Wed, 12 Jul 2023 04:43:59 GMT", "version": "v1" } ]
2023-07-13
[ [ "Zhuo", "Weipeng", "" ], [ "Chiu", "Ka Ho", "" ], [ "Chen", "Jierun", "" ], [ "Zhao", "Ziqi", "" ], [ "Chan", "S. -H. Gary", "" ], [ "Ha", "Sangtae", "" ], [ "Lee", "Chul-Ho", "" ] ]
Floor labels of crowdsourced RF signals are crucial for many smart-city applications, such as multi-floor indoor localization, geofencing, and robot surveillance. To build a prediction model to identify the floor number of a new RF signal upon its measurement, conventional approaches using the crowdsourced RF signals assume that at least a few labeled signal samples are available on each floor. In this work, we push the envelope further and demonstrate that it is technically feasible to enable such floor identification with only one floor-labeled signal sample on the bottom floor while having the rest of the signal samples unlabeled. We propose FIS-ONE, a novel floor identification system with only one labeled sample. FIS-ONE consists of two steps, namely signal clustering and cluster indexing. We first build a bipartite graph to model the RF signal samples and obtain a latent representation of each node (each signal sample) using our attention-based graph neural network model so that the RF signal samples can be clustered more accurately. Then, we tackle the problem of indexing the clusters with proper floor labels, by leveraging the observation that signals from an access point can be detected on different floors, i.e., signal spillover. Specifically, we formulate a cluster indexing problem as a combinatorial optimization problem and show that it is equivalent to solving a traveling salesman problem, whose (near-)optimal solution can be found efficiently. We have implemented FIS-ONE and validated its effectiveness on the Microsoft dataset and in three large shopping malls. Our results show that FIS-ONE outperforms other baseline algorithms significantly, with up to 23% improvement in adjusted rand index and 25% improvement in normalized mutual information using only one floor-labeled signal sample.
1811.01256
Eric Rowland
Eric Rowland, Reem Yassawi
Automaticity and invariant measures of linear cellular automata
33 pages, 8 figures; fixed some typos
Can. J. Math.-J. Can. Math. 72 (2020) 1691-1726
10.4153/S0008414X19000488
null
cs.FL cs.DM math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that spacetime diagrams of linear cellular automata $\Phi : {\mathbb F}_p^{\mathbb Z} \to {\mathbb F}_p^{\mathbb Z}$ with $(-p)$-automatic initial conditions are automatic. This extends existing results on initial conditions which are eventually constant. Each automatic spacetime diagram defines a $(\sigma, \Phi)$-invariant subset of ${\mathbb F}_p^{\mathbb Z}$, where $\sigma$ is the left shift map, and if the initial condition is not eventually periodic then this invariant set is nontrivial. For the Ledrappier cellular automaton we construct a family of nontrivial $(\sigma, \Phi)$-invariant measures on ${\mathbb F}_3^{\mathbb Z}$. Finally, given a linear cellular automaton $\Phi$, we construct a nontrivial $(\sigma, \Phi)$-invariant measure on ${\mathbb F}_p^{\mathbb Z}$ for all but finitely many $p$.
[ { "created": "Sat, 3 Nov 2018 17:33:02 GMT", "version": "v1" }, { "created": "Thu, 14 Feb 2019 18:21:19 GMT", "version": "v2" }, { "created": "Wed, 19 Feb 2020 16:40:11 GMT", "version": "v3" } ]
2023-09-06
[ [ "Rowland", "Eric", "" ], [ "Yassawi", "Reem", "" ] ]
We show that spacetime diagrams of linear cellular automata $\Phi : {\mathbb F}_p^{\mathbb Z} \to {\mathbb F}_p^{\mathbb Z}$ with $(-p)$-automatic initial conditions are automatic. This extends existing results on initial conditions which are eventually constant. Each automatic spacetime diagram defines a $(\sigma, \Phi)$-invariant subset of ${\mathbb F}_p^{\mathbb Z}$, where $\sigma$ is the left shift map, and if the initial condition is not eventually periodic then this invariant set is nontrivial. For the Ledrappier cellular automaton we construct a family of nontrivial $(\sigma, \Phi)$-invariant measures on ${\mathbb F}_3^{\mathbb Z}$. Finally, given a linear cellular automaton $\Phi$, we construct a nontrivial $(\sigma, \Phi)$-invariant measure on ${\mathbb F}_p^{\mathbb Z}$ for all but finitely many $p$.
2403.11878
Jiaxiang Tang
Jiaxiang Tang, Ruijie Lu, Xiaokang Chen, Xiang Wen, Gang Zeng, Ziwei Liu
InTeX: Interactive Text-to-texture Synthesis via Unified Depth-aware Inpainting
Project Page: https://me.kiui.moe/intex/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-to-texture synthesis has become a new frontier in 3D content creation thanks to the recent advances in text-to-image models. Existing methods primarily adopt a combination of pretrained depth-aware diffusion and inpainting models, yet they exhibit shortcomings such as 3D inconsistency and limited controllability. To address these challenges, we introduce InteX, a novel framework for interactive text-to-texture synthesis. 1) InteX includes a user-friendly interface that facilitates interaction and control throughout the synthesis process, enabling region-specific repainting and precise texture editing. 2) Additionally, we develop a unified depth-aware inpainting model that integrates depth information with inpainting cues, effectively mitigating 3D inconsistencies and improving generation speed. Through extensive experiments, our framework has proven to be both practical and effective in text-to-texture synthesis, paving the way for high-quality 3D content creation.
[ { "created": "Mon, 18 Mar 2024 15:31:57 GMT", "version": "v1" } ]
2024-03-19
[ [ "Tang", "Jiaxiang", "" ], [ "Lu", "Ruijie", "" ], [ "Chen", "Xiaokang", "" ], [ "Wen", "Xiang", "" ], [ "Zeng", "Gang", "" ], [ "Liu", "Ziwei", "" ] ]
Text-to-texture synthesis has become a new frontier in 3D content creation thanks to the recent advances in text-to-image models. Existing methods primarily adopt a combination of pretrained depth-aware diffusion and inpainting models, yet they exhibit shortcomings such as 3D inconsistency and limited controllability. To address these challenges, we introduce InteX, a novel framework for interactive text-to-texture synthesis. 1) InteX includes a user-friendly interface that facilitates interaction and control throughout the synthesis process, enabling region-specific repainting and precise texture editing. 2) Additionally, we develop a unified depth-aware inpainting model that integrates depth information with inpainting cues, effectively mitigating 3D inconsistencies and improving generation speed. Through extensive experiments, our framework has proven to be both practical and effective in text-to-texture synthesis, paving the way for high-quality 3D content creation.
1607.05427
Dacheng Tao
Changxing Ding and Dacheng Tao
Trunk-Branch Ensemble Convolutional Neural Networks for Video-based Face Recognition
Accepted Version to IEEE T-PAMI
null
10.1109/TPAMI.2017.2700390
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human faces in surveillance videos often suffer from severe image blur, dramatic pose variations, and occlusion. In this paper, we propose a comprehensive framework based on Convolutional Neural Networks (CNN) to overcome challenges in video-based face recognition (VFR). First, to learn blur-robust face representations, we artificially blur training data composed of clear still images to account for a shortfall in real-world video training data. Using training data composed of both still images and artificially blurred data, CNN is encouraged to learn blur-insensitive features automatically. Second, to enhance robustness of CNN features to pose variations and occlusion, we propose a Trunk-Branch Ensemble CNN model (TBE-CNN), which extracts complementary information from holistic face images and patches cropped around facial components. TBE-CNN is an end-to-end model that extracts features efficiently by sharing the low- and middle-level convolutional layers between the trunk and branch networks. Third, to further promote the discriminative power of the representations learnt by TBE-CNN, we propose an improved triplet loss function. Systematic experiments justify the effectiveness of the proposed techniques. Most impressively, TBE-CNN achieves state-of-the-art performance on three popular video face databases: PaSC, COX Face, and YouTube Faces. With the proposed techniques, we also obtain the first place in the BTAS 2016 Video Person Recognition Evaluation.
[ { "created": "Tue, 19 Jul 2016 07:14:28 GMT", "version": "v1" }, { "created": "Wed, 17 May 2017 09:12:19 GMT", "version": "v2" } ]
2017-05-18
[ [ "Ding", "Changxing", "" ], [ "Tao", "Dacheng", "" ] ]
Human faces in surveillance videos often suffer from severe image blur, dramatic pose variations, and occlusion. In this paper, we propose a comprehensive framework based on Convolutional Neural Networks (CNN) to overcome challenges in video-based face recognition (VFR). First, to learn blur-robust face representations, we artificially blur training data composed of clear still images to account for a shortfall in real-world video training data. Using training data composed of both still images and artificially blurred data, CNN is encouraged to learn blur-insensitive features automatically. Second, to enhance robustness of CNN features to pose variations and occlusion, we propose a Trunk-Branch Ensemble CNN model (TBE-CNN), which extracts complementary information from holistic face images and patches cropped around facial components. TBE-CNN is an end-to-end model that extracts features efficiently by sharing the low- and middle-level convolutional layers between the trunk and branch networks. Third, to further promote the discriminative power of the representations learnt by TBE-CNN, we propose an improved triplet loss function. Systematic experiments justify the effectiveness of the proposed techniques. Most impressively, TBE-CNN achieves state-of-the-art performance on three popular video face databases: PaSC, COX Face, and YouTube Faces. With the proposed techniques, we also obtain the first place in the BTAS 2016 Video Person Recognition Evaluation.
2208.12014
Sung Sik Nam
Sung Sik Nam, Changseok Yoon, Ki-Hong Park and Mohamed-Slim Alouini
Technical Report: Development of an Ultrahigh Bandwidth Software-defined Radio Platform
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the development of new digital signal processing systems and services, the rapid, easy, and convenient prototyping of ideas and the rapid time-to-market of products are becoming important with advances in technology. Conventionally, in the development stage, particularly when confirming the feasibility or performance of a new system or service, an idea is first confirmed through a computer-based software simulation after developing an accurate model of the operating environment. Next, this idea is validated and tested in the real operating environment. New systems or services and their operating environments are becoming increasingly complicated. Hence, their development processes are also becoming more complex, cost- and time-intensive tasks that require engineers with skill and professional knowledge/experience. Furthermore, to ensure fast time-to-market, all the development processes, encompassing (i) algorithm development, (ii) product prototyping, and (iii) final product development, must be closely linked such that they can be quickly completed. In this context, the aim of this paper is to propose an ultrahigh bandwidth software-defined radio platform that can prototype a quasi-real-time operating system without the developer requiring sophisticated hardware/software expertise. This platform allows the realization of a software-implemented digital signal processing system in minimal time, with minimal effort, and without the need for a host computer.
[ { "created": "Thu, 25 Aug 2022 11:31:35 GMT", "version": "v1" }, { "created": "Sat, 27 Aug 2022 02:34:32 GMT", "version": "v2" } ]
2022-08-30
[ [ "Nam", "Sung Sik", "" ], [ "Yoon", "Changseok", "" ], [ "Park", "Ki-Hong", "" ], [ "Alouini", "Mohamed-Slim", "" ] ]
For the development of new digital signal processing systems and services, the rapid, easy, and convenient prototyping of ideas and the rapid time-to-market of products are becoming important with advances in technology. Conventionally, in the development stage, particularly when confirming the feasibility or performance of a new system or service, an idea is first confirmed through a computer-based software simulation after developing an accurate model of the operating environment. Next, this idea is validated and tested in the real operating environment. New systems or services and their operating environments are becoming increasingly complicated. Hence, their development processes are also becoming more complex, cost- and time-intensive tasks that require engineers with skill and professional knowledge/experience. Furthermore, to ensure fast time-to-market, all the development processes, encompassing (i) algorithm development, (ii) product prototyping, and (iii) final product development, must be closely linked such that they can be quickly completed. In this context, the aim of this paper is to propose an ultrahigh bandwidth software-defined radio platform that can prototype a quasi-real-time operating system without the developer requiring sophisticated hardware/software expertise. This platform allows the realization of a software-implemented digital signal processing system in minimal time, with minimal effort, and without the need for a host computer.
2306.15390
Yanjing Li
Yanjing Li, Sheng Xu, Xianbin Cao, Li'an Zhuo, Baochang Zhang, Tian Wang, Guodong Guo
DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs
Accepted by International Journal of Computer Vision
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Neural architecture search (NAS) proves to be among the effective approaches for many tasks by generating an application-adaptive neural architecture, which is still challenged by high computational cost and memory consumption. At the same time, 1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices. One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS by taking advantage of the strengths of each in a unified framework, while searching the 1-bit CNNs is more challenging due to the more complicated processes involved. In this paper, we introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs, based on a new framework of searching the 1-bit model (Child) under the supervision of a real-valued model (Parent). Particularly, we first utilize a Parent model to calculate a tangent direction, based on which the tangent propagation method is introduced to search the optimized 1-bit Child. We further observe a coupling relationship between the weights and architecture parameters existing in such differentiable frameworks. To address the issue, we propose a decoupled optimization method to search an optimized architecture. Extensive experiments demonstrate that our DCP-NAS achieves much better results than prior arts on both CIFAR-10 and ImageNet datasets. In particular, the backbones achieved by our DCP-NAS achieve strong generalization performance on person re-identification and object detection.
[ { "created": "Tue, 27 Jun 2023 11:28:29 GMT", "version": "v1" } ]
2023-06-28
[ [ "Li", "Yanjing", "" ], [ "Xu", "Sheng", "" ], [ "Cao", "Xianbin", "" ], [ "Zhuo", "Li'an", "" ], [ "Zhang", "Baochang", "" ], [ "Wang", "Tian", "" ], [ "Guo", "Guodong", "" ] ]
Neural architecture search (NAS) proves to be among the effective approaches for many tasks by generating an application-adaptive neural architecture, which is still challenged by high computational cost and memory consumption. At the same time, 1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices. One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS by taking advantage of the strengths of each in a unified framework, while searching the 1-bit CNNs is more challenging due to the more complicated processes involved. In this paper, we introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs, based on a new framework of searching the 1-bit model (Child) under the supervision of a real-valued model (Parent). Particularly, we first utilize a Parent model to calculate a tangent direction, based on which the tangent propagation method is introduced to search the optimized 1-bit Child. We further observe a coupling relationship between the weights and architecture parameters existing in such differentiable frameworks. To address the issue, we propose a decoupled optimization method to search an optimized architecture. Extensive experiments demonstrate that our DCP-NAS achieves much better results than prior arts on both CIFAR-10 and ImageNet datasets. In particular, the backbones achieved by our DCP-NAS achieve strong generalization performance on person re-identification and object detection.
1605.07760
Jens Grubert
Jens Grubert, Matthias Kranz, Aaron Quigley
Challenges in Mobile Multi-Device Ecosystems
null
mUX: The Journal of Mobile User Experience, 5(1), 1-22, 2016
10.1186/s13678-016-0007-y
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coordinated multi-display environments, from the desktop and second screen to gigapixel display walls, are increasingly common. Personal and intimate mobile and wearable devices such as head-mounted displays, smartwatches, smartphones, and tablets are rarely part of such multi-device ecosystems. With this paper, we contribute to a better understanding of the factors that impede the creation and use of such mobile multi-device ecosystems. We base our findings on a literature review and an expert survey. Specifically, we present grounded challenges relevant for the design, development, and use of mobile multi-device environments.
[ { "created": "Wed, 25 May 2016 07:26:02 GMT", "version": "v1" } ]
2016-10-04
[ [ "Grubert", "Jens", "" ], [ "Kranz", "Matthias", "" ], [ "Quigley", "Aaron", "" ] ]
Coordinated multi-display environments, from the desktop and second screen to gigapixel display walls, are increasingly common. Personal and intimate mobile and wearable devices such as head-mounted displays, smartwatches, smartphones, and tablets are rarely part of such multi-device ecosystems. With this paper, we contribute to a better understanding of the factors that impede the creation and use of such mobile multi-device ecosystems. We base our findings on a literature review and an expert survey. Specifically, we present grounded challenges relevant for the design, development, and use of mobile multi-device environments.
2302.08570
Ashwin Maran
Jin-Yi Cai, Ashwin Maran
The complexity of counting planar graph homomorphisms of domain size 3
32 pages, 2 figures, accepted by STOC 2023
null
null
null
cs.CC
http://creativecommons.org/licenses/by/4.0/
We prove a complexity dichotomy theorem for counting planar graph homomorphisms of domain size 3. Given any 3 by 3 real valued symmetric matrix $H$ defining a graph homomorphism from all planar graphs $G \mapsto Z_H(G)$, we completely classify the computational complexity of this problem according to the matrix $H$. We show that for every $H$, the problem is either polynomial time computable or \#P-hard. The P-time computable cases consist of precisely those that are P-time computable for general graphs (a complete classification is known) or computable by Valiant's holographic algorithm via matchgates. We also prove several results about planar graph homomorphisms for general domain size $q$. The proof uses mainly analytic arguments.
[ { "created": "Thu, 16 Feb 2023 20:33:07 GMT", "version": "v1" } ]
2023-02-20
[ [ "Cai", "Jin-Yi", "" ], [ "Maran", "Ashwin", "" ] ]
We prove a complexity dichotomy theorem for counting planar graph homomorphisms of domain size 3. Given any 3 by 3 real valued symmetric matrix $H$ defining a graph homomorphism from all planar graphs $G \mapsto Z_H(G)$, we completely classify the computational complexity of this problem according to the matrix $H$. We show that for every $H$, the problem is either polynomial time computable or \#P-hard. The P-time computable cases consist of precisely those that are P-time computable for general graphs (a complete classification is known) or computable by Valiant's holographic algorithm via matchgates. We also prove several results about planar graph homomorphisms for general domain size $q$. The proof uses mainly analytic arguments.
2003.07450
Heng Chang
Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Somayeh Sojoudi, Junzhou Huang, Wenwu Zhu
Spectral Graph Attention Network with Fast Eigen-approximation
Accepted by Deep Learning on Graphs: Method and Applications (DLG-KDD21)
null
null
null
cs.LG cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variants of Graph Neural Networks (GNNs) for representation learning have been proposed recently and achieved fruitful results in various fields. Among them, Graph Attention Network (GAT) first employs a self-attention strategy to learn attention weights for each edge in the spatial domain. However, learning the attentions over edges can only focus on the local information of graphs and greatly increases the computational costs. In this paper, we first introduce the attention mechanism in the spectral domain of graphs and present Spectral Graph Attention Network (SpGAT) that learns representations for different frequency components regarding weighted filters and graph wavelets bases. In this way, SpGAT can better capture global patterns of graphs in an efficient manner with much fewer learned parameters than that of GAT. Further, to reduce the computational cost of SpGAT brought by the eigen-decomposition, we propose a fast approximation variant SpGAT-Cheby. We thoroughly evaluate the performance of SpGAT and SpGAT-Cheby in semi-supervised node classification tasks and verify the effectiveness of the learned attentions in the spectral domain.
[ { "created": "Mon, 16 Mar 2020 21:49:34 GMT", "version": "v1" }, { "created": "Tue, 27 Jul 2021 11:58:57 GMT", "version": "v2" } ]
2021-07-28
[ [ "Chang", "Heng", "" ], [ "Rong", "Yu", "" ], [ "Xu", "Tingyang", "" ], [ "Huang", "Wenbing", "" ], [ "Sojoudi", "Somayeh", "" ], [ "Huang", "Junzhou", "" ], [ "Zhu", "Wenwu", "" ] ]
Variants of Graph Neural Networks (GNNs) for representation learning have been proposed recently and achieved fruitful results in various fields. Among them, Graph Attention Network (GAT) first employs a self-attention strategy to learn attention weights for each edge in the spatial domain. However, learning the attentions over edges can only focus on the local information of graphs and greatly increases the computational costs. In this paper, we first introduce the attention mechanism in the spectral domain of graphs and present Spectral Graph Attention Network (SpGAT) that learns representations for different frequency components regarding weighted filters and graph wavelets bases. In this way, SpGAT can better capture global patterns of graphs in an efficient manner with much fewer learned parameters than that of GAT. Further, to reduce the computational cost of SpGAT brought by the eigen-decomposition, we propose a fast approximation variant SpGAT-Cheby. We thoroughly evaluate the performance of SpGAT and SpGAT-Cheby in semi-supervised node classification tasks and verify the effectiveness of the learned attentions in the spectral domain.
2211.12081
Ran Gu
Ran Gu, Guotai Wang, Jiangshan Lu, Jingyang Zhang, Wenhui Lei, Yinan Chen, Wenjun Liao, Shichuan Zhang, Kang Li, Dimitris N. Metaxas, Shaoting Zhang
CDDSA: Contrastive Domain Disentanglement and Style Augmentation for Generalizable Medical Image Segmentation
14 pages, 8 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Generalization to previously unseen images with potential domain shifts and different styles is essential for clinically applicable medical image segmentation, and the ability to disentangle domain-specific and domain-invariant features is key for achieving Domain Generalization (DG). However, existing DG methods can hardly achieve effective disentanglement to get high generalizability. To deal with this problem, we propose an efficient Contrastive Domain Disentanglement and Style Augmentation (CDDSA) framework for generalizable medical image segmentation. First, a disentangle network is proposed to decompose an image into a domain-invariant anatomical representation and a domain-specific style code, where the former is sent to a segmentation model that is not affected by the domain shift, and the disentangle network is regularized by a decoder that combines the anatomical and style codes to reconstruct the input image. Second, to achieve better disentanglement, a contrastive loss is proposed to encourage the style codes from the same domain and different domains to be compact and divergent, respectively. Third, to further improve generalizability, we propose a style augmentation method based on the disentanglement representation to synthesize images in various unseen styles with shared anatomical structures. Our method was validated on a public multi-site fundus image dataset for optic cup and disc segmentation and an in-house multi-site Nasopharyngeal Carcinoma Magnetic Resonance Image (NPC-MRI) dataset for nasopharynx Gross Tumor Volume (GTVnx) segmentation. Experimental results showed that the proposed CDDSA achieved remarkable generalizability across different domains, and it outperformed several state-of-the-art methods in domain-generalizable segmentation.
[ { "created": "Tue, 22 Nov 2022 08:25:35 GMT", "version": "v1" } ]
2022-11-23
[ [ "Gu", "Ran", "" ], [ "Wang", "Guotai", "" ], [ "Lu", "Jiangshan", "" ], [ "Zhang", "Jingyang", "" ], [ "Lei", "Wenhui", "" ], [ "Chen", "Yinan", "" ], [ "Liao", "Wenjun", "" ], [ "Zhang", "Shichuan", "" ], [ "Li", "Kang", "" ], [ "Metaxas", "Dimitris N.", "" ], [ "Zhang", "Shaoting", "" ] ]
Generalization to previously unseen images with potential domain shifts and different styles is essential for clinically applicable medical image segmentation, and the ability to disentangle domain-specific and domain-invariant features is key for achieving Domain Generalization (DG). However, existing DG methods can hardly achieve effective disentanglement to get high generalizability. To deal with this problem, we propose an efficient Contrastive Domain Disentanglement and Style Augmentation (CDDSA) framework for generalizable medical image segmentation. First, a disentangle network is proposed to decompose an image into a domain-invariant anatomical representation and a domain-specific style code, where the former is sent to a segmentation model that is not affected by the domain shift, and the disentangle network is regularized by a decoder that combines the anatomical and style codes to reconstruct the input image. Second, to achieve better disentanglement, a contrastive loss is proposed to encourage the style codes from the same domain and different domains to be compact and divergent, respectively. Third, to further improve generalizability, we propose a style augmentation method based on the disentanglement representation to synthesize images in various unseen styles with shared anatomical structures. Our method was validated on a public multi-site fundus image dataset for optic cup and disc segmentation and an in-house multi-site Nasopharyngeal Carcinoma Magnetic Resonance Image (NPC-MRI) dataset for nasopharynx Gross Tumor Volume (GTVnx) segmentation. Experimental results showed that the proposed CDDSA achieved remarkable generalizability across different domains, and it outperformed several state-of-the-art methods in domain-generalizable segmentation.
0711.4324
Jinshan Zhang
Jinshan Zhang
Report on "American Option Pricing and Hedging Strategies"
14pages
null
null
null
cs.CE cs.DM
null
This paper mainly discusses the American option's hedging strategies via the binomial model and the basic idea of pricing and hedging American options. Although the essential scheme of hedging is almost the same as for European options, small differences may arise when simulating the process, since the American option holder has more rights, meaning that the option can be exercised at any time before its maturity. Our method is the dynamic-hedging method.
[ { "created": "Tue, 27 Nov 2007 18:34:40 GMT", "version": "v1" } ]
2007-11-28
[ [ "Zhang", "Jinshan", "" ] ]
This paper mainly discusses the American option's hedging strategies via the binomial model and the basic idea of pricing and hedging American options. Although the essential scheme of hedging is almost the same as for European options, small differences may arise when simulating the process, since the American option holder has more rights, meaning that the option can be exercised at any time before its maturity. Our method is the dynamic-hedging method.
2211.00262
Elad Segal
Elad Segal, Ben Bogin, Jonathan Berant
Training Vision-Language Models with Less Bimodal Supervision
AKBC 2022
null
null
null
cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Standard practice in pretraining multimodal models, such as vision-language models, is to rely on pairs of aligned inputs from both modalities, for example, aligned image-text pairs. However, such pairs can be difficult to obtain in low-resource settings and for some modality pairs (e.g., structured tables and images). In this work, we investigate the extent to which we can reduce the reliance on such parallel data, which we term \emph{bimodal supervision}, and use models that are pretrained on each modality independently. We experiment with a high-performing vision-language model, and analyze the effect of bimodal supervision on three vision-language tasks. We find that on simpler tasks, such as VQAv2 and GQA, one can eliminate bimodal supervision completely, suffering only a minor loss in performance. Conversely, for NLVR2, which requires more complex reasoning, training without bimodal supervision leads to random performance. Nevertheless, using only 5\% of the bimodal data (142K images along with their captions), or leveraging weak supervision in the form of a list of machine-generated labels for each image, leads to only a moderate degradation compared to using 3M image-text pairs: 74\%$\rightarrow$$\sim$70\%. Our code is available at https://github.com/eladsegal/less-bimodal-sup.
[ { "created": "Tue, 1 Nov 2022 04:07:11 GMT", "version": "v1" } ]
2022-11-02
[ [ "Segal", "Elad", "" ], [ "Bogin", "Ben", "" ], [ "Berant", "Jonathan", "" ] ]
Standard practice in pretraining multimodal models, such as vision-language models, is to rely on pairs of aligned inputs from both modalities, for example, aligned image-text pairs. However, such pairs can be difficult to obtain in low-resource settings and for some modality pairs (e.g., structured tables and images). In this work, we investigate the extent to which we can reduce the reliance on such parallel data, which we term \emph{bimodal supervision}, and use models that are pretrained on each modality independently. We experiment with a high-performing vision-language model, and analyze the effect of bimodal supervision on three vision-language tasks. We find that on simpler tasks, such as VQAv2 and GQA, one can eliminate bimodal supervision completely, suffering only a minor loss in performance. Conversely, for NLVR2, which requires more complex reasoning, training without bimodal supervision leads to random performance. Nevertheless, using only 5\% of the bimodal data (142K images along with their captions), or leveraging weak supervision in the form of a list of machine-generated labels for each image, leads to only a moderate degradation compared to using 3M image-text pairs: 74\%$\rightarrow$$\sim$70\%. Our code is available at https://github.com/eladsegal/less-bimodal-sup.
1711.01068
Raphael Shu
Raphael Shu, Hideki Nakayama
Compressing Word Embeddings via Deep Compositional Code Learning
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture.
[ { "created": "Fri, 3 Nov 2017 09:05:44 GMT", "version": "v1" }, { "created": "Fri, 17 Nov 2017 15:31:45 GMT", "version": "v2" } ]
2017-11-20
[ [ "Shu", "Raphael", "" ], [ "Nakayama", "Hideki", "" ] ]
Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture.
1109.4323
Adrien Poteaux
Adrien Poteaux, \'Eric Schost
On the complexity of computing with zero-dimensional triangular sets
null
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the complexity of some fundamental operations for triangular sets in dimension zero. Using Las Vegas algorithms, we prove that one can perform such operations as change of order, equiprojectable decomposition, or quasi-inverse computation with a cost that is essentially that of modular composition. Over an abstract field, this leads to a subquadratic cost (with respect to the degree of the underlying algebraic set). Over a finite field, in a boolean RAM model, we obtain a quasi-linear running time using Kedlaya and Umans' algorithm for modular composition. Conversely, we also show how to reduce the problem of modular composition to change of order for triangular sets, so that all these problems are essentially equivalent. Our algorithms are implemented in Maple; we present some experimental results.
[ { "created": "Tue, 20 Sep 2011 15:34:04 GMT", "version": "v1" } ]
2011-09-21
[ [ "Poteaux", "Adrien", "" ], [ "Schost", "Éric", "" ] ]
We study the complexity of some fundamental operations for triangular sets in dimension zero. Using Las Vegas algorithms, we prove that one can perform such operations as change of order, equiprojectable decomposition, or quasi-inverse computation with a cost that is essentially that of modular composition. Over an abstract field, this leads to a subquadratic cost (with respect to the degree of the underlying algebraic set). Over a finite field, in a boolean RAM model, we obtain a quasi-linear running time using Kedlaya and Umans' algorithm for modular composition. Conversely, we also show how to reduce the problem of modular composition to change of order for triangular sets, so that all these problems are essentially equivalent. Our algorithms are implemented in Maple; we present some experimental results.
2101.01115
Yunus Camg\"ozl\"u
Yunus Camg\"ozl\"u, Yakup Kutlu
Analysis of Filter Size Effect In Deep Learning
10 Pages, 9 Figures, Journal of Artificial Intelligence with Applications, published
Journal of Artificial Intelligence with Applications, 1(1), 20-29, 2020
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
With the use of deep learning in many areas, how to improve this technology, or how to develop the structures used more effectively and in a shorter time, is of interest to many people working in this field. Many studies on this subject aim to reduce the processing time and computing power required, in addition to obtaining the best result through changes to the variables, functions, and data in the models used. In this study, for leaf classification using the Mendeley data set, which consists of leaf images with a fixed background, all variables of the chosen model, such as the number of layers, the number of iterations, and the pooling operations, were kept constant, except for the filter sizes of the convolution layers. Convolution layers with 3 different filter sizes were examined, in 2 different structures (increasing and decreasing filter size) and with 3 different image sizes. In the literature, different uses of pooling layers, changes due to increasing or decreasing the number of layers, differences in the size of the data used, and the results of many functions used with different parameters have been evaluated. In the leaf classification of the chosen data set with a CNN, the focus was on the change in the filter size of the convolution layers, together with different filter combinations and different image sizes. Using the data set and data augmentation methods, the aim was to make the effects of different filter sizes and image sizes more distinct. With a fixed number of iterations, model, and data set, the effect of different filter sizes was observed.
[ { "created": "Sat, 12 Dec 2020 11:05:47 GMT", "version": "v1" } ]
2021-01-05
[ [ "Camgözlü", "Yunus", "" ], [ "Kutlu", "Yakup", "" ] ]
With the use of deep learning in many areas, how to improve this technology, or how to develop the structures used more effectively and in a shorter time, is of interest to many people working in this field. Many studies on this subject aim to reduce the processing time and computing power required, in addition to obtaining the best result through changes to the variables, functions, and data in the models used. In this study, for leaf classification using the Mendeley data set, which consists of leaf images with a fixed background, all variables of the chosen model, such as the number of layers, the number of iterations, and the pooling operations, were kept constant, except for the filter sizes of the convolution layers. Convolution layers with 3 different filter sizes were examined, in 2 different structures (increasing and decreasing filter size) and with 3 different image sizes. In the literature, different uses of pooling layers, changes due to increasing or decreasing the number of layers, differences in the size of the data used, and the results of many functions used with different parameters have been evaluated. In the leaf classification of the chosen data set with a CNN, the focus was on the change in the filter size of the convolution layers, together with different filter combinations and different image sizes. Using the data set and data augmentation methods, the aim was to make the effects of different filter sizes and image sizes more distinct. With a fixed number of iterations, model, and data set, the effect of different filter sizes was observed.
2208.10188
Soura Sena Das
Soura Sena Das, Soumen Nandi and Sagnik Sen
The oriented relative clique number of triangle-free planar graphs is 10
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In relation to oriented coloring and chromatic number, the parameter oriented relative clique number of an oriented graph $\overrightarrow{G}$, denoted by $\omega_{ro}(\overrightarrow{G})$, is the main focus of this work. We solve an open problem mentioned in the recent survey on oriented coloring by Sopena (Discrete Mathematics 2016), and positively settle a conjecture due to Sen (PhD thesis 2014), by proving that the maximum value of $\omega_{ro}(\overrightarrow{G})$ is $10$ when $\overrightarrow{G}$ is a planar graph.
[ { "created": "Mon, 22 Aug 2022 10:13:48 GMT", "version": "v1" } ]
2022-08-23
[ [ "Das", "Soura Sena", "" ], [ "Nandi", "Soumen", "" ], [ "Sen", "Sagnik", "" ] ]
In relation to oriented coloring and chromatic number, the parameter oriented relative clique number of an oriented graph $\overrightarrow{G}$, denoted by $\omega_{ro}(\overrightarrow{G})$, is the main focus of this work. We solve an open problem mentioned in the recent survey on oriented coloring by Sopena (Discrete Mathematics 2016), and positively settle a conjecture due to Sen (PhD thesis 2014), by proving that the maximum value of $\omega_{ro}(\overrightarrow{G})$ is $10$ when $\overrightarrow{G}$ is a planar graph.
2311.17080
Weihao Qiu
Weihao Qiu, George Legrady
Combating the "Sameness" in AI Art: Reflections on the Interactive AI Installation Fencing Hallucination
Paper for NeurIPS 2023 Workshop, Machine Learning for Creativity and Design
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The article summarizes three types of "sameness" issues in Artificial Intelligence (AI) art, each occurring at different stages of development in AI image creation tools. Through the Fencing Hallucination project, the article reflects on the design of AI art production in alleviating the sense of uniformity, maintaining the uniqueness of images from an AI image synthesizer, and enhancing the connection between the artworks and the audience. This paper endeavors to stimulate the creation of distinctive AI art by recounting the efforts and insights derived from the Fencing Hallucination project, all dedicated to addressing the issue of "sameness".
[ { "created": "Tue, 28 Nov 2023 00:00:34 GMT", "version": "v1" } ]
2023-11-30
[ [ "Qiu", "Weihao", "" ], [ "Legrady", "George", "" ] ]
The article summarizes three types of "sameness" issues in Artificial Intelligence (AI) art, each occurring at different stages of development in AI image creation tools. Through the Fencing Hallucination project, the article reflects on the design of AI art production in alleviating the sense of uniformity, maintaining the uniqueness of images from an AI image synthesizer, and enhancing the connection between the artworks and the audience. This paper endeavors to stimulate the creation of distinctive AI art by recounting the efforts and insights derived from the Fencing Hallucination project, all dedicated to addressing the issue of "sameness".
1805.10338
Lierni Sestorain
Lierni Sestorain and Massimiliano Ciaramita and Christian Buck and Thomas Hofmann
Zero-Shot Dual Machine Translation
null
null
null
null
cs.CL cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural Machine Translation (NMT) systems rely on large amounts of parallel data. This is a major challenge for low-resource languages. Building on recent work on unsupervised and semi-supervised methods, we present an approach that combines zero-shot and dual learning. The latter relies on reinforcement learning, to exploit the duality of the machine translation task, and requires only monolingual data for the target language pair. Experiments show that a zero-shot dual system, trained on English-French and English-Spanish, outperforms by large margins a standard NMT system in zero-shot translation performance on Spanish-French (both directions). The zero-shot dual method approaches the performance, within 2.2 BLEU points, of a comparable supervised setting. Our method can obtain improvements also on the setting where a small amount of parallel data for the zero-shot language pair is available. When we add Russian, extending our experiments to jointly model 6 zero-shot translation directions, all directions improve between 4 and 15 BLEU points, again reaching performance near that of the supervised setting.
[ { "created": "Fri, 25 May 2018 19:27:43 GMT", "version": "v1" } ]
2018-05-29
[ [ "Sestorain", "Lierni", "" ], [ "Ciaramita", "Massimiliano", "" ], [ "Buck", "Christian", "" ], [ "Hofmann", "Thomas", "" ] ]
Neural Machine Translation (NMT) systems rely on large amounts of parallel data. This is a major challenge for low-resource languages. Building on recent work on unsupervised and semi-supervised methods, we present an approach that combines zero-shot and dual learning. The latter relies on reinforcement learning, to exploit the duality of the machine translation task, and requires only monolingual data for the target language pair. Experiments show that a zero-shot dual system, trained on English-French and English-Spanish, outperforms by large margins a standard NMT system in zero-shot translation performance on Spanish-French (both directions). The zero-shot dual method approaches the performance, within 2.2 BLEU points, of a comparable supervised setting. Our method can obtain improvements also on the setting where a small amount of parallel data for the zero-shot language pair is available. When we add Russian, extending our experiments to jointly model 6 zero-shot translation directions, all directions improve between 4 and 15 BLEU points, again reaching performance near that of the supervised setting.
2302.09699
Daogao Liu
Arun Ganesh, Daogao Liu, Sewoong Oh, Abhradeep Thakurta
Private (Stochastic) Non-Convex Optimization Revisited: Second-Order Stationary Points and Excess Risks
null
null
null
null
cs.LG cs.CR math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of minimizing a non-convex objective while preserving the privacy of the examples in the training data. Building upon the previous variance-reduced algorithm SpiderBoost, we introduce a new framework that utilizes two different kinds of gradient oracles. The first kind of oracles can estimate the gradient of one point, and the second kind of oracles, less precise and more cost-effective, can estimate the gradient difference between two points. SpiderBoost uses the first kind periodically, once every few steps, while our framework proposes using the first oracle whenever the total drift has become large and relies on the second oracle otherwise. This new framework ensures the gradient estimations remain accurate all the time, resulting in improved rates for finding second-order stationary points. Moreover, we address a more challenging task of finding the global minima of a non-convex objective using the exponential mechanism. Our findings indicate that the regularized exponential mechanism can closely match previous empirical and population risk bounds, without requiring smoothness assumptions for algorithms with polynomial running time. Furthermore, by disregarding running time considerations, we show that the exponential mechanism can achieve a good population risk bound and provide a nearly matching lower bound.
[ { "created": "Mon, 20 Feb 2023 00:11:19 GMT", "version": "v1" } ]
2023-02-21
[ [ "Ganesh", "Arun", "" ], [ "Liu", "Daogao", "" ], [ "Oh", "Sewoong", "" ], [ "Thakurta", "Abhradeep", "" ] ]
We consider the problem of minimizing a non-convex objective while preserving the privacy of the examples in the training data. Building upon the previous variance-reduced algorithm SpiderBoost, we introduce a new framework that utilizes two different kinds of gradient oracles. The first kind of oracles can estimate the gradient of one point, and the second kind of oracles, less precise and more cost-effective, can estimate the gradient difference between two points. SpiderBoost uses the first kind periodically, once every few steps, while our framework proposes using the first oracle whenever the total drift has become large and relies on the second oracle otherwise. This new framework ensures the gradient estimations remain accurate all the time, resulting in improved rates for finding second-order stationary points. Moreover, we address a more challenging task of finding the global minima of a non-convex objective using the exponential mechanism. Our findings indicate that the regularized exponential mechanism can closely match previous empirical and population risk bounds, without requiring smoothness assumptions for algorithms with polynomial running time. Furthermore, by disregarding running time considerations, we show that the exponential mechanism can achieve a good population risk bound and provide a nearly matching lower bound.
2209.04376
Elnaz Rabieinejad
Lulit Asfaw (College of Computing and Software Engineering, Kennesaw State University, Marietta, GA, USA), Mikael Clemmons (College of Computing and Software Engineering, Kennesaw State University, Marietta, GA, USA), Cody Hayes (College of Computing and Software Engineering, Kennesaw State University, Marietta, GA, USA), Elise Letnaunchyn (College of Computing and Software Engineering, Kennesaw State University, Marietta, GA, USA), Elnaz Rabieinejad (Cyber Science Lab, School of Computer Science, University of Guelph, Ontario, Canada)
Challenges of Implementing Agile Processes in Remote-First Companies
null
null
null
null
cs.SE cs.CY
http://creativecommons.org/licenses/by/4.0/
The trend of remote work, especially in the IT sector, has been on the rise in recent years, and its popularity has especially increased since the COVID-19 pandemic. In addition to adopting remote work, companies also have been migrating toward managing their projects using agile processes. Agile processes promote small and continuous feedback loops powered by effective communication. In this survey, we look to discover the challenges of implementing these processes in a remote setting, specifically focusing on the impact on communication. We examine the role communication plays in an agile setting and look for ways to mitigate the risk remote environments impose on it. Lastly, we present other miscellaneous challenges companies could experience that still carry dangers but are less impactful overall to agile implementation.
[ { "created": "Fri, 9 Sep 2022 16:20:04 GMT", "version": "v1" } ]
2022-09-12
[ [ "Asfaw", "Lulit", "", "College of Computing and Software Engineering, Kennesaw\n State University, Marietta, GA, USA" ], [ "Clemmons", "Mikael", "", "College of Computing\n and Software Engineering, Kennesaw State University, Marietta, GA, USA" ], [ "Hayes", "Cody", "", "College of Computing and Software Engineering, Kennesaw State\n University, Marietta, GA, USA" ], [ "Letnaunchyn", "Elise", "", "College of Computing and\n Software Engineering, Kennesaw State University, Marietta, GA, USA" ], [ "Rabieinejad", "Elnaz", "", "Cyber Science Lab, School of Computer Science, University of\n Guelph, Ontario, Canada" ] ]
The trend of remote work, especially in the IT sector, has been on the rise in recent years, and its popularity has especially increased since the COVID-19 pandemic. In addition to adopting remote work, companies also have been migrating toward managing their projects using agile processes. Agile processes promote small and continuous feedback loops powered by effective communication. In this survey, we look to discover the challenges of implementing these processes in a remote setting, specifically focusing on the impact on communication. We examine the role communication plays in an agile setting and look for ways to mitigate the risk remote environments impose on it. Lastly, we present other miscellaneous challenges companies could experience that still carry dangers but are less impactful overall to agile implementation.
2106.03242
Elahe Arani
Ahmed Badar, Arnav Varma, Adrian Staniec, Mahmoud Gamal, Omar Magdy, Haris Iqbal, Elahe Arani and Bahram Zonooz
Highlighting the Importance of Reducing Research Bias and Carbon Emissions in CNNs
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Convolutional neural networks (CNNs) have become commonplace in addressing major challenges in computer vision. Researchers are not only coming up with new CNN architectures but are also researching different techniques to improve the performance of existing architectures. However, there is a tendency to over-emphasize performance improvement while neglecting certain important variables such as simplicity, versatility, the fairness of comparisons, and energy efficiency. Overlooking these variables in architectural design and evaluation has led to research bias and a significantly negative environmental impact. Furthermore, this can undermine the positive impact of research in using deep learning models to tackle climate change. Here, we perform an extensive and fair empirical study of a number of proposed techniques to gauge the utility of each technique for segmentation and classification. Our findings restate the importance of favoring simplicity over complexity in model design (Occam's Razor). Furthermore, our results indicate that simple standardized practices can lead to a significant reduction in environmental impact with little drop in performance. We highlight that there is a need to rethink the design and evaluation of CNNs to alleviate the issue of research bias and carbon emissions.
[ { "created": "Sun, 6 Jun 2021 20:42:00 GMT", "version": "v1" } ]
2021-06-08
[ [ "Badar", "Ahmed", "" ], [ "Varma", "Arnav", "" ], [ "Staniec", "Adrian", "" ], [ "Gamal", "Mahmoud", "" ], [ "Magdy", "Omar", "" ], [ "Iqbal", "Haris", "" ], [ "Arani", "Elahe", "" ], [ "Zonooz", "Bahram", "" ] ]
Convolutional neural networks (CNNs) have become commonplace in addressing major challenges in computer vision. Researchers are not only coming up with new CNN architectures but are also researching different techniques to improve the performance of existing architectures. However, there is a tendency to over-emphasize performance improvement while neglecting certain important variables such as simplicity, versatility, the fairness of comparisons, and energy efficiency. Overlooking these variables in architectural design and evaluation has led to research bias and a significantly negative environmental impact. Furthermore, this can undermine the positive impact of research in using deep learning models to tackle climate change. Here, we perform an extensive and fair empirical study of a number of proposed techniques to gauge the utility of each technique for segmentation and classification. Our findings restate the importance of favoring simplicity over complexity in model design (Occam's Razor). Furthermore, our results indicate that simple standardized practices can lead to a significant reduction in environmental impact with little drop in performance. We highlight that there is a need to rethink the design and evaluation of CNNs to alleviate the issue of research bias and carbon emissions.
2201.12179
Lukas Struppek
Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, Kristian Kersting
Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
Accepted by ICML 2022
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Model inversion attacks (MIAs) aim to create synthetic images that reflect the class-wise characteristics from a target classifier's private training data by exploiting the model's learned knowledge. Previous research has developed generative MIAs that use generative adversarial networks (GANs) as image priors tailored to a specific target model. This makes the attacks time- and resource-consuming, inflexible, and susceptible to distributional shifts between datasets. To overcome these drawbacks, we present Plug & Play Attacks, which relax the dependency between the target model and image prior, and enable the use of a single GAN to attack a wide range of targets, requiring only minor adjustments to the attack. Moreover, we show that powerful MIAs are possible even with publicly available pre-trained GANs and under strong distributional shifts, for which previous approaches fail to produce meaningful results. Our extensive evaluation confirms the improved robustness and flexibility of Plug & Play Attacks and their ability to create high-quality images revealing sensitive class characteristics.
[ { "created": "Fri, 28 Jan 2022 15:25:50 GMT", "version": "v1" }, { "created": "Wed, 2 Feb 2022 15:21:17 GMT", "version": "v2" }, { "created": "Tue, 7 Jun 2022 16:15:28 GMT", "version": "v3" }, { "created": "Thu, 9 Jun 2022 08:48:08 GMT", "version": "v4" } ]
2022-06-10
[ [ "Struppek", "Lukas", "" ], [ "Hintersdorf", "Dominik", "" ], [ "Correia", "Antonio De Almeida", "" ], [ "Adler", "Antonia", "" ], [ "Kersting", "Kristian", "" ] ]
Model inversion attacks (MIAs) aim to create synthetic images that reflect the class-wise characteristics from a target classifier's private training data by exploiting the model's learned knowledge. Previous research has developed generative MIAs that use generative adversarial networks (GANs) as image priors tailored to a specific target model. This makes the attacks time- and resource-consuming, inflexible, and susceptible to distributional shifts between datasets. To overcome these drawbacks, we present Plug & Play Attacks, which relax the dependency between the target model and image prior, and enable the use of a single GAN to attack a wide range of targets, requiring only minor adjustments to the attack. Moreover, we show that powerful MIAs are possible even with publicly available pre-trained GANs and under strong distributional shifts, for which previous approaches fail to produce meaningful results. Our extensive evaluation confirms the improved robustness and flexibility of Plug & Play Attacks and their ability to create high-quality images revealing sensitive class characteristics.
2212.14133
Andrew O'Brien
Andrew O'Brien, Rosina Weber, Edward Kim
Investigating Sindy As a Tool For Causal Discovery In Time Series Signals
null
null
null
null
cs.LG stat.ME
http://creativecommons.org/licenses/by/4.0/
The SINDy algorithm has been successfully used to identify the governing equations of dynamical systems from time series data. In this paper, we argue that this makes SINDy a potentially useful tool for causal discovery and that existing tools for causal discovery can be used to dramatically improve the performance of SINDy as a tool for robust sparse modeling and system identification. We then demonstrate empirically that augmenting the SINDy algorithm with tools from causal discovery can provide engineers with a tool for learning causally robust governing equations.
[ { "created": "Thu, 29 Dec 2022 00:32:24 GMT", "version": "v1" } ]
2023-01-02
[ [ "O'Brien", "Andrew", "" ], [ "Weber", "Rosina", "" ], [ "Kim", "Edward", "" ] ]
The SINDy algorithm has been successfully used to identify the governing equations of dynamical systems from time series data. In this paper, we argue that this makes SINDy a potentially useful tool for causal discovery and that existing tools for causal discovery can be used to dramatically improve the performance of SINDy as a tool for robust sparse modeling and system identification. We then demonstrate empirically that augmenting the SINDy algorithm with tools from causal discovery can provide engineers with a tool for learning causally robust governing equations.
2112.11190
Omid Ziaee
Omid Ziaee, Mohsen Hamedi
Augmented reality applications in manufacturing and its future scope in Industry 4.0
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Augmented reality technology is one of the leading technologies in the context of Industry 4.0. The promising potential application of augmented reality in industrial production systems has received much attention, which led to the concept of industrial augmented reality. On the one hand, this technology provides a suitable platform that facilitates the registration of information and access to them to help make decisions and allows concurrent training for the user while executing the production processes. This leads to increased work speed and accuracy of the user as a process operator and consequently offers economic benefits to the companies. Moreover, recent advances in the internet of things, smart sensors, and advanced algorithms have increased the possibility of widespread and more effective use of augmented reality. Currently, much research is being done to expand the application of augmented reality and increase its effectiveness in industrial production processes. This research demonstrates the influence of augmented reality in Industry 4.0 while critically reviewing the history of industrial augmented reality. Afterward, the paper discusses the critical role of industrial augmented reality by analyzing some use cases and their prospects. With a systematic analysis, this paper discusses the main future directions for industrial augmented reality applications in Industry 4.0. The article investigates various areas of application for this technology and its impact on improving production conditions. Finally, the challenges that this technology faces and its research opportunities are discussed.
[ { "created": "Fri, 3 Dec 2021 20:46:50 GMT", "version": "v1" } ]
2021-12-22
[ [ "Ziaee", "Omid", "" ], [ "Hamedi", "Mohsen", "" ] ]
Augmented reality technology is one of the leading technologies in the context of Industry 4.0. The promising potential application of augmented reality in industrial production systems has received much attention, which led to the concept of industrial augmented reality. On the one hand, this technology provides a suitable platform that facilitates the registration of information and access to them to help make decisions and allows concurrent training for the user while executing the production processes. This leads to increased work speed and accuracy of the user as a process operator and consequently offers economic benefits to the companies. Moreover, recent advances in the internet of things, smart sensors, and advanced algorithms have increased the possibility of widespread and more effective use of augmented reality. Currently, much research is being done to expand the application of augmented reality and increase its effectiveness in industrial production processes. This research demonstrates the influence of augmented reality in Industry 4.0 while critically reviewing the history of industrial augmented reality. Afterward, the paper discusses the critical role of industrial augmented reality by analyzing some use cases and their prospects. With a systematic analysis, this paper discusses the main future directions for industrial augmented reality applications in Industry 4.0. The article investigates various areas of application for this technology and its impact on improving production conditions. Finally, the challenges that this technology faces and its research opportunities are discussed.
1902.00389
Francesco Restuccia
Salvatore D'Oro and Francesco Restuccia and Alessandro Talamonti and Tommaso Melodia
The Slice Is Served: Enforcing Radio Access Network Slicing in Virtualized 5G Systems
Accepted for publication in IEEE INFOCOM 2019
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The notions of softwarization and virtualization of the radio access network (RAN) of next-generation (5G) wireless systems are ushering in a vision where applications and services are physically decoupled from devices and network infrastructure. This crucial aspect will ultimately enable the dynamic deployment of heterogeneous services by different network operators over the same physical infrastructure. RAN slicing is a form of 5G virtualization that allows network infrastructure owners to dynamically "slice" and "serve" their network resources (i.e., spectrum, power, antennas, among others) to different mobile virtual network operators (MVNOs), according to their current needs. Once the slicing policy (i.e., the percentage of resources assigned to each MVNO) has been computed, a major challenge is how to allocate spectrum resources to MVNOs in such a way that (i) the slicing policy defined by the network owner is enforced; and (ii) the interference among different MVNOs is minimized. In this article, we mathematically formalize the RAN slicing enforcement problem (RSEP) and demonstrate its NP-hardness. For this reason, we design three approximation algorithms that render the solution scalable as the RSEP increases in size. We extensively evaluate their performance through simulations and experiments on a testbed made up of 8 software-defined radio peripherals. Experimental results reveal that not only do our algorithms enforce the slicing policies, but can also double the total network throughput when intra-MVNO power control policies are used in conjunction.
[ { "created": "Fri, 1 Feb 2019 15:06:28 GMT", "version": "v1" } ]
2019-02-04
[ [ "D'Oro", "Salvatore", "" ], [ "Restuccia", "Francesco", "" ], [ "Talamonti", "Alessandro", "" ], [ "Melodia", "Tommaso", "" ] ]
The notions of softwarization and virtualization of the radio access network (RAN) of next-generation (5G) wireless systems are ushering in a vision where applications and services are physically decoupled from devices and network infrastructure. This crucial aspect will ultimately enable the dynamic deployment of heterogeneous services by different network operators over the same physical infrastructure. RAN slicing is a form of 5G virtualization that allows network infrastructure owners to dynamically "slice" and "serve" their network resources (i.e., spectrum, power, antennas, among others) to different mobile virtual network operators (MVNOs), according to their current needs. Once the slicing policy (i.e., the percentage of resources assigned to each MVNO) has been computed, a major challenge is how to allocate spectrum resources to MVNOs in such a way that (i) the slicing policy defined by the network owner is enforced; and (ii) the interference among different MVNOs is minimized. In this article, we mathematically formalize the RAN slicing enforcement problem (RSEP) and demonstrate its NP-hardness. For this reason, we design three approximation algorithms that render the solution scalable as the RSEP increases in size. We extensively evaluate their performance through simulations and experiments on a testbed made up of 8 software-defined radio peripherals. Experimental results reveal that not only do our algorithms enforce the slicing policies, but can also double the total network throughput when intra-MVNO power control policies are used in conjunction.
2202.01115
Saeed Boor Boor
Saeed Boorboor, Shawn Mathew, Mala Ananth, David Talmage, Lorna W. Role, Arie E. Kaufman
NeuRegenerate: A Framework for Visualizing Neurodegeneration
Accepted for publication in IEEE Transactions on Visualization and Computer Graphics
null
10.1109/TVCG.2021.3127132
null
cs.LG cs.GR cs.HC eess.IV q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in high-resolution microscopy have allowed scientists to better understand the underlying brain connectivity. However, due to the limitation that biological specimens can only be imaged at a single timepoint, studying changes to neural projections is limited to general observations using population analysis. In this paper, we introduce NeuRegenerate, a novel end-to-end framework for the prediction and visualization of changes in neural fiber morphology within a subject, for specified age-timepoints. To predict projections, we present neuReGANerator, a deep-learning network based on cycle-consistent generative adversarial network (cycleGAN) that translates features of neuronal structures in a region, across age-timepoints, for large brain microscopy volumes. We improve the reconstruction quality of neuronal structures by implementing a density multiplier and a new loss function, called the hallucination loss. Moreover, to alleviate artifacts that occur due to tiling of large input volumes, we introduce a spatial-consistency module in the training pipeline of neuReGANerator. We show that neuReGANerator has a reconstruction accuracy of 94% in predicting neuronal structures. Finally, to visualize the predicted change in projections, NeuRegenerate offers two modes: (1) neuroCompare to simultaneously visualize the difference in the structures of the neuronal projections, across the age timepoints, and (2) neuroMorph, a vesselness-based morphing technique to interactively visualize the transformation of the structures from one age-timepoint to the other. Our framework is designed specifically for volumes acquired using wide-field microscopy. We demonstrate our framework by visualizing the structural changes in neuronal fibers within the cholinergic system of the mouse brain between a young and old specimen.
[ { "created": "Wed, 2 Feb 2022 16:21:14 GMT", "version": "v1" } ]
2022-02-03
[ [ "Boorboor", "Saeed", "" ], [ "Mathew", "Shawn", "" ], [ "Ananth", "Mala", "" ], [ "Talmage", "David", "" ], [ "Role", "Lorna W.", "" ], [ "Kaufman", "Arie E.", "" ] ]
Recent advances in high-resolution microscopy have allowed scientists to better understand the underlying brain connectivity. However, due to the limitation that biological specimens can only be imaged at a single timepoint, studying changes to neural projections is limited to general observations using population analysis. In this paper, we introduce NeuRegenerate, a novel end-to-end framework for the prediction and visualization of changes in neural fiber morphology within a subject, for specified age-timepoints. To predict projections, we present neuReGANerator, a deep-learning network based on cycle-consistent generative adversarial network (cycleGAN) that translates features of neuronal structures in a region, across age-timepoints, for large brain microscopy volumes. We improve the reconstruction quality of neuronal structures by implementing a density multiplier and a new loss function, called the hallucination loss. Moreover, to alleviate artifacts that occur due to tiling of large input volumes, we introduce a spatial-consistency module in the training pipeline of neuReGANerator. We show that neuReGANerator has a reconstruction accuracy of 94% in predicting neuronal structures. Finally, to visualize the predicted change in projections, NeuRegenerate offers two modes: (1) neuroCompare to simultaneously visualize the difference in the structures of the neuronal projections, across the age timepoints, and (2) neuroMorph, a vesselness-based morphing technique to interactively visualize the transformation of the structures from one age-timepoint to the other. Our framework is designed specifically for volumes acquired using wide-field microscopy. We demonstrate our framework by visualizing the structural changes in neuronal fibers within the cholinergic system of the mouse brain between a young and old specimen.
2306.04968
Jun Zhao
Jun Zhao, Yongxin Zhang, Qi Zhang, Tao Gui, Zhongyu Wei, Minlong Peng, Mingming Sun
Actively Supervised Clustering for Open Relation Extraction
Accepted by ACL2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current clustering-based Open Relation Extraction (OpenRE) methods usually adopt a two-stage pipeline. The first stage simultaneously learns relation representations and assignments. The second stage manually labels several instances and thus names the relation for each cluster. However, unsupervised objectives struggle to optimize the model to derive accurate clustering assignments, and the number of clusters has to be supplied in advance. In this paper, we present a novel setting, named actively supervised clustering for OpenRE. Our insight lies in that clustering learning and relation labeling can be alternately performed, providing the necessary guidance for clustering without a significant increase in human effort. The key to the setting is selecting which instances to label. Instead of using classical active labeling strategies designed for fixed known classes, we propose a new strategy, which is applicable to dynamically discover clusters of unknown relations. Experimental results show that our method is able to discover almost all relational clusters in the data and improve the SOTA methods by 10.3\% and 5.2\%, on two datasets respectively.
[ { "created": "Thu, 8 Jun 2023 06:55:02 GMT", "version": "v1" } ]
2023-06-09
[ [ "Zhao", "Jun", "" ], [ "Zhang", "Yongxin", "" ], [ "Zhang", "Qi", "" ], [ "Gui", "Tao", "" ], [ "Wei", "Zhongyu", "" ], [ "Peng", "Minlong", "" ], [ "Sun", "Mingming", "" ] ]
Current clustering-based Open Relation Extraction (OpenRE) methods usually adopt a two-stage pipeline. The first stage simultaneously learns relation representations and assignments. The second stage manually labels several instances and thus names the relation for each cluster. However, unsupervised objectives struggle to optimize the model to derive accurate clustering assignments, and the number of clusters has to be supplied in advance. In this paper, we present a novel setting, named actively supervised clustering for OpenRE. Our insight lies in that clustering learning and relation labeling can be alternately performed, providing the necessary guidance for clustering without a significant increase in human effort. The key to the setting is selecting which instances to label. Instead of using classical active labeling strategies designed for fixed known classes, we propose a new strategy, which is applicable to dynamically discover clusters of unknown relations. Experimental results show that our method is able to discover almost all relational clusters in the data and improve the SOTA methods by 10.3\% and 5.2\%, on two datasets respectively.
1709.05038
Yang Xian
Yang Xian, Yingli Tian
Self-Guiding Multimodal LSTM - when we do not have a perfect training dataset for image captioning
The paper is under consideration at Computer Vision and Image Understanding
null
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a self-guiding multimodal LSTM (sg-LSTM) image captioning model is proposed to handle an uncontrolled, imbalanced real-world image-sentence dataset. We collect the FlickrNYC dataset from Flickr as our testbed with 306,165 images and the original text descriptions uploaded by the users are utilized as the ground truth for training. Descriptions in the FlickrNYC dataset vary dramatically, ranging from short term-descriptions to long paragraph-descriptions, and can describe any visual aspects, or even refer to objects that are not depicted. To deal with the imbalanced and noisy situation and to fully explore the dataset itself, we propose a novel guiding textual feature extracted utilizing a multimodal LSTM (m-LSTM) model. Training of the m-LSTM is based on the portion of data in which the image content and the corresponding descriptions are strongly bonded. Afterwards, during the training of sg-LSTM on the rest of the training data, this guiding information serves as additional input to the network along with the image representations and the ground-truth descriptions. By integrating these input components into a multimodal block, we aim to form a training scheme with the textual information tightly coupled with the image content. The experimental results demonstrate that the proposed sg-LSTM model outperforms the traditional state-of-the-art multimodal RNN captioning framework in successfully describing the key components of the input images.
[ { "created": "Fri, 15 Sep 2017 02:53:16 GMT", "version": "v1" } ]
2017-09-18
[ [ "Xian", "Yang", "" ], [ "Tian", "Yingli", "" ] ]
In this paper, a self-guiding multimodal LSTM (sg-LSTM) image captioning model is proposed to handle an uncontrolled, imbalanced real-world image-sentence dataset. We collect the FlickrNYC dataset from Flickr as our testbed with 306,165 images and the original text descriptions uploaded by the users are utilized as the ground truth for training. Descriptions in the FlickrNYC dataset vary dramatically, ranging from short term-descriptions to long paragraph-descriptions, and can describe any visual aspects, or even refer to objects that are not depicted. To deal with the imbalanced and noisy situation and to fully explore the dataset itself, we propose a novel guiding textual feature extracted utilizing a multimodal LSTM (m-LSTM) model. Training of the m-LSTM is based on the portion of data in which the image content and the corresponding descriptions are strongly bonded. Afterwards, during the training of sg-LSTM on the rest of the training data, this guiding information serves as additional input to the network along with the image representations and the ground-truth descriptions. By integrating these input components into a multimodal block, we aim to form a training scheme with the textual information tightly coupled with the image content. The experimental results demonstrate that the proposed sg-LSTM model outperforms the traditional state-of-the-art multimodal RNN captioning framework in successfully describing the key components of the input images.
2109.14065
Subodh Mishra
Subodh Mishra, Armin Parchami, Enrique Corona, Punarjay Chakravarty, Ankit Vora, Devarth Parikh, Gaurav Pandey
Localization of a Smart Infrastructure Fisheye Camera in a Prior Map for Autonomous Vehicles
Submitted to ICRA 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a technique for localization of a smart infrastructure node, consisting of a fisheye camera, in a prior map. These cameras can detect objects that are outside the line of sight of the autonomous vehicles (AV) and send that information to AVs using V2X technology. However, in order for this information to be of any use to the AV, the detected objects should be provided in the reference frame of the prior map that the AV uses for its own navigation. Therefore, it is important to know the accurate pose of the infrastructure camera with respect to the prior map. Here we propose to solve this localization problem in two steps, \textit{(i)} we perform feature matching between perspective projection of fisheye image and bird's eye view (BEV) satellite imagery from the prior map to estimate an initial camera pose, \textit{(ii)} we refine the initialization by maximizing the Mutual Information (MI) between intensity of pixel values of fisheye image and reflectivity of 3D LiDAR points in the map data. We validate our method on simulated data and also present results with real world data.
[ { "created": "Tue, 28 Sep 2021 21:57:35 GMT", "version": "v1" } ]
2021-09-30
[ [ "Mishra", "Subodh", "" ], [ "Parchami", "Armin", "" ], [ "Corona", "Enrique", "" ], [ "Chakravarty", "Punarjay", "" ], [ "Vora", "Ankit", "" ], [ "Parikh", "Devarth", "" ], [ "Pandey", "Gaurav", "" ] ]
This work presents a technique for localization of a smart infrastructure node, consisting of a fisheye camera, in a prior map. These cameras can detect objects that are outside the line of sight of the autonomous vehicles (AV) and send that information to AVs using V2X technology. However, in order for this information to be of any use to the AV, the detected objects should be provided in the reference frame of the prior map that the AV uses for its own navigation. Therefore, it is important to know the accurate pose of the infrastructure camera with respect to the prior map. Here we propose to solve this localization problem in two steps, \textit{(i)} we perform feature matching between perspective projection of fisheye image and bird's eye view (BEV) satellite imagery from the prior map to estimate an initial camera pose, \textit{(ii)} we refine the initialization by maximizing the Mutual Information (MI) between intensity of pixel values of fisheye image and reflectivity of 3D LiDAR points in the map data. We validate our method on simulated data and also present results with real world data.
1211.2287
Jie Xu
Jie Xu, Yu Zhang and Mihaela van der Schaar
Designing Rating Systems to Promote Mutual Security for Interconnected Networks
null
null
null
null
cs.GT cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interconnected autonomous systems often share security risks. However, an autonomous system lacks the incentive to make (sufficient) security investments if the cost exceeds its own benefit, even though doing so would be socially beneficial. In this paper, we develop a systematic and rigorous framework for analyzing and significantly improving the mutual security of a collection of autonomous systems (ASs) that interact frequently over a long period of time. Using this framework, we show that simple incentive schemes based on rating systems can be designed to encourage the autonomous systems' security investments, thereby significantly improving their mutual security.
[ { "created": "Sat, 10 Nov 2012 03:36:00 GMT", "version": "v1" } ]
2012-11-13
[ [ "Xu", "Jie", "" ], [ "Zhang", "Yu", "" ], [ "van der Schaar", "Mihaela", "" ] ]
Interconnected autonomous systems often share security risks. However, an autonomous system lacks the incentive to make (sufficient) security investments if the cost exceeds its own benefit, even though doing so would be socially beneficial. In this paper, we develop a systematic and rigorous framework for analyzing and significantly improving the mutual security of a collection of autonomous systems (ASs) that interact frequently over a long period of time. Using this framework, we show that simple incentive schemes based on rating systems can be designed to encourage the autonomous systems' security investments, thereby significantly improving their mutual security.
2108.05781
Hamed Ahmadi
Hamed Ahmadi, Avishek Nag, Zaheer Khan, Kamran Sayrafian, Susanto Rahadrja
Networked Twins and Twins of Networks: an Overview on the Relationship Between Digital Twins and 6G
Accepted for publication at IEEE Communications Standards Magazine
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Digital Twin (DT) is a promising technology for the new immersive digital life with a variety of applications in areas such as Industry 4.0, aviation, and healthcare. Proliferation of this technology requires higher data rates, reliability, resilience, and lower latency beyond what is currently offered by 5G. Thus, DT can become a major driver for 6G research and development. Alternatively, 6G network development can benefit from Digital Twin technology and its powerful features such as modularity and remote intelligence. Using DT, a 6G network (or some of its components) will have the opportunity to use Artificial Intelligence more proactively in order to enhance its resilience. DT's application in telecommunications is still in its infancy. In this article we highlight some of the most promising research and development directions for this technology.
[ { "created": "Thu, 12 Aug 2021 14:47:34 GMT", "version": "v1" } ]
2021-08-13
[ [ "Ahmadi", "Hamed", "" ], [ "Nag", "Avishek", "" ], [ "Khan", "Zaheer", "" ], [ "Sayrafian", "Kamran", "" ], [ "Rahadrja", "Susanto", "" ] ]
Digital Twin (DT) is a promising technology for the new immersive digital life with a variety of applications in areas such as Industry 4.0, aviation, and healthcare. Proliferation of this technology requires higher data rates, reliability, resilience, and lower latency beyond what is currently offered by 5G. Thus, DT can become a major driver for 6G research and development. Alternatively, 6G network development can benefit from Digital Twin technology and its powerful features such as modularity and remote intelligence. Using DT, a 6G network (or some of its components) will have the opportunity to use Artificial Intelligence more proactively in order to enhance its resilience. DT's application in telecommunications is still in its infancy. In this article we highlight some of the most promising research and development directions for this technology.
1604.05747
Francesco Maria Elia
Francesco Elia
Syntactic and semantic classification of verb arguments using dependency-based and rich semantic features
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Corpus Pattern Analysis (CPA) has been the topic of Semeval 2015 Task 15, aimed at producing a system that can aid lexicographers in their efforts to build a dictionary of meanings for English verbs using the CPA annotation process. CPA parsing is one of the subtasks of which this annotation process is made, and it is the focus of this report. A supervised machine-learning approach has been implemented, in which syntactic features derived from parse trees and semantic features derived from WordNet and word embeddings are used. It is shown that this approach performs well, even with the data sparsity issues that characterize the dataset, and can obtain better results than other systems by a margin of about 4% f-score.
[ { "created": "Tue, 19 Apr 2016 20:59:32 GMT", "version": "v1" } ]
2016-04-21
[ [ "Elia", "Francesco", "" ] ]
Corpus Pattern Analysis (CPA) has been the topic of Semeval 2015 Task 15, aimed at producing a system that can aid lexicographers in their efforts to build a dictionary of meanings for English verbs using the CPA annotation process. CPA parsing is one of the subtasks of which this annotation process is made, and it is the focus of this report. A supervised machine-learning approach has been implemented, in which syntactic features derived from parse trees and semantic features derived from WordNet and word embeddings are used. It is shown that this approach performs well, even with the data sparsity issues that characterize the dataset, and can obtain better results than other systems by a margin of about 4% f-score.
2303.04835
Alexandra Bremers
Alexandra Bremers, Maria Teresa Parreira, Xuanyu Fang, Natalie Friedman, Adolfo Ramirez-Aristizabal, Alexandria Pabst, Mirjana Spasojevic, Michael Kuniavsky, Wendy Ju
The Bystander Affect Detection (BAD) Dataset for Failure Detection in HRI
12 pages
null
null
null
cs.RO cs.HC
http://creativecommons.org/licenses/by/4.0/
For a robot to repair its own error, it must first know it has made a mistake. One way that people detect errors is from the implicit reactions from bystanders -- their confusion, smirks, or giggles clue us in that something unexpected occurred. To enable robots to detect and act on bystander responses to task failures, we developed a novel method to elicit bystander responses to human and robot errors. Using 46 different stimulus videos featuring a variety of human and machine task failures, we collected a total of 2452 webcam videos of human reactions from 54 participants. To test the viability of the collected data, we used the bystander reaction dataset as input to a deep-learning model, BADNet, to predict failure occurrence. We tested different data labeling methods and learned how they affect model performance, achieving precisions above 90%. We discuss strategies to model bystander reactions and predict failure and how this approach can be used in real-world robotic deployments to detect errors and improve robot performance. As part of this work, we also contribute the "Bystander Affect Detection" (BAD) dataset of bystander reactions, supporting the development of better prediction models.
[ { "created": "Wed, 8 Mar 2023 19:13:18 GMT", "version": "v1" } ]
2023-03-10
[ [ "Bremers", "Alexandra", "" ], [ "Parreira", "Maria Teresa", "" ], [ "Fang", "Xuanyu", "" ], [ "Friedman", "Natalie", "" ], [ "Ramirez-Aristizabal", "Adolfo", "" ], [ "Pabst", "Alexandria", "" ], [ "Spasojevic", "Mirjana", "" ], [ "Kuniavsky", "Michael", "" ], [ "Ju", "Wendy", "" ] ]
For a robot to repair its own error, it must first know it has made a mistake. One way that people detect errors is from the implicit reactions from bystanders -- their confusion, smirks, or giggles clue us in that something unexpected occurred. To enable robots to detect and act on bystander responses to task failures, we developed a novel method to elicit bystander responses to human and robot errors. Using 46 different stimulus videos featuring a variety of human and machine task failures, we collected a total of 2452 webcam videos of human reactions from 54 participants. To test the viability of the collected data, we used the bystander reaction dataset as input to a deep-learning model, BADNet, to predict failure occurrence. We tested different data labeling methods and learned how they affect model performance, achieving precisions above 90%. We discuss strategies to model bystander reactions and predict failure and how this approach can be used in real-world robotic deployments to detect errors and improve robot performance. As part of this work, we also contribute the "Bystander Affect Detection" (BAD) dataset of bystander reactions, supporting the development of better prediction models.
2306.05815
Francesco Tonin
Francesco Tonin, Alex Lambert, Panagiotis Patrinos, Johan A. K. Suykens
Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast Algorithms
15 pages, ICML 2023
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this paper is to revisit Kernel Principal Component Analysis (KPCA) through dualization of a difference of convex functions. This allows us to naturally extend KPCA to multiple objective functions and leads to efficient gradient-based algorithms avoiding the expensive SVD of the Gram matrix. Particularly, we consider objective functions that can be written as Moreau envelopes, demonstrating how to promote robustness and sparsity within the same framework. The proposed method is evaluated on synthetic and real-world benchmarks, showing significant speedup in KPCA training time as well as highlighting the benefits in terms of robustness and sparsity.
[ { "created": "Fri, 9 Jun 2023 11:27:35 GMT", "version": "v1" } ]
2023-06-12
[ [ "Tonin", "Francesco", "" ], [ "Lambert", "Alex", "" ], [ "Patrinos", "Panagiotis", "" ], [ "Suykens", "Johan A. K.", "" ] ]
The goal of this paper is to revisit Kernel Principal Component Analysis (KPCA) through dualization of a difference of convex functions. This allows us to naturally extend KPCA to multiple objective functions and leads to efficient gradient-based algorithms avoiding the expensive SVD of the Gram matrix. Particularly, we consider objective functions that can be written as Moreau envelopes, demonstrating how to promote robustness and sparsity within the same framework. The proposed method is evaluated on synthetic and real-world benchmarks, showing significant speedup in KPCA training time as well as highlighting the benefits in terms of robustness and sparsity.
1611.08103
Guangming Lang
Guangming Lang
Double-quantitative $\gamma^{\ast}-$fuzzy coverings approximation operators
It enriches the fuzzy covering rough set theory
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the digital information boom, the fuzzy covering rough set model is an important mathematical tool for artificial intelligence, and how to build the bridge between fuzzy covering rough set theory and Pawlak's model is becoming a hot research topic. In this paper, we first present the $\gamma-$fuzzy covering based probabilistic and grade approximation operators and double-quantitative approximation operators. We also study the relationships among the three types of $\gamma-$fuzzy covering based approximation operators. Second, we propose the $\gamma^{\ast}-$fuzzy coverings based multi-granulation probabilistic and grade lower and upper approximation operators and multi-granulation double-quantitative lower and upper approximation operators. We also investigate the relationships among these types of $\gamma-$fuzzy coverings based approximation operators. Finally, we employ several examples to illustrate how to construct the lower and upper approximations of fuzzy sets with the absolute and relative quantitative information.
[ { "created": "Thu, 24 Nov 2016 09:06:57 GMT", "version": "v1" } ]
2016-11-28
[ [ "Lang", "Guangming", "" ] ]
In the digital information boom, the fuzzy covering rough set model is an important mathematical tool for artificial intelligence, and how to build the bridge between fuzzy covering rough set theory and Pawlak's model is becoming a hot research topic. In this paper, we first present the $\gamma-$fuzzy covering based probabilistic and grade approximation operators and double-quantitative approximation operators. We also study the relationships among the three types of $\gamma-$fuzzy covering based approximation operators. Second, we propose the $\gamma^{\ast}-$fuzzy coverings based multi-granulation probabilistic and grade lower and upper approximation operators and multi-granulation double-quantitative lower and upper approximation operators. We also investigate the relationships among these types of $\gamma-$fuzzy coverings based approximation operators. Finally, we employ several examples to illustrate how to construct the lower and upper approximations of fuzzy sets with the absolute and relative quantitative information.
2407.11217
David Saulpic
Max Dupr\'e la Tour, David Saulpic
Almost-linear Time Approximation Algorithm to Euclidean $k$-median and $k$-means
null
null
null
null
cs.DS cs.AI
http://creativecommons.org/licenses/by/4.0/
Clustering is one of the staples of data analysis and unsupervised learning. As such, clustering algorithms are often used on massive data sets, and they need to be extremely fast. We focus on the Euclidean $k$-median and $k$-means problems, two of the standard ways to model the task of clustering. For these, the go-to algorithm is $k$-means++, which yields an $O(\log k)$-approximation in time $\tilde O(nkd)$. While it is possible to improve either the approximation factor [Lattanzi and Sohler, ICML19] or the running time [Cohen-Addad et al., NeurIPS 20], it is unknown how precise a linear-time algorithm can be. In this paper, we almost answer this question by presenting an almost linear-time algorithm to compute a constant-factor approximation.
[ { "created": "Mon, 15 Jul 2024 20:04:06 GMT", "version": "v1" } ]
2024-07-17
[ [ "la Tour", "Max Dupré", "" ], [ "Saulpic", "David", "" ] ]
Clustering is one of the staples of data analysis and unsupervised learning. As such, clustering algorithms are often used on massive data sets, and they need to be extremely fast. We focus on the Euclidean $k$-median and $k$-means problems, two of the standard ways to model the task of clustering. For these, the go-to algorithm is $k$-means++, which yields an $O(\log k)$-approximation in time $\tilde O(nkd)$. While it is possible to improve either the approximation factor [Lattanzi and Sohler, ICML19] or the running time [Cohen-Addad et al., NeurIPS 20], it is unknown how precise a linear-time algorithm can be. In this paper, we almost answer this question by presenting an almost linear-time algorithm to compute a constant-factor approximation.
2006.13063
Gabriel De Souza Pereira Moreira
Gabriel de Souza P. Moreira, Dietmar Jannach, Adilson Marques da Cunha
Hybrid Session-based News Recommendation using Recurrent Neural Networks
From the Proceeding of the LatinX in AI Research (LXAI) at ICML 2020. arXiv admin note: text overlap with arXiv:1904.10367
null
null
null
cs.LG cs.IR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a hybrid meta-architecture -- the CHAMELEON -- for session-based news recommendation that is able to leverage a variety of information types using Recurrent Neural Networks. We evaluated our approach on two public datasets, using a temporal evaluation protocol that simulates the dynamics of a news portal in a realistic way. Our results confirm the benefits of modeling the sequence of session clicks with RNNs and leveraging side information about users and articles, resulting in significantly higher recommendation accuracy and catalog coverage than other session-based algorithms.
[ { "created": "Mon, 22 Jun 2020 17:24:43 GMT", "version": "v1" } ]
2020-06-24
[ [ "Moreira", "Gabriel de Souza P.", "" ], [ "Jannach", "Dietmar", "" ], [ "da Cunha", "Adilson Marques", "" ] ]
We describe a hybrid meta-architecture -- the CHAMELEON -- for session-based news recommendation that is able to leverage a variety of information types using Recurrent Neural Networks. We evaluated our approach on two public datasets, using a temporal evaluation protocol that simulates the dynamics of a news portal in a realistic way. Our results confirm the benefits of modeling the sequence of session clicks with RNNs and leveraging side information about users and articles, resulting in significantly higher recommendation accuracy and catalog coverage than other session-based algorithms.
1803.03586
Qi Zhang
Qi Zhang, Jianhui Liu, and Guodong Zhao
Towards 5G Enabled Tactile Robotic Telesurgery
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robotic telesurgery has the potential to provide extreme and urgent health care services and bring unprecedented opportunities to deliver highly specialized skills globally. It has a significant societal impact and is regarded as one of the appealing use cases of Tactile Internet and 5G applications. However, the performance of robotic telesurgery largely depends on the network performance in terms of latency, jitter and packet loss, especially when the telesurgical system is equipped with haptic feedback. This imposes significant challenges to design a reliable and secure but cost-effective communication solution. This article aims to give a better understanding of the characteristics of robotic telesurgical systems, the limiting factors, the possible telesurgery services and the communication quality of service (QoS) requirements of the multi-modal sensory data. Based on this, a viable network architecture enabled by the converged edge and core cloud is presented and the relevant research challenges, open issues and enabling technologies in the 5G communication system are discussed.
[ { "created": "Fri, 9 Mar 2018 16:16:42 GMT", "version": "v1" } ]
2018-03-12
[ [ "Zhang", "Qi", "" ], [ "Liu", "Jianhui", "" ], [ "Zhao", "Guodong", "" ] ]
Robotic telesurgery has the potential to provide extreme and urgent health care services and bring unprecedented opportunities to deliver highly specialized skills globally. It has a significant societal impact and is regarded as one of the appealing use cases of Tactile Internet and 5G applications. However, the performance of robotic telesurgery largely depends on the network performance in terms of latency, jitter and packet loss, especially when the telesurgical system is equipped with haptic feedback. This imposes significant challenges to design a reliable and secure but cost-effective communication solution. This article aims to give a better understanding of the characteristics of robotic telesurgical systems, the limiting factors, the possible telesurgery services and the communication quality of service (QoS) requirements of the multi-modal sensory data. Based on this, a viable network architecture enabled by the converged edge and core cloud is presented and the relevant research challenges, open issues and enabling technologies in the 5G communication system are discussed.
2212.06123
Ambra Demontis Ph.D.
Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli
A Survey on Reinforcement Learning Security with Application to Autonomous Driving
null
null
null
null
cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning allows machines to learn from their own experience. Nowadays, it is used in safety-critical applications, such as autonomous driving, despite being vulnerable to attacks carefully crafted to either prevent the reinforcement learning algorithm from learning an effective and reliable policy, or to induce the trained agent to make a wrong decision. The literature about the security of reinforcement learning is rapidly growing, and some surveys have been proposed to shed light on this field. However, their categorizations are insufficient for choosing an appropriate defense given the kind of system at hand. In our survey, we not only overcome this limitation by considering a different perspective, but also discuss the applicability of state-of-the-art attacks and defenses when reinforcement learning algorithms are used in the context of autonomous driving.
[ { "created": "Mon, 12 Dec 2022 18:50:49 GMT", "version": "v1" } ]
2022-12-13
[ [ "Demontis", "Ambra", "" ], [ "Pintor", "Maura", "" ], [ "Demetrio", "Luca", "" ], [ "Grosse", "Kathrin", "" ], [ "Lin", "Hsiao-Ying", "" ], [ "Fang", "Chengfang", "" ], [ "Biggio", "Battista", "" ], [ "Roli", "Fabio", "" ] ]
Reinforcement learning allows machines to learn from their own experience. Nowadays, it is used in safety-critical applications, such as autonomous driving, despite being vulnerable to attacks carefully crafted to either prevent the reinforcement learning algorithm from learning an effective and reliable policy, or to induce the trained agent to make a wrong decision. The literature about the security of reinforcement learning is rapidly growing, and some surveys have been proposed to shed light on this field. However, their categorizations are insufficient for choosing an appropriate defense given the kind of system at hand. In our survey, we not only overcome this limitation by considering a different perspective, but also discuss the applicability of state-of-the-art attacks and defenses when reinforcement learning algorithms are used in the context of autonomous driving.
2211.09171
Fabio Saggese
Fabio Saggese, Federico Chiariotti, Kimmo Kansanen, Petar Popovski
Efficient URLLC with a Reconfigurable Intelligent Surface and Imperfect Device Tracking
Submitted to ICC 2023, the copyright may be transferred without further notice
null
null
null
cs.IT math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of Reconfigurable Intelligent Surfaces (RIS) technology to extend coverage and allow for better control of the wireless environment has been proposed in several use cases, including Ultra-Reliable Low-Latency Communications (URLLC). However, the extremely challenging latency constraint makes explicit channel estimation difficult, so positioning information is often used to configure the RIS and illuminate the receiver device. In this work, we analyze the effect of imperfections in the positioning information on the reliability, deriving an upper bound to the outage probability. We then use this bound to perform power control, efficiently finding the minimum power that respects the URLLC constraints under positioning uncertainty. The optimization is conservative, so that all points respect the URLLC constraints, and the bound is relatively tight, with an optimality gap between 1.5 and 4.5~dB.
[ { "created": "Wed, 16 Nov 2022 19:41:15 GMT", "version": "v1" } ]
2022-11-18
[ [ "Saggese", "Fabio", "" ], [ "Chiariotti", "Federico", "" ], [ "Kansanen", "Kimmo", "" ], [ "Popovski", "Petar", "" ] ]
The use of Reconfigurable Intelligent Surfaces (RIS) technology to extend coverage and allow for better control of the wireless environment has been proposed in several use cases, including Ultra-Reliable Low-Latency Communications (URLLC). However, the extremely challenging latency constraint makes explicit channel estimation difficult, so positioning information is often used to configure the RIS and illuminate the receiver device. In this work, we analyze the effect of imperfections in the positioning information on the reliability, deriving an upper bound to the outage probability. We then use this bound to perform power control, efficiently finding the minimum power that respects the URLLC constraints under positioning uncertainty. The optimization is conservative, so that all points respect the URLLC constraints, and the bound is relatively tight, with an optimality gap between 1.5 and 4.5~dB.
1210.2984
Francesca A. Lisi
Francesca A. Lisi
Learning Onto-Relational Rules with Inductive Logic Programming
18 pages. arXiv admin note: text overlap with arXiv:1003.2586
null
null
null
cs.AI cs.DB cs.LG cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rules complement and extend ontologies on the Semantic Web. We refer to these rules as onto-relational since they combine DL-based ontology languages and Knowledge Representation formalisms supporting the relational data model within the tradition of Logic Programming and Deductive Databases. Rule authoring is a very demanding Knowledge Engineering task which can be automated, though only partially, by applying Machine Learning algorithms. In this chapter we show how Inductive Logic Programming (ILP), born at the intersection of Machine Learning and Logic Programming and considered as a major approach to Relational Learning, can be adapted to Onto-Relational Learning. For the sake of illustration, we provide details of a specific Onto-Relational Learning solution to the problem of learning rule-based definitions of DL concepts and roles with ILP.
[ { "created": "Wed, 10 Oct 2012 16:56:41 GMT", "version": "v1" }, { "created": "Mon, 29 Oct 2012 18:25:34 GMT", "version": "v2" } ]
2012-10-30
[ [ "Lisi", "Francesca A.", "" ] ]
Rules complement and extend ontologies on the Semantic Web. We refer to these rules as onto-relational since they combine DL-based ontology languages and Knowledge Representation formalisms supporting the relational data model within the tradition of Logic Programming and Deductive Databases. Rule authoring is a very demanding Knowledge Engineering task which can be automated, though only partially, by applying Machine Learning algorithms. In this chapter we show how Inductive Logic Programming (ILP), born at the intersection of Machine Learning and Logic Programming and considered as a major approach to Relational Learning, can be adapted to Onto-Relational Learning. For the sake of illustration, we provide details of a specific Onto-Relational Learning solution to the problem of learning rule-based definitions of DL concepts and roles with ILP.
1805.02974
Sepehr Assadi
Sepehr Assadi, Xiaorui Sun, Omri Weinstein
Massively Parallel Algorithms for Finding Well-Connected Components in Sparse Graphs
null
null
null
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental question that shrouds the emergence of massively parallel computing (MPC) platforms is how the additional power of the MPC paradigm can be leveraged to achieve faster algorithms compared to classical parallel models such as PRAM. Previous research has identified the sparse graph connectivity problem as a major obstacle to such improvement: While classical logarithmic-round PRAM algorithms for finding connected components in any $n$-vertex graph have been known for more than three decades, no $o(\log{n})$-round MPC algorithms are known for this task with truly sublinear in $n$ memory per machine. This problem arises when processing massive yet sparse graphs with $O(n)$ edges, for which the interesting setting of parameters is $n^{1-\Omega(1)}$ memory per machine. It is conjectured that achieving an $o(\log{n})$-round algorithm for connectivity on general sparse graphs with $n^{1-\Omega(1)}$ per-machine memory may not be possible, and this conjecture also forms the basis for multiple conditional hardness results on the round complexity of other problems in the MPC model. We take an opportunistic approach towards the sparse graph connectivity problem, by designing an algorithm with improved performance guarantees in terms of the connectivity structure of the input graph. Formally, we design an algorithm that finds all connected components with spectral gap at least $\lambda$ in a graph in $O(\log\log{n} + \log{(1/\lambda)})$ MPC rounds and $n^{\Omega(1)}$ memory per machine. As such, this algorithm achieves an exponential round reduction on sparse "well-connected" components (i.e., $\lambda \geq 1/\text{polylog}{(n)}$) using only $n^{\Omega(1)}$ memory per machine and $\widetilde{O}(n)$ total memory, and still operates in $o(\log n)$ rounds even when $\lambda = 1/n^{o(1)}$.
[ { "created": "Tue, 8 May 2018 12:29:21 GMT", "version": "v1" } ]
2018-05-09
[ [ "Assadi", "Sepehr", "" ], [ "Sun", "Xiaorui", "" ], [ "Weinstein", "Omri", "" ] ]
A fundamental question that shrouds the emergence of massively parallel computing (MPC) platforms is how the additional power of the MPC paradigm can be leveraged to achieve faster algorithms compared to classical parallel models such as PRAM. Previous research has identified the sparse graph connectivity problem as a major obstacle to such improvement: While classical logarithmic-round PRAM algorithms for finding connected components in any $n$-vertex graph have been known for more than three decades, no $o(\log{n})$-round MPC algorithms are known for this task with truly sublinear in $n$ memory per machine. This problem arises when processing massive yet sparse graphs with $O(n)$ edges, for which the interesting setting of parameters is $n^{1-\Omega(1)}$ memory per machine. It is conjectured that achieving an $o(\log{n})$-round algorithm for connectivity on general sparse graphs with $n^{1-\Omega(1)}$ per-machine memory may not be possible, and this conjecture also forms the basis for multiple conditional hardness results on the round complexity of other problems in the MPC model. We take an opportunistic approach towards the sparse graph connectivity problem, by designing an algorithm with improved performance guarantees in terms of the connectivity structure of the input graph. Formally, we design an algorithm that finds all connected components with spectral gap at least $\lambda$ in a graph in $O(\log\log{n} + \log{(1/\lambda)})$ MPC rounds and $n^{\Omega(1)}$ memory per machine. As such, this algorithm achieves an exponential round reduction on sparse "well-connected" components (i.e., $\lambda \geq 1/\text{polylog}{(n)}$) using only $n^{\Omega(1)}$ memory per machine and $\widetilde{O}(n)$ total memory, and still operates in $o(\log n)$ rounds even when $\lambda = 1/n^{o(1)}$.
2007.07221
Jishan Shaikh
Jishan Shaikh, Adya Sharma, Ankit Chouhan, Avinash Mahawar
Alpha-Net: Architecture, Models, and Applications
13 pages, 8 figures, project paper preprint
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning network training is usually computationally expensive and intuitively complex. We present a novel network architecture for custom training and weight evaluations. We reformulate the layers as ResNet-like blocks with their own inputs and outputs; these blocks (called Alpha blocks) form their own network through their connection configuration, which, combined with our novel loss function and normalization function, forms the complete Alpha-Net architecture. We provide an empirical mathematical formulation of the network loss function for a better understanding of accuracy estimation and further optimizations. We implemented Alpha-Net with 4 different layer configurations to express the architecture behavior comprehensively. On a custom dataset based on the ImageNet benchmark, we evaluate Alpha-Net v1, v2, v3, and v4 for image recognition, obtaining accuracies of 78.2%, 79.1%, 79.5%, and 78.3% respectively. Alpha-Net v3 gives an accuracy improvement of approx. 3% over the previous state-of-the-art network, ResNet 50, on the ImageNet benchmark. We also present an analysis of our dataset with 256, 512, and 1024 layers and different versions of the loss function. Input representation is also crucial for training, as initial preprocessing keeps only a handful of features to make training less complex than it needs to be. We also compared network behavior with different layer structures, different loss functions, and different normalization functions for better quantitative modeling of Alpha-Net.
[ { "created": "Sat, 27 Jun 2020 05:05:01 GMT", "version": "v1" } ]
2020-07-15
[ [ "Shaikh", "Jishan", "" ], [ "Sharma", "Adya", "" ], [ "Chouhan", "Ankit", "" ], [ "Mahawar", "Avinash", "" ] ]
Deep learning network training is usually computationally expensive and intuitively complex. We present a novel network architecture for custom training and weight evaluations. We reformulate the layers as ResNet-like blocks with their own inputs and outputs; these blocks (called Alpha blocks) form their own network through their connection configuration, which, combined with our novel loss function and normalization function, forms the complete Alpha-Net architecture. We provide an empirical mathematical formulation of the network loss function for a better understanding of accuracy estimation and further optimizations. We implemented Alpha-Net with 4 different layer configurations to express the architecture behavior comprehensively. On a custom dataset based on the ImageNet benchmark, we evaluate Alpha-Net v1, v2, v3, and v4 for image recognition, obtaining accuracies of 78.2%, 79.1%, 79.5%, and 78.3% respectively. Alpha-Net v3 gives an accuracy improvement of approx. 3% over the previous state-of-the-art network, ResNet 50, on the ImageNet benchmark. We also present an analysis of our dataset with 256, 512, and 1024 layers and different versions of the loss function. Input representation is also crucial for training, as initial preprocessing keeps only a handful of features to make training less complex than it needs to be. We also compared network behavior with different layer structures, different loss functions, and different normalization functions for better quantitative modeling of Alpha-Net.
2101.01713
Naoto Inoue
Naoto Inoue, Toshihiko Yamasaki
Learning from Synthetic Shadows for Shadow Detection and Removal
Accepted to IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), v2: fixed typos
null
10.1109/TCSVT.2020.3047977
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shadow removal is an essential task in computer vision and computer graphics. Recent shadow removal approaches all train convolutional neural networks (CNN) on real paired shadow/shadow-free or shadow/shadow-free/mask image datasets. However, obtaining a large-scale, diverse, and accurate dataset has been a big challenge, and it limits the performance of the learned models on shadow images with unseen shapes/intensities. To overcome this challenge, we present SynShadow, a novel large-scale synthetic shadow/shadow-free/matte image triplets dataset and a pipeline to synthesize it. We extend a physically-grounded shadow illumination model and synthesize a shadow image given an arbitrary combination of a shadow-free image, a matte image, and shadow attenuation parameters. Owing to the diversity, quantity, and quality of SynShadow, we demonstrate that shadow removal models trained on SynShadow perform well in removing shadows with diverse shapes and intensities on some challenging benchmarks. Furthermore, we show that merely fine-tuning from a SynShadow-pre-trained model improves existing shadow detection and removal models. Codes are publicly available at https://github.com/naoto0804/SynShadow.
[ { "created": "Tue, 5 Jan 2021 18:56:34 GMT", "version": "v1" }, { "created": "Sat, 13 Feb 2021 06:40:05 GMT", "version": "v2" } ]
2021-02-16
[ [ "Inoue", "Naoto", "" ], [ "Yamasaki", "Toshihiko", "" ] ]
Shadow removal is an essential task in computer vision and computer graphics. Recent shadow removal approaches all train convolutional neural networks (CNN) on real paired shadow/shadow-free or shadow/shadow-free/mask image datasets. However, obtaining a large-scale, diverse, and accurate dataset has been a big challenge, and it limits the performance of the learned models on shadow images with unseen shapes/intensities. To overcome this challenge, we present SynShadow, a novel large-scale synthetic shadow/shadow-free/matte image triplets dataset and a pipeline to synthesize it. We extend a physically-grounded shadow illumination model and synthesize a shadow image given an arbitrary combination of a shadow-free image, a matte image, and shadow attenuation parameters. Owing to the diversity, quantity, and quality of SynShadow, we demonstrate that shadow removal models trained on SynShadow perform well in removing shadows with diverse shapes and intensities on some challenging benchmarks. Furthermore, we show that merely fine-tuning from a SynShadow-pre-trained model improves existing shadow detection and removal models. Codes are publicly available at https://github.com/naoto0804/SynShadow.